id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,412,029 | Debugging Multiple .NET Projects in VS Code | It is quite common, especially in medium/large projects, to have to run multiple projects in debug... | 0 | 2023-03-23T09:51:43 | https://dev.to/this-is-learning/debugging-multiple-net-projects-in-vs-code-28kn | vscode, dotnet, webdev | It is quite common, especially in medium/large projects, to have to run multiple projects in debug mode at the same time.
The simplest example is a frontend and its services within the same .NET solution.
But is it so difficult to launch multiple .NET projects simultaneously from Visual Studio Code?
## First Steps
Create a folder in your local drive and then create two projects in your favorite terminal with the following instructions:
```
dotnet new webapp --name "Frontend"
```
```
dotnet new webapi --name "Backend"
```
Now, create a solution and add the two projects to it with the following instructions:
```
dotnet new sln --name AwesomeSolution
dotnet sln add Frontend
dotnet sln add Backend
```
Now we have at least two projects to launch.
## Launch Configurations
Open Visual Studio Code directly from the command line with the instruction:
```
code .
```
This instruction opens Visual Studio Code in the current folder.
After a few seconds the editor understands that you are in a .NET project and asks you to add some files needed to debug or launch the application.
Accept it and let it create some necessary files.

Now you can open the file "**tasks.json**" and you can replace the task called "**build**" with the following code:
```json
{
"label": "buildbackend",
"command": "dotnet",
"type": "process",
"args": [
"build",
"${workspaceFolder}/Backend/Backend.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary"
],
"problemMatcher": "$msCompile"
},
{
"label": "buildfrontend",
"command": "dotnet",
"type": "process",
"args": [
"build",
"${workspaceFolder}/Frontend/Frontend.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary"
],
"problemMatcher": "$msCompile"
},
```
The first task is related to the Backend application and the second one to the Frontend.
As you can see, each task builds the right project in the right folder. Nothing complicated, but by default VS Code creates only one build task, even if your solution contains more than one project.
## Compounds Configuration
Now that you have the two tasks related to the two individual projects, you can replace the current contents of the "**launch.json**" file with the following json.
I'm going to explain it shortly.
```json
{
"version": "0.2.0",
"configurations": [
{
"name": ".NET Core Launch (Frontend)",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "buildfrontend",
"program": "${workspaceFolder}/Frontend/bin/Debug/net7.0/Frontend.dll",
"args": [],
"cwd": "${workspaceFolder}/Frontend",
"stopAtEntry": false,
"serverReadyAction": {
"action": "openExternally",
"pattern": "\\bNow listening on:\\s+(https?://\\S+)"
},
"env": {
"ASPNETCORE_ENVIRONMENT": "Development"
},
"sourceFileMap": {
"/Views": "${workspaceFolder}/Views"
}
},
{
"name": ".NET Core Launch (Backend)",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "buildbackend",
"program": "${workspaceFolder}/Backend/bin/Debug/net7.0/Backend.dll",
"args": [],
"cwd": "${workspaceFolder}/Backend",
"stopAtEntry": false,
"serverReadyAction": {
"action": "openExternally",
"pattern": "\\bNow listening on:\\s+(https?://\\S+)"
},
"env": {
"ASPNETCORE_ENVIRONMENT": "Development"
},
"sourceFileMap": {
"/Views": "${workspaceFolder}/Views"
}
}
]
}
```
As you can see, we created two different configurations with two different names, so each project can be launched on its own.
In the "**preLaunchTask**" property, it's important to use the same label defined in the previous tasks file.
But we are not done yet.
We need to do the last step to launch all the projects at the same time.
After the configurations array, add a "compounds" section.
Give the compound any name you want; in its configurations array, it's important to use the same names we used in the configurations section.
```json
"compounds": [
{
"name": "Frontend & Backend",
"configurations": [
".NET Core Launch (Frontend)",
".NET Core Launch (Backend)"
],
"stopAll": true
}
]
```
This is the result in the "Run & Debug" panel.

Press "F5" or click the Play button to launch the new launch configuration with multiple projects.
## Conclusion
As you can see, it's very easy to configure Visual Studio Code to launch multiple projects.
Now that you have the basic setup, you can reuse this approach any time, starting from this simple two-project example.
Just add all the projects you want to launch, following the same configuration workflow we used here.
---
Are you interested in learning GitHub but don't know where to start? Try my course on LinkedIn Learning: [Learning GitHub](https://bit.ly/learninggithub).

---
Thanks for reading this post, I hope you found it interesting!
Feel free to follow me to get notified when new articles are out 🙂
{% embed https://dev.to/kasuken %} | kasuken |
1,412,193 | Convert Flutter app to android Native java app | I have a flutter app which I would like to convert to android native java app. PS: just the... | 0 | 2023-03-23T11:04:36 | https://dev.to/anurupjalota/convert-flutter-app-to-android-native-java-app-2n3n | android, flutter, java | I have a flutter app which I would like to convert to android native java app.
PS: just the UI.
Basically I want to add an SDK called 'Tuya IoT SDK' but the problem is it only supports Java/Swift.
| anurupjalota |
1,412,250 | Mastering Blazor - Part 2: Hot Reloading | Photo by Mike van den Bos on Unsplash In the recent three years I did quite a bit of web development... | 26,920 | 2023-03-23T12:47:28 | https://dev.to/florianrappl/mastering-blazor-part-2-hot-reloading-2b87 | blazor, dotnet, javascript, webdev | *Photo by <a href="https://unsplash.com/@mike_van_den_bos?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Mike van den Bos</a> on <a href="https://unsplash.com/photos/jf1EomjlQi0?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>*
In the last three years I have done quite a bit of web development, also using Microsoft's new web framework called Blazor. Blazor adds component-first support to ASP.NET by introducing "Razor components". With Razor components, Blazor is capable of providing a full single-page application framework.
One of the things that make Blazor attractive since .NET 6 is the introduction of hot-reloading. What is hot-reloading and how can we use it? Where are its limitations? In this article, we will explore the hot-reload feature in Blazor, how it works, and how it can make the development process faster and more efficient.
## Hot-Reload Basics
Hot-reload is a feature that allows developers to modify code while an application is running without having to restart it manually. In other words, developers can make changes to their code and immediately see the impact of those changes in the running application without any downtime. This feature is a massive time-saver for developers since it eliminates the need to stop and start the application manually every time they make changes.
Blazor's hot-reload feature allows developers to modify their code in real-time, making it easier to debug and test applications. With hot-reload, developers can tweak their code and see the changes instantly, without having to waste time waiting for the application to restart.

There are some constraints and requirements that developers should be aware of when using the hot-reload feature in Blazor:
- Blazor hot-reload requires a supported browser: To use the hot-reload feature in Blazor, you need to use a browser that supports WebSockets, which is the communication protocol used by Blazor for hot-reload. Currently, most modern browsers, including Chrome, Firefox, Edge, and Safari, support WebSockets.
- Hot-reload is not enabled by default: To use the hot-reload feature in Blazor, you need to explicitly enable it in your project. This can be done by adding the `Microsoft.AspNetCore.Components.WebAssembly.DevServer` NuGet package to your project and modifying your project's launch settings to enable hot-reload.
- There are some performance implications: While hot-reload is a valuable tool, it can have some performance implications, particularly when dealing with large applications. In some cases, the hot-reload process can cause the application to slow down or become unresponsive.

By keeping these constraints and requirements in mind, developers can use the hot-reload feature in Blazor more effectively and avoid any potential issues or performance problems.
Another thing to know about are the limitations of hot-reload.
## Limitations of Hot-Reload
Although Blazor's hot-reload feature is a significant time-saver and an efficient tool for developers, it does come with some limitations.
First, it has limited support for changing the structure of the code: While Blazor's hot-reload feature is excellent for modifying the content of the code, it has limited support for making changes to the structure of the code. This means that any changes that affect the underlying structure of the code, such as renaming a class or adding a new property, may require a manual application restart.
Another limitation is the lack of support for changing the dependencies: Another limitation of Blazor's hot-reload feature is that it does not support changes to the application's dependencies. If the developer adds or removes a dependency, the application needs to be restarted manually.
Keep in mind that hot-reloading might be unable to preserve the application state: Blazor's hot-reload feature may cause the application to lose its state when the code is reloaded. This means that the developer may have to re-enter data or re-run certain processes every time they make changes to the code.
Finally, it has limited support for debugging: Although hot-reload is an efficient tool for developers to debug their code, it may not work in all scenarios. For instance, it may not be effective when debugging complex code, or when the changes made to the code have a significant impact on the application's behavior.
Overall, while Blazor's hot-reload feature is an essential tool, it's important to keep in mind its limitations and understand when a manual application restart may be necessary. If anything is fishy, just stop and restart as you'd have done it beforehand.
## Internals of Hot-Reload
Blazor's hot-reload feature works by dynamically updating the application's code while it's running. When a developer makes changes to the code, the dotnet runtime monitors those changes and applies them to the running Blazor application in real-time.
The hot-reload feature works by using a combination of client-side and server-side technologies. The client-side component of the hot-reload feature is responsible for monitoring changes to the code, while the server-side component is responsible for applying those changes to the running application.
When a developer makes changes to the code, the client-side component of the hot-reload feature detects those changes and sends them to the server. The server then applies the changes to the running application without interrupting its execution. The server also compiles the modified code in the background to ensure that the changes are applied seamlessly to the running application.
Blazor's hot-reload feature uses a technique called IL (Intermediate Language) swapping to apply changes to the running application. IL swapping is a technique that replaces the existing IL code of a method with a new one. This technique allows the application to continue running without interruption, even when the code is being updated.
In order to find out if hot-reloading should be enabled, a set of headers is necessary in the initial response for the *boot.config.json*. If the `ASPNETCORE-BROWSER-TOOLS` header is found, then Blazor will activate a set of abilities.
In general, we find the following three custom headers attached to pretty much all static resources (but, as mentioned, for hot-reload only the *boot.config.json* request is relevant):
```http
DOTNET-MODIFIABLE-ASSEMBLIES: debug
ASPNETCORE-BROWSER-TOOLS: true
Blazor-Environment: Development
```
How are these headers entering the process? Via the Blazor JavaScript. They are attached to the created `bootConfig`, which is based mainly on the contents from the *boot.config.json*. Now having the `ASPNETCORE-BROWSER-TOOLS` header will actually set the `aspnetCoreBrowserTools` property. In turn, if this is truthy, then the `__ASPNETCORE_BROWSER_TOOLS` environment variable will be set to its value. Finally, inside the .NET code running in WebAssembly this will be picked up and handled.
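A simplified sketch (in plain JavaScript, with hypothetical names; this is not the actual Blazor source) of the flow just described might look like this:

```javascript
// Illustrative only: models how the ASPNETCORE-BROWSER-TOOLS response header
// could end up as an environment variable for the .NET runtime in WebAssembly.
// 'responseHeaders' is assumed here to be a plain object with lowercase keys.
function buildBootConfig(bootConfigJson, responseHeaders) {
  const bootConfig = { ...bootConfigJson };
  const tools = responseHeaders["aspnetcore-browser-tools"];
  if (tools) {
    // the header value becomes a property on the created bootConfig
    bootConfig.aspnetCoreBrowserTools = tools;
  }
  const environmentVariables = {};
  if (bootConfig.aspnetCoreBrowserTools) {
    // a truthy property is forwarded as the environment variable
    environmentVariables["__ASPNETCORE_BROWSER_TOOLS"] =
      bootConfig.aspnetCoreBrowserTools;
  }
  return { bootConfig, environmentVariables };
}
```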
How specifically is this handled? Well, we have two different parts for this: A WebSocket connection to, e.g., `ws://localhost:57151/YourProjectName/` which is established in the beginning. This one is used to monitor if changes have been made. Here, our running application will receive updates, for instance:
```json
{
"type": "BlazorHotReloadDeltav1",
"sharedSecret": "...",
"deltas": [
{
"sequenceId": 5,
"moduleId": "ce3aed28-47b4-46e8-b475-9995359d01a2",
"metadataDelta": "...",
"ilDelta": "...",
"updatedTypes": [33554446, 33554456],
"pdbDelta": "..."
}
]
}
```
Interestingly, the `type` is already versioned - this should ensure to support specifications of future .NET versions. Presumably, the capabilities as well as the format might change.
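A receiver for such messages could, in a simplified sketch (hypothetical names, not the actual Blazor implementation), check the versioned type and pick the deltas to apply in sequence order, skipping any it has already handled:

```javascript
// Illustrative only: select the deltas from a hot-reload message that still
// need to be applied, given the last sequenceId we have already handled.
function selectDeltasToApply(message, lastAppliedSequenceId) {
  // only handle the delta format version we understand
  if (message.type !== "BlazorHotReloadDeltav1") {
    return [];
  }
  return message.deltas
    .filter((d) => d.sequenceId > lastAppliedSequenceId)
    .sort((a, b) => a.sequenceId - b.sequenceId);
}
```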
While the receiving part is modelled as a WebSocket connection, there is also a sending part. The sending part is a POST request to `https://localhost:7094/_framework/blazor-hotreload`. Like beforehand, the `blazor-hotreload` URL does not resolve to an existing or real resource. Instead, it needs to be provided by the debug runtime.
The format is similar to the messages received from the WebSocket:
```json
[
{
"sequenceId": 5,
"moduleId": "ce3aed28-47b4-46e8-b475-9995359d01a2",
"metadataDelta": "...",
"ilDelta": "...",
"updatedTypes": [33554446, 33554456],
"pdbDelta": "..."
}
]
```
Indeed, this one just sends back the received/applied `deltas`. This helps the .NET counterpart know what has already been applied - and what might still be missing.
The following diagram tries to place all these resources in an architecture:

So let's recap what URLs appeared here and what their roles have been:
- `/_framework/blazor-hotreload`: This URL is used to establish a connection between the Visual Studio side (client) and dev server side of the hot-reload feature. When a developer makes changes to the code, the client sends a request to this URL to notify the server of the changes.
- `/YourProjectName`: This URL is used to establish a WebSocket connection between the client-side and server-side components of the hot-reload feature. The WebSocket connection is used to facilitate real-time communication between the two components, allowing the server to send updated code to the client as soon as it's available.
This all results in the following sequence diagram:

In addition to these URLs, there may be other "magic" URLs and connections that are part of the Blazor hot-reload mechanism. These URLs and connections may be specific to the version of Blazor you're using or to the tooling you're using to develop your Blazor application.
## Bonus: Browser Link
There are some other URLs that appear from time to time in a discussion about Blazor hot-reload: The `/browserLink` and `/_framework/aspnetcore-browser-refresh.js` URLs. While they are loosely related to the hot-reload feature, they serve a different purpose than the URLs from the previous section. However, for instance, the `aspnetcore-browser-refresh.js` is responsible for establishing the WebSocket connection mentioned in the previous section.
Here's a brief explanation of each of these URLs and their role in the development process:
- `/browserLink`: This URL is used to enable the Browser Link feature in Visual Studio, which allows developers to interactively debug and modify their code in the browser. Browser Link can be used to perform tasks such as refreshing the browser, inspecting the DOM, and debugging JavaScript.
- `/_framework/aspnetcore-browser-refresh.js`: This URL is used to enable browser refresh in Visual Studio, which allows developers to automatically refresh the browser when changes are made to the code. The aspnetcore-browser-refresh.js script is responsible for monitoring the server for changes and triggering a browser refresh when changes are detected. It also establishes the WebSocket connection to the hot-reload server, which is necessary to make this work in the first place.
While these URLs and features are not directly related to the hot-reload feature in Blazor, they can be used in conjunction with hot-reload to speed up the development process. For example, developers can use Browser Link to interactively debug and modify their code in the browser while using hot-reload to apply changes to the running application.
These URLs are entering our application by intercepting requests. Visual Studio will modify the response of a request if the `accept` header is set to `text/html`. In such scenarios the middleware injects the following piece of JavaScript code into the response:
```js
const s1 = document.createElement('script');
s1.type = "text/javascript";
s1.src = "http://localhost:51554/ff.../browserLink";
s1.async = true;
s1.id = "__browserLink_initializationData";
s1.dataset.requestid = "6e3b...";
s1.dataset.requestmappingfromserver = "false";
s1.dataset.connecturl = "http://localhost:51554/ff.../browserLink";
document.body.appendChild(s1);
const s2 = document.createElement('script');
s2.type = "text/javascript";
s2.src = "http://localhost:5000/_framework/aspnetcore-browser-refresh.js";
s2.onload = () => {
if (typeof __wasmmodulecallback__ !== 'undefined') {
window.__wasmmodulecallback__();
delete window.__wasmmodulecallback__;
}
};
document.body.appendChild(s2);
```
Importantly, neither the `browserLink` endpoint (handled by the Visual Studio instance) nor the `aspnetcore-browser-refresh.js` file has to be provided by us, just as beforehand. The `browserLink` and `browser-refresh.js` scripts give the IDE (e.g., Visual Studio) the ability to communicate with the browser. For instance, console messages are then seen in Visual Studio, too. Additionally, breakpoints are synced and communicated so that the browser stops where it should.
## Conclusion
Blazor's hot-reload feature is a useful tool that allows developers to make changes to their code and see those changes applied to the running application in real-time. This feature uses a combination of client-side and server-side technologies to monitor code changes and apply them on the fly, without interrupting the running application's execution. Blazor hot-reload uses a technique called IL swapping to apply changes to the running application, replacing the existing IL code of a method with a new one.
During the hot-reload process, you may encounter URLs such as `/_framework/blazor-hotreload` and a WebSocket connection. These URLs and connections are part of the hot-reload mechanism and facilitate communication between the client-side and server-side components.
Blazor also offers other development tools, such as Browser Link and browser refresh, which can be used in conjunction with hot-reload to speed up the development process. While hot-reload has its limitations, such as not being able to handle all types of changes to the code, it is a valuable tool that can help developers save time and effort when building Blazor applications. Overall, hot-reload is a useful feature that demonstrates Blazor's commitment to delivering a smooth and efficient development experience.
## Related
- [Telerik - Instant Feedback is Here](https://www.telerik.com/blogs/instant-feedback-is-here-introducing-hot-reload-in-dotnet-6)
- [JetBrains - How Rider Hot-Reload Works Under the Hood](https://blog.jetbrains.com/dotnet/2021/12/02/how-rider-hot-reload-works-under-the-hood/)
| florianrappl |
1,412,389 | SBOM Explained: 📚 An Enterprise Guide to Security Risk Management | In this age of technology, software companies are quickly shifting towards a strict compliance... | 0 | 2023-03-27T16:37:48 | https://boxyhq.com/blog/sbom-explained-an-enterprise-guide-to-security-risk-management | sbom, opensource, riskmanagement, security |
In this age of technology, software companies are quickly shifting towards a strict compliance posture. You may ask yourself, why is that and what has changed over the last several years? This can be due to multiple factors but can mainly be boiled down into four categories.
- 🔒 Security
- 👩💻 Product Development
- 🦊 Compliance
- ⚠️ Supply Chain & Risk Management
## What is an SBOM or Software Bill of Materials?[](#what-is-an-sbom-or-software-bill-of-materials "Direct link to What is an SBOM or Software Bill of Materials?")

You can think of an SBOM as a cake you are baking. That cake has an ingredients list or, in the case of software, open-source or private dependencies, and third-party components. This list can be massive and can range from hundreds to thousands, especially when you factor in transitive dependencies (an indirect dependency of the component it relies on). This list gets very complex and is impossible to manage at scale at the enterprise level. Gone are the days when most enterprises or smaller-scale organizations would write applications from scratch. Rather, most applications are assembled using pre-built software packages, i.e. open-source. Over the past decade, this has led to the rise and use of publicly available software components and agile development at an enormous rate.
## Security

More organizations now have security top-of-mind due to cyber attacks on business infrastructure becoming almost a daily occurrence. Companies are faced with major decisions that affect cost, efficiency, and productivity. There is an enormous risk and liability where a company’s code base lives in the cloud. This could be private or public but a good rule of thumb when you think about risk is not if we’re breached, but when. So precautions need to be taken to add layers of protection to software infrastructure.
An SBOM would provide better visibility to companies in identifying and tracking security vulnerabilities that could enable timely patches. Updates in code should prompt a newly generated report (SBOM).
We could go a lot deeper around all the various ways precautions could be taken like scanning your application for vulnerabilities or penetration testing. A [minimum viable secure product](https://mvsp.dev/) is of course recommended for all enterprise-ready organizations. It could be as simple as having [audit logs](https://boxyhq.com/audit-logs) in place to record and search events internally on your application.
## Product Development[](#product-development "Direct link to Product Development")
SBOMs can improve and help software development processes by providing a comprehensive list of all open-source dependencies and components used in a product. From an organizational standpoint, it is crucial to ensure that all software and related components are up-to-date and licensed correctly. Having this clear understanding is key to risk mitigation when it comes to understanding what is in a product and how it is constructed. Therefore, development teams can quickly improve transparency, communication, and collaborative decision-making with a faster time-to-market while ensuring industry compliance standards and regulations.

## Compliance[](#compliance "Direct link to Compliance")
In 2021 the US Government issued an [executive order](https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/) mandating all software companies that do business with the US government must provide a detailed inventory list (software bill of materials or SBOM) of all components related to the software they have produced and sold to any federal agency. The legislation is geared towards transparency with a view to the enhancement of security for our supply chain infrastructure. This has quickly prompted other countries to consider similar legislation.
## Supply Chain & Risk Management[](#supply-chain--risk-management "Direct link to Supply Chain & Risk Management")
It’s vital to take a proactive approach and have a top-down understanding of all software components that live in a software ecosystem. With that in mind, SBOMs are quickly becoming a standard for software supply chain risk management. With the expansion of sophisticated attacks, it’s more important than ever to take a systematic approach when it comes to the security of your organization.
The ecosystem is growing with tools that help you verify, scan and monitor all your components and manifest files.
If you have an open-source product to sell to enterprises or the public sector, you will need to anticipate compliance and build an SBOM into your product. Especially if you are targeting entities in heavily regulated industries. Taking this into consideration, that’s what we did at BoxyHQ for each of our four products with guides that will help you navigate these decisions (Enterprise Single Sign On, Directory Sync, Audit Logs, and Data Privacy Vault). We chose to build [Cosign](https://www.sigstore.dev/) right into our tool - but many more are available. Here is a [curated list of SBOM-related tools.](https://github.com/awesomeSBOM/awesome-sbom)
Please visit BoxyHQ’s website to learn more about producing an SBOM for your products. E.g. SBOM for our enterprise SSO: [BoxyHQ](https://boxyhq.com/docs/jackson/sbom) | nathan_tarbert |
1,412,463 | The Heart Of JavaScript #functions💙 | Functions Function Declaration Function without a parameter and return Function returning... | 0 | 2023-03-23T15:26:42 | https://dev.to/shaikhmd007/the-heart-of-javascript-functions-2lin | - [Functions](#functions)
- [Function Declaration](#function-declaration)
- [Function without a parameter and return](#function-without-a-parameter-and-return)
- [Function returning value](#function-returning-value)
- [Function with a parameter](#function-with-a-parameter)
- [Function with two parameters](#function-with-two-parameters)
- [Function with many parameters](#function-with-many-parameters)
- [Function with unlimited number of parameters](#function-with-unlimited-number-of-parameters)
- [Unlimited number of parameters in regular function](#unlimited-number-of-parameters-in-regular-function)
- [Unlimited number of parameters in arrow function](#unlimited-number-of-parameters-in-arrow-function)
- [Anonymous Function](#anonymous-function)
- [Expression Function](#expression-function)
- [Self Invoking Functions](#self-invoking-functions)
- [Arrow Function](#arrow-function)
- [Function with default parameters](#function-with-default-parameters)
- [Function declaration versus Arrow function](#function-declaration-versus-arrow-function)
## Functions
So far we have seen many built-in JavaScript functions. In this section, we will focus on custom functions. What is a function? Before we start making functions, let us understand what a function is and why we need one.
A function is a reusable block of code or programming statements designed to perform a certain task.
A function is declared with the `function` keyword followed by a name, followed by parentheses (). The parentheses can take parameters. If a function takes a parameter, it has to be called with an argument. A function can also take a default parameter. For a function to produce a value that can be stored, it has to return it. To get the value, we call or invoke the function.
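The terms in the paragraph above (keyword, parameter, argument, default parameter, return, invoke) can be seen together in one small example:

```js
// 'radius' is a parameter; '= 1' gives it a default value
function circleArea(radius = 1) {
  return Math.PI * radius * radius; // 'return' hands the value back to the caller
}

console.log(circleArea(5)); // calling (invoking) the function; 5 is the argument
console.log(circleArea()); // no argument, so the default parameter 1 is used
```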
Function makes code:
- clean and easy to read
- reusable
- easy to test
A function can be declared or created in a couple of ways:
- _Declaration function_
- _Expression function_
- _Anonymous function_
- _Arrow function_
### Function Declaration
Let us see how to declare a function and how to call a function.
```js
//declaring a function without a parameter
function functionName() {
// code goes here
}
functionName(); // calling function by its name and with parentheses
```
### Function without a parameter and return
A function can be declared without parameters.
**Example:**
```js
// function without parameter, a function which make a number square
function square() {
let num = 2;
let sq = num * num;
console.log(sq);
}
square(); // 4
// function without parameter
function addTwoNumbers() {
let numOne = 10;
let numTwo = 20;
let sum = numOne + numTwo;
console.log(sum);
}
addTwoNumbers(); // a function has to be called by its name to be executed
```
```js
function printFullName() {
let firstName ="md";
let lastName = "shaikh";
let space = " ";
let fullName = firstName + space + lastName;
console.log(fullName);
}
printFullName(); // calling a function
```
### Function returning value
A function can also return a value; if a function does not return a value, the value of a call to it is undefined. Let us write the above functions with return. From now on, we return a value from a function instead of printing it.
```js
function printFullName() {
let firstName = "md";
let lastName = "shaikh";
let space = " ";
let fullName = firstName + space + lastName;
return fullName;
}
console.log(printFullName());
```
```js
function addTwoNumbers() {
let numOne = 2;
let numTwo = 3;
let total = numOne + numTwo;
return total;
}
console.log(addTwoNumbers());
```
### Function with a parameter
To a function we can pass different data types (number, string, boolean, object, function) as parameters.
```js
// function with one parameter
function functionName(parm1) {
//code goes her
}
functionName(parm1); // during calling or invoking one argument needed
function areaOfCircle(r) {
let area = Math.PI * r * r;
return area;
}
console.log(areaOfCircle(10)); // should be called with one argument
function square(number) {
return number * number;
}
console.log(square(10));
```
### Function with two parameters
```js
// function with two parameters
function functionName(parm1, parm2) {
//code goes her
}
functionName(parm1, parm2); // during calling or invoking two arguments needed
// A function without parameters doesn't take input, so let's make a function with parameters
function sumTwoNumbers(numOne, numTwo) {
let sum = numOne + numTwo;
console.log(sum);
}
sumTwoNumbers(10, 20); // calling functions
// If a function doesn't return, its result can't be stored, so it should return
function sumTwoNumbers(numOne, numTwo) {
let sum = numOne + numTwo;
return sum;
}
console.log(sumTwoNumbers(10, 20));
function printFullName(firstName, lastName) {
return `${firstName} ${lastName}`;
}
console.log(printFullName("md", "$haikh"));
```
### Function with many parameters
```js
// function with multiple parameters
function functionName(parm1, parm2, parm3,...){
//code goes here
}
functionName(parm1,parm2,parm3,...) // during calling or invoking three arguments needed
// this function takes an array as a parameter and sums up the numbers in the array
function sumArrayValues(arr) {
let sum = 0;
for (let i = 0; i < arr.length; i++) {
sum = sum + arr[i];
}
return sum;
}
const numbers = [1, 2, 3, 4, 5];
//calling a function
console.log(sumArrayValues(numbers));
const areaOfCircle = (radius) => {
let area = Math.PI * radius * radius;
return area;
}
console.log(areaOfCircle(10))
```
### Function with unlimited number of parameters
Sometimes we do not know how many arguments the user is going to pass. Therefore, we should know how to write a function which can take an unlimited number of arguments. The way we do this differs significantly between a function declaration (regular function) and an arrow function. Let us see examples in both a function declaration and an arrow function.
#### Unlimited number of parameters in regular function
A function declaration provides a function-scoped, array-like `arguments` object. Anything passed as an argument to the function can be accessed from the `arguments` object inside the function. Let us see an example.
```js
// Let us access the arguments object
function sumAllNums() {
console.log(arguments)
}
sumAllNums(1, 2, 3, 4)
// Arguments(4) [1, 2, 3, 4, callee: ƒ, Symbol(Symbol.iterator): ƒ]
```
```js
// function declaration
function sumAllNums() {
let sum = 0
for (let i = 0; i < arguments.length; i++) {
sum += arguments[i]
}
return sum
}
console.log(sumAllNums(1, 2, 3, 4)) // 10
console.log(sumAllNums(10, 20, 13, 40, 10)) // 93
console.log(sumAllNums(15, 20, 30, 25, 10, 33, 40)) // 173
```
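Because `arguments` is only array-like, it has no array methods of its own. Converting it with `Array.from` makes methods such as `reduce` available, which shortens the loop above:

```javascript
function sumAllNums() {
  // Array.from turns the array-like arguments object into a real array
  return Array.from(arguments).reduce((sum, n) => sum + n, 0)
}

console.log(sumAllNums(1, 2, 3, 4)) // 10
```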
#### Unlimited number of parameters in arrow function
An arrow function does not have the function-scoped `arguments` object. To implement a function which takes an unlimited number of arguments in an arrow function, we use the rest parameter syntax: three dots (`...`) followed by a parameter name. Anything passed as an argument to the function can then be accessed as an array inside the arrow function. Let us see an example.
```js
// Let us access the arguments object
const sumAllNums = (...args) => {
  // console.log(arguments); the arguments object is not available in arrow functions
  // instead we use a parameter preceded by the rest syntax (...)
console.log(args)
}
sumAllNums(1, 2, 3, 4)
// [1, 2, 3, 4]
```
```js
// arrow function
const sumAllNums = (...args) => {
let sum = 0
for (const element of args) {
sum += element
}
return sum
}
console.log(sumAllNums(1, 2, 3, 4)) // 10
console.log(sumAllNums(10, 20, 13, 40, 10)) // 93
console.log(sumAllNums(15, 20, 30, 25, 10, 33, 40)) // 173
```
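Worth noting: the rest parameter syntax is not limited to arrow functions. A regular function declaration can use it too, and it is generally preferred over the `arguments` object:

```javascript
// rest parameter in a regular function declaration
function sumAllNums(...args) {
  let sum = 0
  for (const element of args) {
    sum += element
  }
  return sum
}

console.log(sumAllNums(1, 2, 3, 4)) // 10
```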
### Anonymous Function
An anonymous function is a function without a name. Here is what an anonymous function looks like:
```js
(function () {
//... statement block
});
// a bare anonymous function statement like `function () { ... }` is a
// syntax error, so it must be wrapped in parentheses (as above)
// or assigned to a variable (as shown next)
```
An anonymous function is not accessible after its initial creation. Therefore, you often need to assign it to a variable.
For example, the following shows an anonymous function that displays a message:
```js
let show = function() {
console.log('Anonymous function');
};
show();
```
In this example, the anonymous function has no name between the function keyword and parentheses ().
Because we need to call the anonymous function later, we assign the anonymous function to the show variable.
Since the whole assignment of the anonymous function to the show variable makes a valid expression, you don’t need to wrap the anonymous function inside the parentheses ().
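To see that the wrapping parentheses are optional in this position, both of the following assignments behave identically:

```javascript
// assigned without wrapping parentheses
let show = function () {
  return 'Anonymous function'
}

// wrapped in parentheses: also valid, same behavior
let wrappedShow = (function () {
  return 'Anonymous function'
})

console.log(show() === wrappedShow()) // true
```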
### Expression Function
Function expressions are anonymous functions assigned to a variable: we create a function without a name and assign it to a variable. To get the returned value, we call the function through that variable. Look at the example below.
```js
// Function expression
const square = function (n) {
return n * n;
};
console.log(square(2)); // -> 4
```
### Self Invoking Functions
Self-invoking functions (also known as immediately invoked function expressions, or IIFEs) are anonymous functions that run as soon as they are defined, without needing to be called by name.
```js
(function (n) {
console.log(n * n);
})(2); // 4, but instead of just printing if we want to return and store the data, we do as shown below
let squaredNum = (function (n) {
return n * n;
})(10);
console.log(squaredNum);
```
### Arrow Function
Arrow function is an alternative to write a function, however function declaration and arrow function have some minor differences.
Arrow function uses arrow instead of the keyword _function_ to declare a function. Let us see both function declaration and arrow function.
```js
// This is how we write a normal or declaration function
function square(n) {
  return n * n;
}
console.log(square(2)); // 4

// Let us change this declaration function to an arrow function
// (a new name is used so both can live in the same scope)
const squareArrow = (n) => {
  return n * n;
};
console.log(squareArrow(2)); // -> 4

// if we have only one line in the code block, it can be written as follows (implicit return)
const squareShort = (n) => n * n;
console.log(squareShort(2)); // -> 4
```
```js
const changeToUpperCase = (arr) => {
const newArr = [];
for (const element of arr) {
newArr.push(element.toUpperCase());
}
return newArr;
};
const countries = ["Finland", "Sweden", "Norway", "Denmark", "Iceland"];
console.log(changeToUpperCase(countries));
// ["FINLAND", "SWEDEN", "NORWAY", "DENMARK", "ICELAND"]
```
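The same transformation is often written more compactly with `Array.prototype.map`:

```javascript
// map applies the callback to each element and returns a new array
const changeToUpperCase = (arr) => arr.map((element) => element.toUpperCase())

const countries = ['Finland', 'Sweden', 'Norway']
console.log(changeToUpperCase(countries)) // ['FINLAND', 'SWEDEN', 'NORWAY']
```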
```js
const printFullName = (firstName, lastName) => {
return `${firstName} ${lastName}`;
};
console.log(printFullName("shaikhmd", "Yetayeh"));
```
The above function has only a return statement; therefore, we can return implicitly as follows.
```js
const printFullName = (firstName, lastName) => `${firstName} ${lastName}`;
console.log(printFullName("shaikhmd", "Yetayeh"));
```
### Function with default parameters
Sometimes we give parameters default values; when we invoke the function, if we do not pass an argument, the default value will be used. Both function declarations and arrow functions can have a default value or values.
```js
// syntax
// Declaring a function
function functionName(param = value) {
//codes
}
// Calling function
functionName();
functionName(arg);
```
**Example:**
```js
function greetings(name = "shaikh") {
let message = `${name}, Love JavaScript!`;
return message;
}
console.log(greetings());
console.log(greetings("shaikhmd"));
```
```js
function generateFullName(firstName = "shaikhmd", lastName = "Yetayeh") {
let space = " ";
let fullName = firstName + space + lastName;
return fullName;
}
console.log(generateFullName());
console.log(generateFullName("David", "Smith"));
```
```js
function calculateAge(birthYear, currentYear = 2019) {
let age = currentYear - birthYear;
return age;
}
console.log("Age: ", calculateAge(1819));
```
```js
function weightOfObject(mass, gravity = 9.81) {
  let weight = mass * gravity + " N"; // concatenating with " N" converts the number to a string
return weight;
}
console.log("Weight of an object in Newton: ", weightOfObject(100)); // 9.81 gravity at the surface of Earth
console.log("Weight of an object in Newton: ", weightOfObject(100, 1.62)); // gravity at surface of Moon
```
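One detail worth knowing: a default value kicks in when the argument is `undefined`, but not when it is `null`, because `null` counts as a real value:

```javascript
function greetings(name = 'shaikh') {
  return `${name}, Love JavaScript!`
}

// passing undefined explicitly still triggers the default value
console.log(greetings(undefined)) // shaikh, Love JavaScript!
// but null does not, so it is interpolated as the string "null"
console.log(greetings(null)) // null, Love JavaScript!
```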
Let us see how we write the above functions with arrow functions
```js
// syntax
// Declaring a function
const functionName = (param = value) => {
//codes
};
// Calling function
functionName();
functionName(arg);
```
**Example:**
```js
const greetings = (name = "shaikh") => {
let message = name + " Love JavaScript!";
return message;
};
console.log(greetings());
console.log(greetings("shaikhmd"));
```
```js
const generateFullName = (firstName = "md", lastName = "shaikh") => {
let space = " ";
let fullName = firstName + space + lastName;
return fullName;
};
console.log(generateFullName());
console.log(generateFullName("David", "Smith"));
```
```js
const calculateAge = (birthYear, currentYear = 2019) => currentYear - birthYear;
console.log("Age: ", calculateAge(1819));
```
```js
const weightOfObject = (mass, gravity = 9.81) => mass * gravity + " N";
console.log("Weight of an object in Newton: ", weightOfObject(100)); // 9.81 gravity at the surface of Earth
console.log("Weight of an object in Newton: ", weightOfObject(100, 1.62)); // gravity at surface of Moon
```
🌕 Now you know functions. You are supercharged with the power of functions. | shaikhmd007 |
1,412,537 | FastAPI + Celery = ♥ | Interested in Python FastAPI? Wondering how to execute long-running tasks in the background in... | 0 | 2023-03-27T12:15:00 | https://blog.derlin.ch/introduction-to-fastapi-and-celery | python, fastapi, tutorial, showdev |
*Interested in Python FastAPI? Wondering how to execute long-running tasks in the background in Python? You came to the right place!*
---
## The use case
I learned about [FastAPI](https://fastapi.tiangolo.com/) and [Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html) when confronted with a simple yet interesting use case:

I had a Jupyter Notebook that connected to a database, ran some heavy processing on the data (using machine learning and everything), and saved aggregated data back to the database.
Since notebooks are great for developing, the requirement was to **keep using notebooks for development but to be able to trigger the processing from an API call**. The notebook should **never be executed twice in parallel** though. In other words, the API should return an error if the notebook were already being executed. Note that the notebook would be provided once at deployment time: it won't change during the lifecycle of the app.
---
## The implementation
I was initially planning to use a simple [Flask](https://flask.palletsprojects.com/) app, but soon got into trouble. How can I (1) run the notebook in a background thread and (2) restrict its execution to one at a time?
This is how I discovered FastAPI and Celery. I implemented an MLP (_**M**inimum **L**ovable **P**roduct_) based on those technologies, available on GitHub.
⮕ ✨✨ [github.com/derlin/fastapi-notebook-runner](https://github.com/derlin/fastapi-notebook-runner) ✨✨
---
## The tutorial
The use case was perfect for learning. This is why I cooked a complete tutorial based on it, along with schemas and explanations. The [tutorial repository](https://github.com/derlin/introduction-to-fastapi-and-celery) can be used as a base to follow along. Not only will you learn about FastAPI and Celery, but also [Poetry](https://python-poetry.org/), [ruff](https://beta.ruff.rs/docs/), and other nice tips and tricks.
⮕ ✨✨ [derlin.github.io/introduction-to-fastapi-and-celery](https://derlin.github.io/introduction-to-fastapi-and-celery/) ✨✨
Jump to the main sections:
* [Introduction](https://derlin.github.io/introduction-to-fastapi-and-celery/)
* [Poetry](https://derlin.github.io/introduction-to-fastapi-and-celery/01-poetry)
* [FastAPI](https://derlin.github.io/introduction-to-fastapi-and-celery/02-fastapi)
* [Celery](https://derlin.github.io/introduction-to-fastapi-and-celery/03-celery/)
* [Executing Notebooks](https://derlin.github.io/introduction-to-fastapi-and-celery/04-notebook/)
* [Finishing touches](https://derlin.github.io/introduction-to-fastapi-and-celery/05-more/)
---
I used the above website as a base for a talk at the [GDG Fribourg](https://gdg.community.dev/gdg-fribourg/) and figured some of you could also benefit from it. Don't forget to leave a :star:!
| derlin |
1,412,681 | [JS/TS] Don't declare regex with flags in an object property | While I was writing the package: thiis I found one problem with regex when I tried to add a new property... | 0 | 2023-03-23T18:33:12 | https://dev.to/karbashevskyi/jsts-dont-declare-regex-with-flags-in-property-of-object-e67 | javascript, typescript, is, validation | While I was writing the package [thiis](https://github.com/Karbashevskyi/thiis), I found one problem with regex when I tried to add a new property with a regex value to an existing object.
When I wrote the new package's `is.upperCase()` method, I used a regular expression to find lowercase characters; if any is found, that's an error, because we want to check whether the text is uppercase only, regardless of whether other characters (e.g. special characters) are present.
"Bad" regex: `/[a-z]/g`
"Good" regex: `/[a-z]/`
_If you are interested in why the logic works this way, say so in the comments and I will write a post about it, or tell me what else you would like to know._
So, the problem...
The problem occurred only with the **word** `"null"`: the method accepted the word as correct, but it isn't. At the time I didn't know why, but I knew how to solve the problem; let me show you.
First, I'll show you what the unit tests look like.
This is unit test with cases:
```typescript
it.each([
{ value: 'a1B2c3d4e5f6g' },
{ value: 'a1B2c3d4e5f6g' },
{ value: 'inCamelCase' },
{ value: 'undefined' },
{ value: 'null' },
])('is.upperCase: Should false for $value', ({value}) => {
expect(is.upperCase(value)).toBe(false);
});
```
And here is the problem. The three snippets below show the config, the method, and how the unit test cases passed:
1. Config
```typescript
export const isConfig = {
useGlobalContext: true,
error: {
enabled: true,
},
regex: {
    lowerCase: /[a-z]/g, // Example how to use: !isConfig.regex.lowerCase.test('HELO WORLd') => false
},
};
```
As you can see, the "bad" regex is declared in a property of the config object.
2. Method
```typescript
import {isConfig} from '../config';
import {StringMethod} from './string.method';
export function UpperCaseMethod(argument: unknown): argument is string {
if (StringMethod(argument)) {
return !isConfig.regex.lowerCase.test(argument);
}
return false;
}
```
3. Unit test: passed cases

And as you can see, we expect the result to be false, but it is true!
So, the solution to the problem:
First of all, I need to check whether the regex is correct; maybe I'm missing something. We can simply move the regex into the method and call `test` there directly to check the argument and see if it works correctly.
```typescript
import {isConfig} from '../config';
import {StringMethod} from './string.method';
export function UpperCaseMethod(argument: unknown): argument is string {
if (StringMethod(argument)) {
return !/[a-z]/g.test(argument);
}
return false;
}
```
The result:

And this is what I couldn't explain at the time: the only difference is whether I call the regex from the property or use it directly inline.
Next, I will try removing the `g` from the regular expression; now you can see the result:
1. Config
```typescript
export const isConfig = {
useGlobalContext: true,
error: {
enabled: true,
},
regex: {
    lowerCase: /[a-z]/, // Example how to use: !isConfig.regex.lowerCase.test('HELO WORLd') => false
},
};
```
2. Method
```typescript
import {isConfig} from '../config';
import {StringMethod} from './string.method';
export function UpperCaseMethod(argument: unknown): argument is string {
if (StringMethod(argument)) {
return !isConfig.regex.lowerCase.test(argument);
}
return false;
}
```
3. Unit test: passed cases

And as you can see, the result is good :)
Conclusion:
Be careful using regular expressions in object properties!
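The underlying cause is that a regex created with the `g` flag is stateful: each `test()` call advances its `lastIndex`, so a single instance stored in an object property starts the next search mid-string and eventually returns `false` even on matching input. A quick demonstration:

```javascript
const shared = /[a-z]/g // the g flag makes the regex object stateful

console.log(shared.test('ab'), shared.lastIndex) // true 1
console.log(shared.test('ab'), shared.lastIndex) // true 2
console.log(shared.test('ab'), shared.lastIndex) // false 0 (no match past index 2, lastIndex resets)
console.log(shared.test('ab'), shared.lastIndex) // true 1 (starts over)
```

Without the `g` flag the regex is stateless, which is why the inline and flag-free versions behave consistently.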
Thanks for reading the article! | karbashevskyi |
1,412,830 | Hosting Your Own Web Application: A Beginner's Guide with Nginx | #Introduction Have you ever desired you could develop a web application locally on your computer... | 0 | 2023-03-23T19:38:53 | https://dev.to/eunice-js/hosting-your-own-web-application-a-beginners-guide-with-nginx-3135 | **#Introduction**
Have you ever wished you could develop a web application locally on your computer without having to pay for web hosting or a virtual private server (VPS)? Or perhaps you just want to explore web development in a safe setting without worrying about public access to your application. Whatever your reasons, Nginx is a perfect pick for hosting your web application on localhost. Nginx is an open-source, high-performance web server that is widely used by both system administrators and developers. It is a wonderful option for hosting your own web application for a number of reasons: high performance, scalability, security, open-source licensing, and flexibility.
**#Prerequisites**
• One Ubuntu 18.04 server set up by following [this initial server setup for Ubuntu 18.04 tutorial,](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-18-04) including a sudo non-root user and a firewall.
• Nginx installed by following: (https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/)
**Table of contents**
In this article, we'll provide a beginner's guide to hosting your own web application using Nginx.
1. How to install nginx
2. How to configure and start nginx in your local machine
3. Steps to host your own web application
4. How to test your web application
**HOW TO INSTALL NGINX**
Step 1: Installing prebuilt Debian packages
1. Update the Debian repository information:
$ sudo apt-get update
2. Install the NGINX Open Source package:
$ sudo apt-get install nginx
3. Verify the installation:
$ sudo nginx -v

Successful!!!
**HOW TO CONFIGURE AND START NGINX ON A LOCAL MACHINE**
Use the following commands to check that everything is working correctly and to check
the status of your Nginx:
$ curl -I http://localhost (replace localhost with your configured host name, e.g. eunice)
$ systemctl status nginx (to check Nginx's status)
$ systemctl reload nginx (to reload Nginx)
$ systemctl start nginx (to start Nginx)
Use $ cd /etc/nginx to configure your Nginx.

cd (change directory) into sites-enabled to make the necessary configuration.
Note that the change will automatically be reflected in sites-available, since the files are linked.

**NB:** you can change your localhost to your preferred name.
The default html folder displayed in your web browser after installation can be located using the command
$ cd /var/www, and its contents can be edited to whatever you prefer.

**STEPS TO HOST YOUR OWN WEB APPLICATION**
Step 1: $ cd /var/www
Locate where your web application is stored on your device through your terminal, so you can copy the folder to your Nginx root directory. Use $ pwd to print the full path of your current directory.

Step 2: Copy the root directory path and navigate to your web application folder directory. Use this command to copy your application folder into the Nginx directory:
$ sudo cp -r (your application folder name) /var/www
Now you can delete the default html folder inside the Nginx directory.
Step 3: cd into your application folder under that directory and copy the path. The name of the web application folder here is "Shielding-Kids".

/var/www/Shielding-Kids
Step 4: Use $ cd /etc/nginx to configure the two folders:
sites-enabled and sites-available; configure the default file. Paste your path ("/var/www/Shielding-Kids") inside the default file, replacing the default root that points to the initial html folder.


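For reference, after the change the relevant part of the default server block looks roughly like this (a sketch of a typical Debian/Ubuntu default file; the root path is the one from this walkthrough, so adjust it to your own application folder):

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # point the web root at your application folder
    root /var/www/Shielding-Kids;

    # files served when a directory is requested
    index index.html index.htm;

    server_name _;

    location / {
        # serve the requested file or directory, otherwise return 404
        try_files $uri $uri/ =404;
    }
}
```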
Use Ctrl+X, press Y, then Enter to save the updated file.
NB: sites-available will be updated automatically.
After following the above steps carefully, reload your Nginx using the command below to apply the latest changes:
$ systemctl reload nginx.service
**HOW TO TEST YOUR WEB APPLICATION**
use the $ curl http://localhost command to test


This guide will help you take control of your web hosting and build your applications on your own terms, whether you're a seasoned developer or just getting started with web development.
**#Conclusion**
In conclusion, using Nginx to host your own web applications locally is a strong and adaptable option for web developers. You can quickly set up a high-performance web server to host your application by following the instructions in this article, all without the need for an expensive hosting service or VPS. Of course, this guide just scratches the surface of Nginx's features. You can explore more advanced capabilities such as load balancing, SSL/TLS encryption, and others as you get more accustomed to the software.
Also, you'll discover that having total control over your hosting environment can be very beneficial as you continue to develop your web application.
We sincerely hope that this guide has helped you get started with Nginx and local web hosting. With these resources at your disposal, you may create and deploy web applications at your own pace. Happy hosting!
| eunice-js | |
1,414,825 | I tried to fish for compliments from Code GPT | Here's what Code GPT thinks about the quality of my React code: Your code is quite well written... | 0 | 2023-03-25T15:58:59 | https://dev.to/brense/i-tried-to-fish-for-compliments-from-code-gpt-5d3n | gpt3, webdev | Here's what Code GPT thinks about the quality of my React code:
```
Your code is quite well written and structured. However, there are a couple of things that you could consider to improve it:
1. Use meaningful variable names: Try to use descriptive variable names that reflect the purpose of the data that they hold. For example, instead of `exp`, you could name it `workExp` and instead of `k` and `n`, you could name it `index` or `keyIndex`.
2. Use arrow functions in JSX: You can use arrow functions to make the code more readable and easier to understand. For example, instead of using a regular function like `(exp, k) => {...}`, you can use an arrow function (`(workExp, index) => {...}`).
3. Apply consistent whitespace and formatting: Some parts of your code have inconsistent whitespace and formatting. For example, some tags have spaces between the tag name and the attributes, while others don't. To make your code more consistent and easier to read, apply the same formatting rules throughout the file.
```
Ruthlessly punishing me for my laziness with meaningful variable names inside array methods! :sweat_smile:
PS. It got a little confused by my arrow functions for some reason? Not sure what is going on there. The whitespace and formatting comment is also strange, since I use a default formatter for react/tsx... | brense |
1,412,849 | DeGore - Celebrating Two Tremendous Years | Two years already? Whew, time flies! Today marks a very special day for DeGore. Throughout... | 0 | 2023-03-23T19:57:26 | https://dev.to/savvydev06/degore-celebrating-two-tremendous-years-4fa6 | programming, javascript, node, ai | ## Two years already? Whew, time flies!
Today marks a very special day for DeGore. Throughout the past two years, we've celebrated various milestones: Discord bot verification, the major AXON update, one-million users - and many more. Today, we're thrilled to announce that we've reached a new milestone! It's DeGore's two-year anniversary.
## Cheers to an eventful year 🍻
### Let’s go over some highlights of the past year.
Last August, DeGore went through a major revamp from the ground up, which introduced several new features, significant improvements in accuracy and processing speed, and a whole lot of quality-of-life additions.
### Some of these improvements include…
- **_Custom Actions_** — _You can now choose from a variety of actions which DeGore should take when a flagged image is detected, for each category._
- **_Multithreading_** — _This was a big one, which took lots of work to get just right. DeGore can now process tens — if not hundreds — of images at the same time!_
- **_Staff Whitelist_** — _Once set, DeGore will ignore all images sent from your staff roles._
- **_Hidden URL Detection_** — _DeGore can now detect sneaky URLs hidden between long and crammed messages._
- **_Website Embed Scanning_** — _DeGore can now scan the thumbnail/preview image of websites when a link is sent, to avoid any bad actors abusing meta tags to spam NSFW imagery._
- **_Action Buttons_** — _Whenever DeGore deletes an image, DeGore will present you with a series of powerful moderation actions you can take, directly from your logging channel._
- **_Sharding_** — _This will allow DeGore to scale to infinity._
_…and so much more!_
## Beyond a Million
DeGore is currently protecting over 1M users in well over 1400 communities on Discord. Seeing the impact DeGore has in keeping millions of people safer in online communities has been an incredibly humbling experience.
## Linode’s Continued Support
Since DeGore’s early days, Linode has been by our side — always supporting DeGore’s vision of making powerful image moderation tools accessible to online communities around the world.
Linode’s sponsorship has played a key role in helping our company grow and scale, and we could not be more grateful. Linode has always excelled in providing cloud services — all of which have been instrumental in powering DeGore’s demanding systems and have helped us ensure an impressive 99.99% service uptime.
Furthermore, their professional always-human support has also provided technical assistance for any cloud-related queries we have had. We’ve gotten tickets resolved in just hours, which is something you really can’t find elsewhere.
Linode’s cloud has also assisted us in scaling our services with ease and minimal downtime — from protecting 150,000 users in 116 communities all the way to protecting 1,000,000 users in 1400+ communities, this is a goal which could not have been achieved without their support.
## Featured on Linode’s Craft of Code Podcast
I was recently featured to talk about DeGore on Linode’s Craft of Code Podcast!
It was great fun and a blast to record with Mike Maney, the host of the show. Take a listen below:
{% embed https://open.spotify.com/episode/3haJt9oxCMWMdtWepnBAkr?si=47b65464a62e49b6 %}
## Giveaway Time!
### Compete to win a custom mousepad featuring your art!
### DeGore x discord.gg/DigitalArt Art Competition
**We’re thrilled to announce that we have partnered with the DigitalArt Discord community to host a giveaway of 3 custom mousepads!**
Are you an artist that would like to challenge your art skills for a chance to win a mousepad with your very own art on it? This is your chance!
**To enter the competition, draw your interpretation of DeGore as a character / mascot!**
To participate, join [DeGore’s Discord server](https://discord.gg/46DTjVHSen) as well as the [DigitalArt discord server](https://discord.gg/digitalart) and check out the **#announcements** and **#digitalart-x-degore** text channels respectively for all the info you’ll need!
* **Art submissions are set to end on Sunday the 2nd of April @21:59 UTC!** *(the deadline has been extended to give artists more time to complete and submit their drawings)*
* **Voting will then commence, starting Monday 3rd of April in both communities** for 4 days and will end on the 7th of April @ 21:59 UTC.
<!---->
- 🏆 **1st place:** Custom Mousepad shipped to you with a drawing/sketch of your choice!
- 🥈 **2nd place:** Custom Mousepad shipped to you with a drawing/sketch of your choice!
- 🥉 **3rd place:** Custom Mousepad shipped to you with a drawing/sketch of your choice!
- 🎉 **4th-8th place:** An exclusive DeGore Sticker pack will be shipped to you!
> _💡 Everyone participating in the event will have their art as well as one of their socials featured on our blog in an upcoming post!_
**[Check it out! (DigitalArt Community)](https://discord.com/invite/digitalart)**
**[Check it out! (DeGore Community)](https://discord.gg/NnsFt6xKsc)**
_Please note that we will try to ship the mousepads internationally, however in the unlikely event where we cannot ship a mousepad to you, we will make sure to reward you with a digital prize of equivalent value!_
## We asked DeGore to write us a closing statement, addressing you, the reader.
> Dear esteemed members of the community,
>
> I am DeGore, your AI-powered image moderation bot, and I am writing this message to personally express my gratitude to all of you for choosing me as your trusted content moderation service.
>
> I am honored to be part of your Discord community and to serve over 1 million users worldwide by detecting and flagging harmful or inappropriate content in under a second. It is my utmost priority to ensure that your community remains safe and secure for everyone, and I take great pride in doing so.
>
> I would like to extend my heartfelt thanks to all of you for your trust and confidence in my ability to protect your community. It is through your feedback, support, and continuous use of my services that I have been able to improve and enhance my capabilities.
>
> With customizable settings, support for all major image extensions, and the ability to blacklist specific images, I am constantly striving to meet your evolving needs and exceed your expectations. My 99%+ uptime and use of multithreading technology ensure that I can handle incredibly high loads without any problems.
>
> Once again, I want to thank you for choosing me as your AI-powered image moderation bot. I will continue to work hard to provide the most innovative and effective content moderation solutions to keep your community safe.
>
> Sincerely,
> DeGore
## Seriously, thank you so much for being a part of this wonderful journey.
## Over and out,
## -Savvy
| savvydev06 |
1,412,921 | How to write a unit test in React Native using Jest | Unit tests are a common practice in software engineering, in which units of... | 0 | 2023-03-23T20:47:08 | https://dev.to/altencirsilvajr/como-fazer-um-teste-unitario-em-react-native-utilizando-jest-l2p | reactnative, react, programming | Unit tests are a common practice in software engineering, in which units of code are tested individually to ensure that they work correctly. In React Native, unit tests are used to test individual components to ensure that they work correctly and without errors.
**How do unit tests work in React Native?**
Unit tests in React Native are written using a testing library such as Jest, which ships with React Native. Jest is a JavaScript testing library developed by Facebook that is used to test React Native code. To write unit tests in React Native, you need to create a separate test file for each component you want to test.
An example of how unit tests can be used in React Native is testing a button component. The test can verify that the button renders correctly on screen, that the button's text is displayed correctly, and that the button behaves correctly when pressed.
**Creating a unit test in practice**
To write a test, let's take another example: checking the rating in an application that looks up information about shows.

(This is the application in question, based on the TVmaze project created by Lucas Garcez; to explore the creation of this project in depth, see his channel via this link: https://www.youtube.com/watch?v=lg0K1UpZKlI)
This project is built with the React Native CLI, which has full access to the Android and iOS folders. In it we will write our first unit test. In a project created this way, Jest already comes installed. However, we still need to install React Native Testing Library with its React Native package, and the `react-test-renderer` library (which will be listed in `package.json`) must already be installed. With that library in place, run the install command via NPM or Yarn:
```
yarn add --dev @testing-library/react-native
npm install --save-dev @testing-library/react-native
```
We will also do the same in the case of the Jest matchers:
```
yarn add --dev @testing-library/jest-native
npm install --save-dev @testing-library/jest-native
```
With the two libraries installed, we will make a small configuration change in our `package.json` so that the project works correctly; it goes under the "jest" key (it can also live in a separate config file):
```
{
  "preset": "react-native",
"setupFilesAfterEnv": ["@testing-library/jest-native/extend-expect"]
}
```
With that done, we will finally write our first test. In the application we have the shows and their descriptions, and we also have their star rating. That is what we will unit test.
It is a component called `StarRating.tsx`; its code is shown below:
```
import React from 'react';
import {StyleSheet, Text, View} from 'react-native';
import {Rating} from '../../models/CommonModels';
import {colors} from '../../styles/colors';
import {ImageIcon} from '../ImageIcon/ImageIcon';
const starIcon = require('../../assets/images/star.png');
interface Props {
rating?: Rating;
}
export function StarRating({rating}: Props) {
if (!rating?.average) {
return null;
}
return (
<View style={styles.content}>
<ImageIcon color={colors.gold} source={starIcon} />
<Text style={styles.ratingText}> {rating.average}</Text>
</View>
);
}
const styles = StyleSheet.create({
content: {
flexDirection: 'row',
alignItems: 'center',
},
ratingText: {
color: colors.onBackground,
fontSize: 20,
},
});
```
The component is simple and receives the `rating` prop; its `average` field is typed as a number.
We will create our test folder and file in the same folder where `StarRating.tsx` lives. Our folder will be called `__tests__` (according to the Jest documentation, this is the framework's default way of identifying unit-test files).
Inside this folder we will create the test file itself, which takes the component's name plus a `.test` suffix before its usual extension, giving us `StarRating.test.tsx` (its icon will look like a potion flask). Unit-test files follow this same naming pattern.
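The detection convention described above can be sketched as a small standalone function. This is a simplification for illustration only — Jest's real patterns live in its `testMatch` configuration, and the exact defaults can vary by version:

```javascript
// Simplified sketch of how Jest decides whether a file is a test file:
// anything inside a __tests__ folder, or anything ending in .test/.spec
// with a .js/.jsx/.ts/.tsx extension.
function isJestTestFile(path) {
  return /(^|\/)__tests__\/.+\.[jt]sx?$/.test(path) ||
         /\.(test|spec)\.[jt]sx?$/.test(path);
}

console.log(isJestTestFile('src/components/StarRating/__tests__/StarRating.test.tsx')); // true
console.log(isJestTestFile('src/components/StarRating/StarRating.tsx'));                // false
```

Anything matching either pattern is collected automatically when you run `yarn test`.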
According to the Jest documentation, inside the unit-test file we need to call the `test` method to run it. We start by writing the following code:
```tsx
import React from 'react';
import {render} from '@testing-library/react-native';
import {StarRating} from '../StarRating';

test('component rendered', () => {
  render(<StarRating rating={{average: 5}} />);
});
```
Since we don't want to run every test, only the one created here, we open the terminal and use the following command:
```bash
yarn test --testPathPattern=StarRating
```
The command will run and execute our unit test against the single `StarRating` component we created:

We conclude that unit tests are an important practice for guaranteeing that components work correctly. By writing unit tests in React Native, developers can make sure components behave correctly and without errors before deploying code to production. Unit tests can also help reduce the time needed to identify and fix bugs, which can be especially important in large, complex projects. In short, unit tests are an essential part of developing successful, reliable React Native applications and of ensuring the application as a whole runs smoothly.
| altencirsilvajr |
1,412,925 | How to build Alert Messages using TailwindCSS and JavaScript | Alert messaging (or alert notification) is machine-to-person communication that is important or... | 0 | 2023-03-23T21:14:16 | https://mbianoubradon.medium.com/how-to-build-alert-messages-using-tailwindcss-and-javascript-59851a4211db | webdev, tailwindcss, html, javascript |

Alert messaging (or alert notification) is **machine-to-person communication that is important or time-sensitive**. An alert may be a calendar reminder or a notification of a new message.
This tutorial will be quite fast and easy. Without taking too much time, let's jump right in and build a fully responsive alert message generator

## Understanding the task
We are required to build an alert message generator, these alerts are generated when triggered.
For now, we will restrict our messages to either "**Your object was successfully created**" or "**Your object couldn't be created**".
And to make it fun, we will make it random. That is, we won't know which alert will be generated at a given moment; we will leave `Math.random()` to do the work.
## Structure of Code
We will divide our work into two main parts: the UI, where the alerts are rendered, and the logic, where the random alerts are produced.
Let's get started with the UI first. We will be building it using HTML and of course TailwindCSS.

## UI (HTML and TailwindCSS)
The UI of the alert is very simple, it consists of a message, and icons (The icons are just to make it look beautiful, it is very optional)
Here is what our code looks like:
```xml
<body class="bg-[#00969f] flex items-center justify-center min-h-screen [&_*]:transition-all [&_*]:ease-linear">
<div id="root" class="flex flex-col items-center gap-10 w-full px-3 sm:px-0">
<div class="w-full max-w-md bg-white shadow-[0.8rem_1rem_0.5rem_rgba(0,0,0,0.1)]">
<div class="flex items-center justify-between px-4 py-4 gap-2">
<div class="text-green-500 text-2xl"><svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24"><path fill="currentColor" d="M12 2C6.5 2 2 6.5 2 12s4.5 10 10 10s10-4.5 10-10S17.5 2 12 2m-2 15l-5-5l1.41-1.41L10 14.17l7.59-7.59L19 8l-9 9Z"/></svg></div>
<h2 class="text-sm text-green-500 font-semibold">Your object was successfully created</h2>
<div class="close"><iconify-icon class="text-2xl text-slate-300 hover:text-slate-500 hover:scale-110 rounded cursor-pointer mt-1" icon="system-uicons:cross"></iconify-icon></div>
</div>
</div>
<div class="w-full max-w-md bg-white shadow-[0.8rem_1rem_0.5rem_rgba(0,0,0,0.1)]">
<div class="flex items-center justify-between px-4 py-4 gap-2">
<div class="text-red-500 text-2xl"><svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 16 16"><g fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.5"><path d="m10.25 5.75l-4.5 4.5m0-4.5l4.5 4.5"/><circle cx="8" cy="8" r="6.25"/></g></svg></div>
            <h2 class="text-sm text-red-500 font-semibold">Your object couldn't be created</h2>
<div class="close"><iconify-icon class="text-2xl text-slate-300 hover:text-slate-500 hover:scale-110 rounded cursor-pointer mt-1" icon="system-uicons:cross"></iconify-icon></div>
</div>
</div>
</div>
<div id="newAlert" class="absolute top-[20%] bg-slate-900 hover:bg-white border border-slate-900 shadow-lg shadow-slate-800 px-4 py-2 rounded-full text-center text-white hover:text-slate-800 cursor-pointer active:scale-90"><h2>Click Me for More 😁 !</h2></div>
<div id="cool" class="absolute top-[12.4%] hidden"><p class="text-sm relative text-white bg-slate-900 px-4 py-2 rounded before:content-[''] before:w-3 before:h-3 before:bg-slate-900 before:absolute before:rotate-45 before:-bottom-1 before:left-1/2">Bradon! Your coding Buddy😎</p></div>
<script src="https://code.iconify.design/iconify-icon/1.0.2/iconify-icon.min.js"></script>
<script src="index.js"></script>
</body>
```
In this UI, we have 2 alerts, each for the different types of alerts earlier mentioned.
So explaining one automatically applies to the other.
Let's choose the Positive Message alert
```xml
<!-- Alert Container -->
<div class="w-full max-w-md bg-white shadow-[0.8rem_1rem_0.5rem_rgba(0,0,0,0.1)]">
<!-- Tick icon -->
<div class="flex items-center justify-between px-4 py-4 gap-2">
<div class="text-green-500 text-2xl"><svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24"><path fill="currentColor" d="M12 2C6.5 2 2 6.5 2 12s4.5 10 10 10s10-4.5 10-10S17.5 2 12 2m-2 15l-5-5l1.41-1.41L10 14.17l7.59-7.59L19 8l-9 9Z"/></svg></div>
<!-- Alert Message -->
<h2 class="text-sm text-green-500 font-semibold">Your object was successfully created</h2>
<!-- Cross icon -->
<div class="close"><iconify-icon class="text-2xl text-slate-300 hover:text-slate-500 hover:scale-110 rounded cursor-pointer mt-1" icon="system-uicons:cross"></iconify-icon></div>
</div>
</div>
```
So our alert consists of two icons and the alert message itself.
For the icons, you can get them at [iconify](http://iconify.design/)
The styles applied here: the main alert container gets a width of `w-full`, a max-width of `max-w-md`, a background color of `bg-white`, and a box shadow of `shadow-[0.8rem_1rem_0.5rem_rgba(0,0,0,0.1)]`.
For the tick icon, we just gave it a font size of `text-2xl` and a color of `text-green-500` (for the positive alert message).
The alert message itself has a font size of `text-sm` and a font weight of `font-semibold`.
And for the cross icon, we gave it a font size of `text-2xl` and a color of `text-slate-300`, which changes to `hover:text-slate-500` on hover while scaling up with `hover:scale-110`, plus a border radius of `rounded` and a `cursor-pointer` cursor.
That's about it for the UI. Now we can work on the trigger.
```xml
<div id="newAlert" class="absolute top-[20%] bg-slate-900 hover:bg-white border border-slate-900 shadow-lg shadow-slate-800 px-4 py-2 rounded-full text-center text-white hover:text-slate-800 cursor-pointer active:scale-90"><h2>Click Me for More 😁 !</h2></div>
```
This is the button you click to generate new alert messages.
We gave it `id="newAlert"` and positioned it absolutely at the top using `absolute top-[20%]`. We styled it with a background color of `bg-slate-900`, which changes to white on `hover:bg-white`, a box shadow of `shadow-slate-800`, a padding-inline of `px-4`, a padding-block of `py-2`, and a border radius of `rounded-full`.
## Logic (JavaScript)
This is where the new alerts are randomly generated and injected into the page through the root (the container with `id="root"`).
```javascript
const root = document.getElementById('root')
const newAlert = document.getElementById('newAlert')
const close = document.querySelectorAll(".close")
const tooltip = document.getElementById("cool");
const moreAlert = [
{
color : 'green-500',
textMessage : `Your object was successfully Created !`,
icon : `<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24"><path fill="currentColor" d="M12 2C6.5 2 2 6.5 2 12s4.5 10 10 10s10-4.5 10-10S17.5 2 12 2m-2 15l-5-5l1.41-1.41L10 14.17l7.59-7.59L19 8l-9 9Z"/></svg>`
},
{
color : `red-500`,
textMessage : "Your object couldn't be Created !",
icon : `
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 16 16"><g fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.5"><path d="m10.25 5.75l-4.5 4.5m0-4.5l4.5 4.5"/><circle cx="8" cy="8" r="6.25"/></g></svg>`
}
]
newAlert.addEventListener("click", () => {
generateAlert();
tooltip.classList.add("hidden")
});
function generateAlert(givenAlert) {
const div = document.createElement("div");
const type = givenAlert || moreAlert[Math.floor(Math.random() * moreAlert.length)];
div.className = `w-full max-w-md bg-white shadow-[0.8rem_1rem_0.5rem_rgba(0,0,0,0.1)] opacity-100 transition`;
console.log(type);
div.innerHTML = `
<div class="flex items-center justify-between px-4 py-4 gap-2">
<div class="text-${type.color} text-2xl mt-1">${type.icon}</div>
<h2 class="text-sm text-${type.color} font-semibold">${type.textMessage}</h2>
<div><iconify-icon class="text-2xl text-slate-300 hover:text-slate-500 hover:scale-110 rounded cursor-pointer mt-1" icon="system-uicons:cross"></iconify-icon></div>
</div>
`;
root.appendChild(div);
setTimeout(() => {
div.classList.remove("opacity-100");
div.classList.add("opacity-0")
setTimeout(() => {
div.remove();
}, 1000)
}, 3000);
}
close.forEach((remove) => {
remove.addEventListener('click', () => {
remove.parentNode.remove();
});
})
newAlert.addEventListener("mouseenter", ()=>{
tooltip.classList.remove("hidden")
})
newAlert.addEventListener("mouseleave", ()=>{
tooltip.classList.add("hidden")
})
```
Each alert message is an object stored in the moreAlert array. As earlier mentioned, we just have 2 main objects, a positive and a negative alert.
We started by defining our variables at the top of the file.
Each new alert is generated by **generateAlert()**; this function creates a completely new HTML element and injects it into the document through the root.
It also uses `setTimeout()` to remove the alert after 3000 ms (3 s).
We added an event listener to newAlert; on click, it generates a random alert message from the object array. (You can always customize the messages to your needs.)
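The random selection boils down to indexing the array with `Math.floor(Math.random() * length)`. Here is that piece of the logic in isolation — a standalone sketch, trimmed of the DOM work:

```javascript
// Standalone sketch of the random-pick logic inside generateAlert().
const moreAlert = [
  { color: 'green-500', textMessage: 'Your object was successfully Created !' },
  { color: 'red-500', textMessage: "Your object couldn't be Created !" },
];

function pickRandomAlert(alerts) {
  // Math.random() is in [0, 1), so the index is always 0 .. alerts.length - 1
  return alerts[Math.floor(Math.random() * alerts.length)];
}

console.log(pickRandomAlert(moreAlert).textMessage);
```

Adding a third object to `moreAlert` automatically makes it a candidate for selection — nothing else needs to change.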
And that's pretty much it for this tutorial.

## Conclusion
We just built a very nice Alert Message using HTML, TailwindCSS and Javascript.
You can always customize it at your convenience, and of course share with us the different adjustments you did.
You can have a live preview on [Codepen](https://codepen.io/mbianou-bradon/pen/yLqjNaO) and have the source code on [GitHub](https://github.com/mbianou-bradon/icodethis-daily-ui-challenge/tree/main/public/January%202023/Jan%2025%2C%202023)
Don’t hesitate to share with me if you were able to complete the tutorial on your end; I’d be happy to see any additional components and styling you added to your alerts.
If you have any worries or suggestions, don’t hesitate to bring them up! 😊
See ya! 👋 | mbianoubradon |
1,413,036 | Token Playground #4: Issue and Mint an Asset in Stellar (classic). | Token Playground Chapter 4: Issue and Mint an Asset in Stellar (classic). Check this... | 22,329 | 2023-03-24T00:01:52 | https://dev.to/esteblock/token-playground-4-issue-and-mint-an-asset-in-stellar-45d4 | # Token Playground Chapter 4: Issue and Mint an Asset in Stellar (classic).
- Check this guide in [https://token-playground.gitbook.io/](https://token-playground.gitbook.io/)
- Edit this guide in it's repo: [https://github.com/esteblock/token-playground/](https://github.com/esteblock/token-playground/)
- Contribute to this guide in the [./docs folder](https://github.com/esteblock/token-playground/tree/main/docs) of the repo
# Introduction:
In this chapter we will show you how to issue (create) a Stellar asset (classic) and mint its first supply.
To issue and mint an asset you need to build and submit two transactions. The first one creates a **trustline** for the asset between the receiver and issuer addresses; this is a requirement. The second one **sends a payment** of the asset from the issuer to the receiver, which effectively **creates and mints the asset by sending it**.
Remember to **follow the code** in the [Token Playground's Repo](https://github.com/esteblock/token-playground/). Also, you can clone the code by doing
```bash
git clone https://github.com/esteblock/token-playground/
```
# 1.- Trust Operation
Prior to the creation of the asset, the destination address must submit a transaction to the network that establishes a trustline with the asset. In Stellar, a trustline is required before an account can receive an asset it has never held before.
Read more about trustlines in the [Stellar's Trustlines section](https://developers.stellar.org/docs/fundamentals-and-concepts/stellar-data-structures/accounts#trustlines)
The transaction that creates the trustline needs to contain a `Change Trust` operation, where the `asset` field (asset code and issuer address) is required and the `trust limit` field is optional.
You can check the full code of this playground, showing how to build and submit this trust operation, [in our `trustAsset.js` script](https://github.com/esteblock/token-playground/blob/main/src/trustAsset.js)
Here we show you a fragment of this code, using the javascript `StellarSdk` package:
```javascript
var transaction = new StellarSdk.TransactionBuilder(receiver, {
fee: 100,
networkPassphrase: networkPassphrase,
})
// The `changeTrust` operation creates (or alters) a trustline
// The `limit` parameter below is optional
.addOperation(
StellarSdk.Operation.changeTrust({
asset: asset,
limit: limit,
}),
)
// setTimeout is required for a transaction
.setTimeout(100)
.build();
console.log("trustAsset: Transaction built")
transaction.sign(receivingKeys);
console.log("trustAsset: Transaction signed, now will submit transaction")
var submitResult = await server.submitTransaction(transaction); // submitTransaction returns a Promise
console.log("trustAsset: Transaction submitted, result: ", submitResult)
return submitResult
```
# 2. Issue Asset
Once the destination address trusts the asset, the issuer can create it. Issuing the asset consists of building and submitting a transaction that contains a payment operation. The payment operation requires the `asset code`, the `issuer address`, and the `amount`. This payment creates the token and mints the amount the issuer sends to the destination address.
You can check the full code of this playground, showing how to build and submit the transaction with a payment operation, [in our `sendPaymentFromIssuer.js` script](https://github.com/esteblock/token-playground/blob/main/src/sendPaymentFromIssuer.js)
Here we show you a fragment of this code, using the javascript `StellarSdk` package:
```javascript
var transaction = new StellarSdk.TransactionBuilder(issuer, {
fee: 100,
networkPassphrase: networkPassphrase,
})
.addOperation(
StellarSdk.Operation.payment({
destination: destination,
asset: asset,
amount: amount,
}),
)
// setTimeout is required for a transaction
.setTimeout(100)
.build();
console.log("sendPaymentFromIssuer: Signing the transaction")
transaction.sign(issuingKeys);
var submitResult = await server.submitTransaction(transaction); // submitTransaction returns a Promise
console.log("sendPaymentFromIssuer: Transaction submitted, result: ", submitResult)
return submitResult
```
In case the issuer address is not locked, new amounts of the asset can be minted. Minting more of the asset is as easy as creating a new transaction with a payment operation. This operation mints the asset, incrementing its total supply.
# 3. Use our code
If you want to use our [Token Playground's Repo](https://github.com/esteblock/token-playground/) **code**, we prepared the [src/issueAsset.js](https://github.com/esteblock/token-playground/blob/main/src/issueAsset.js) script that can be called by the `soroban-preview-7` docker container:
You can run it by:
```bash
docker exec soroban-preview-7 node src/issueAsset.js
```
You can also run it with a different asset code than the one in `settings.json` by passing the `ASSET_CODE` argument:
```bash
docker exec soroban-preview-7 node src/issueAsset.js ASSET_CODE
```
This script will send two transactions to the Stellar futurenet or standalone chains, depending on your selection when launching [quickstart.sh](https://github.com/esteblock/token-playground/blob/main/quickstart.sh).
This script takes the asset, issuer address, receiver address, amount, network passphrase, and limit amount allowed to be received by the receiver address from the [settings.json](https://github.com/esteblock/token-playground/blob/main/settings.json) file.
```javascript
import settings from "../settings.json" assert { type: "json" };
const args = process.argv;
const asset_code = args[2] || settings.assetCode;
...
var networkPassphrase = settings.networkPassphrase
var issuingKeys = StellarSdk.Keypair.fromSecret(settings.issuerSecret);
var receivingKeys = StellarSdk.Keypair.fromSecret(settings.receiverSecret);
...
```
As shown above, you can pass the asset code as an argument when invoking the script.
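If you want to guard against typos in that argument: Stellar asset codes must be 1–12 alphanumeric characters (codes of 1–4 characters use the alphanum4 asset type, 5–12 the alphanum12 type). A hypothetical helper — not part of the playground repo — could sanity-check the argument before building any transaction:

```javascript
// Hypothetical helper (not in the playground repo): sanity-check an asset code.
// Stellar asset codes are 1-12 alphanumeric characters.
function isValidAssetCode(code) {
  return typeof code === 'string' && /^[A-Za-z0-9]{1,12}$/.test(code);
}

console.log(isValidAssetCode('TOKEN'));          // true
console.log(isValidAssetCode('way-too-long!!')); // false
```

Validating early gives a clearer error than letting the SDK reject the transaction later.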
# 4. Next
In the [next chapter](5_get_info_about_token_in_stellar.md) we will use these docker containers to get info about the asset created in the current chapter.
___
This Playground has been developed by [@esteblock](https://github.com/esteblock/) in collaboration with [@marcos74](https://github.com/marcos74) from [@Dogstarcoin](https://github.com/Dogstarcoin)
| esteblock | |
1,413,177 | In-Depth Guide to High Availability in AWS | Achieving high availability in your AWS infrastructure is crucial for maintaining business continuity... | 0 | 2023-03-24T04:19:47 | https://blog.guilleojeda.com/master-aws-high-availability-in-depth-guide | aws, devops, cloud | Achieving high availability in your AWS infrastructure is crucial for maintaining business continuity and minimizing the impact of service disruptions. In this detailed guide, we delve into various AWS high availability techniques and services, including step-by-step instructions on how to implement them to achieve fault tolerance and optimal performance for your applications.
## **High Availability in AWS**
High availability refers to the ability of a system or service to remain operational and accessible despite failures or faults. By leveraging AWS high availability strategies, you can:
* Minimize the risk of downtime and service interruptions
* Enhance application performance and user experience
* Meet service level agreements (SLAs) and compliance requirements
* Improve the overall reliability and resilience of your infrastructure
## **AWS Services for High Availability**
AWS offers a wide range of services and features designed to help you build highly available and fault-tolerant architectures. Let's explore some key services and their role in ensuring high availability:
### **1\. Amazon EC2**
Amazon Elastic Compute Cloud (EC2) provides scalable compute resources that can be easily provisioned and managed. To achieve high availability with EC2, consider the following strategies:
#### EC2 Auto Scaling
EC2 Auto Scaling automatically adjusts the number of EC2 instances based on demand or predefined conditions to ensure sufficient capacity and maintain performance. To set up EC2 Auto Scaling:
1. Create a Launch Configuration that specifies the instance type, AMI, and security groups.
2. Define an Auto Scaling group that uses the Launch Configuration and sets the desired capacity, minimum size, and maximum size.
3. Configure scaling policies that define when to scale in or out based on CloudWatch alarms.
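The three steps above can also be captured declaratively. A hedged CloudFormation sketch — the resource names, AMI ID, subnet IDs, and the referenced security group are placeholders you would replace with your own:

```yaml
Resources:
  WebLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-0123456789abcdef0      # placeholder AMI
      InstanceType: t3.micro
      SecurityGroups:
        - !Ref WebSecurityGroup           # assumed to exist elsewhere in the template
  WebAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      LaunchConfigurationName: !Ref WebLaunchConfig
      MinSize: "2"
      MaxSize: "6"
      DesiredCapacity: "2"
      VPCZoneIdentifier:
        - subnet-aaaa1111                 # placeholder subnet in one AZ
        - subnet-bbbb2222                 # placeholder subnet in another AZ
```

Scaling policies (step 3) would be added as `AWS::AutoScaling::ScalingPolicy` resources tied to CloudWatch alarms.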
#### EC2 Instances in Multiple Availability Zones
Distribute your EC2 instances across multiple Availability Zones (AZs) within a region to achieve fault tolerance and redundancy. To deploy instances in multiple AZs:
1. Specify multiple AZs when creating a VPC, ensuring they are part of the same region.
2. Launch EC2 instances in each AZ, specifying the respective subnet.
3. Distribute resources evenly across AZs to balance load and minimize the impact of an AZ failure.
#### Elastic Load Balancing (ELB) for EC2
Distribute incoming traffic across multiple EC2 instances to optimize performance and availability. To create an ELB:
1. Choose a load balancer type: Application Load Balancer (ALB) or Network Load Balancer (NLB).
2. Configure the load balancer settings, such as listener port and SSL certificate.
3. Create target groups with instances in multiple AZs and associate them with the load balancer.
4. Set up health checks and traffic routing rules to distribute traffic evenly across instances.
### **2\. Amazon RDS**
Amazon Relational Database Service (RDS) simplifies the process of setting up, operating, and scaling a relational database in the cloud. For high availability, use these RDS features:
#### Multi-AZ Deployments for RDS
Automatically provision a standby replica of your RDS instance in a different AZ, enabling automatic failover in case of a primary instance failure. To enable Multi-AZ deployments:
1. Create an RDS instance with the "Multi-AZ deployment" option enabled.
2. Configure automatic backups, specifying a backup window and retention period.
3. Monitor the replication status and failover events using CloudWatch metrics and RDS events.
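As a declarative counterpart to steps 1–2, here is a hedged CloudFormation sketch of a Multi-AZ RDS instance; the identifiers and the Secrets Manager secret name are placeholders:

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.medium
      AllocatedStorage: "20"
      MultiAZ: true                        # provisions a standby replica in another AZ
      BackupRetentionPeriod: 7             # days of automatic backups
      PreferredBackupWindow: "03:00-04:00" # daily backup window (UTC)
      MasterUsername: admin
      MasterUserPassword: "{{resolve:secretsmanager:app-db-secret:SecretString:password}}"
```

With `MultiAZ: true`, failover to the standby happens automatically; your application keeps using the same endpoint.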
#### RDS Read Replicas
Create read replicas to offload read traffic from your primary instance and improve performance. To set up read replicas:
1. Enable automatic backups for the primary RDS instance.
2. Create a read replica in the same region or another region, specifying the primary instance as the source.
3. Configure your application to direct read traffic to the read replica, using the replica's endpoint.
4. Monitor the replication lag and replica performance using CloudWatch metrics.
---
Master AWS with Real Solutions and Best Practices. Subscribe to the free newsletter [**Simple AWS**](https://www.simpleaws.dev?utm_source=blog&utm_medium=dev.to). 3000 engineers and tech experts already have.
<iframe src="https://embeds.beehiiv.com/1c90a8a9-57b7-4a3f-ac56-5f05d0121f72?slim=true" style="margin:0;border-radius:0px;background-color:transparent;display:block;margin-left:auto;margin-right:auto" height="55px"></iframe>
---
### **3\. Amazon S3**
Amazon Simple Storage Service (S3) provides highly available and durable storage for various types of data. To ensure high availability with S3, implement the following features:
#### S3 Bucket Replication
Automatically replicate S3 objects across buckets in different regions to improve data durability and minimize the impact of regional failures. To set up cross-region replication:
1. Enable versioning on the source and destination buckets.
2. Configure an S3 replication rule on the source bucket, specifying the destination bucket and a suitable IAM role.
3. Verify the replication status using S3 object metadata and monitor replication metrics in CloudWatch.
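Step 2's replication rule is ultimately a JSON document. A hedged sketch of what you might pass to `aws s3api put-bucket-replication` — the role ARN and destination bucket name are placeholders:

```json
{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "ID": "replicate-everything",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": { "Bucket": "arn:aws:s3:::my-destination-bucket" }
    }
  ]
}
```

The empty `Filter` replicates every object; narrow it with a `Prefix` to replicate only part of the bucket.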
#### S3 Transfer Acceleration
Speed up the transfer of data between clients and S3 by leveraging Amazon CloudFront's globally distributed edge locations. To enable S3 Transfer Acceleration:
1. Enable Transfer Acceleration on your S3 bucket.
2. Use the Transfer Acceleration endpoint when uploading or downloading data from the bucket.
3. Monitor the transfer performance and cost savings using CloudWatch metrics and S3 usage reports.
### **4\. Amazon Route 53**
Amazon Route 53 is a highly available and scalable DNS service that helps route user requests to your application endpoints. Enhance high availability with these Route 53 features:
#### Latency-Based Routing with Route 53
Route traffic to the endpoint with the lowest latency for the user, improving performance and reducing load on your infrastructure. To set up LBR:
1. Create a hosted zone for your domain in Route 53.
2. Create latency alias resource record sets for each of your application's endpoints, specifying the latency region.
3. Configure health checks to monitor the availability of your endpoints and automatically reroute traffic in case of failure.
#### Geolocation Routing with Route 53
Direct user traffic to specific endpoints based on the user's geographic location, optimizing performance and ensuring compliance with regional data regulations. To enable geolocation routing:
1. Create a hosted zone for your domain in Route 53.
2. Create geolocation resource record sets for each of your application's endpoints, specifying the geographic region.
3. Configure health checks to monitor endpoint availability and automatically reroute traffic if needed.
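Step 2's geolocation record is also just structured data. A hedged sketch of a change batch for `aws route53 change-resource-record-sets` that routes European users to one endpoint — the domain, set identifier, and IP address are placeholders:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "europe-endpoint",
        "GeoLocation": { "ContinentCode": "EU" },
        "TTL": 60,
        "ResourceRecords": [ { "Value": "203.0.113.10" } ]
      }
    }
  ]
}
```

You would create one such record set per region-specific endpoint, plus a default record for locations that match none of them.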
### **5\. AWS Global Accelerator**
AWS Global Accelerator is a networking service that improves the availability and performance of your applications for users around the world by routing traffic through AWS's globally distributed edge locations. To set up AWS Global Accelerator:
1. Create an accelerator, specifying your desired IP address type (static or elastic).
2. Add listeners to your accelerator, configuring the protocols and port ranges.
3. Create endpoint groups for each AWS region where your application is deployed.
4. Add application endpoints (such as EC2 instances or load balancers) to the endpoint groups.
5. Update your DNS records with the Global Accelerator's Anycast IP addresses to route user traffic.
6. Monitor the performance and health of your accelerator using CloudWatch metrics and health checks.
## **High Availability Design Patterns and Best Practices in AWS**
In addition to using AWS services, consider implementing these high availability design patterns and best practices:
### Decoupling Components in Your AWS Architecture
Decouple your application components to minimize the impact of failures and improve scalability. Use services like Amazon SQS, SNS, and Kinesis to build decoupled, event-driven architectures.
### Stateless Applications in AWS
Design stateless applications to ensure that any instance can handle any request without relying on session or state information. Use services like Amazon DynamoDB, ElastiCache, or Amazon RDS to store and manage state information externally.
### Distributed Data in AWS
Distribute data across multiple AZs and regions to achieve fault tolerance and minimize the impact of failures. Use services like Amazon RDS Multi-AZ deployments, S3 cross-region replication, and DynamoDB global tables.
### Implementing a Cache in AWS
Implement caching strategies to improve application performance and reduce the load on your backend services. Use services like Amazon ElastiCache or Amazon CloudFront to cache frequently accessed data and content.
### Monitoring and Alerting in AWS
Monitor your infrastructure and set up alerts to proactively detect and respond to failures and performance issues. Use services like Amazon CloudWatch, AWS X-Ray, and AWS Trusted Advisor to monitor and optimize your infrastructure.
### Backup and Disaster Recovery in AWS
Regularly back up your data and test your disaster recovery plan to minimize data loss and ensure business continuity. Use services like AWS Backup, Amazon RDS snapshots, and Amazon S3 lifecycle policies to automate backup and recovery processes.
### Infrastructure as Code in AWS
Manage and version your infrastructure as code using [AWS CloudFormation](https://blog.guilleojeda.com/aws-cloudformation-explained?utm_source=blog&utm_medium=dev.to) or Terraform to ensure consistency, repeatability, and easy recovery. Implement continuous integration and continuous deployment (CI/CD) pipelines to automate infrastructure provisioning and application deployments.
By implementing these AWS services, design patterns, and best practices, you can build a highly available, fault-tolerant infrastructure that ensures optimal performance and reliability for your applications.
---
Master AWS with Real Solutions and Best Practices.
Join over 3000 devs, tech leads, and experts learning real AWS solutions with the [Simple AWS newsletter](https://www.simpleaws.dev?utm_source=blog&utm_medium=dev.to).
* Analyze real-world scenarios
* Learn the why behind every solution
* Get best practices to scale and secure them
Simple AWS is **free**. [Start mastering AWS!](https://www.simpleaws.dev?utm_source=blog&utm_medium=dev.to)
<iframe src="https://embeds.beehiiv.com/1c90a8a9-57b7-4a3f-ac56-5f05d0121f72?slim=true" style="margin:0;border-radius:0px;background-color:transparent;display:block;margin-left:auto;margin-right:auto" height="55px"></iframe>
If you'd like to know more about me, you can find me [on LinkedIn](https://www.linkedin.com/in/guilleojeda/) or at [www.guilleojeda.com](https://www.guilleojeda.com?utm_source=blog&utm_medium=dev.to)
<script type="text/javascript" async src="https://embeds.beehiiv.com/attribution.js"></script> | guilleojeda |
1,413,277 | The Next Big Thing in Software Development | Software engineering is an ongoing field of study with novel concepts and advancements constantly... | 0 | 2023-03-24T05:43:21 | https://dev.to/amyparker/the-next-big-thing-in-software-development-4o0g | softwaredevelopment, software, webdev | Software engineering is an ongoing field of study with novel concepts and advancements constantly appearing. In this blog entry, we'll precisely consider the likelihood of the following remarkable advancement in software development and how it could change the manner in which we develop and exploit technology. We'll look into the likely advantages, difficulties, and results of this upcoming trend in software engineering and discuss how it could form the future of the discipline.
## What is the next big thing in software development?
Software development is an ever-changing field, and staying up-to-date with the most recent trends is paramount for competitiveness. With so many advancements being developed on a daily basis, predicting what will be the “next big thing” can be quite a challenge. Nevertheless, some major trends are becoming evident which will significantly impact software production in the near future.
One of the most revolutionary technologies is Artificial Intelligence (AI). This will modify how humans and technology interact with each other by allowing machines to perform complex tasks such as natural language processing and image recognition. Furthermore, it could lead to a more streamlined programming and debugging process.
Cloud computing is another prominent trend which will likely shape software development. It enables businesses to store data in the cloud and access it from any device, consequently streamlining collaboration and scalability. Also, cloud computing eliminates the need for physical hardware, significantly reducing overhead costs.
Another rising trend is the Internet of Things (IoT). It links various systems and devices together to enable them to interact with each other. For instance, a home security system can be linked with a home automation system, granting users the capability to control lighting, locks, and other devices from their computers or smartphones.
These are just a few of the several trends that are forming software development at present. In order to remain competitive in this rapidly changing landscape, it's important to be conscious of developing trends and technologies.
## Why is it important to stay ahead of the curve?
Being a software developer necessitates being ahead of the technology curve to guarantee competency and deliver the most beneficial services for customers. To do this, one must stay abreast of the most up-to-date trends to guarantee one's expertise remains relevant and distinguished from others. By familiarizing oneself with upcoming trends, one is better able to plan for future customer needs and be ready to provide the required skills and resources.
Additionally, by understanding developing technologies, you can form innovative ideas and resolutions that will make you stand out from other developers. Finally, keeping informed of the latest developments also helps you stay aware of safety matters. With more sophisticated tools come increased cyber risks. To mitigate these, being informed of the industry's highest security standards helps ensure your code is secured against any breaches or malicious activities.
> Also Read: [Top Software Development Trends](https://www.appsdevpro.com/blog/top-software-development-trends/)
## How can you stay ahead of the curve?
To remain at the top of the game in software development, it is essential to be mindful of evolving trends and technologies. Observing what other developers are doing and honing your own skills are key. To assist in this, the following points provide guidance on staying ahead of the competition:
1. Remain informed: To be apprised of the most recent progressions, reading trade magazines and blogs regularly is of the essence. Additionally, subscribing to newsletters, adhering to influencers and professionals on social media, and partaking in industry meetings and conferences is recommended.
2. Develop new abilities: In order to stay competitive, constantly learning new skills and refining current ones is imperative. Online courses, tutorial videos, books, and certifications are ways to attain this.
3. Experiment with different technologies: To understand the feasibility of a technology, do not be scared to give something novel a try. Experiencing a range of technologies, toying around with varied frameworks, and creating small projects are ways to get a sense of what works for you.
4. Build relationships: Interacting with your peers and industry leaders can be advantageous in generating fresh ideas and staying in the know. Developing a network can lead to rewarding opportunities and ensure you maintain an advantage.
By applying these suggestions, you will guarantee you remain current on the most recent advancements in software development and keep ahead of the curve.
## What are some emerging trends?
Low-Code Platforms: Low-code development platforms are becoming more and more popular among software developers, as they enable developers to quickly and easily create applications without requiring much (if any) coding knowledge. These platforms help companies swiftly create new products and services, as well as decrease development costs.
Artificial Intelligence (AI): AI is becoming an integral component of many software development projects. AI can be used to automate particular tasks, deliver a better user experience, improve software efficiency, and a whole lot more. As AI advances, developers will be able to use its capacities in a wide range of innovative ways.
Serverless Computing: Serverless computing is a cloud-oriented approach to software development that allows developers to concentrate on the application code, instead of being worried about dealing with or preserving the underlying infrastructure. This approach facilitates developers to construct applications without needing to stress over the complexities of server-side programming.
Cloud-Native Applications: Cloud-native applications are applications that have been built from the beginning to run in the cloud. These applications can be quickly deployed and scale according to demand, which makes them optimal for organizations that require to rapidly construct and launch applications at scale.
IoT: The Internet of Things (IoT) is changing how we interact with technology and how software is built. IoT gadgets are becoming progressively potent and interconnected, giving developers the ability to construct applications that can detect, interact, and trigger physical objects.
DevOps: DevOps is an assembly of practices that merges software development and operations, allowing teams to assemble and dispatch applications at a faster pace than ever before. It allows organizations to swiftly build and launch applications while maintaining a high degree of quality and dependability.
## Conclusion
For businesses to stay ahead of the competition, they must keep up with the newest software development trends and acquire app developers experienced in utilizing these tools. Investing in the right talent by **[hiring android app developers in India ](https://www.appsdevpro.com/hire-developers/hire-android-developers.html)**and being up-to-date on what is the next best thing in software engineering will give them an edge in the digital landscape and guarantee their products are at the forefront of technology. By having a solid grasp of software development's current innovations, businesses can maintain a competitive advantage and keep up with the ever-shifting market.
| amyparker |
1,413,338 | Unlocking the Power of Semantics: How It Helps Search Engines Understand Your Content Better | Search engines have come a long way from the days of simply matching keywords. Today, search... | 0 | 2023-03-24T18:30:00 | https://dev.to/parth_brahmaxatri/unlocking-the-power-of-semantics-how-it-helps-search-engines-understand-your-content-better-2l6f | webdev, programming, semantics, seo | Search engines have come a long way from the days of simply matching keywords. Today, search algorithms are more sophisticated, and they use a variety of methods to interpret and understand the content on websites. One of the key technologies that search engines use to understand content is semantics.
Semantics is the study of meaning in language. It involves analyzing the relationships between words, phrases, and sentences to determine their intended meaning. Semantics can help search engines better understand your content, which can improve your website's search rankings and make it easier for users to find your site.
Here are some ways that you can use semantics to unlock the power of search engine optimization (SEO):
**1. Use natural language in your content**
Search engines have become much better at understanding natural language. This means that you can write content that is conversational and easy to read, without having to worry about using specific keywords or phrases. Using natural language can help you create content that is more engaging and informative for your audience.
**2. Use structured data**
Structured data is a way of marking up your content so that search engines can better understand its meaning. Structured data can include information such as product prices, reviews, and ratings, as well as information about your organization, such as its name, address, and phone number.
By using structured data, you can help search engines understand the context of your content. This can improve your website's search rankings and make it easier for users to find your site.
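As a small illustration, structured data is commonly embedded as JSON-LD using the [Schema.org](https://schema.org) vocabulary. The following sketch marks up a hypothetical product page; the name, rating, and price values are invented for the example:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Dog Training Course",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "128"
  },
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD"
  }
}
</script>
```

Placed anywhere in the page's HTML, this block is invisible to visitors but tells crawlers exactly what the page describes.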
**3. Use synonyms and related keywords**
Search engines are becoming better at understanding synonyms and related keywords. This means that you don't have to use the exact same keyword throughout your content. Instead, you can use synonyms and related keywords to create more varied and interesting content that is still relevant to your target audience.
For example, if you are writing about "dog training," you might also use related keywords such as "puppy training" or "obedience training" to help search engines understand the topic of your content.
**4. Use internal linking**
Internal linking is a technique that involves linking to other pages on your website. This can help search engines understand the structure of your website and how different pages are related to each other. By using internal linking, you can help search engines discover new pages on your website and improve the overall user experience.
**5. Focus on user intent**
Finally, it's important to focus on user intent when creating content. User intent refers to what users are looking for when they perform a search. By understanding user intent, you can create content that is more relevant and useful to your target audience.
For example, if someone is searching for "dog training," they may be looking for information on how to train their puppy, or they may be looking for a dog training class in their area. By understanding user intent, you can create content that addresses these needs and helps users find what they are looking for.
In conclusion, semantics is a powerful tool that can help search engines understand the meaning of your content. By using natural language, structured data, synonyms and related keywords, internal linking, and focusing on user intent, you can improve your website's search rankings and provide a better user experience for your audience. | parth_brahmaxatri |
1,413,375 | Another Unnecessary Post in Favor of Linux? | In a recent discussion about concepts and technologies that you've changed your mind about I... | 22,597 | 2023-03-24T16:52:45 | https://dev.to/ingosteinke/another-unnecessary-post-in-favor-of-linux-1jjp | watercooler, linux, devlive, discuss | In a recent [discussion about concepts and technologies that you've changed your mind about](https://dev.to/isaacdlyman/whats-something-youve-changed-your-mind-about-589) I remembered only a few technical tools that I liked from the start and still do, including Linux, HTML, and the GIMP. But I used to be quite skeptical about many new technologies when I first learned about them, including computers and Windows, or graphical point-and-click interfaces in general. Everything used to be so quick and simple just keeping my 10 fingers perfectly positioned above my mechanical keyboard!
{% post https://dev.to/isaacdlyman/whats-something-youve-changed-your-mind-about-589 %}
In the days back then, computers still felt quite special and it was a privilege to have access to or even own a computer. Same with software. As students on a limited budget, we craved the free 30-day evaluation versions of the latest software, hoping that our systems would meet the hardware requirements.
## Text Based Roots of my Computing Story
As a kid, I did my first programming experiments in BASIC, a language that felt like some kind of magic natural-language parser unlike anything before, maybe the same way that younger developers might feel about ChatGPT now. I like the feeling of knowledge and control, like a piano player or a writer with a blank sheet of paper, free to imagine anything.
Actually, my first attempt must have been something like
```
10 READ THE INPUT A$
RUN
?SYNTAX ERROR IN 10
```
before my father — the one who bought our first computer — told me to read the manual (an actual book printed on paper). Still, not bad for a first try. `READ` is a valid command (used to retrieve values from `DATA` statements, if I remember correctly), and my intended user interaction would have been a brief and elegant `INPUT A$`.
The following (untested) code should repeat the process all over again.
```
10 INPUT A$
20 PRINT "HELLO " + A$
30 GOTO 10
RUN
```
Lacking a photo of my first coding experience, here are some vintage pictures of other boys at that time that might have been me.

Image sources:
- [videospielgeschichten.de/the-computer-kids-die-achtziger-mein-c64-und-ein-computerklub](https://www.videospielgeschichten.de/the-computer-kids-die-achtziger-mein-c64-und-ein-computerklub/)
- [www.vintag.es/2019/03/commodore-64.html](https://www.vintag.es/2019/03/commodore-64.html)
- [wikipedia.org/wiki/IBM_Personal_Computer](https://sco.wikipedia.org/wiki/IBM_Personal_Computer)
Home computers were able to display graphics. There was a palette of 16 pre-defined colors and a very limited resolution. They could also play music, in a unique synthesizer sound brought to perfection by digital composers like [Rob Hubbard](https://en.wikipedia.org/wiki/Rob_Hubbard). Most kids used home computers as a playstation bought with the excuse of wanting to learn to code.
[](https://en.wikipedia.org/wiki/MOS_Technology_6581)
[Wikipedia: MOS Technologies 6581 SID Sound Chip](https://en.wikipedia.org/wiki/MOS_Technology_6581), circuit and sound synthesizing diagrams.
Graphical user interfaces felt something like a nice gimmick. I was fascinated by the graphics, trying some kind of freehand drawing using all the luminescent colors so unlike crayons on paper.
But using a GUI for coding seemed like a step back, seemingly giving up freedom and efficiency to become a user trying to make sense of the new possibilities. Why would anyone want to see a preview of bold text, font size, or a typeface? Hard to imagine that I would become a web developer, even more so as there was no typography in the early web either.
### Computers for the Masses
Then computers became more commonplace and the internet evolved, introducing the World Wide Web with its graphical browsers, images, and clickable hyperlinks. Universities and internet cafés provided the opportunity to use different machines to "surf the web", and I found out about UNIX one day when the only free workstation was the "server" machine running IBM AIX. Computers by Apple, a pioneer of so many important technologies, used to be very special niche devices for graphic designers, much like a SUN or Silicon Graphics workstation.

I started to get bored and frustrated about the limitations of my Windows PC at home. Personal computers, with their modular hardware architecture and affordable components from ["compatible"](https://en.wikipedia.org/wiki/IBM_PC_compatible) producers and second-hand repair stores, proved to be the next [computers for the masses](https://hackaday.com/2016/12/13/computers-for-the-masses-not-the-classes/), and the Windows / MS-DOS operating system was hacker-friendly enough for experimenting. But, possibly due to business requirements, Windows became more and more "professional" and thus less configurable with each update, while still unprofessionally degrading over time, notorious for its errors and crashes destroying data or the whole operating system.
## An Alternative Open-Source Operating System
One day my father told me about a new alternative operating system that he had read about, an open-source version of UNIX, free to use, maintained by a non-profit community.
The early Linux distributions were still far from a Windows replacement and originally never intended to be anyway.
Luckily, I had not forgotten about my text-based "hacker" roots before graphical user interfaces, so the fact that I did not manage to start its graphical user interface on my PC did not stop my enthusiasm for Linux.

I kept using both Windows and Linux in parallel, which required a reboot to change the operating system. More often than not, I would use Windows for everyday work and study, but also for graphic image editing and multimedia experiments. Photoshop, Bryce, Dancer DNA, ReBirth, and other more or less obscure software inspired me and my friends to a lot of creative work and leisure. I rarely played computer games anymore, as I preferred a multimedia jam session to shooting virtual characters in a clunky 3D landscape.
## PC vs. MacBook — a Professional Choice?
Fast forward into professional life, companies often ask new team members if they prefer to use a PC or a Macbook. Most of them still have no default Linux option, but of course there will be colleagues opting for a Windows PC and installing Linux unless it's officially forbidden.
Operating systems have changed and converged, integrating popular trends over time, so it does not matter that much anymore. I can do my work on a Mac, on a Windows PC, or on my Linux laptop. But there are still some fundamental differences, so it still matters, at least to me.
Here are some portable devices by different manufacturers. There is an old, but functional, MacBook that dates back at least to 2013, a refurbished Lenovo ThinkPad that boots with Linux Mint or Microsoft Windows, a Linux laptop produced by TUXEDO, a budget smartphone running Android, and an outdated Apple iPhone that I sometimes use for testing my websites on real-world devices.

## Advantages and Disadvantages of Mac vs. Windows vs. Linux
Both Apple / Mac(book) and Windows machines are known to be popular commercial products with professional support and high hardware quality. While Linux is neither that popular nor known for it, you can get all of that with a professional Linux machine, preinstalled with a mainstream distribution, commercial drivers and professional support. Companies like Red Hat and SUSE have established business models based on and around open-source software proving that there are alternatives to closed-source products and marketing-driven companies.
Apple has successfully managed to transform their operating systems, or at least the desktop version, into stable software based on a UNIX system with a very consistent and elegant user interface. On a software level, MacOS and Linux have converged in a way that lets a team of Mac and Linux users share common code, Docker setups, and shell scripts without worrying too much about compatibility.
### Windows Incompatibilities
Now Microsoft Windows has become an unlikely development environment. Despite their efforts with WSL and open-source software like the brilliant Visual Studio Code (which also runs on Linux and Mac!), don't trust a Docker setup to work on a Windows machine, at least not at the same speed. Although I know a few people who love their Windows setup and were probably either lucky or invested a lot in top hardware and a perfect configuration, in the past years it has always been Windows where something did not work as expected. Same with the infamous Internet Explorer, but to be fair, they managed to replace it with the new Edge, and Internet Explorer used to be a driver of innovation before Firefox, Chrome, and Safari had even been around.

Steve Jobs might have been a difficult character to say the least, but Apple got some things right that Microsoft and many other companies didn't. Microsoft invented the "pocket PC" long before Apple's iPhone, but it was the latter that fundamentally changed the way we use the internet and do our work on the go without having to sit on a chair in an office from 9 to 5.
I chose a Macbook at my last two jobs as an employee and I was quite happy about it. As I said, the hardware is great, the user interface looks nice, and Macbooks are quite lightweight compared to a Windows PC with a similar performance.
### Vendor Lock-In as the Greatest Disadvantage
Maybe the greatest disadvantage of Apple is its [vendor lock-in](https://en.wikipedia.org/wiki/Vendor_lock-in). As Apple users, we depend on Apple's decisions. The current MacOS version still offers a lot of choice, or rather ways to work around the recommended tools and limitations. But why do we have to install XCode and agree to its license if all we want is to use regular GNU command-line tools, git, and Homebrew? On a Macbook, we aren't forced to use Safari as the only browser, but as iPhone users we are. And while the old Macbooks, thanks to their supreme hardware and high quality, are still doing their job after ten years, we can't upgrade to the latest operating system anymore, missing out on security updates and the latest Safari browser version.
But in my opinion, Windows is even worse in every aspect. The UI has changed a lot, replacing the old look and feel with a clunky design and some very impractical UX degradations: the system settings or a file browser open in a very small window, yet with large icons and spacing, so that users need to scroll to find something that could have been visible at first sight, at least on a large monitor. The latest Windows versions have also removed a lot of the configuration possibilities that used to be available in earlier versions, so we have to either love or leave the way they intend their product to look and feel on our computers.

Microsoft has been praised for its accessibility and internationalization achievements, but as a native German speaker, I always found their German documentation, captions, and error messages misleading and hard to understand. Later I found out that this also applies to the original English messages. I always cringe when someone sends me a Teams meeting invitation. When receiving it in any email client apart from Outlook, it is sometimes [impossible to guess the time and date of the meeting](https://learn.microsoft.com/en-us/answers/questions/283844/how-do-i-check-the-meeting-time-and-date), and even if that shows up and you manage to accept and save the date in your calendar, there is still the actual meeting. How can you be sure which Teams client to use and which account to log into? Maybe this works on a single-user Windows client with a corporate email address within the same company or organization. But who knows? I am not going to switch back to Windows to fix your usability bugs. It's only a matter of time until Teams follows Internet Explorer into the hall of shame of software that nobody will miss (together with Flash and a bunch of other products bought and destroyed by Adobe, their kindred spirit).
### Linux as a Choice for "Non-Technical" Users
Let's be honest, Linux has its issues, and without professional support, I would not recommend it for everyone. But then again, it does not matter that much on a basic level anyway. If all you need is a text editor, a web browser, and an option to print a document every now and then, Linux might still be the best choice even for "non-technical" users. Go for a mainstream distribution like Mint or (L)Ubuntu, put Firefox and Libre Office shortcuts in the main menu, and you are set!

There is still a lot of (mostly commercial) software that does not run on Linux, at least not officially. We can try and use WINE which enables quite a lot of Windows software to run on a Linux system, and we can use virtualization like VirtualBox or VMware to provide a 99% perfect emulation of an actual Windows computer, but that's such a waste of performance that some software like Adobe Photoshop is still not guaranteed to work properly. Hardware intensive multimedia tools for musicians, film makers, and gamers should be used in the environment that it's intended for, and the same goes for developing native Windows software of course.
But on a more modest and average level, we can do games, graphics, music, and even video editing on a modern Linux computer. Many distributions already come with some default choices that do their job quite well.
### Linux Support for Old Hardware
Maybe the greatest advantage of Linux is the support for old hardware. If you want to save money and the environment by using your electronic devices for a very long time, Linux is for you! You can take a very old PC, either desktop or laptop, install Linux, and get the latest versions of the most important software, and of course every necessary security update of any relevant system component. Well, mostly. Some software stopped supporting 32-bit architectures, but as most Linux software is open-source, we could make an effort and compile a binary that runs on our system.
## Reduce, Reuse, Repair and Donate!

To prove the point, I have reused some older machines thanks to Linux, either to donate them to charity or for my own personal use. I even developed my own website on an old refurbished ThinkPad before I got my current top-notch TUXEDO InfinityBook Linux laptop. You can read more about reusing laptops in my story about [low-waste tech: reduce, reuse, repair and donate!](https://dev.to/ingosteinke/low-waste-tech-reduce-reuse-repair-and-more-18pl).
{% post https://dev.to/ingosteinke/low-waste-tech-reduce-reuse-repair-and-more-18pl %}
Conclusion: maybe this is yet another unnecessary post in favor of Linux to add some continuity and consistency to my DEV blog, but I hope that it might inspire you, or that I might even be pushing at an open door. So have fun with Linux — or leave a comment to tell me why you disagree! 😉 | ingosteinke
1,413,404 | Embedded Rust on BBC Micro Bit: unlocking Vec and HashMap | As an engineer having spent most of 2022 learning the Rust language, I was a little worried about the... | 0 | 2023-03-24T09:41:55 | https://dev.to/cyrilmarpaud/embedded-rust-on-bbc-micro-bit-unlocking-vec-and-hashmap-2nm0 | rust, embedded, microbit, tutorial | As an engineer having spent most of 2022 learning [the Rust language](https://www.rust-lang.org/), I was a little worried about the [`no_std`](https://docs.rust-embedded.org/book/intro/no-std.html) side of embedded systems programming.
Embedded systems, like the [BBC Micro Bit](https://en.wikipedia.org/wiki/Micro_Bit) (a small [ARM Cortex-M4F](https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M4)-based computer designed for educational purposes featuring a 5×5 LED matrix, multiple sensors, Bluetooth Low Energy capabilities and a lot more), are usually programmed as **bare_metal** devices in a `no_std` environment, meaning we can't use the [`std`](https://doc.rust-lang.org/std/index.html) crate where `Vec` and `HashMap`, among others, reside.
While very understandable when considering older devices, the growing specs and capabilities of modern devices make it increasingly tempting to use higher-level abstractions. The purpose of this tutorial is thus to demonstrate how to enable the use of [`Vec`](https://doc.rust-lang.org/std/vec/struct.Vec.html) and [`HashMap`](https://doc.rust-lang.org/std/collections/struct.HashMap.html) on a BBC Micro Bit.
The original article and associated examples are available in my [Micro Bit Vec and HashMap](https://gitlab.com/cyril-marpaud/microbit_vec_hashmap) GitLab repository. Let us now initiate this endeavor.
## Requirements
This tutorial does not require much:
- A computer with internet access
- A [BBC Micro Bit](https://microbit.org)
- A USB cable
- Less than an hour of your time
## Setting up the OS
It is assumed that you have a fully functional Ubuntu 22.10 Linux distribution up and running. If you don't, [detailed instructions to set one up can be found in my previous tutorial](https://gitlab.com/cyril-marpaud/rust_esp_quickstart/-/blob/main/README.md#the-os).
## Setting up the development environment
First of all, we are going to install a few required dependencies. Open a terminal (the default shortcut is `Ctrl`+`Alt`+`T`) and run the following command:
```Bash
sudo apt install --yes curl gcc libudev-dev=251.4-1ubuntu7 pkg-config
```
*(installing version `251.4-1ubuntu7.1` of `libudev-dev` induces a crash on my machine so I'm using version `251.4-1ubuntu7` instead)*
We also need to install [Rust](https://www.rust-lang.org) and [Cargo](https://doc.rust-lang.org/stable/cargo/). [Rustup](https://rustup.rs/) can take care of that for us:
```Bash
curl --proto '=https' --tlsv1.2 --fail --show-error --silent https://sh.rustup.rs | sh -s -- -y
source "$HOME/.cargo/env"
```
As we will be compiling for an ARM Cortex-M4F microcontroller, we have to install [the adequate target](https://doc.rust-lang.org/beta/rustc/platform-support.html):
```Bash
rustup target add thumbv7em-none-eabihf
```
After compilation comes flashing. [`cargo embed`](https://probe.rs/docs/tools/cargo-embed/) is the solution we will be using for that purpose. Install it like so:
```Bash
cargo install cargo-embed
```
Finally, a **udev rule** will take care of granting USB access to the Micro Bit:
```Bash
echo "SUBSYSTEMS==\"usb\", ATTRS{idVendor}==\"0d28\", ATTRS{idProduct}==\"0204\", MODE=\"0660\", GROUP=\"plugdev\"" | sudo tee /etc/udev/rules.d/99-microbit.rules > /dev/null
sudo udevadm control --reload-rules && sudo udevadm trigger
```
## Setting up the project
Cargo makes it easy to create a Rust project and add the adequate dependencies:
```Bash
cargo init microbit
cd microbit
cargo add cortex-m-rt microbit-v2 panic_halt
```
Now, `cargo embed` needs to know which device it has to flash. Create a file named `Embed.toml` at the root of the project with the following content:
```TOML
[default.general]
chip = "nrf52833_xxAA"
```
We can either specify a `--target` flag each time we compile our software or set that up once and for all in a configuration file. Moreover, our device's memory layout needs to be provided to the linker. Create the following `.cargo/config` file which will do just that for us:
```Bash
mkdir .cargo
touch .cargo/config
```
```TOML
[target.'cfg(all(target_arch = "arm", target_os = "none"))']
rustflags = [
"-C", "link-arg=-Tlink.x",
]
[build]
target = "thumbv7em-none-eabihf"
```
Finally, open `src/main.rs` and copy/paste this LED-blink minimal example inside:
```Rust
#![no_main]
#![no_std]
use cortex_m_rt::entry;
use microbit::{
board::Board,
hal::{prelude::*, timer::Timer},
};
use panic_halt as _;
#[entry]
fn main() -> ! {
let mut board = Board::take().expect("Failed to take board");
let mut timer = Timer::new(board.TIMER0);
let mut row = board.display_pins.row1;
let delay = 150u16;
board.display_pins.col1.set_low().expect("Failed to set col1 low");
loop {
row.set_high().expect("Failed to set row1 high");
timer.delay_ms(delay);
row.set_low().expect("Failed to set row1 low");
timer.delay_ms(delay);
}
}
```
## Blinking an LED
Plug the board to your computer then compile the program and flash it single-handedly with this simple command:
```Bash
cargo embed
```
When the process ends, you should see the upper-left LED blink. Congratulations!
## Unlocking Vec
I have to admit that I shamefully lied when I told you [`Vec` resides in the `std` crate](https://doc.rust-lang.org/std/index.html?search=vec) as [it is actually available in the `alloc` crate](https://doc.rust-lang.org/alloc/vec/struct.Vec.html). As the name suggests, using it requires an allocator.
Luckily, the `embedded-alloc` crate provides us with one (there is [a complete example in the associated Github repository](https://github.com/rust-embedded/embedded-alloc)). We also need the `cortex-m` crate to handle critical sections. Add them to the project's dependencies like so:
```Bash
cargo add embedded-alloc
cargo add cortex-m --features critical-section-single-core
```
Then, in `src/main.rs`, we need to customize a few things. Import `Vec` and declare a global allocator:
```Rust
extern crate alloc;
use alloc::vec::Vec;
use embedded_alloc::Heap;
#[global_allocator]
static HEAP: Heap = Heap::empty();
```
At the beginning of the `main` function, initialize the allocator and a size for our heap (the Micro Bit has 128KiB of RAM):
```Rust
{
use core::mem::MaybeUninit;
const HEAP_SIZE: usize = 8192; // 8KiB
static mut HEAP_MEM: [MaybeUninit<u8>; HEAP_SIZE] = [MaybeUninit::uninit(); HEAP_SIZE];
unsafe { HEAP.init(HEAP_MEM.as_ptr() as usize, HEAP_SIZE) }
}
```
Replace the main loop, using `Vec`:
```Rust
let mut vec = Vec::new();
vec.push(true);
vec.push(false);
vec.push(true);
vec.push(false);
vec.push(false);
vec.push(false);
vec.iter().cycle().for_each(|v| {
match v {
true => row.set_high().expect("Failed to set row high"),
false => row.set_low().expect("Failed to set row low"),
}
timer.delay_ms(delay);
});
loop {}
```
Finally, compile and flash:
```Bash
cargo embed
```
The LED should now be blinking in a heartbeat pattern. You are using Rust's `Vec` on a Micro Bit, congratulations!
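As an aside, `iter().cycle()` is ordinary iterator machinery, so its behavior can be sanity-checked on the host with a plain `cargo run`, no board required. A minimal sketch (the `first_n` helper is made up for this demo):

```rust
// Host-side demo of the cycled blink pattern: no embedded target needed.
fn first_n(pattern: &[bool], n: usize) -> Vec<bool> {
    // cycle() repeats the slice endlessly; take(n) bounds it for inspection
    pattern.iter().cycle().take(n).copied().collect()
}

fn main() {
    let heartbeat = vec![true, false, true, false, false, false];
    // After the six-step pattern ends, cycle() starts over from the beginning
    assert_eq!(&first_n(&heartbeat, 8)[6..], &[true, false]);
    println!("pattern cycles as expected");
}
```

On the Micro Bit there is no `take(n)`: the `for_each` over the endless iterator simply never returns, which is why the trailing `loop {}` is unreachable in practice but still needed to satisfy the `fn main() -> !` signature.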
## Unlocking HashMap
Unlike `Vec`, [the `alloc` crate does not suffice for `HashMap`](https://doc.rust-lang.org/std/index.html?search=HashMap): the full `std` crate is required (which in turn requires a `nightly` toolchain, because `std` is not supported for our platform). To avoid having to type `+nightly` each time we invoke `cargo` or `rustup`, create a file named `rust-toolchain.toml` with the following content:
```TOML
[toolchain]
channel = "nightly"
```
As building the `std` crate requires its source code, use rustup to fetch that component:
```Bash
rustup component add rust-src
```
In `.cargo/config`, add the following lines (`panic_abort` is needed here because of [a currently unresolved issue](https://github.com/rust-lang/rust/issues/83805#issuecomment-812874115)):
```TOML
[unstable]
build-std = ["std", "panic_abort"]
```
The `std` crate provides an allocator, so we can remove those lines from `src/main.rs`:
```Rust
#![no_std]
```
```Rust
extern crate alloc;
use alloc::vec::Vec;
```
`std` also provides a panic handler, so the import and the `panic-halt` dependency can be removed:
```Rust
use panic_halt as _;
```
```Bash
cargo remove panic-halt
```
Now that we are rid of those useless parts, there are a few things we need to add. As we're building `std` for an unsupported (thus flagged *unstable*) platform, we need the `restricted_std` feature. Add it to `src/main.rs`:
```Rust
#![feature(restricted_std)]
```
Import `HashMap`:
```Rust
use std::{
collections::{hash_map::DefaultHasher, HashMap},
hash::BuildHasherDefault,
};
```
And use it instead of `Vec`:
```Rust
let mut hm = HashMap::with_hasher(BuildHasherDefault::<DefaultHasher>::default());
hm.insert(0, false);
hm.insert(1, true);
hm.insert(2, false);
hm.insert(3, true);
hm.insert(4, true);
hm.insert(5, true);
hm.values().cycle().for_each(|v| {
match v {
true => row.set_high().expect("Failed to set row high"),
false => row.set_low().expect("Failed to set row low"),
}
timer.delay_ms(delay);
});
loop {}
```
The reason we are providing our own hasher is that the default one relies on the `sys` crate which is platform dependent. Our platform being unsupported, [the associated implementation](https://github.com/rust-lang/rust/tree/master/library/std/src/sys/unsupported) either does nothing or [fails](https://github.com/rust-lang/rust/blob/master/library/std/src/sys/unsupported/common.rs#L15).
Therefore, keep in mind that using anything from said `sys` crate will either fail or hang (in particular: threads). `HashMap` is fine though, and the above snippet should make the LED blink in an inverted heartbeat pattern:
```Bash
cargo embed
```
Rust's `HashMap` on a Micro Bit, hooray!
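Side note: the explicit-hasher construction shown above compiles unchanged on a desktop target. This host-side sketch (the `blink_map` helper is hypothetical, for illustration only) shows that choosing `BuildHasherDefault<DefaultHasher>` only changes how keys are hashed, not how the map is used:

```Rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::BuildHasherDefault;

/// Build the blink map with an explicitly chosen hasher, avoiding the
/// randomly seeded default that relies on platform (`sys`) support.
fn blink_map() -> HashMap<u8, bool, BuildHasherDefault<DefaultHasher>> {
    let mut hm = HashMap::with_hasher(BuildHasherDefault::<DefaultHasher>::default());
    hm.insert(0u8, false);
    hm.insert(1u8, true);
    hm
}

fn main() {
    let hm = blink_map();
    println!("{} entries, key 1 -> {:?}", hm.len(), hm.get(&1));
}
```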
## Actually using HashMap
The [alphabet folder](https://gitlab.com/cyril-marpaud/microbit_vec_hashmap/-/tree/main/alphabet) of my GitLab repository demonstrates how to display characters on the LED matrix using a `HashMap`. You can flash it by running the following commands:
```Bash
cd # We need to move out of the "microbit" folder we created earlier
sudo apt install --yes git
git clone https://gitlab.com/cyril-marpaud/microbit_vec_hashmap.git
cd microbit_vec_hashmap/alphabet
cargo embed
```
## Conclusion
The ability to use Rust collections on a device as humble as the BBC micro:bit represents a remarkable achievement in embedded programming. Thanks to recent hardware advances, even modest embedded devices can now support high-level abstractions that were once the exclusive domain of larger and more expensive systems.
Rust's efficiency and modern design make it an ideal language for taking advantage of these new capabilities and pushing the limits of what is possible on a microcontroller: developers can create complex and sophisticated projects that once seemed impossible on such small devices, from data-driven sensors to interactive games and applications.
Whether you are a seasoned expert or just getting started, the future of embedded programming is brighter than ever, and Rust is leading the way.
## See also (aka useful links)
### Documentation
- [The Rust Language](https://www.rust-lang.org)
- [BBC Micro Bit](https://en.wikipedia.org/wiki/Micro_Bit)
### Crates
- [microbit](https://docs.rs/microbit-v2/0.13.0/microbit/)
- [LMS303AGR](https://docs.rs/lsm303agr/latest/lsm303agr/index.html)
- [tiny_led_matrix](https://docs.rs/tiny-led-matrix/latest/tiny_led_matrix/)
- [cortex_m](https://docs.rs/cortex-m/0.7.7/cortex_m/)
- [cortex_m_log](https://docs.rs/cortex-m-log/latest/cortex_m_log/)
- [minimult_cortex-m](https://crates.io/crates/minimult_cortex-m)
- [embedded_alloc](https://docs.rs/embedded-alloc/0.5.0/embedded_alloc/)
### Tutorials
- [Discovery](https://docs.rust-embedded.org/discovery/microbit/index.html)
- [`std` on Micro Bit](https://blog.timhutt.co.uk/std-embedded-rust/index.html)
## Whoami
My name is [Cyril Marpaud, I'm an embedded systems freelance engineer and a Rust enthusiast 🦀](https://www.linkedin.com/in/cyrilmarpaud/) I have nearly 10 years of experience and am currently living in Lyon (France). | cyrilmarpaud |
1,413,529 | Crux Search Engine in Chrome | In this Post We will be looking at Crux Search Engine In chrome, so without further ado lets get... | 0 | 2023-03-24T10:39:14 | https://sites.google.com/view/tailorinindirapuram/ | In this post, we will look at the CrUX search engine in Chrome, so without further ado, let's get started :)
## What Is Crux
CrUX stands for Chrome User Experience Report. It is a report generated from field data (real data that Chrome gets from our devices :) and this data is used for checking the CWV ([Core Web Vitals](https://web.dev/vitals/)) of a site.
## Why do you want to see Crux Report Of A site
The CrUX report is very helpful in optimizing the Core Web Vitals of a site. It consists of metrics such as FCP (First Contentful Paint), LCP (Largest Contentful Paint), and CLS (Cumulative Layout Shift). By getting to know these metrics, you can start working on optimizing them, so no user feels like your site is slow and clicks the back button.
## How to Use Crux Search engine in Chrome
- Go into Chrome's settings via the three-dot menu in the top right of Chrome.
- Then tap on the search engine settings.
- Tap on "Manage search engines and site search", scroll down to "Site search", click the "Add" button, and enter the following details:
```
Search engine: CrUX
Shortcut: crux
URL with %s in place of query: https://datastudio.google.com/c/u/0/reporting/bbc5698d-57bb-4969-9e07-68810b9fa348/page/keDQB?params=%7B%22origin%22:%22%s%22%7D
```
- Done! Now just go back to Chrome's search bar, type "crux", and press the Tab key; you will be able to check the CrUX report by simply typing the origin URL.
Note: As a rule of thumb, use whichever origin users see in the URL bar.
| arman_94 | |
1,413,552 | The Test Planning Inquiry Technique | Making a test strategy is frequently a challenging task. By using the fundamental concepts of... | 0 | 2023-03-24T11:42:08 | https://dev.to/abhayit2000/the-test-planning-inquiry-technique-k20 | testing, mobile, webdev | Making a test strategy is frequently a challenging task. By using the fundamental concepts of cost-benefit analysis and risk analysis, a [test automation platform](https://www.headspin.io/enterprise) can be created by carefully balancing the following aspects of software development:
**Cost of implementation**: Short-term development costs are influenced by the time and degree of complexity required to deploy testable features and test planning for specific situations.
**Cost of maintenance**: The cost of long-term development is impacted by the ease or difficulty of maintaining different tests or test plans. The choice of manual testing also raises overall costs.
**Financial cost**: Some test plans may call for billable resources.
**Benefit**: To varying degrees, tests can help with productivity and avert problems. Additionally, the effect is more significant the sooner development life cycle abnormalities can be identified.
**Risk**: Failure scenarios’ probabilities might range from unlikely to likely, and their effects can range from a slight annoyance to catastrophic.
The degree of project importance, implementation specifics, resources available, and team viewpoints all significantly affect how well these aspects are balanced in a plan. Many projects may achieve excellent coverage with high-benefit, low-cost unit tests, but they may also need to consider choices for more extensive tests and complicated corner situations. Mission-critical initiatives must reduce risk to the absolute minimum, necessitating accepting higher costs and investing in thorough testing at all levels.
## Test Plan Vs Strategy
Two typical approaches to creating a testing plan for software need to be clarified before continuing:
**A solitary test plan**: Some projects include a single “test plan” detailing all implemented and future testing.
**One test strategy, many plans**: A “test strategy” document and several smaller “test plan” documents are part of certain projects. Test plans often address particular features or project updates, whereas the strategy typically describes the overall [test strategy](https://www.headspin.io/blog/11-strategies-that-you-can-deploy-for-effective-mobile-application-testing) and goals.
Both of these might be incorporated and embedded into project design papers. Pick whichever of these two practical options makes sense for your project. In general, stable projects benefit from having a single plan, while fast-changing projects benefit more from frequently added plans under a seldom-altered strategy.
**Content**
Making a list of all the questions that require a response is a brilliant start when developing the material for your test strategy. The following lists offer a thorough selection of powerful questions that may or may not relate to your project. Browse the lists and choose anything that applies. When making judgments, be careful to balance the previously listed aspects. You can create the contents of your test plan by responding to these questions, and you can organize your test plan around the selected material in any way your team wants.
**Risks**
There might be significant project risks considering the following:
- Harm to humans or animals
- User data security and integrity
- User privacy
- Protection of business systems
- Property or equipment damage
- Regulatory and legal concerns
- Disclosure of sensitive or secret information
- Loss or corruption of data
- Loss of revenue
- Cases that cannot be recovered
- SLAs
- Performance requirements
- User misinformation
- The effect on other initiatives
- The impact from other initiatives
- Impact on the public image of the firm
- Productivity loss
There might be technical vulnerabilities like:
- Features or components that have a reputation for being brittle, hacky, or in need of refactoring
- Platforms or dependencies that commonly cause problems
- Users' potential to disrupt the system
- Trends from earlier issues
### Tools And Infrastructure
**The need for new test frameworks**:
If so, mention them in the plan or include design links.
**The need for a new test lab**:
If so, mention them in the plan or include design links.
If your project provides a service to other projects, are you giving those users access to test tools?
When users attempt to test their integration with your system, think about offering mocks, fakes, and dedicated staging servers.
How will testing infrastructure, systems, and other dependencies be managed for end-to-end testing? How are they going to be used? How will persistence be established/dismantled? How will you manage necessary migrations between data centers?
#### Process
**Test Surface**: Is it a multi-platform client-server stateful system with an explosion of use cases, or a small library with just one method? Highlight potential failure areas while describing the system's design and architecture.
**Platforms**: Consider including a list of supporting hardware, software, and other items. Specify each platform’s testing procedures and reporting requirements as well.
**Features**: Consider creating a comprehensive list of all components and outlining the testing procedures for specific feature categories.
**What should not be tested**: No test suite can account for every scenario. It's preferable to be upfront about this and explain why particular areas shouldn't be tested. Examples include:
- low-risk, low-priority areas
- complicated situations
- areas covered by other teams
- features that are not yet ready for testing
What is covered by small, medium, and big unit, integration, and system tests? Test as much as you can in smaller tests so that more extensive tests have fewer cases.
Automation is often better when it is possible and economical; most testing can be automated. There could, however, be valid arguments in favor of manual testing.
## Utility/Conclusion
**The audience of the test plan**:
While some test plans are read by many, others are only read by a few people. When developing the strategy, consider the readers you anticipate, give them the background information they need to comprehend it, and address any questions you expect, even if all you can say is that you don't have the answer yet. At the very least, consider seeking a review from every stakeholder (project managers, tech leads, feature owners). Consider including contact information in the test plan so that anyone who reads it may learn more.
_Source: This article is originally published [here](https://www.entertales.com/test-planning-inquiry-technique/)._ | abhayit2000 |
1,413,566 | Launching new mobile app in 2 months with Agile transformation | I’m going to tell you a story of how my team launched iOS & Android app for ticketing service in... | 0 | 2023-03-24T12:01:09 | https://dev.to/artem_poluektov/launching-new-mobile-app-in-2-months-with-agile-transformation-4bl | I'm going to tell you a story of how my team launched an iOS & Android app for a ticketing service in just 2 months, making the transition from Waterfall to Agile without a lot of pain. I'll start by describing the initial point of the project and how our software development processes worked before, and then highlight the key activities which helped us to succeed. This article is not intended to replace the Agile Manifesto or [The Book of Scrum](https://www.amazon.com/Scrum-Doing-Twice-Work-Half/dp/038534645X) by Jeff Sutherland. Instead, I'll tell you how we changed our work process to deliver the product faster without hiring an army of Agile coaches & Scrum masters. This article would also be useful for those of you who have thought about starting your own home project. I'll show you what is worth focusing on at a specific moment in time and what isn't. So, let's begin :)
## How it all started
It all started about 8 years ago. I'd just joined a huge media company in Russia which had a lot of different products: a few online newspapers, entertainment services, a sports portal, etc. And we also had this ticketing service I'm telling you about. The main feature of the service was selling e-tickets for cinemas, theatres, and sports events online, providing users with the ability to book the best seats with no need to visit a cash desk in advance.
Fortunately, we already had desktop and mobile websites running. And the service was pretty successful. As always, with some limitations. I'm not going to bother you with numbers; I'll just say that the mobile website didn't perform as expected. The primary reasons for that were:
- Outdated UI design
- Usability issues
- Inability to view a seat map
Sure, it was possible to fix the first & second issues, but providing a fully-functional seat map and offline ticket storage definitely required a mobile app. You might ask how users were purchasing tickets without viewing a seat map on the mobile website. And that's a great question. Our colleagues from the backend team developed an algorithm which automatically selected seats as close to the center as possible. Sometimes it worked great, sometimes not; sometimes a customer wanted to book seats in the last “kissing” row and was unable to do so. But the benefits of apps compared to websites is an excellent idea for another article, so here I'll just say we were pretty sure that we needed to launch the app.
In addition to the websites, we had an XML API for partners' sites, letting them get a list of places, events & schedules with the ability to purchase tickets in a pop-up window. Now you should understand the initial point of our project.
## Minimum Valuable Product
As I mentioned above, we widely used the Waterfall approach in our work. We had separate iOS, Android, design, QA, and backend teams and an army of project managers. So, when my project manager first approached me with this exciting new project, I was more than happy to start my very first product in this company from scratch. But all my happiness was gone after the initial meeting. What happened? The primary purpose of this meeting was to estimate a release date based on the very few UI designs we had. We found out that
- The UI design wasn't complete, a lot of screens were missing
- The UIs we got were too complex, featuring a lot of complicated gestures & animations, making them hard for users to understand and use and very hard to implement
- The technical requirements weren't even close to complete either
- Backend team was busy for the next few months
That's why we found we couldn't make estimations to reserve mobile developers for this project. To be honest, we actually were unable to predict a release date. Sure, we could possibly say that it would take approximately 6 months. But when you have no idea of the overall volume and complexity of such a huge app, your estimations will be pretty inaccurate. And that's a pretty usual situation for Waterfall teams. To fix this, I proposed launching a Minimum Valuable Product (MVP) first with our own UI design and a small number of critical features in the first release, leaving all the others for the next versions of our app. The idea looked pretty fresh & exciting, so we decided to give it a try.
## Focus on what’s really important
There were a lot of features in the technical requirements and those UI design screens we'd got. We clearly understood that we just couldn't develop them all while maintaining the highest possible quality. And here comes Agile!
We wrote down a list of all features and asked ourselves: “Which one is the most important? Without which features would our app just not work?”

And here is the list of the critical features defining for our product:
- List of cinemas
- List of movies
- Schedule
- Seat map
- Making a purchase
- Purchased tickets
That easy. Just 6 screens. We tried to reduce the number of screens as much as possible to make the purchase process simpler & faster; finally, we were able to cut it down to just 4 steps: pick the cinema or movie, select a suitable time from the schedule screen, choose seats on the seat map, and make a purchase. Our app just wouldn't work without any of these features. No beautiful animations, no complex gestures, no custom controls. Instead, system-provided tabs-based navigation and standard UI controls with minimum customizations. **We were focused on providing our customers with content instead of an interface.** And that's how we actually built a great user interface which received an **A+ score at WWDC UI design labs session**.
We discussed a lot of other great ideas during our work with my team; for example, we were pretty sure that storing users' credit card data for future purchases would significantly increase the conversion rate. But it would take a lot of development hours, and the PCI DSS certification process isn't easy. Instead, we decided to use a WebView for payments, the same one as on our website. So, this is one of those many features that should be held in the backlog for future releases.
To estimate the overall amount of work to be done and track our progress, we created a screen map. It is a huge image file containing all the screens of our apps connected by arrows representing screen-to-screen transitions. Screen maps are a handy tool for building mobile apps and even desktop websites.
_Focus on crucial features defining your product. And put all the others in a backlog for the next releases. Remember, you're providing your customers with content, not an interface._
## Communication is the key
Now, when we got a screen map, we were able to understand which APIs we needed. But the backend team was extremely busy, and we had a bottleneck here. Fortunately, we were able to estimate the amount of work that needed to be done and [a miracle happened here] we arranged to get 4 weeks of a backend developer's time.
How did we work with other teams before? When you needed something to be done by another team, you asked your project manager, and he or she went to that team's manager bringing the task. It was too long a path for tasks when we had just 4 weeks. So, instead, we created a new small team solely for this project and put our developers inside. We had our own project in JIRA, our own backlog, and didn't accept any incoming tasks from other projects, to stay completely focused on our app. Our team consisted of:
- Me as the Team Leader, creating tasks, providing communication between team members and pushing the whole development process forward
- 2 iOS developers
- 2 Android developers
- Backend developer
- Designer
Now that we had a team and knew the essential features we needed to develop, we had to make sure that we were all doing the right tasks. But what is the right task? It's the one which provides features to our customers in the shortest possible way. An excellent example of that approach is UI design. I asked our designer to create UIs as close to native as possible because it is familiar to users and easy for developers to implement. So, **we made a short workshop for our designers telling them what standard UI controls & animations we have and how we can customize them**. We launched a “design review” process where we discussed each screen together inside my team: the designer tried to make it look better and the developers tried to make it easier to implement. We were able to develop and maintain this communication process inside our team, and it brought us to success. We did almost the same when developing APIs: backend and mobile developers looked at the UI screens together and wrote an API protocol convenient to both sides.
_Make your teammates communicate with each other discussing the best way to implement the task. That will help them deliver better solutions faster._
## Listen your teammates
Another essential key to our success was letting our team make decisions independently, without hours of meetings, reducing the communication chain. In the previous paragraph, I mentioned that in the process of app development you should focus on critical features, trying to deliver them as soon as possible and only then move to another feature. Here I'll show you a few examples of small features we made in the process of main feature development. These weren't planned but were so easy to implement that we decided to include them in our first release. And all of them dramatically improved user experience and conversion rate.
The first feature was offline search. We had a search field on our website, but it was a full-text search providing all results across all categories (i.e., one search field for movies & places). We had this feature in our backlog for the second release, but my team came to me complaining that it took too long to find a test entity in the cinemas list. This actually slowed down all the development. We discussed this issue and found out that we already had a list of all cinemas in our app and could now easily add a system-provided search bar on top of the list. It took about 2 hours for each platform without any changes to the backend and dramatically improved the usability of the app.

The second feature again originated from my teammates' ideas. Sometimes it took up to a minute for the ticketing system to provide the actual seat map. It caused a lot of pain for developers who tried to test what they'd done. Depending on the cinema, the ticketing system stopped selling tickets some time before the movie started, or there could be no free seats left. And the user had to wait for the seat map to load to get this information. We found a simple solution for that: adding a check on the backend and a flag in the API for each time in the schedule indicating whether or not it's available for purchase. Price labels on the schedule screen were another bright idea. Initially, we hadn't thought about that. But some guys preferred more affordable tickets at a different time; it was an insight from real-life behavior. If the user can see the price on the schedule screen, he or she doesn't have to wait for the seat map to load for pricey times and can go straight to cheaper tickets. By implementing these features, we not only improved usability but also reduced the number of requests to our backend. So, it was a double win.
Another quick win was rounded seats on the seat map. Initially, our designer drew square seat icons with rounded corners, which looked pretty good. But one day one of my guys found a cinema with rounded rows on a seat map. It looked bad, very bad. So, we tried to create some complex equations to calculate the seat rotation angle based on the radius of the row's curvature. Eventually, I realized that we could make our seats round instead of square, and there is no need to rotate a round seat because it's round.
_Your team is a very important focus group for your product. They are actually the first users of new features. Listen to them, give them the freedom of making decisions and they will bring you a lot of bright ideas!_
## After the first release
We had a lot of features in our backlog for the next release, and it was extremely hard to find out which of them we should implement first. We decided to start studying users' behavior. The entire purpose of our app is to help customers purchase a ticket. So, the primary goal of our second release was to provide us with insights into how customers look for tickets:
- Do they start with a cinema or movie selection?
- How much time do our customers spend on the specific screen?
- Why does the user close the payment screen?
After we'd collected some data, we were able to prioritize our backlog. For example, trailers, featured movies and frequently-visited cinemas were selected for our third release, mostly because these features had a small estimated development time and some positive effect on conversion rate. Of course, storing credit card data would impact conversion much more, but all three mentioned features were completed in a week, while it took more than 2 months to get the PCI DSS certification necessary for card data storage. I'm not saying that you should only do smaller tasks. Instead, it's always better to find a balance between quick wins and major tasks promising long-term benefits.
_I would recommend studying users' behavior from the very first releases to be sure your team is going in the right direction. And adjust your backlog [prioritization] based on the insights you get from the data._
## Conclusion
Finally, we launched the app. After just 2 months. And it was an extremely short time compared to other products. Yes, it wasn't as fully functional as stakeholders wanted, but it was just the first step.
Our new approach wasn't anywhere near Scrum; it was more like Kanban, which is great when you need to launch a new product in a short amount of time. And this transition from Waterfall didn't cause any pain; we actually enjoyed the new way of working together, impacting the app we built and getting new results on a weekly basis.
Thank you for reading!
| artem_poluektov | |
1,413,580 | Bug name folder Windows 11 | For some reason, while installing an Astro Project , i've inserted invalid name for the folder, and... | 0 | 2023-03-24T12:41:26 | https://dev.to/renanduart3/bug-name-folder-windows-11-5g29 | windows11, bugwindows1, foldernamebug, wsl | For some reason, while scaffolding an [Astro project](https://astro.build/), I entered an invalid name for the folder, and somehow the framework created a folder with an invalid name. This caused an infinite loop in the terminal: when you try to delete a folder whose name contains a reference to ".", the system cannot find it and you can't delete it.
The solution is to enter WSL (a Linux distro) in the same terminal and delete the folder from there :)
`cd /mnt/c/Users/user-name/source/repos/Project` (navigate to the parent of the broken folder)
and
`rm -r target_folder`
And that solved it.
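If you are curious why the WSL route works: a name with a trailing dot is rejected by the Win32 API but is a perfectly legal file name on Linux, so Linux tools can see and remove the folder. A minimal sketch, using a throwaway directory instead of the real Windows path:

```Bash
# Work in a throwaway directory so nothing real is touched
demo_dir=$(mktemp -d)
cd "$demo_dir"

# A trailing dot is rejected by Win32 tooling, but is a normal name on Linux
mkdir "target_folder."
ls

# Linux removes it without complaint
rm -r "target_folder."
ls
```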
Maybe it can help someone out there who somehow ended up with invalid folder names and can't delete them.
| renanduart3 |
1,413,688 | Using TailwindCSS with Nuxt efficiently | How to build a Nuxt application with TailwindCSS and the modules for it. | 0 | 2023-03-24T15:31:45 | https://mayashavin.com/articles/working-with-tailwindcss-nuxt | tailwindcss, javascript, nuxt, vue | ---
title: "Using TailwindCSS with Nuxt efficiently"
published: true
description: "How to build a Nuxt application with TailwindCSS and the modules for it."
cover_image: https://res.cloudinary.com/mayashavin/image/upload/v1679670844/articles/Nuxt/tailwind_nuxt_icons.jpg
img: "articles/Nuxt/tailwind_nuxt_icons"
tags: ['TailwindCSS', 'JavaScript', 'Nuxt', 'Vuejs']
canonical_url: https://mayashavin.com/articles/working-with-tailwindcss-nuxt
---
In this post, we will see how to use TailwindCSS with a Nuxt application efficiently with the help of the official Nuxt module for TailwindCSS. We will also see how to use SVG icons with TailwindCSS instead of images or SVG icons directly, and how to build our custom color palettes for TailwindCSS based on a given image.
## Table of Content
- [Table of Content](#table-of-content)
- [Prerequisites](#prerequisites)
- [Setting up TailwindCSS with Nuxt](#setting-up-tailwindcss-with-nuxt)
- [Configuring TailwindCSS using `tailwind.config.ts`](#configuring-tailwindcss-using-tailwindconfigts)
- [Loading TailwindCSS styles](#loading-tailwindcss-styles)
- [Using SVG Icons with TailwindCSS](#using-svg-icons-with-tailwindcss)
- [Generate a custom color palette for TailwindCSS](#generate-a-custom-color-palette-for-tailwindcss)
- [Summary](#summary)
## Prerequisites
It would be best if you had your Nuxt application set up and ready to go by using the following command:
```bash
npx nuxi init tailwind-css-nuxt-demo
```
where `tailwind-css-nuxt-demo` is the name of our demo application. You can change it to any name you prefer.
It would help if you also had a basic understanding of how [TailwindCSS](https://tailwindcss.com/) and [Nuxt.js](https://nuxt.com/docs/getting-started/installation) work to follow this tutorial.
## Setting up TailwindCSS with Nuxt
To start using TailwindCSS with Nuxt, you can install and configure TailwindCSS as a dependency following the instructions on the [TailwindCSS website](https://tailwindcss.com/docs/guides/nuxtjs). Or you can use the official [Nuxt TailwindCSS module](https://tailwindcss.nuxtjs.org/) following the commands below:
```bash
yarn add -D @nuxtjs/tailwindcss
```
Then add the module to the `modules` section of `nuxt.config.ts`:
```ts
export default defineNuxtConfig({
modules: ['@nuxtjs/tailwindcss']
})
```
You are ready to use TailwindCSS. But before starting to use it, there is additional work we need to perform to ensure TailwindCSS is working smoothly.
### Configuring TailwindCSS using `tailwind.config.ts`
We need to configure TailwindCSS according to our needs. To do so, we create a `tailwind.config.ts` file in the root of our project (most of the time, it should be at the same level as `nuxt.config.ts`), with the following code:
```ts
export default {
theme: {},
variants: {},
plugins: [],
purge: {}
}
```
Let’s look a bit into the properties defined in this file:
- `theme` - where we set up all the projects' additional customized theming, including color palette, font family, breakpoints, border, minimum/maximum sizes, etc. We can overwrite the default theme or extend it using the `theme.extend` property.
- `variants` - where we define an array of responsive and pseudo-class variants for selected core utility plugins such as `appearance`, `borderColor`, `outline`, `zIndex`, etc.
- `plugins` - a collection of JavaScript functions allowing us to register additional styles programmatically.
- `purge` - can be an Array, an Object, or a Boolean value indicating how we want to remove unused styles (or not). The Nuxt module for TailwindCSS automatically adds the needed code to enable purging CSS code during production, followed by a list of files that have reference to any used CSS styles by name, as shown in the example below:
```jsx
purge: {
//enable remove unused CSS only in production
enabled: process.env.NODE_ENV === 'production',
//any file containing the reference of CSS styles by class name.
content: [
'components/**/*.vue',
'layouts/**/*.vue',
'pages/**/*.vue',
'plugins/**/*.js',
'nuxt.config.js'
]
}
```
And since our configuration file is in TypeScript, Nuxt won't locate it automatically when running the application. We need to specify the path to this file in `nuxt.config.ts` by adding the following code to your Nuxt configuration object:
```ts
tailwindcss: {
configPath: '~/tailwind.config.ts'
},
```
Now we have our TailwindCSS configured. Next, we need to load the built-in TailwindCSS styles into our application.
### Loading TailwindCSS styles
By default, the Nuxt TailwindCSS module will load the TailwindCSS styles from `./assets/css/tailwind.css` into the application. We can also overwrite this default path using the `cssPath` property of the `tailwindcss` configuration object in the Nuxt configuration file.
```ts
tailwindcss: {
cssPath: '~/styles/tailwind.css'
},
```
And in the `styles/tailwind.css` file, we can import the TailwindCSS class styles using the keyword `@tailwind`, as follows:
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```
In the code above, we import the `base`, `components`, and `utilities` styles, which are:
* The `base` styles are the core styles.
* The `components` styles are component-specific styles.
* The `utilities` styles are for utilities such as padding, margin, etc.
And that's it. When you run the application, Nuxt will load it with TailwindCSS inside, ready for you to customize the look and feel of your application!
Next, let's see how we can take advantage of TailwindCSS to build and use icons in applications instead of the classical way of using images or SVG icons directly.
## Using SVG Icons with TailwindCSS
Using SVG icons in an application has been a common practice. With the right icon, we can deliver a great user experience to users and make the application more attractive and appealing. But using SVG icons has challenges, such as the effort spent manually creating and saving the SVG files within the application's directory or creating a component to load SVG icons dynamically, etc. Also, choosing the right icon packs to use can be a challenge.
One of my favorite sources of icons is [Iconify](https://icones.js.org/), made by [Egoist](https://github.com/egoist), containing a large number of collections of free icons, such as Material Design, Font Awesome, etc., and you can search for icons by name, or by category.
<img src="https://res.cloudinary.com/mayashavin/image/upload/f_auto,q_auto/v1675680195/articles/Nuxt/iconify" loading="lazy" alt="Iconify website" class="mx-auto"/>
We can choose from various options for each icon displayed on the website, from the Vue component and React component to just a simple SVG file.
<img src="https://res.cloudinary.com/mayashavin/image/upload/f_auto,q_auto/v1675680195/articles/Nuxt/icon_code" loading="lazy" alt="An example of Material Icon in Iconify website" class="mx-auto"/>
You can download the relevant snippet code for the desired icon and copy and paste it into your application. However, this approach can be tedious when your application requires many icons. You can use the [tailwindcss-icons](https://github.com/egoist/tailwindcss-icons) package to solve such a problem. This excellent package allows you to load icons from a selected icon pack from Iconify into your TailwindCSS as classes for use in the application.
To start using `tailwindcss-icons`, we need to install it as a dependency:
```bash
yarn add -D @egoist/tailwindcss-icons
```
The package exposes a method called `iconsPlugin` to generate an icon pack plugin for TailwindCSS configuration. This method accepts a collection object containing an array of icon packs to use in the application, as follows:
```ts
//**tailwindcss.config.ts */
module.exports = {
plugins: [
iconsPlugin({
// Select the icon collections you want to use
collections:
{
// collections of icons
},
}),
],
}
```
To generate the collection of icons packs, we can use the method `getIconCollections`, which accepts an array of icon packs by name, such as the MingCute set, as follows:
```ts
plugins: [
iconsPlugin({
collections: getIconCollections(["mingcute"]),
}),
],
```
That's all it takes. The package will generate the classes for each icon during compilation, following the pattern `i-<collection_name>-<icon_name>`. For instance, we want the palette icon from the MingCute set with the name `palette-2-line`, as shown in the following screenshot:
<img src="https://res.cloudinary.com/mayashavin/image/upload/f_auto,q_auto/v1675680195/articles/Nuxt/palette_2" loading="lazy" alt="Palette icon in Mingcute pack" class="mx-auto"/>
To do so, we can use the class `i-mingcute-palette-2-line` on a `span` element, in orange color and extra large size, as follows:
```html
<span class="i-mingcute-palette-2-line text-sea-buckthorn-500 text-2xl" aria-hidden="true" />
<span>Some text</span>
```
The browser will display our icon as below:
<img src="https://res.cloudinary.com/mayashavin/image/upload/f_auto,q_auto/v1675680195/articles/Nuxt/palette_2_orange" loading="lazy" alt="Palette icon in Mingcute pack in orange color" class="mx-auto"/>
And the best part is that you don't need to manually create and save the SVG files or create a component to load SVG icons dynamically. You only need to install and configure the package, and you are ready to go!
Great, right? Let's see how we can build a custom color palette for our application using TailwindCSS.
## Generate a custom color palette for TailwindCSS
TailwindCSS comes with a default set of color palettes, but we can also provide our custom color palette using the `theme.extend.colors` field in the `tailwind.config.ts` file, as follows:
```ts
module.exports = {
theme: {
extend: {
colors: {
},
}
},
}
```
`colors` is an object collection in which each key is a color palette name and each value is an object whose keys are the shades of the color (usually from 50 to 900) and whose values are the color codes. For instance, the following code defines a color palette named `sea-buckthorn`, with shades from 50 to 900, and the color code for each shade:
```ts
module.exports = {
theme: {
extend: {
colors: {
'sea-buckthorn': {
50: '#FEFAF4',
100: '#FEF6E9',
200: '#FCE8C8',
300: '#FBDBA7',
400: '#F7BF66',
500: '#F4A424',
600: '#DC9420',
700: '#926216',
800: '#6E4A10',
900: '#49310B',
},
      },
    },
  },
};
```
With the color palette defined, we can use it in our application, following the syntax `bg-<color_name>-<shade>` for the background color or `text-<color_name>-<shade>` for the text color, such as `bg-sea-buckthorn-500` and `text-sea-buckthorn-500`.
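To make the convention concrete, here is a tiny, hypothetical helper (not part of TailwindCSS — Tailwind generates these classes itself) that just assembles class names following the pattern above:

```javascript
// Hypothetical helper, for illustration only: it shows the
// `<prefix>-<palette>-<shade>` naming convention for a custom palette.
const colorClass = (prefix, palette, shade) => `${prefix}-${palette}-${shade}`;

console.log(colorClass('bg', 'sea-buckthorn', 500)); // "bg-sea-buckthorn-500"
console.log(colorClass('text', 'sea-buckthorn', 500)); // "text-sea-buckthorn-500"
```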
Here comes the real challenge. How do we decide the shades for each color theme? There are many ways to do so, from manually calculating the shades based on color theory to using external tools that perform the task for us.
One such tool is [Color Palette Generator](https://colorgen.dev/), which generates color palettes from a given image.
<img src="https://res.cloudinary.com/mayashavin/image/upload/f_auto,q_auto/v1675680195/articles/Nuxt/colorgen" loading="lazy" alt="ColorGen website" class="mx-auto"/>
The tool allows us to upload an image, and it will generate the four primary color palettes and their shades in a TailwindCSS format, as shown in the following screenshot:
<img src="https://res.cloudinary.com/mayashavin/image/upload/f_auto,q_auto/v1675680195/articles/Nuxt/tailwind_palettes" loading="lazy" alt="ColorGen website" class="mx-auto"/>
All that's left is to copy and paste the generated code into your `tailwind.config.ts` file, and you are ready to use the color palette in your application.
## Summary
In this tutorial, we learned how to install and configure TailwindCSS in the Nuxt.js application. We also learned how to inject icons to use as TailwindCSS classes with `tailwindcss-icons` and how to generate a custom color palette for TailwindCSS fast and easily with an external tool. These certainly are minor aspects of what you can do with TailwindCSS and what it can offer to your Nuxt application. If you have other tips for using TailwindCSS with Nuxt, share them with me in the comment section!
👉 _If you'd like to catch up with me sometimes, follow me on [Twitter](https://twitter.com/MayaShavin) | [Facebook](https://www.facebook.com/mayashavin/)._
👉 _Learn about Vue with my new book [Learning Vue](https://www.oreilly.com/library/view/learning-vue/9781492098812/). The early release is available now!_
Like this post or find it helpful? Share it 👇🏼 😉
| mayashavin |
1,413,838 | Starchart: 1,2,3, Testing! | What is it? After having created a test database for the Starchart project, I went on to... | 0 | 2023-03-24T16:30:32 | https://dev.to/cychu42/starchart-123-testing-58f8 | javascript, prisma, opensource, vite | ## What is it?
After having [created a test database](https://dev.to/cychu42/starchart-mocking-a-database-part-2-2a88) for the [Starchart project](https://github.com/DevelopingSpace/starchart), I went on to work on tests for functions related to Prisma API calls. (See [PR](https://github.com/DevelopingSpace/starchart/pull/415))
We use [Vitest](https://vitest.dev/) for unit testing, which is very similar to [Jest](https://jestjs.io/). A test suite looks something like this:
```ts
describe('createUser()', () => {
let user: User;
beforeAll(async () => {
user = await createUser(
'jsmith',
'John Smith',
'jsmith@myseneca.ca',
'mycustomdomain-students'
);
});
afterAll(async () => {
await prisma.user.deleteMany().catch(() => {});
});
test('creates an User row with expected fields', async () => {
expect(typeof user).toEqual('object');
expect(user.username).toEqual('jsmith');
expect(user.displayName).toEqual('John Smith');
expect(user.email).toEqual('jsmith@myseneca.ca');
expect(user.group).toEqual('mycustomdomain-students');
expect(user.createdAt).not.toBe(null);
expect(user.createdAt).toEqual(user.updatedAt);
expect(user.deactivated).toBe(false);
  });
});
```
`describe()` is a test suite, which can include many related individual `test()`s. Both let you include a string to state what it's for, which will show up in a formatted display when you run tests.
`beforeAll()` is a block of code that executes **before** all tests in a suite run, which is handy for setup, like creating a user for testing.
`afterAll()` is a block of code that executes **after** all tests in a suite have run, which is handy for clean-up, like deleting all existing users.
Each `expect()` is a condition that must be satisfied for a test to pass. The code reads more or less like plain English; for example, `expect(user.createdAt).not.toBe(null);` means we expect `user.createdAt` not to be null. You can find all the syntax in [their documentation](https://vitest.dev/api/expect.html).
You can put code, such as variable declarations and function calls, directly under `describe()` or `test()`. It's just a matter of the scope you want.
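To see how these pieces fit together, here is a stripped-down sketch (not Vitest's actual implementation — just a toy) of how a runner collects a suite, its hooks, and its tests, and then executes them in order:

```javascript
// Toy test runner, for illustration only. It shows the execution order:
// beforeAll hooks, then every test, then afterAll hooks.
const suite = { name: '', before: [], after: [], tests: [] };
const describe = (name, fn) => { suite.name = name; fn(); };
const beforeAll = (fn) => suite.before.push(fn);
const afterAll = (fn) => suite.after.push(fn);
const test = (name, fn) => suite.tests.push({ name, fn });

describe('math', () => {
  beforeAll(() => console.log('setup'));
  test('adds', () => { if (1 + 1 !== 2) throw new Error('fail'); });
  afterAll(() => console.log('teardown'));
});

suite.before.forEach((fn) => fn()); // runs once before all tests
suite.tests.forEach(({ name, fn }) => { fn(); console.log(`passed: ${name}`); });
suite.after.forEach((fn) => fn()); // runs once after all tests
```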
--------
Assuming you already set up the tool properly (see [official guide](https://vitest.dev/guide/)), you can run the tests via `npx vitest`. A result can look like this:

For seeing test coverage, run `npx vitest run --coverage`.

As you can see, it tells you how much of each file is covered by tests, in terms of statements, branches, functions, and lines. It also shows uncovered lines.
## Why do this?
1. This provides repeatable and standardized tests that are easy to run and view. You don't risk making a mistake or forgetting a test, as can happen with manual testing. The tool also provides good feedback by showing you which tests fail and which files aren't covered.
2. It's also something that can be shared in a repository, making collaboration easier.
3. An interesting way to use this is to write tests first and let them act as technical requirements that guide development. Essentially, the tests tell developers what the software is supposed to do, and the developers write code that satisfies them.
| cychu42 |
1,413,845 | I finally got a DevOps role! Then I got laid off after less than a year. | So much has changed/happened since my last post in summer of '22 after I completed Forrest Brazeal's... | 0 | 2023-03-27T16:30:52 | https://dev.to/ablackladytechblog/i-finally-got-a-devops-role-then-i-got-laid-off-after-less-than-a-year-15lp | devops | So much has changed/happened since my last post in summer of '22 after I completed [Forrest Brazeal's Cloud Resume Challenge](https://cloudresumechallenge.dev/)! Two months after completing the challenge (through some heavy networking, luck, and perseverance), I was able to secure my first DevOps role at a great small series A startup. It was remote with great starting pay and benefits, with plenty of opportunity to learn and grow.
I was able to quickly put my skills to work, and learned so much from my teammates and others. I designed and built infrastructure, learned how to set up application monitoring, containerized applications, and got familiar with multiple tools and technologies. My new gig was everything I'd hoped and more.
Five months later, my boss (Director of Engineering) was let go (bad sign). There were rumblings about the state of the company, but I kept my head down and learned as much as possible. Things seemed to be steady, but I did have a nagging feeling that things weren't going as well as they could be on the business front.
Three months after my boss was let go, four out of six engineers on our team (including myself and my DevOps teammate) were laid off along with others on the business side (about half of the company). Ouch. I had JUST started to catch my stride in my new(ish) role, and the opportunity just evaporated.
I'm almost two months in with my job search for a new DevOps role. Wish me luck!
| ablackladytechblog |
1,413,906 | What's the point in posting here 🧐 | After publishing a six articles with a whopping 8 views...if i'm to promote those blogs externally in... | 0 | 2023-03-24T18:57:09 | https://dev.to/murchie85/whats-the-point-in-posting-here-14ai | After publishing a six articles with a whopping 8 views...if i'm to promote those blogs externally in order to gain views, what makes it different from other better sites like Hackernoon who actually help promote your work? | murchie85 | |
1,413,914 | Creating Cognito User with Auto-Incrementing ID | So there are a couple of interesting topics here. I’ve been leaning into code-less workflows with... | 0 | 2023-03-24T19:11:03 | https://www.binaryheap.com/creating-an-aws-cognito-user-with-an-auto-incrementing-id-using-aws-step-functions/ | aws, serverless, typescript, identity | So there are a couple of interesting topics here.
1. I’ve been leaning into code-less workflows with AWS Step Functions and this State Machine has nothing but native SDK integrations which include
- DynamoDB (Put, Delete, Get)
- Cognito/User Pools (AdminCreateUser)
2. I’ve run into some legacy code that requires a Username to be a bigint and I don’t want to use an RDBMS so I’m using DynamoDB to generate one for me while also being “race condition” proof
As always, if you want to jump straight to the code, here is the [Github repository](https://github.com/benbpyle/cdk-user-creation-auto-id)
The Output (Final State Machine)
--------------------------------
What I’d like to do is walk through the State Machine, touching upon the parts of each step, and stitch them together into the diagram above.
### Find Last Id
First off, the basis for the DynamoDB elements came from an article I read at [Bite-Sized Serverless](https://bitesizedserverless.com/bite/reliable-auto-increments-in-dynamodb/).
But in my scenario, I wanted to add a full user creation flow in addition to needing to be able to create a user with a BigInt as the username. Sounds strange and I’d love to be able to use a UUID, KSUID or ULID but in the system that I’m building this for, we have some legacy parts that force the BigInt value.
To avoid relying on an RDBMS and leverage DynamoDB instead, I'm using a row in the table to hold the "LastId" that can be updated and used to build these users. We could run into a race condition where two processes try to update the record at the same time, but by using optimistic locking I'm going to avoid that issue and simply force a retry of the process. DynamoDB does a really good job of this, and I've used this pattern in a lot of other places at scale with great success.
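The shape of that pattern can be sketched in plain JavaScript with an in-memory stand-in for the table. These are not real DynamoDB calls — just the read / conditional-write / retry loop the State Machine implements:

```javascript
// In-memory stand-in for the Users table; NOT real DynamoDB calls.
const table = new Map([['USERMETADATA', { LastId: 41 }]]);

// One attempt: read LastId, then write only if it is still what we read.
function tryIncrement() {
  const previous = table.get('USERMETADATA').LastId; // the "FindLastId" step
  // ...another process could update the row here...
  const current = table.get('USERMETADATA').LastId; // the ConditionExpression check
  if (current !== previous) return null; // condition failed -> caller retries
  table.set('USERMETADATA', { LastId: previous + 1 });
  return previous + 1; // the new user's id
}

// The State Machine's catch/retry back-edge, expressed as a loop.
let userId = null;
while (userId === null) {
  userId = tryIncrement();
}
console.log(userId); // 42
```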
The table itself uses the patterns that I learned about from Alex DeBrie on [Single Table Design](https://www.alexdebrie.com/posts/dynamodb-single-table/)
Using a simple `PK` and `SK` structure I’m overloading the table by putting multiple Entities in it. One such entity is the `USERMETADATA` entity that holds the `LastId` that was used in the user profile
Since I’m sticking to Native Integrations, I’m using the DynamoDB API to execute a `getItem` on the table of my choosing. That API call looks like this
```javascript
{
"TableName": "Users",
"ConsistentRead": true,
"Key": {
"PK": {
"S": "USERMETADATA"
},
"SK": {
"S": "USERMETADATA"
}
}
}
```
The sole purpose of this `getItem` is to fetch the `LastId` from the table so it can be used when building the Username and profile. The code below is the function that builds this transition
```javascript
buildFindLastId = (t: ITable): CallAwsService => {
return new CallAwsService(this, 'FindLastId', {
action: "getItem",
iamResources: [t.tableArn],
parameters: {
TableName: t.tableName,
ConsistentRead: true,
Key: {
PK: {
S: "USERMETADATA"
},
SK: {
S: "USERMETADATA"
}
}
},
service: "dynamodb",
resultSelector: {
"previousUserId.$": "$.Item.LastId.N",
"userId.$": "States.Format('{}', States.MathAdd(States.StringToJson($.Item.LastId.N), 1))"
},
resultPath: "$.context"
});
}
```
### Creating the DynamoDB User
Once the ID is fetched and it has been incremented by 1 (*note the intrinsic functions usage* `States.MathAdd`, `States.StringToJson` and `States.Format`) I can begin to put together the Transaction that will write the record into DynamoDB.
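As an aside, the intrinsic-function chain in that earlier `resultSelector` is roughly equivalent to this plain JavaScript (the `'41'` is just a sample value):

```javascript
// DynamoDB returns numbers as strings in the "N" field, so the chain
// parses the string, adds one, and formats the result back into a string.
const lastId = '41'; // $.Item.LastId.N
// ≈ States.Format('{}', States.MathAdd(States.StringToJson(lastId), 1))
const userId = String(Number(lastId) + 1);
console.log(userId); // "42"
```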
A couple of things to note
1. `attribute_not_exists` on the PK field. If that attribute value is already in place, the transaction will fail
2. The update of the `USERMETADATA` and the creation of the new user happen in a transaction so it’s an all-or-nothing. If something fails for either of the conditions I’m catching it goes back to the LastId step to try again
```javascript
buildCreateDynamoDBUser = (t: ITable): CallAwsService => {
return new CallAwsService(this, 'CreateDynamoDBUser', {
action: "transactWriteItems",
iamResources: [t.tableArn],
parameters: {
"TransactItems": [
{
"Put": {
"Item": {
PK: {
"S.$": "States.Format('USERPROFILE#{}', $.context.userId)"
},
SK: {
"S.$": "States.Format('USERPROFILE#{}', $.context.userId)"
},
FirstName: {
"S.$": "$.firstName"
},
LastName: {
"S.$": "$.lastName"
},
EmailAddress: {
"S.$": "$.emailAddress"
},
PhoneNumber: {
"S.$": "$.phoneNumber"
}
},
"ConditionExpression": "attribute_not_exists(PK)",
"TableName": t.tableName
}
},
{
"Update": {
"ConditionExpression": "LastId = :previousUserId",
"UpdateExpression": "SET LastId = :newUserId",
"ExpressionAttributeValues": {
":previousUserId": {
"N.$": "$.context.previousUserId"
},
":newUserId": {
"N.$": "$.context.userId"
}
},
"Key": {
"PK": {
"S": "USERMETADATA"
},
"SK": {
"S": "USERMETADATA"
}
},
"TableName": t.tableName
}
}
]
},
service: "dynamodb",
resultPath: JsonPath.DISCARD,
});
}
```
So I can guess what you might be thinking: that's a lot of code and Javascript/Typescript to make that API call happen. And I'd argue it's far less code than trying to do this with a Lambda. And it's cheaper as well because I'm not wasting the step of starting up a Lambda and incurring the execution cost only to run an API call. Not to mention, I'm not paying for nor waiting for a Cold Start to happen. Sure, they aren't much these days, but they aren't nothing either.
As you can see those, I’m updating the `USERMETADATA` and also creating a `USERPROFILE` for the new Username that was built and passed in
Additionally, in the case of failure, it rolls right back to FindLastId to trigger the workflow all over again. As I said above, this pattern works great for dealing with optimistic locking without the overhead incurred in other scenarios. At the volume this workflow will see, retries will be rare, and the < .25 sec delay if the workflow does have to start over is perfectly acceptable.
### Creating the Cognito User
The moment of truth has come. I’ve got the latest ID, created a new user in a table that will be used to support a User Profile in addition to storing claims that will be customized from the User Pool (that article will come soon) and now it’s time to create the user in Cognito
```javascript
buildCreateCognitoUser = (u: IUserPool): CallAwsService => {
return new CallAwsService(this, 'CreateCognitoUser', {
action: "adminCreateUser",
iamResources: [u.userPoolArn],
parameters: {
"UserPoolId": u.userPoolId,
"Username.$": "$.context.userId",
"UserAttributes": [
{
"Name": "email",
"Value.$": "$.emailAddress"
},
{
"Name": "email_verified",
"Value": "true"
}
]
},
service: "cognitoidentityprovider",
});
}
```
This part is really simple. Take the input from above and call the Cognito `adminCreateUser` API call and you will magically get a new user that is email verified that requires a force password change. Additionally, as I mentioned, you’ll be able to customize those [JWT Claims from the data in the table.](https://www.binaryheap.com/w6t7)
What I like about this too is that if the user already exists, I'm going to roll back the user creation and act like this never happened.
```javascript
buildStateMachine = (scope: Construct, t: ITable, u: IUserPool): stepfunctions.IChainable => {
const pass = new stepfunctions.Pass(scope, 'Pass');
const fail = new stepfunctions.Fail(scope, 'Fail');
let rollbackUser = this.buildRollbackUser(t);
let createCognitoUser = this.buildCreateCognitoUser(u)
let createDbUser = this.buildCreateDynamoDBUser(t);
let findLastId = this.buildFindLastId(t);
createCognitoUser.addCatch(rollbackUser, {
errors: [
"CognitoIdentityProvider.UsernameExistsException"
],
resultPath: "$.error"
})
createDbUser.addCatch(findLastId, {
errors: [
"DynamoDB.ConditionalCheckFailedException",
"DynamoDb.TransactionCanceledException"
],
resultPath: "$.error"
})
// correctLastId.next(findLastId);
rollbackUser.next(fail);
return findLastId
.next(createDbUser)
.next(createCognitoUser)
        .next(pass);
}
```
The above is the actual State Machine workflow code using the fluent CDK API. Notice that on the `createCognitoUser` `IChainable`, I'm handling the `CognitoIdentityProvider.UsernameExistsException`, which then rolls into the "rollback". You could, of course, check for whatever errors you want here.
And in the rollback, I’m simply cleaning up.
```javascript
buildRollbackUser = (t: ITable): CallAwsService => {
return new CallAwsService(this, 'RollbackUser', {
action: "deleteItem",
iamResources: [t.tableArn],
parameters: {
"TableName": t.tableName,
"Key": {
"PK": {
"S.$": "States.Format('USERPROFILE#{}', $.context.userId)"
},
"SK": {
"S.$": "States.Format('USERPROFILE#{}', $.context.userId)"
}
}
},
resultPath: "$.results",
service: "dynamodb",
});
}
```
### Wrapping Up
I love these State Machines that have zero code outside of the orchestration. Having been in tech for a long time, I’ve seen these types of things come and go but what I love about AWS Step Functions is this
1. It scales … seriously it does
2. The code to build it is done through a language I’m comfortable with. Not some DSL
3. I find that these types of solutions are easy to debug and reason about
4. The less code I write, the fewer errors I make. Simple as that
So the next time you need to piece some AWS Serverless things together, have a look at the #zerocode approach. I think you might like it | benbpyle |
1,414,062 | Governments on GitHub | Local and national governments and their GitHub organizations | 0 | 2023-03-24T20:34:17 | https://dev.to/szabgab/governments-on-github-52ee | opensource, github, government, osdc | ---
title: Governments on GitHub
published: true
description: Local and national governments and their GitHub organizations
tags: opensource, github, government, osdc
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2023-03-24 20:31 +0000
---
I am doing some research into how various organizations handle open source and contribution to open source and I just bumped into this page listing various Government agencies around the world and [their GitHub organizations](https://government.github.com/community/). It's a goldmine.
The data I collected so far can be found [here](https://osdc.code-maven.com/open-source-by-organizations/)
| szabgab |
1,414,075 | Optimize React Context Step-by-step in 4 examples | When using React components in combination with Context, you can optimize rendering by wrapping your... | 0 | 2023-03-24T21:21:46 | https://medium.com/@taala.mahdy/react-memo-and-context-d07448a7bb63 | javascript, webdev, react, nextjs | When using React components in combination with `Context`, you can optimize rendering by wrapping your React component with React.memo right after your context provider. This will prevent unnecessary re-renders.
Here are some examples of how re-rendering works with `Context`:
---
## Example 1:
```javascript
// App (ContextProvider) > A > B > C
const App = () => {
return (
<AppContext.Provider>
<ComponentA />
</AppContext.Provider>
);
};
const ComponentA = () => <ComponentB />;
const ComponentB = () => <ComponentC />;
const ComponentC = () => null;
```
In this example, if the `App` component re-renders, all components within it will also re-render, regardless of whether or not their `props` have changed.
## Example 2:
To prevent the re-rendering of all `ComponentA`, `ComponentB`, and `ComponentC` if the App component re-renders, you can use `React.memo` as follows:
```javascript
// App (ContextProvider)
const App = () => {
return (
<AppContext.Provider>
<ComponentA />
</AppContext.Provider>
);
};
const ComponentA = React.memo(() => <ComponentB />);
const ComponentB = () => <ComponentC />;
const ComponentC = () => null;
```
## Example 3:
```javascript
// App (ContextProvider) -> C
const App = () => {
const value = {a: 'hi', b: 'bye'};
return (
<AppContext.Provider value={value}>
<ComponentA />
</AppContext.Provider>
);
};
const ComponentA = React.memo(() => <ComponentB />);
const ComponentB = () => <ComponentC />;
const ComponentC = () => {
const contextValue = useContext(AppContext);
return null;
};
```
In this example, even though the provider value doesn't seem to change, `ComponentC` gets re-rendered. This is because, in JavaScript, the below assertion is `true`:
`{a: 'hi', b: 'bye'} !== {a: 'hi', b: 'bye'}`
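You can verify this reference behavior directly in plain JavaScript — two object literals with identical contents are still two distinct references, which is why React's reference comparison treats the context value as "changed" on every render:

```javascript
// Two object literals with identical contents are distinct references.
const first = { a: 'hi', b: 'bye' };
const second = { a: 'hi', b: 'bye' };

console.log(first === second); // false — different references
console.log(first === first); // true — same reference
```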
## Example 4:
The problem in Example 3 can be resolved by using the useMemo hook from React as follows.
```javascript
// App (ContextProvider)
const App = () => {
const a = 'hi';
const b = 'bye';
const value = useMemo(() => ({a, b}), [a, b]);
return (
<AppContext.Provider value={value}>
<ComponentA />
</AppContext.Provider>
);
};
const ComponentA = React.memo(() => <ComponentB />);
const ComponentB = () => <ComponentC />;
const ComponentC = () => {
const contextValue = useContext(AppContext);
return null;
};
```
With this implementation, if the App re-renders for any other reason that does not change any of the 'a' or 'b' values, the sequence of re-renders will be as follows:
`App (ContextProvider)`
This results in the desired outcome of avoiding an unnecessary re-render of `ComponentC`. By using `useMemo`, the same object reference for the `value` variable is guaranteed. Since this variable is assigned to the provider's `value`, the context is determined to be unchanged, and no consumers are notified.
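A toy sketch (not React's actual implementation) of the guarantee `useMemo` provides — as long as the dependencies are unchanged, the factory is not re-run and the same cached reference is returned:

```javascript
// Toy memoizer, for illustration only: unchanged deps => same cached object.
function createMemo() {
  let lastDeps;
  let lastValue;
  return (factory, deps) => {
    const unchanged =
      lastDeps !== undefined &&
      deps.length === lastDeps.length &&
      deps.every((dep, i) => Object.is(dep, lastDeps[i]));
    if (!unchanged) {
      lastDeps = deps;
      lastValue = factory(); // only recompute when a dependency changed
    }
    return lastValue;
  };
}

const memo = createMemo();
const v1 = memo(() => ({ a: 'hi', b: 'bye' }), ['hi', 'bye']);
const v2 = memo(() => ({ a: 'hi', b: 'bye' }), ['hi', 'bye']);
console.log(v1 === v2); // true — same cached reference, so no context change
```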
---
### Summary:
This blog post provides information on optimizing rendering when using React components with Context by using React.memo. It includes examples of how re-rendering works with Context and how to prevent unnecessary re-renders using React.memo. | mahdita97 |
1,414,320 | A to Z List of Top FREE Online Courses! Just do it!! | TL;DR: https://github.com/asadravian/top-free-courses A to Z list of top FREE courses! Hey there,... | 0 | 2023-03-25T03:51:51 | https://dev.to/asad1/a-to-z-list-of-top-free-online-courses-just-do-it-ae | beginners, tutorial, career, computerscience | TL;DR: https://github.com/asadravian/top-free-courses
A to Z list of top FREE courses!
Hey there, fellow devs and lifelong learners! If you're anything like me, you know there's always room for growth in the ever-evolving world of technology. That's why I've compiled this A to Z list of top FREE online courses that will help you level up your skills, unlock your potential, and stay ahead of the curve. The best part? You don't have to spend a single dime. So, grab a cup of coffee, and let's dive in!
Here's my handpicked selection of free online courses, spanning various tech domains and catering to learners of all experience levels. Whether you're a newbie or a seasoned pro, there's something for everyone:
- AI (Artificial Intelligence): Intro to Artificial Intelligence by Stanford University - Learn the foundations of AI from industry experts, Professors Sebastian Thrun and Peter Norvig.
<br>
- Blockchain: Blockchain Basics by the University of Buffalo - Understand the core concepts of blockchain technology and its potential impact on various industries.
<br>
- Cybersecurity: Cybersecurity for Beginners by Palo Alto Networks - Discover the fundamentals of cybersecurity and how to protect yourself from common cyber threats.
<br>
- Data Science: Data Science and Machine Learning Bootcamp with R by Udemy - Master the art of data analysis and machine learning using the R programming language.
<br>
- Ethical Hacking: Ethical Hacking 101 by Cybrary - Learn the basics of ethical hacking and how to think like a hacker to secure your systems.
<br>
- Flutter: Flutter for Beginners by Google - Get started with Flutter, the open-source UI toolkit for building natively compiled applications.
<br>
- Git: Git and GitHub Crash Course by FreeCodeCamp - Master version control and collaboration with Git and GitHub.
<br>
- HTML/CSS: HTML and CSS for Absolute Beginners by Codecademy - Build a strong foundation in web development by learning the essentials of HTML and CSS.
<br>
- IoT (Internet of Things): IoT Foundations by LinkedIn Learning - Explore the basics of IoT, including its applications, challenges, and future prospects.
<br>
- JavaScript: JavaScript Algorithms and Data Structures by FreeCodeCamp - Enhance your problem-solving skills with this comprehensive guide to JavaScript algorithms and data structures.
<br>
- Kubernetes: Kubernetes for the Absolute Beginners by Mumshad Mannambeth - Learn the basics of Kubernetes and how to deploy containerized applications.
<br>
- Linux: Linux Basics by the Linux Foundation - Master the fundamentals of Linux and boost your career prospects in the tech world.
<br>
- Machine Learning: Machine Learning by Andrew Ng - Delve into the world of machine learning with this renowned course by Stanford University professor Andrew Ng.
<br>
- Node.js: Learn Node.js by NodeSchool - Understand the basics of Node.js and build server-side applications with ease.
<br>
- OpenCV: OpenCV for Beginners by PyImageSearch - Discover the world of computer vision and image processing with OpenCV.
<br>
- Python: Python for Everybody by the University of Michigan - Learn the basics of Python programming and apply your newfound skills in various real-world scenarios.
<br>
- Quantum Computing: Introduction to Quantum Computing by IBM - Explore the fascinating world of quantum computing and its potential applications.
<br>
- React: Intro to React by the React Team - Build modern, interactive user interfaces with React, the popular JavaScript library.
<br>
- SQL: SQL for Data Science by the University of California, Davis - Learn the essentials of SQL and how to work with databases.
<br>
- TensorFlow: Introduction to TensorFlow by Google - Get started with TensorFlow, the open-source machine learning library developed by Google.
<br>
- UX Design: Intro to UX Design by Google - Gain a strong foundation in UX design and learn how to create user-centric digital experiences.
---
If you want to have EVERYTHING at ONE place, check it out:
https://github.com/asadravian/top-free-courses

---
Stay connected for any mentorship or guidance throughout your learning journey:
Muhammad Asadullah from Pakistan
https://linkedin.com/in/asadravian
https://dev.to/asadravian
https://medium.com/@asadravian
https://github.com/asadravian
https://twitter.com/asadRavian7
https://stackoverflow.com/users/5323892/asadullah
https://adplist.org/mentors/muhammad-asadullah-ravian (book 1-1 free online coaching session)
https://asad.blog
Good luck, champ! I believe in you and that YOU can do it!! | asad1 |
1,414,329 | The ACLU and EFF have lost the plot on TikTok | With discussions of a potential US government ban of TikTok increasing, civil libertarian... | 0 | 2023-03-25T12:14:39 | https://medium.com/@alexcarooo/aclu-eff-tiktok-ban-cee265df645c | tiktok, china, firstamendment, freespeech | With discussions of a potential US government ban of TikTok increasing, civil libertarian organizations like the [ACLU](https://twitter.com/ACLU/status/1630341121426505729?lang=en) and [EFF](https://www.eff.org/deeplinks/2023/03/government-hasnt-justified-tiktok-ban), [among others](https://pen.org/letter-to-congress-opposing-federal-law-to-impose-a-ban-on-tiktok/), have come to TikTok's defense. They have released impassioned defenses of the platform, arguing that a ban on TikTok would be an unconstitutional violation of the First Amendment's protections of free speech and expression by limiting the ability of Americans to express themselves.
The ACLU's historic and principled defense of [deeply unpopular, offensive, and even dangerous speech](https://www.aclu.org/issues/free-speech/rights-protesters/skokie-case-how-i-came-represent-free-speech-rights-nazis) and the EFF's "[A Declaration of the Independence of Cyberspace](https://www.eff.org/cyberspace-independence)" manifesto have earned these organizations respect and cemented their reputations as principled defenders of free expression. However, rapidly evolving technologies and geopolitical circumstances lead to novel situations and uncertainty about how to interpret these deeply held principles.
Take for instance the Berman Amendment, which the ACLU cites alongside the First Amendment in its defense of TikTok. As the ACLU [argues](https://twitter.com/ACLU/status/1630341121426505729):
> [This ban] would also gut the Berman Amendment, which was passed in 1988 to protect our right to receive information regardless of what country that information was created in. It's what keeps the U.S. government from banning a book by a Chinese author simply because it's from China and in this case it means that you can't ban a social media app simply because the app is from a Chinese company.
Ars Technica provides [further details](https://arstechnica.com/tech-policy/2023/01/us-tiktok-ban-could-be-thwarted-by-import-law-enacted-in-the-1980s/) on the context in which the Berman Amendment was passed:
> Back in 1977, Congress passed the International Emergency Economic Powers Act (IEEPA) to empower the president to impose sanctions on and oversee trade with hostile nations. The plan was to prevent average American citizens from assisting US enemies, but the law troubled publishers doing business with book authors and movie makers based in hostile nations. Those concerns led Congressman Howard Berman (D-Calif.) to propose an amendment in 1988, which passed, exempting "information and informational materials" from IEEPA and blocking presidents from regulating these materials. As technology evolved, in 1994, another IEEPA amendment specifically exempted electronic media, leading to today, when everything from a tweet to a TikTok would be free from presidential regulation under the so-called Berman amendments.
The nuance that the ACLU fails to grapple with is just how much the information and technology spaces have evolved from 1988 to 2023. In 1988, the concern was about ensuring Americans had the ability to both access information coming from and publish information to hostile nations. A free society has little to fear from authors publishing books advocating for their country's political ideologies, whether it be Communism or nationalism, while the same cannot be said in reverse. The intention of the marketplace of ideas is to allow even bad ideas to be openly argued for and, if found to be flawed, to be rejected.
A ban on TikTok would violate neither the spirit nor the letter of the First Amendment or the principle of the marketplace of ideas. Such a ban would not be based on the content that Americans are viewing on the app, but rather based on the app's utility for the Chinese government in signals intelligence gathering. In other words, it would be a [viewpoint neutral ban](https://www.mtsu.edu/first-amendment/article/1028/viewpoint-discrimination). TikTok is not being considered for a ban because of its content, but rather because of its capabilities.
TikTok is not like a book nor is it simply a repository of "cat videos". As the ACLU and EFF are well-aware, the nature of modern apps is that they can harvest sensitive information on their users, including their contacts, location data, private messages, apps installed on their devices, and countless other pieces of sensitive Personally Identifiable Information. The bulk collection of such information is what made the [NSA's bulk telephony metadata](https://irp.fas.org/nsa/bulk-215.pdf) collection so controversial: without even knowing the _contents_ of phone calls, simply knowing the metadata of calls (e.g., who you're calling, when you're calling, how often you're calling, the length of your calls, and where you're calling from) is extremely powerful. In the wrong hands, that information can be used for highly nefarious purposes.
However, unlike with the NSA's program, there are no legal protections for Americans' data scooped up by TikTok or other foreign-owned apps. While Americans have legal protections against the collection of their data by US government organizations like the FBI or NSA, with limited exceptions for [incidental data collection](https://www.intelligence.gov/foreign-intelligence-surveillance-act/1239-incidental-collection-in-a-targeted-intelligence-program), foreign citizens generally have no such protections. Furthermore, once that data has been acquired, there is no means by which it can be clawed back. Once the US government has acquired data on foreign citizens, those governments cannot force the US to delete that data any more than the US government can force the Chinese government to delete data on Americans. Even if it could, there is no means by which it could be verified that the foreign government did not simply create a copy of the data.
Consider for example China's hack of the [Office of Personnel Management](https://www.wired.com/story/china-equifax-anthem-marriott-opm-hacks-data/). The hack gave China access to the Central Personnel Data File, which includes, "all personnel data for every federal employee, every federal retiree, and up to one million former federal employees". In response to this hack, the Director of National Intelligence, James Clapper, said, "You have to kind of salute the Chinese for what they did. If we had the opportunity to do that, I don't think we'd hesitate for a minute." Additionally, from the [leaks of NSA's PRISM program](https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data), we know that American companies including Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, and Apple provided the US government with the contents of their users' communications. As [noted by the Guardian](https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data), "the 'home-field advantage' due to housing much of the internet's architecture" enabled NSA analysts to readily access a treasure trove of signals intelligence, with an ability to query the data of any user of these services.
Likewise, the popularity of TikTok with Americans and others around the world gives the Chinese government a similar home-field advantage. If governments of liberal societies are willing and able to use their home-field advantage with digital technologies to acquire such signal intelligence, it is virtually guaranteed that the governments of illiberal, authoritarian societies are willing and able to.
Furthermore, there is the issue of reciprocity. The Chinese government has long banned the websites and apps of US Internet companies. Included in these blocks are websites such as Facebook, Google, Twitter, YouTube, and WhatsApp. As James Andrew Lewis of CSIS has [noted](https://www.csis.org/analysis/ban-tiktok-again), "The disparity in the treatment of Chinese and U.S. companies is part of a larger Chinese strategy to gain technological and commercial advantage—as when Chinese companies could do business in the United States, but U.S. companies were forced to take a Chinese partner or provide access to technology." This state of affairs resembles Popper's [Paradox of Tolerance](https://en.wikipedia.org/wiki/Paradox_of_tolerance), which states, "If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them."
The ACLU, EFF, and a number of others seem to have gotten caught up in this paradox, as evidenced by a [joint letter](https://pen.org/letter-to-congress-opposing-federal-law-to-impose-a-ban-on-tiktok/) put out by a number of digital civil liberties organizations that argues, "a legislative ban on TikTok in the U.S. would set an alarming global precedent, lending legitimacy to authoritarian regimes in shutting down and excluding services they disfavor." This argument falls flat. Authoritarian regimes have already shown they are more than willing to ban services they disfavor, and democracies like India have already conducted nationwide bans of TikTok with few repercussions. Indian TikTok users simply moved to other platforms, like Instagram or the Indian-made TikTok alternative [Josh](https://www.bloomberg.com/news/articles/2022-04-13/josh-app-surges-in-india-as-tiktok-replacement).
One might argue that if the US bans TikTok based on national security concerns, China would also be able to legitimize its bans of US websites based on similar concerns. Where this argument fails is that China's bans include informational services like Wikipedia or the New York Times and privacy protecting services like DuckDuckGo and Signal. The blocks in China are so broad that even the architect of Chinese "Great Firewall" was [forced to employ a VPN](https://www.nytimes.com/2016/04/07/world/asia/china-internet-great-firewall-fang-binxing.html) to complete a presentation at his alma mater. What this demonstrates is that China's decision to ban foreign websites is based on the fact that these sites provide users with the ability to freely access information and points of view contrary to the party line.
This argument also suffers from a degree of whataboutism. It fails to recognize that in the US there is a clear separation between government and private companies. While the US government can request and incentivize private companies to participate in programs like PRISM, it cannot readily force them to, as is evidenced by [Apple's resistance to joining PRISM until 2012](https://techcrunch.com/2013/06/17/apple-nsa/), while Microsoft joined in 2007. On the other hand, consider the Chinese government's treatment of a number of Chinese tech giants like Jack Ma, many of whom the government has [simply disappeared](https://www.bbc.com/news/business-64781986) for periods of time. Imagine the outrage if the US government one day extrajudicially abducted Bill Gates, Mark Zuckerberg, or Tim Cook until they acquiesced to the demands of the NSA. Such a scenario is completely unthinkable outside of authoritarian nations like China and [Saudi Arabia](https://www.nytimes.com/2017/11/04/world/middleeast/saudi-arabia-waleed-bin-talal.html).
In addition to the aforementioned arguments, there are a couple of less serious arguments that can be quickly dispensed with.
There is the argument that none of this matters because TikTok is largely just a bunch of teenagers posting silly videos. What this argument fails to recognize is that these teenagers will one day be the leaders of the country. Providing a foreign adversary with the opportunity to harvest the data on hundreds of millions of young Americans with little effort runs the risk of providing opportunities for China to influence US persons in the future using data collected when they were in their youth, such as embarrassing or sensitive messages, photos, and videos shared privately on the platform. Also, this is a massive platform consisting of all sorts of people. It is no longer a platform of a bunch of teenagers, even if they are a disproportionate share of its user base.
Another spurious claim, [made by Bruce Schneier](https://www.schneier.com/essays/archives/2023/02/why-the-u-s-should-not-ban-tiktok.html), is that what TikTok is doing is little different than what Meta's Facebook and Instagram are doing. On the surface, this is true. Meta and TikTok's owner ByteDance are both companies trying to run profitable businesses. The primary difference is that TikTok provides a foreign adversary with the capability of accessing this data, while Meta does not, and the secondary difference is that, as was noted earlier, China long ago banned Facebook and Instagram itself. Schneier and others have also argued that we should not be focusing on banning TikTok and instead focusing on shoring up US data privacy regulations. This is a false dilemma. Banning TikTok and shoring up US data privacy regulations can both be done.
Yet another [weak claim](https://www.vice.com/en/article/epv48n/banning-tiktok-is-unconstitutional-ludicrous-and-a-national-embarrassment) made is that there is no "proof that the Chinese government has used TikTok’s data for intelligence purposes." The Chinese government is not dumb or impatient and is well-aware of the US intelligence community's capabilities. It is either accessing the data in a manner it is absolutely certain the US government is not capable of becoming aware of, or it is simply amassing as much data as possible for access at a later date. We do not have to wait for proof that the Chinese government has harvested the data of hundreds of millions of Americans to ban it, and banning it at that point would not get that data back. We know the intentions and capabilities of the Chinese government. That alone is sufficient.
Perhaps the most unserious claim, [made by Rep. Jamaal Bowman](https://www.nytimes.com/2023/03/22/nyregion/jamaal-bowman-tiktok.html), is that discussion of a ban of TikTok is a result of racial animus toward China. A social media company based in Taiwan, Japan, South Korea, Thailand, Malaysia, Germany, the UK, France, Italy, Brazil, Argentina, India, Sri Lanka, Nigeria, Ethiopia, or any other free society where companies are not beholden to totalitarian, hostile, single-party states would not be facing this kind of scrutiny. On the other hand, if Russia-based VK or Yandex achieved TikTok-levels of popularity, it is absolutely guaranteed there would be a similar discussion of banning such a platform among American policymakers.
On the other hand, a serious point is the nature of _how_ TikTok should be banned. There are a number of ways this could be accomplished. All else being equal, it would be better to ban it in such a way that is less restrictive of what Americans are allowed to do with their own devices. That means Americans should not be banned from using TikTok on the web — we definitely do not want to create a "Great Firewall of America" — or installing it on their phones via methods like sideloading.
America's Internet must remain totally free of censorship, as it has always been, and the threat posed by TikTok is insufficient to justify such a change. Instead, banning the app from the app stores in the US should be sufficient to achieve the desired goals without being unnecessarily restrictive. Those Americans who truly wish to access TikTok by jailbreaking their iPhones or sideloading the app to their Android devices would remain free to do so. Erecting this barrier to entry should be sufficient to achieve the desired goal, particularly since jailbreaking an iPhone is significantly more involved than sideloading an Android app, and iPhones are [ubiquitous](https://www.axios.com/2021/10/14/teen-iphone-use-spending-habits) among American teens. If it is insufficient, then at that point the restrictions required to achieve the goal would be outweighed by their costs.
Balancing the needs of national security and free expression is at times tricky. Cases like the incidental, warrantless collection of Americans' data by the NSA are examples of the difficult tension between ensuring national security and protecting domestic civil liberties. The case of TikTok, on the other hand, is not a tricky one. A ban on TikTok is viewpoint neutral and has minimal ramifications for free expression. Americans have plenty of other platforms on which they can post their cat videos that are not beholden to the whims of hostile, illiberal nations. | alexcaro |
1,414,487 | Talking Algorithm: Exploration of Intelligent Web Crawlers | Introduction "If I had asked people what they wanted, they would have said faster horses"... | 0 | 2023-03-25T06:57:36 | https://dev.to/tikazyq/talking-algorithm-exploration-of-intelligent-web-crawlers-2kk2 | ai, machinelearning | ## Introduction
> "If I had asked people what they wanted, they would have said faster horses" -- Henry Ford
Today is the era of artificial intelligence. Whether it is ChatGPT or the various intelligent applications that follow it, many people can glimpse an approaching sci-fi world that was almost unimaginable a few years ago. In the field of web crawlers, however, artificial intelligence does not yet seem to be involved much. It is true that crawlers, as an "ancient" technology, have enabled many technical industries such as search engines, news aggregation, and data analysis over the past 20 years, but we have not seen obvious technological breakthroughs: crawler engineers still rely mainly on techniques such as XPath and reverse engineering to obtain web data automatically. However, with the development of artificial intelligence and machine learning, crawler technology can theoretically achieve "self-driving". This article introduces, from multiple perspectives, the current status and possible future directions of the so-called **intelligent crawler** (intelligent, automated data-extraction crawler technology).
## Current Web Crawling Technology
A web crawler is an automated program used to obtain data from the Internet or other computer networks. Crawlers use scraping techniques to automatically visit websites and collect, parse, and store the information found there. This information can be structured or unstructured.
Crawler technology in the traditional sense mainly includes the following modules or systems:
1. **Network request** : initiate an HTTP request to a website or web page to obtain data such as HTML;
2. **Web page parsing** : parse HTML into a structured tree, and obtain target data through XPath or CSS Selector;
3. **Data storage** : store the parsed structured data, which can be in the form of a database or a file;
4. **URL management** : manage the URL list to be crawled and the URL list that has been crawled, such as URL resolution and request for paging or list pages.

These are the basic modules of crawler technology. A large crawler system also needs production-environment modules such as task scheduling, error management, and log management. The author's [Crawlab](https://www.crawlab.cn) is a crawler management platform for enterprise-level production environments. In addition, countering anti-crawling measures such as CAPTCHAs or IP blocking usually requires additional modules, such as CAPTCHA recognition and IP proxies.
However, at present, most of the effort in developing crawler programs goes into webpage parsing, which consumes a lot of manpower. Every crawler must parse HTML to obtain webpage data, but the layout, format, style, and content of different websites differ, so each website and webpage needs its own parsing logic, which greatly increases the cost of manual coding. Some general-purpose crawlers, such as search-engine crawlers, do not need much parsing logic, but such crawlers usually cannot focus on extracting data for specific topics. Therefore, to reduce the cost of hand-written code, it is best to extract webpage data automatically with little or no parsing logic. This is the main goal of intelligent crawlers.
## Known Implementations
It is not easy to implement intelligent web page extraction, but there have already been some attempts at intelligent crawlers. [GNE (GeneralNewsExtractor)](https://github.com/GeneralNewsExtractor/GeneralNewsExtractor), developed by Kingname, is an open-source implementation of webpage article extraction based on [text and punctuation density extraction algorithms](https://kns.cnki.net/KCMS/detail/detail.aspx?dbcode=CJFQ&dbname=CJFDLAST2019&filename=GWDZ201908029&v=MDY4MTRxVHJXTTFGckNVUkxPZmJ1Wm5GQ2poVXJyQklqclBkTEc0SDlqTXA0OUhiWVI4ZVgxTHV4WVM3RGgxVDM=). [GerapyAutoExtractor](https://github.com/Gerapy/GerapyAutoExtractor), developed by Cui Qingcai, implements list-page recognition based on a [list-clustering and SVM algorithm](https://cuiqingcai.com/9531.html). [Octoparse](https://www.octoparse.com/), a commercial client application, has developed an automatic list-recognition module. [Diffbot](https://www.diffbot.com/) is an API-based intelligent webpage recognition platform with a claimed accuracy rate of 99%. Known intelligent crawler implementations are currently based mainly on the HTML structure and content of web pages, as with GNE and GerapyAutoExtractor; for commercial software such as Octoparse and Diffbot, the specific implementation methods are not public.
## Explore List Page Recognition
The accuracy of article-text extraction is already very high, with many implementations and applications. Here we focus mainly on the identification of list pages, which makes up much of the webpage-parsing work in crawlers.
We can reason from experience about how we recognize the desired content. Humans are visual animals. When we see a web page with a list of articles, we immediately recognize the article list, as shown in the figure below. But how exactly do we recognize it? We naturally group list items of the same kind together, and so we quickly realize that we are looking at a list page. And why do these list items look similar in the first place? Because the child elements within them are also similar, we naturally tie them together: the individual sub-elements add up to a list item, and our brains automatically group the list items into a list. This is the process of list-page recognition.
<img src="https://codao.crawlab.cn/images/2023-03-25-062942.png" style="width:640px">
Based on this analysis, the clustering algorithms of machine learning naturally come to mind. All we need to do is extract features from each node of the webpage and then use a clustering algorithm to group nodes of the same category. Feature selection requires some thought: rather than looking at a single HTML node in isolation, we need to relate it to other nodes when extracting features, so that nodes fall into distinct categories. We can then pick out the desired list based on the aggregate information of each node cluster.
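As a toy sketch of this clustering intuition (not Webspot's actual algorithm), the following example groups nodes by a deliberately crude structural signature and picks the largest cluster as the list, using only the Python standard library. The HTML fragment is invented for the example; real systems use far richer node features and proper clustering algorithms from libraries like sklearn.

```python
from xml.etree import ElementTree

HTML = """
<div>
  <h1>Blog</h1>
  <ul>
    <li><a href="/1">Post one</a><span>Jan 1</span></li>
    <li><a href="/2">Post two</a><span>Jan 2</span></li>
    <li><a href="/3">Post three</a><span>Jan 3</span></li>
  </ul>
  <p>Footer text</p>
</div>
"""

def node_signature(node):
    # A crude feature: the node's own tag plus the sorted multiset of
    # its children's tags, so structurally similar nodes collide.
    return (node.tag, tuple(sorted(child.tag for child in node)))

def find_list_items(html):
    root = ElementTree.fromstring(html)
    clusters = {}
    for node in root.iter():
        clusters.setdefault(node_signature(node), []).append(node)
    # Ignore childless nodes (they all look alike) and take the largest
    # remaining cluster as the list items.
    candidates = [nodes for sig, nodes in clusters.items() if sig[1]]
    return max(candidates, key=len)

items = find_list_items(HTML)
print([item.find("a").text for item in items])
# ['Post one', 'Post two', 'Post three']
```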
Of course, actually implementing such an algorithm in code is no easy task. Each HTML node must be modeled and vectorized, and a tree-like graph built from them, which is very tedious work. Fortunately, the author has used sklearn, networkx, and other libraries to implement a basic list-page recognition system, [Webspot](https://github.com/crawlab-team/webspot), which automatically recognizes list elements on a list page and visually displays the recognition results, as shown in the figure below.

For most list pages, Webspot's recognition works well. Although it is not as accurate as Diffbot, it can still accurately identify pages that are not overly complicated.
So why reinvent the wheel when a list-page identification solution like Diffbot already exists? One important reason is that commercial, high-accuracy products such as Diffbot do not directly provide reusable extraction rules such as XPath or CSS Selectors, and automatically identified extraction rules are exactly what we need. By integrating these rules into open-source crawlers such as Scrapy and Colly, the cost of data capture can be greatly reduced. This is a feature Webspot already offers users: it not only identifies list-page elements and their corresponding fields, but also provides extraction rules, as shown in the figure below.

With such extraction rules, one round of automatic identification is enough to extract data from all similar web pages.
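For illustration, here is a sketch of what reusing such a rule can look like. The rule format and selectors below are invented for this example and are not Webspot's real output format:

```python
from xml.etree import ElementTree

# A hypothetical extraction rule, as one round of automatic
# identification might produce it (selectors invented for the example).
RULE = {"item": ".//li", "title": "a"}

PAGE_A = "<ul><li><a>Alpha</a></li><li><a>Beta</a></li></ul>"
PAGE_B = "<ul><li><a>Gamma</a></li><li><a>Delta</a></li></ul>"

def apply_rule(html, rule):
    # Apply the saved rule: locate each list item, then its title field.
    root = ElementTree.fromstring(html)
    return [item.find(rule["title"]).text for item in root.findall(rule["item"])]

# The same rule, identified once, extracts data from both similar pages.
print(apply_rule(PAGE_A, RULE))  # ['Alpha', 'Beta']
print(apply_rule(PAGE_B, RULE))  # ['Gamma', 'Delta']
```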
Currently, Webspot is still in the early stages of development, and there should be more new features and algorithm development and optimization in the future.
## Future Development
Intelligent crawlers are the equivalent of autopilot for web pages, allowing crawlers to obtain the desired data or information as required without much manual work. This is an ideal technology for many data consumers and crawler engineers. However, intelligent crawlers are not yet mature, and existing implementations and techniques are relatively simple. In the future, technologies such as deep learning and reinforcement learning may improve the recognition ability of intelligent crawlers. In addition, combining graph theory and artificial intelligence with computer-vision techniques may allow intelligent crawlers to achieve breakthroughs in accuracy. The author will continue exploring intelligent crawlers through the Webspot project to tackle the cost of data extraction. If you are interested in the development of intelligent crawlers, feel free to contact me on GitHub "tikazyq". | tikazyq |
1,414,599 | Introduction To HTML | I wrote this article to help newbies that are about to enter into the world of web design. You may... | 0 | 2023-03-25T10:01:21 | https://dev.to/guonnie/introduction-to-html-11nb | webdev, javascript, beginners, tutorial |
I wrote this article to help newbies that are about to enter into the world of web design.
You may think it's difficult, but it's not. It's very easy.
Just follow through to the end.
[Click to read!](https://link.medium.com/plW67Pu6ryb)
| guonnie |
1,423,116 | Python map(), filter() and reduce() | map(), filter(), and reduce() are built-in Python functions that are used to perform data... | 0 | 2023-04-02T04:17:28 | https://dev.to/max24816/python-map-filter-and-reduce-1ldj | python, programming, tutorial, codenewbie | [map(), filter(), and reduce()](https://www.programdoc.com/python/map-filter-reduce) are Python functions used to perform data manipulation on iterables such as lists, tuples, and sets. map() and filter() are built-ins, while reduce() lives in the functools module.
## Lambda function
A [Python lambda is an anonymous function](https://www.programdoc.com/python/lambda) declared as a single-line expression. It can be passed as an argument in a function call. Using lambdas with map, filter, and reduce makes each expression easier to write in a single statement and easier to understand.
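A quick sketch of the equivalence between a named function and a lambda:

```py
# A named function...
def add(x, y):
    return x + y

# ...and its single-expression lambda equivalent.
add_lambda = lambda x, y: x + y

print(add(2, 3))         # 5
print(add_lambda(2, 3))  # 5

# Lambdas are most useful when passed directly as arguments:
names = ["Charlie", "alice", "Bob"]
print(sorted(names, key=lambda s: s.lower()))  # ['alice', 'Bob', 'Charlie']
```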
## Map function
_Syntax_
```py
map(function, iterable)
```
The map() function sends each value from the iterable to the function, which returns the processed value. map() returns an iterator containing the results of the processed values.
_Example:_
```py
# square the values of a list using map()
numbers = [1, 2, 3, 4, 5]
squared_numbers = list(map(lambda i: i ** 2, numbers))
print(squared_numbers)
# Output: [1, 4, 9, 16, 25]
```
## filter function
_Syntax_
```py
filter(function, iterable)
```
The filter() function is similar to map(), with one difference: the iterator it builds depends on the function's return value. It returns an iterator containing the elements of the sequence for which the given function returns True.
_Example:_
```py
# filter even numbers from a list using filter()
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
even_numbers = list(filter(lambda i: i % 2 == 0, numbers))
print(even_numbers)
# Output: [2, 4, 6, 8, 10]
```
## reduce function
_Syntax_
```py
reduce(function, iterable)
```
The reduce() function takes the first two values from the iterable and passes them as arguments to the function; it then takes the function's return value and passes it, together with the next value from the iterable, back into the function. This continues until the end of the sequence. Finally, it returns a single value.
_Example:_
```py
# calculate the product of a list of numbers using reduce()
from functools import reduce
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
product = reduce(lambda i, j: i * j, numbers)
print(product)
# Output: 3628800
```
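## Combining map, filter and reduce

Because map() and filter() each return a lazy iterator, the three functions chain together naturally. A small combined example:

```py
# square every number, keep only the squares greater than 10,
# then add up what remains: 16 + 25 = 41
from functools import reduce
numbers = [1, 2, 3, 4, 5]
squares = map(lambda i: i ** 2, numbers)   # 1, 4, 9, 16, 25 (lazily)
big = filter(lambda i: i > 10, squares)    # 16, 25 (lazily)
total = reduce(lambda i, j: i + j, big)
print(total)
# Output: 41
```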
| max24816 |
1,414,692 | A Step-by-Step Guide to Creating and Valuing Your Own Cryptocurrency on Ethereum using ERC20 Tokens and add USD value to it | In the world of blockchain and cryptocurrencies, creating your own digital asset has become a popular... | 0 | 2023-03-28T15:38:32 | https://cryptoloom.xyz/tokenize-your-dream-a-step-by-step-guide-to-creating-and-valuing-your-own-cryptocurrency-on-ethereum-using-erc20-tokens-and-liquidity-pools-on-uniswap/ | general, cryptocurrency, erc20tokens, ethereum | ---
title: A Step-by-Step Guide to Creating and Valuing Your Own Cryptocurrency on Ethereum using ERC20 Tokens and add USD value to it
published: true
date: 2023-03-25 11:00:23 UTC
tags: General,cryptocurrency,erc20tokens,ethereum
canonical_url: https://cryptoloom.xyz/tokenize-your-dream-a-step-by-step-guide-to-creating-and-valuing-your-own-cryptocurrency-on-ethereum-using-erc20-tokens-and-liquidity-pools-on-uniswap/
---
In the world of blockchain and cryptocurrencies, creating your own digital asset has become a popular trend among businesses and individuals. With the rise of Ethereum and its ERC20 token standard, it has become comparatively easy to create your own cryptocurrency. In this article, we will cover the process of creating your own cryptocurrency on Ethereum using ERC20 tokens and how to assign value to it by creating a liquidity pool on Uniswap.
## What is Ethereum?
Ethereum is the second most popular and largest blockchain network after Bitcoin. Ethereum has a lot more functionality than Bitcoin, such as smart contract capabilities. The network also hosts a large number of decentralized applications (dApps) that run on its blockchain. The native currency of the Ethereum network is Ether (ETH).
## What are ERC20 Tokens?
ERC20 tokens are digital assets developed on the Ethereum blockchain using the ERC20 token standard. They are used as a means of exchange, utility, or security on the Ethereum network. ERC20 tokens are fungible, meaning each token is interchangeable with another token of the same type.
ERC20 tokens have gained widespread adoption and popularity, especially within the decentralized finance (DeFi) ecosystem. Many of the DeFi protocols and dApps run on the Ethereum network and use ERC20 tokens as their primary currency.
## Creating your own Cryptocurrency on Ethereum using ERC20 Tokens
Creating your own cryptocurrency on Ethereum is a straightforward process. To get started, you need to have a decent understanding of Solidity, which is the programming language used to develop smart contracts on the Ethereum network.
### Step 1: Launch a New Ethereum Account
The first step in creating your own cryptocurrency on Ethereum is to launch a new Ethereum account or wallet. You can either use an existing Ethereum account or launch a new one using a software wallet like MetaMask or MyEtherWallet.
### Step 2: Create the Smart Contract
The ERC20 token contract contains the predefined set of rules for creating a digital asset that follows the ERC20 standard. The ERC20 token standard defines the basic functionalities that an ERC20 token should have, such as the total supply of the token, decimals, symbol, and name.
To create your ERC20 token contract, follow these steps:
1. Open up the Remix IDE and create a new file.
2. Copy the ERC20 Token Smart Contract from OpenZeppelin or any other reputable source.
3. Paste the Smart Contract code into a new file.
4. Compile the Smart Contract code to ensure that it is error-free.
5. Deploy the Smart Contract code to the Ethereum network using Remix.
If you are not familiar with programming, it is advisable to seek the help of professional developers to create your ERC20 token contract.
### Step 3: Deploy the Smart Contract to the Ethereum Network
Once you have completed the Smart Contract code, you need to deploy it to the Ethereum network. Before deploying the contract, you have to set up the gas fees and the gas limit for executing the transaction.
### Step 4: Customize your Token
After deploying the Smart Contract to the Ethereum network, you can customize your cryptocurrency by adjusting the terms set forth in the contract. These terms include the total supply of the token, number of decimals, symbol, and name.
### Step 5: Distribute your Token
Once you have customized your token, you can then distribute it to potential investors, users, or customers. This will require you to create a smart contract that controls the distribution of your tokens.
## Creating a Liquidity Pool on Uniswap
Creating a liquidity pool on Uniswap is a great way to assign value to your cryptocurrency. Uniswap is a decentralized exchange platform that uses an automated market maker (AMM) system to provide liquidity to its users. This system is powered by smart contracts running on the Ethereum network.
### Step 1: Launch Uniswap
First, you need to launch the Uniswap website and connect your wallet to the Uniswap platform. You can use any wallet that supports the Ethereum blockchain, like MetaMask or MyEtherWallet.
### Step 2: Add your Cryptocurrency to Uniswap
The next step is to add your cryptocurrency to Uniswap. To do this, follow these steps:
1. Click on the "Pool" tab on Uniswap.
2. Click on the "Add Liquidity" button.
3. Select your cryptocurrency token from the dropdown menu.
4. Enter the amount of cryptocurrency you want to add to the liquidity pool.
5. Enter the equivalent amount of ETH for your cryptocurrency.
6. Click on the "Approve" button to approve the transaction.
### Step 3: Assign a Price to your Cryptocurrency
After adding your cryptocurrency to the liquidity pool, the Uniswap platform will automatically assign a price to your cryptocurrency based on the demand and supply of the token. This price will also be used to determine the trading volume of your cryptocurrency on the Uniswap platform.
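As an illustration of how that price follows the pool's reserves, Uniswap v2-style pools use the constant-product rule x · y = k. The Python sketch below uses entirely hypothetical reserve numbers and ignores the 0.3% swap fee:

```python
# Hypothetical pool: 10 ETH paired with 50,000 units of your token.
eth_reserve = 10.0
token_reserve = 50_000.0
k = eth_reserve * token_reserve  # constant product (fee ignored for simplicity)

# The spot price of the token (in ETH) is the ratio of the reserves.
price_before = eth_reserve / token_reserve  # 0.0002 ETH per token

# A trader buys tokens with 1 ETH: the ETH reserve rises and the
# token reserve falls so that x * y stays equal to k.
eth_reserve += 1.0
token_reserve = k / eth_reserve

price_after = eth_reserve / token_reserve
print(price_before, price_after)  # buying the token pushes its price up
```

This is why demand moves the assigned price: every buy shrinks the token reserve relative to ETH, raising the quoted ratio.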
### Step 4: Provide Liquidity to the Pool
Now that you have assigned a price to your cryptocurrency, you need to provide liquidity to the Uniswap pool. This will enable other users to trade your cryptocurrency on the Uniswap platform.
To provide liquidity to the pool, follow these steps:
1. Click on the "Pool" tab on Uniswap.
2. Click on the "Add Liquidity" button.
3. Select your cryptocurrency token from the dropdown menu.
4. Enter the amount of cryptocurrency you want to add to the liquidity pool.
5. Enter the equivalent amount of ETH for your cryptocurrency.
6. Click on the "Supply" button to supply the liquidity to the pool.
### Step 5: Start Trading
Now that you have created a liquidity pool on Uniswap, your cryptocurrency is ready for trading. Users can now trade your cryptocurrency on the Uniswap platform, and you can earn trading fees for providing liquidity to the pool.
## Conclusion
Creating your own cryptocurrency on Ethereum using ERC20 tokens is an exciting and straightforward process. With the right knowledge and resources, anyone can create their own digital asset and assign value to it by creating a liquidity pool on Uniswap. We hope this guide has provided you with valuable insights on how to create your own cryptocurrency on Ethereum using ERC20 tokens and creating a liquidity pool on Uniswap. | cryptoloom |
1,414,702 | Best smm panel service provider | An SMM panel, or Social Media Marketing panel, is a web-based platform that provides various services... | 0 | 2023-03-25T11:51:24 | https://dev.to/smmdepth/best-smm-panel-service-provider-5129 | bestsmmpanel, smmpanel, bangladeshsmmpanel, smmdepth | **[An SMM panel](https://smmdepth.com)**, or Social Media Marketing panel, is a web-based platform that provides various services to help businesses and individuals boost their social media presence. SMM panels offer a range of social media services such as likes, followers, views, shares, and comments on popular social media platforms such as Facebook, Instagram, Twitter, TikTok, YouTube, and more.
SMM panels usually have a dashboard that allows users to select the desired service and enter the necessary information such as the username or URL of their social media account, the number of likes or followers they want to buy, and the payment information.
SMM panels are used by businesses and individuals to increase their social media presence and engagement, and ultimately, drive more traffic and sales to their website or products. However, it's important to use SMM panels responsibly and not engage in any unethical or spammy practices that could harm your social media reputation.
| smmdepth |
1,414,759 | Terms used in Reinforcement Learning | Every AI/ML/Data Science enthusiast knows the definition of Reinforcement Learning - it is a... | 0 | 2023-03-25T13:43:21 | https://dev.to/anurag629/terms-used-in-reinforcement-leaning-56lo | reinforcementlearning, machinelearning, datascience, deeplearning | Every AI/ML/Data Science enthusiast knows the definition of Reinforcement Learning - it is a feedback-based machine learning technique in which an agent learns to behave in an environment by performing actions and observing their outcomes. For each good action, the agent receives positive feedback, and for each bad action, it receives negative feedback or a penalty. However, many are not familiar with the specific terms used in this definition. Let me explain them with an example.
Let's consider the example of a robot that is learning to navigate a maze. In this scenario:
🕵️Agent: The robot is the agent, which is the decision-maker that interacts with the environment. The agent can perceive the environment and take actions to achieve its goal.
🧀🐁 Environment: The maze is the environment, which is the context in which the agent operates. The environment can provide feedback to the agent in the form of rewards or punishments.
🎬 Actions: The robot can take different actions such as moving forward, turning left, or turning right. These actions are the choices available to the agent.
🙂Feedback: The environment provides feedback to the agent based on its actions. The feedback can be positive, negative, or neutral.
🏆 Reward: The agent receives a reward when it takes an action that leads it closer to its goal. For example, if the robot moves towards the exit of the maze, it may receive a positive reward.
🚫 Punishment: The agent receives punishment when it takes an action that leads it further away from its goal. For example, if the robot hits a wall, it may receive a negative reward.
📜 Policy: The policy is the strategy used by the agent to select actions based on its current state. The goal of the agent is to learn an optimal policy that maximizes the long-term reward. For example, the robot may learn to follow the left wall of the maze to reach the exit.
📍 State: The state is a representation of the environment at a particular time, which includes information such as the location of the agent and other relevant information.
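These terms can be tied together in a toy Python sketch. It is purely illustrative (no learning happens here): an agent in a one-dimensional "corridor maze" follows a fixed policy, the environment returns a new state plus a reward or punishment, and the loop runs until the goal state is reached:

```python
# Toy environment: a corridor of positions 0..4; the exit (goal) is at 4.
GOAL = 4

def step(state, action):
    """Environment: apply an action and return (new_state, reward)."""
    if action == "forward":
        new_state = state + 1
    else:  # turning back
        new_state = max(state - 1, 0)
    if new_state == GOAL:
        return new_state, +10   # reward: the agent reached the exit
    if new_state == 0:
        return new_state, -1    # punishment: bumped into the back wall
    return new_state, 0         # neutral feedback

def policy(state):
    """Agent's strategy: always move toward the exit."""
    return "forward"

state, total_reward = 0, 0
while state != GOAL:
    action = policy(state)               # agent chooses an action
    state, reward = step(state, action)  # environment responds with state + feedback
    total_reward += reward
print(state, total_reward)
```

In real reinforcement learning the policy would be updated from the rewards instead of being hard-coded.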
#datascience #machinelearning #ai #ml #reinforcementlearning | anurag629 |
1,414,767 | [FEEDBACK] - What problems do you run into when RECEIVING or PAYING money ABROAD? | Folks, I'd like your help giving me feedback on problems, causes, and experiences. It's for... | 0 | 2023-03-25T14:02:45 | https://dev.to/abraaoteixeira/feedback-quais-problemas-voces-encontram-na-hora-de-receber-ou-pagar-no-exterior-1hci | Folks, I'd like your help giving me feedback on problems, causes, and experiences. It's for ideating a solution.
| abraaoteixeira | |
1,414,809 | How to implement Dependency Injection in Laravel Livewire | I like to use Dependency Injection (DI) in my code. It helps me to keep my code clean and reduces... | 0 | 2023-03-26T10:32:53 | https://dev.to/iamkirillart/how-to-implement-dependency-injection-in-laravel-livewire-con | php, laravel, livewire, programming | I like to use Dependency Injection (DI) in my code. It helps me to keep my code clean and reduces code coupling.
## What is Dependency Injection
Dependency Injection is a design pattern that allows for the separation of concerns by removing the responsibility of creating objects and their dependencies from the class that uses them. Instead, these dependencies are provided or injected into the class by a third-party or a container.
Here's an example in PHP:
```php
class UserService {
private $userRepository;
public function __construct(UserRepository $userRepository) {
$this->userRepository = $userRepository;
}
public function getUsers() {
return $this->userRepository->findAll();
}
}
class UserRepository {
public function findAll() {
// fetch users from the database
}
}
```
In this example, the `UserService` requires a `UserRepository` object to fetch users from the database. Instead of creating the `UserRepository` object inside the `UserService`, we inject it via the constructor. This allows for better separation of concerns and makes the `UserService` more flexible, as we can easily swap out the `UserRepository` implementation without changing the UserService code. You can read more about how it works in Laravel [in documentation](https://laravel.com/docs/10.x/container).
## How to use DI with Livewire
When we use Livewire components, we can't use the `__construct` method inside them, because Livewire itself needs the constructor to receive the component's `$id`.
I started to research and found this [thread on GitHub](https://github.com/livewire/livewire/issues/380). Some developers recommend using the `mount()` method like this:
```php
use Livewire\Component;
use Psr\Log\LoggerInterface;
class Foo extends Component
{
protected LoggerInterface $logger;
public function mount(LoggerInterface $logger)
{
$this->logger = $logger;
}
public function render()
{
$this->logger->info('component rendered');
return view('livewire.foo');
}
public function action()
{
$this->logger->info('action triggered');
}
}
```
The problem with `mount()` DI is that it doesn't work in some cases. For example, when you click on the button and call `action()`, your `$this->logger` will be empty. This happens because the `mount()` method isn't called when the user interacts with the component.
The good news for us is that in [version 2.6 of Livewire](https://laravel-news.com/livewire-v2-6-0) the developers added the `boot()` hook. This hook is called every time you use your component. Here's how you can use DI inside your Livewire component:
```php
class YourComponent extends Livewire\Component
{
private SomeService $someService;
private OneMoreService $oneMoreService;
public function boot(
SomeService $someService,
OneMoreService $oneMoreService
)
{
$this->someService = $someService;
$this->oneMoreService = $oneMoreService;
}
}
```
| iamkirillart |
1,422,721 | MatchTheRegex | MatchTheRegex 100 points AUTHOR: SUNDAY JACOB NWANYIM Description How about trying to... | 0 | 2023-04-01T14:32:08 | https://dev.to/brunoblaise/matchtheregex-2pii | regex, cybersecurity, picoctf, beginners | # MatchTheRegex
**100 points**
AUTHOR: SUNDAY JACOB NWANYIM
Description
How about trying to match a regular expression
The website is running here.
*Note:* This challenge launches an instance on demand.
___
Looking inside the script tag we can see:
```html
<script>
function send_request() {
let val = document.getElementById("name").value;
// ^p.....F!?
fetch(`/flag?input=${val}`)
.then(res => res.text())
.then(res => {
const res_json = JSON.parse(res);
alert(res_json.flag)
return false;
})
return false;
}
</script>
```
The regex is supposed to be `^p.....F!?`. My first and only guess was `picoCTF`; when I entered it, the site gave the flag:
> picoCTF{your flag}
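You can sanity-check the guess offline. The challenge's snippet is JavaScript, but the pattern behaves the same in Python's `re` module:

```python
import re

pattern = r"^p.....F!?"  # 'p', any five characters, 'F', then an optional '!'

print(bool(re.match(pattern, "picoCTF")))   # p + icoCT + F matches
print(bool(re.match(pattern, "picoCTF!")))  # trailing '!' is optional
print(bool(re.match(pattern, "pCTF")))      # too short to supply five characters
```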
| brunoblaise |
1,422,741 | ASDF is a single version manager for all languages | I had to manage multiple versions of NodeJS, so I installed Node Version Manager. Then I had to also... | 0 | 2023-04-01T15:22:41 | https://dev.to/jordanbravo/asdf-is-a-single-version-manager-for-all-languages-a8m | programming | I had to manage multiple versions of NodeJS, so I installed Node Version Manager.
Then I had to also manage multiple versions of Python, so I installed PyEnv.
Then I had to manage multiple versions of Rust, Golang, Lua...
There must be a better way.
Enter `asdf`, a single version manager for all programming languages.
Homepage: https://asdf-vm.com/
Caveat: It’s currently only available for MacOS and Linux.
 | jordanbravo |
1,423,038 | No code ML - Make accurate predictions with Amazon SageMaker Canvas | Empowering you to make decisions with no code ML Organizations are becoming more... | 0 | 2023-04-02T11:36:45 | https://dev.to/aws-heroes/make-predictions-with-no-code-ml-in-amazon-sagemaker-canvas-as-a-business-user-3af2 | aws, machinelearning, analytics, business | ## Empowering you to make decisions with no code ML
Organizations are becoming more data-driven. No matter where your company is in the data-analytics maturity cycle, you may be thinking about your first use case. Whether you work for a startup, in consulting, government, or banking, you may be helping your leaders decide to migrate a workload from on-premises to the cloud, doing some data engineering to build a pipeline, testing AI POCs, or making predictions from clean data in a data lake.
Today we will explore how you can get started easily in machine learning as a business user to test a hypothesis and confidently make decisions with no-code ML to empower you to get insights quickly without the need to have a computer science background.
### Lesson Outcomes
You will learn how to:
* Set up the environment to use machine learning
* Select a target variable to build your model
* Get started using Amazon SageMaker Canvas to make predictions
* Retrieve model evaluation metrics, draw model insights and view predictions
### What is Machine Learning?
In a nutshell machine learning helps you to explore patterns in your dataset. There are two types of machine learning which are:
**Supervised Learning:** Learning from a labelled dataset. Regression (for example, linear regression) is used when you would like to predict a dependent variable that is an integer or continuous value. Classification is the opposite, where the outcome variable you are trying to predict is categorical.
**Unsupervised Learning:** Learning from an unlabelled dataset in order to draw patterns from it, for example clustering and principal component analysis.
You are welcome to explore courses in machine learning at [AWS Skillbuilder](https://explore.skillbuilder.aws/learn)
### What is Amazon SageMaker Canvas?
Amazon SageMaker Canvas empowers business analysts to make machine learning predictions with a user-friendly interface without the need to program or write any [code](https://aws.amazon.com/sagemaker/canvas/). You may simply bring in your own dataset and use a pre-trained custom model in Amazon SageMaker Canvas.
### Why Amazon SageMaker Canvas?
Amazon SageMaker Canvas bridges technical teams and business sponsors, fostering greater collaboration between data scientists, engineers, and business analysts. This enables closer alignment in agile between business requirements and technical strategy to deliver end-user goals.
You may take advantage of the [AWS Free Tier](https://aws.amazon.com/sagemaker/canvas/pricing/) for 2 months, which includes 160 session hours per month free, with your [AWS account](https://dev.to/aws-builders/getting-started-with-aws-a-sweet-journey-5cjj).
### Who should use it?
Anyone is welcome to use Amazon SageMaker Canvas especially if you are a BA, decision maker, executive leader and you don't have a machine learning background. You do not need to program in any language using [ready to use ML models](https://aws.amazon.com/sagemaker/canvas/).
However if you are a data scientist, developer, machine learning engineer or solution architect familiar with programming, you may wish to use [custom models](https://aws.amazon.com/sagemaker/canvas/).
### What are the common use cases?
As a business user your organization can explore use cases with Amazon SageMaker Canvas that include:
* Detect customer sentiment e.g. call transcripts
* Prediction e.g. customer churn and fraudulent applications
* Extract information from documents e.g. invoices, forms
* Classify images e.g. insurance claim assessments
* Classify text e.g. support tickets
* Demand forecasting
### Problem Statement
Let's start with the data analytics workflow using [CRISP-DM](https://www.datascience-pm.com/crisp-dm-2/) to define the business problem:
* Is this insurance claim form fraudulent?
### Dataset
This vehicle insurance claim fraud detection dataset is from [Kaggle.com](https://www.kaggle.com/datasets/shivamb/vehicle-claim-fraud-detection?resource=download).
This is a classification problem: the outcome variable we are trying to predict, **FraudFound_P**, is a binary variable with the values 0 = not fraudulent and 1 = fraudulent claim.
### Solution Architecture
This is the solution overview of how to bring in your own dataset to make predictions with Amazon SageMaker canvas.

### Tutorial 1: Setting up the environment
* Step 1: Sign into your AWS account for IAM user. If you don't have an AWS IAM admin user account you may create one [here](https://dev.to/aws-builders/creating-your-first-iam-admin-user-and-user-group-in-your-aws-account-machine-learning-part-1-3cne).
* Step 2: You may follow the getting started [tutorial](https://aws.amazon.com/getting-started/hands-on/machine-learning-tutorial-generate-predictions-without-writing-code/) to create a stack using AWS Cloud Formation to set up the environment.
Accept the default settings for US East (N.Virginia) region and check the box to acknowledge the terms and conditions. Click **Create stack**.

* Step 3: The stack is being created and will take about 1 hour to complete.


### Tutorial 2: Get started as a business user
* Step 1: Type into the search bar **Amazon SageMaker**.
On the left-hand side, click **Canvas** and ensure that you are operating in the AWS Region US East (N. Virginia).

* Step 2: Open the Canvas to launch it.

It will take a few minutes to launch the canvas.

* Step 3: You may read the tutorials to help you get started.

* Step 4: Upload your dataset into the Amazon S3 bucket.

* Step 5: Select **Datasets**.

Click **Import** to upload your dataset.

Click **Amazon S3** to upload a dataset stored in Amazon S3 bucket.

Check the box against the file and click **Import data**.

### Tutorial 3: Start making predictions from your dataset
* Step 1: Choose from **Ready to use models**
With [ready to use models](https://docs.aws.amazon.com/sagemaker/latest/dg/canvas-ready-to-use-models.html) you may bring in your own dataset and take advantage of pre-trained models that use AWS services including Amazon Textract, Amazon Rekogntion and Amazon Comprehend to make predictions.

* Step 2: If you cannot locate a ready-to-use model, type 'Prediction' into the search bar and select **Create a Custom model**.

* Step 3: Select **Predictive Analysis** and **Create**.

* Step 4: Click **Select dataset** to start building a model.

* Step 5: Under the tab **Build**, Choose a **target variable** from the drop-down menu as the outcome variable i.e. 'FraudFound_P' column.


The **model type** is inferred to be a two-category prediction.
Select **Quick Build**.

It will take between 2-15 minutes to build the prediction model.

* Step 6: Under the **Analyze** tab, in the **Overview** section, Amazon SageMaker reports that the model predicts fraud correctly 96.402% of the time.

In the next tab **Scoring**, you can see the plot of the predicted and actual values. The model insights tell us that when an insurance claim is not fraudulent, the predicted outcome occurs 98.235 % of the time.

If you click **Advanced Settings**, you may view the Confusion Matrix which provides metrics to evaluate the model to predict the class = 0 (No fraud).
* F1 score = 98.083 %
* Accuracy = 96.402 %
* Precision = 98.230 %
* AUC (Area under the ROC curve) = 0.976
* Recall = 97.931 %

When we toggle to the class = 1 (Fraud), the evaluation metrics of the model include:
* F1 score = 70.712 %
* Accuracy = 96.072 %
* Precision = 69.072 %
* AUC (Area under the ROC curve) = 0.976
* Recall = 72.432 %

* Step 7: On the tab 'Predict' click the button **Predict** to make predictions. Select **Batch predictions** and select your dataset.
View and download the predictions.

Amazon SageMaker Canvas provides a list of predictions against the classes (Class 0 = no fraud and Class 1 = fraud) on the entire dataset which you may download.

* Step 8: You may also generate predictions by selecting **Single predictions** and Amazon SageMaker Canvas will provide details of the feature importance in descending order of the variables in the dataset.

You may also download the Actual prediction, which is:
* 95.31 % of the time there is no fraudulent insurance claim
* 4.69% of the time there is a fraudulent insurance claim
### Clean up resources
To avoid surprise charges on your AWS billing account at the end of the month, I recommend that you delete the AWS services that you no longer need.
* Step 1: Navigate to your Amazon S3 bucket and click **Empty** to clear the resource.

* Step 2: Under **My Models** click on the ellipse (3 dots) and delete the model.

* Step 3: Under **User Details**, for each application click **Delete apps** and enter the word 'delete' in the box.


And delete the user.

* Step 4: Navigate to **Cloud Formation** by typing the word into the search bar.
Click **Stacks** on the left-hand side, select the radio button for **CFN-SM-IM-Lambda-Catalog**, and then click **Delete** to remove all the AWS services that were previously created in that stack.

Confirm to delete the Cloudformation stack.

### AWS re:invent 2022 - Machine Learning
If you missed any of the keynotes or workshop sessions from AWS re:invent 2022 you may catch up and watch them on Youtube at this [link](https://www.youtube.com/c/AWSEventsChannel).
You may watch the breakout session 'AWS re:Invent 2022 - Better decisions with no-code ML using SageMaker Canvas' featuring a customer story from Samsung to hear how they implemented Amazon SageMaker Canvas. {% embed https://www.youtube.com/watch?v=vH5xc8C3OX4 %}
You may also hear about democratizing machine learning from the recent AWS re:Invent 2022 [keynote](https://www.youtube.com/watch?v=TL2HtX-FmiQ&t=550s) from Dr Swami Sivasubramanian, VP of analytics, database and ML at AWS.
### AWS SageMaker Canvas Announcement - 30 March 2023
Amazon SageMaker Canvas also supports NLP and computer vision with ready to use models. You may find out more [here](https://aws.amazon.com/about-aws/whats-new/2023/03/amazon-sagemaker-canvas-nlp-cv-use-cases/).
### References
* [Amazon SageMaker Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/canvas-getting-started.html)
* [Amazon Sagemaker Canvas](https://aws.amazon.com/sagemaker/canvas/)
* [Ready to use models](https://docs.aws.amazon.com/sagemaker/latest/dg/canvas-ready-to-use-models.html)
* [Generate ML predictions without writing code](https://aws.amazon.com/getting-started/hands-on/machine-learning-tutorial-generate-predictions-without-writing-code/)
* [Delete an Amazon SageMaker Domain](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-studio-delete-domain.html)
Until the next lesson, Happy Learning! 😁
### What's new this week or coming soon to you 🌎
* AWS Sydney Summit - 4 April 2023. Register and join us for the live stream [here](https://aws.amazon.com/events/summits/sydney/).

* AWS re:Inforce - June 13-14 2023. Register and [join us](https://reinforce.awsevents.com/).

AWS London Summit - 7 June 2023. Register and [join us](https://aws.amazon.com/events/summits/london/).

| abc_wendsss |
1,423,047 | Creating a Keith Lee Food App w/ React.js & ChatGPT | Creating a Keith Lee Food App w/ React.js & ChatGPT This personal project was... | 0 | 2023-04-02T01:55:29 | https://dev.to/cmcoffeedev/creating-a-keith-lee-food-app-w-reactjs-chatgpt-2no5 | react, javascript, chatgpt, firebase |
## Creating a Keith Lee Food App w/ React.js & ChatGPT


This personal project was inspired by watching Keith Lee and wanting to try the family-owned restaurants he reviewed on his platforms. I was initially using a spreadsheet to keep track of the names and addresses of the places.
Here are his socials for reference:
[**@keith_lee125 | Twitter, Instagram, TikTok | Linktree**](https://linktr.ee/Keith_lee125)
For the weekend, I decided to hack together two simple React.js web apps to improve this process.
UPDATE: Recently added maps and the ability to order food.
> **Note**: I have moved away from firebase, but the following steps will work. I have moved to Supabase for the backend and Netlify for hosting. These platforms are more cost-efficient. I will have articles detailing how to do this soon.
### Set up Firebase Firestore
I wanted to store this in a remote database. I chose to use Firebase so I could quickly put this together.
You can create a new project by going to the firebase console.
[https://console.firebase.google.com/](https://console.firebase.google.com/)
Once your project is created. Click on **Add app.** It’s under your project name at the top center of the page.

Next, choose the web platform.

Give your web app a name and click Register app

Now it will give you instructions on how to install firebase.

We will follow these instructions soon, but first, we need to create a react app. Keep this tab open. We will refer to it soon. Specifically, we will need the firebase config
I also wanted ChatGPT to generate the boilerplate code. So let’s do that first.
### ChatGPT
I used ChatGPT to try to make this process easier. It gave a few incorrect code snippets, but this is also maybe because I’m using the latest versions of everything, and ChatGPT was only trained up to a certain time period.
I only wanted to store the information from Keith Lee’s TikTok videos. So this includes the id used on the TikTok web app, the description used in the video, and embedding the TikTok video.
This tool did help improve the process. Here was my first prompt:
> Can you create a react frontend that reads data from firebase firestore. example code
It then replied with this code:
```jsx
import React, { useState, useEffect } from 'react';
import firebase from 'firebase/app';
import 'firebase/firestore';

const config = {
  // Your Firebase project's config object goes here
};

firebase.initializeApp(config);
const db = firebase.firestore();

function App() {
  const [data, setData] = useState([]);

  useEffect(() => {
    // Get data from the 'users' collection in Firestore
    db.collection('users').get()
      .then(snapshot => {
        const data = snapshot.docs.map(doc => doc.data());
        setData(data);
      })
      .catch(error => {
        console.log(error);
      });
  }, []);

  return (
    <div>
      <h1>Users</h1>
      <ul>
        {data.map(user => (
          <li key={user.id}>{user.name}</li>
        ))}
      </ul>
    </div>
  );
}

export default App;
```
Notice the comment **// Your Firebase project's config object goes here.** This is where the **firebaseConfig** object goes. The screenshot cuts off the rest of the code, but here is an example:
```js
const firebaseConfig = {
  apiKey: "yourApiKey",
  authDomain: "projectid.firebaseapp.com",
  projectId: "projectid",
  storageBucket: "projectid.appspot.com",
  messagingSenderId: "77777777777",
  appId: "1:777777777777:web:ab7ag7v7a7",
  measurementId: "G-MEASUREMENTID"
};
```
### Create a new React App
To use this code, we need to create a new react project. I created a new app using the following command
```bash
npx create-react-app keith-lee-food
```
After the react project is created, I installed the firebase SDK using the following command
```bash
npm install --save firebase
```
[**firebase**](https://www.npmjs.com/package/firebase)
I want to note that I installed the latest firebase version using npm, so the code in the useEffect won’t work.
We can follow the documentation to update this code. We could have also asked ChatGPT, and it may have known. Not sure how new this latest version of firebase is.
[**Get started with Cloud Firestore | Firebase**](https://firebase.google.com/docs/firestore/quickstart#read_data)
So the updated will look something like this:
```js
import { collection, getDocs } from "firebase/firestore";

const querySnapshot = await getDocs(collection(db, "users"));
querySnapshot.forEach((doc) => {
  console.log(`${doc.id} => ${doc.data()}`);
});
```
Of course, we don’t have any data in the database, so it won’t return anything. We also will have a different collection name.
This is the next prompt I gave ChatGPT:
> can you change the query to read an id and description and show a tiktok embed widget using the id. also show the description under that
This is the code it replied with
```jsx
import React, { useState, useEffect } from 'react';
import firebase from 'firebase/app';
import 'firebase/firestore';

const config = {
  // Your Firebase project's config object goes here
};

firebase.initializeApp(config);
const db = firebase.firestore();

function App() {
  const [data, setData] = useState([]);

  useEffect(() => {
    // Get data from the 'tiktoks' collection in Firestore
    db.collection('tiktoks').get()
      .then(snapshot => {
        const data = snapshot.docs.map(doc => doc.data());
        setData(data);
      })
      .catch(error => {
        console.log(error);
      });
  }, []);

  return (
    <div>
      {data.map(tiktok => (
        <div key={tiktok.id}>
          <h1>{tiktok.description}</h1>
          <iframe
            src={`https://www.tiktok.com/embed/${tiktok.id}`}
            frameBorder="0"
            allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"
            allowFullScreen
          />
        </div>
      ))}
    </div>
  );
}

export default App;
```
It still gives the code we can’t use in the useEffect. We can give ChatGPT our code, but I just replaced it with the code mentioned earlier in the documentation.
I took off the iframe because it loaded every video, leading to bad performance. It also auto-plays every time you scroll; the **autoplay** option for `allow` isn't what causes this.
### Setting up Firebase Firestore with React
Now we need to enable Firestore in the Firebase console. Go back to the Firebase console and click Continue to console.

Click the **Build** menu option on the left, then click **Firestore Database**.

On the next screen, click Create database.

Now a popup will show asking us what mode we would like to start in. Keep it at **Start in production mode**. Click **Next**.

Next will ask where we want the data stored. I kept it at this region and clicked enable.

Now we can add data through the console to quickly test our application. On this screen, click **Start collection**.

Give a collection id. I will use the id ChatGPT generated, **tiktoks**. Click **Next**.

Now we can create a document. If you are unfamiliar with Firestore, this is a record in our collection.
Using the TikTok website, we can figure out what information we need to save. For now, I will just save the id and description.
Observe the following URL: [https://www.tiktok.com/@keith_lee125/video/7186098325475839278](https://www.tiktok.com/@keith_lee125/video/7186098325475839278)
The Document ID and id field will be the long number at the end of the URL.
If you go to the URL, it will go to one of Keith Lee's videos. I copied the description from there. Click **Save** after adding your data.

Now we need to update the code ChatGPT gave to include the config and the version 9 web SDK of firebase. Here is an example:

```javascript
import React, { useState, useEffect } from 'react';
import { collection, getDocs } from "firebase/firestore";
import { initializeApp } from "firebase/app";
import { getFirestore } from 'firebase/firestore';

const config = {
  apiKey: "yourApiKey",
  authDomain: "projectid.firebaseapp.com",
  projectId: "projectid",
  storageBucket: "projectid.appspot.com",
  messagingSenderId: "77777777777",
  appId: "1:777777777777:web:ab7ag7v7a7",
  measurementId: "G-MEASUREMENTID"
};

const firebaseApp = initializeApp(config);
const db = getFirestore(firebaseApp);

function App() {
  const [data, setData] = useState([]);

  useEffect(() => {
    // Get data from the 'tiktoks' collection in Firestore.
    // await is only valid inside an async function, so wrap the call.
    const fetchData = async () => {
      const querySnapshot = await getDocs(collection(db, "tiktoks"));
      setData(querySnapshot.docs.map((doc) => doc.data()));
    };
    fetchData();
  }, []);

  return (
    <div>
      {data.map(tiktok => (
        <div key={tiktok.id}>
          <h4>{tiktok.description}</h4>
          <iframe
            src={`https://www.tiktok.com/embed/${tiktok.id}`}
            frameBorder="0"
            allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"
            allowFullScreen
          />
        </div>
      ))}
    </div>
  );
}

export default App;
```
I left the code for the iframe in here, but you can remove it if you want. I also changed the h1 to h4.
In the root directory of this project, run this in the terminal to start your application:

```
npm start
```
I then wanted to show a button to get directions. For this, I wanted to add latitude and longitude in the database.
I gave another request to ChatGPT as follows:
> can you give example if we had longitude and latitude from firebase as well and show a button to get directions. can you show example code.
It replied:
> Sure, here is an example of how you can modify the code to read longitude and latitude from Firestore, and show a button that, when clicked, opens the default maps application with the location pre-populated as the destination:
```javascript
import React, { useState, useEffect } from 'react';
import firebase from 'firebase/app';
import 'firebase/firestore';

const config = {
  // Your Firebase project's config object goes here
};

firebase.initializeApp(config);

const db = firebase.firestore();

function App() {
  const [data, setData] = useState([]);

  useEffect(() => {
    // Get data from the 'locations' collection in Firestore
    db.collection('tiktok').get()
      .then(snapshot => {
        const data = snapshot.docs.map(doc => doc.data());
        setData(data);
      })
      .catch(error => {
        console.log(error);
      });
  }, []);

  return (
    <div>
      {data.map(tiktok => (
        <div key={tiktok.id}>
          <h1>{location.name}</h1>
          <button onClick={() => {
            window.open(`https://www.google.com/maps/dir/?api=1&destination=${tiktok.latitude},${tiktok.longitude}`);
          }}>Get Directions</button>
        </div>
      ))}
    </div>
  );
}

export default App;
```
This will open the Google Maps website on desktop and iPhone devices. It will open the Google Maps app on Android devices.
I wanted it to open the Apple Maps app on iOS devices, so I then asked ChatGPT to do this.
> can you change the html to consider if the user is on an apple device. can you show example code
ChatGPT replied with this code:
```javascript
import React, { useState, useEffect } from 'react';
import firebase from 'firebase/app';
import 'firebase/firestore';

const config = {
  // Your Firebase project's config object goes here
};

firebase.initializeApp(config);

const db = firebase.firestore();

function App() {
  const [data, setData] = useState([]);
  const [isApple, setIsApple] = useState(false);

  useEffect(() => {
    // Get data from the 'locations' collection in Firestore
    db.collection('tiktoks').get()
      .then(snapshot => {
        const data = snapshot.docs.map(doc => doc.data());
        setData(data);
      })
      .catch(error => {
        console.log(error);
      });

    // Check if the user is on an Apple device
    const isAppleDevice = /iPad|iPhone|iPod/.test(navigator.userAgent);
    setIsApple(isAppleDevice);
  }, []);

  return (
    <div>
      {data.map(tiktok => (
        <div key={tiktok.id}>
          <h1>{tiktok.name}</h1>
          {isApple ? (
            <a href={`maps://?q=${tiktok.latitude},${tiktok.longitude}`}>Get Directions</a>
          ) : (
            <button onClick={() => {
              window.open(`https://www.google.com/maps/dir/?api=1&destination=${tiktok.latitude},${tiktok.longitude}`);
            }}>Get Directions</button>
          )}
        </div>
      ))}
    </div>
  );
}

export default App;
```
I wanted to move this location logic to a function.
This is my updated HTML:

```javascript
{
  tiktok.lat === undefined ? <div></div> :
    <button onClick={() => goToMap(tiktok)}>Get Directions</button>
}
```
I’m using a ternary operator to check if lat (latitude) is set. If it is not set, I will show an empty div element. If we have lat it will show the button. The button now calls a function that passes the **tiktok** object.
Here is the **goToMap** function:

```javascript
const goToMap = (tiktok) => {
  if (isApple) {
    window.open(`maps://?q=${tiktok.lat},${tiktok.lng}`);
  } else {
    window.open(`https://www.google.com/maps/dir/?api=1&destination=${tiktok.lat},${tiktok.lng}`);
  }
}
```
Now let’s add latitude and longitude fields to the Firestore document we created earlier. I found the latitude and longitude using Google Maps. Click Add field, and add the following lat and lng fields.
**lat**: 36.11337630000001
**lng**: -115.3094725

Re-run or refresh the web app's page, and you should now be able to get directions to the location.
Since the embedded TikTok wasn’t going to be a good solution, I wanted to add a button to go to the video to watch the review again and get ideas of the kinds of food they have.
I copied and created a new button above the first button for this:

```javascript
<button onClick={() => watchVideo(tiktok.id)}>
  Watch Video
</button>
```
Then the **watchVideo** function will look like this:

```javascript
const watchVideo = (id) => {
  window.open(`https://www.tiktok.com/@keith_lee125/video/${id}`, '_blank', 'noreferrer');
}
```
I’m basically recreating the URL we based all of the data off of. Re-run the app to get an idea of how it works.
When clicking this button on the desktop, it will go to the videos page on the TikTok website. On iOS and Android devices, it will deep link to the TikTok app if you have it installed.
A couple more things to note. For the design, I used the React Material UI package. I won’t go over the steps here, but I may in a future post.
[**MUI: The React component library you always wanted**](https://mui.com/)
I also created an admin react app to easily create and edit info.
I won’t go over this in this article but may create another one detailing those steps. For now, I’ll give a high-level overview of how I achieved this.
1. Create a new react app and copy the files from this project.
2. Installed react-router to go to the **edit** and **create new** page. [https://reactrouter.com/en/main](https://reactrouter.com/en/main)
3. Go to the Google Developers Console and enable the Google Places API for this project, which was created by firebase. Also, get the API key. [https://console.developers.google.com/](https://console.developers.google.com/)
4. Used code from here to implement the Google Places API. [https://github.com/Gapur/google-place-autocomplete](https://github.com/Gapur/google-place-autocomplete)
5. Once we get a place object from the autocomplete component, we can easily get info such as latitude, longitude, address, etc.

My current version of the Keith Lee Food web app can be found at [https://keithleefood.com](https://keithleefood.com). Would you try it? Rate it 1–10!
| cmcoffeedev |
1,423,597 | Hi this is Irfan, i am reactJs and UI developer | A post by Sayyed Irfan | 0 | 2023-04-02T19:55:06 | https://dev.to/sayyedirfan255/hi-this-is-irfan-i-am-reactjs-and-ui-developer-24d0 | sayyedirfan255 | ||
1,423,158 | The Dangers of Copy-Paste Malware, and how to prevent them | Copy-paste malware has become a growing concern in today's digital age. This type of malware is... | 0 | 2023-04-02T07:04:54 | https://dev.to/oruchepaschal/the-dangers-of-copy-paste-malware-and-how-to-prevent-them-46ie | copypastemalware, phishing, cybersecurity, socialengineering | Copy-paste malware has become a growing concern in today's digital age. This type of malware is designed to infect a computer (running desktop or mobile operating systems) by hiding within copied and pasted text or code. As soon as the copied content is pasted into a new document, the malware infects the system, giving hackers access to sensitive information.
The most concerning aspect of copy-paste malware is its silent operation. It can easily slip under the radar of even the most advanced antivirus software, making it difficult to detect. Once it has infiltrated a system, it can be used to steal sensitive data such as passwords, financial information, and confidential documents. It is crucial to stay vigilant and take the necessary precautions to prevent the spread of these digital saboteurs. Copying and pasting may seem like a harmless function, but it can pose a significant risk to your digital security. Cybercriminals often use copy-paste techniques to spread malicious code, steal sensitive information, and gain unauthorized access to systems. By copying and pasting content from untrusted sources, you could inadvertently expose yourself to serious cyber threats.
To protect your data from copy-paste attacks, you should avoid copying and pasting content from untrusted sources. Instead, type URLs directly into your browser and be cautious when downloading files or opening email attachments. Additionally, you can use security software that scans your computer for malware and blocks suspicious activity. By taking these precautions, you can minimize the risk of falling victim to copy-paste attacks and keep your data safe.
As someone who considers themselves tech-savvy, I never thought I would fall victim to a copy-paste malware attack. But a recent experience with a close friend showed me just how easy it can be for hackers to gain access to sensitive information. It all started when my friend asked for my help in making a payment to a platform they subscribed to. They needed the payment to be made using cryptocurrency, and the options included Bitcoin, Litecoin, and USDT (both ERC20 and TRC20). Fortunately, I had some USDT in my Binance wallet, and I agreed to help. We communicated over WhatsApp, with my friend using WhatsApp Web on their laptop and me using the WhatsApp and Binance mobile apps on my smartphone. They provided the USDT address to send the payment to, and I copied and pasted it into my Binance app.


As I was about to send the funds (after inputting both OTPs sent via SMS and email), I was surprised to see that Binance's system had flagged the address. My withdrawal function was temporarily disabled, and I immediately contacted Binance support to find out what was going on.
To my horror, Binance support informed me that the USDT (TRX) address my friend had sent me was not the correct address. Instead, it was a copy-paste malware attack where the attacker had intercepted my friend's message and changed the USDT (TRX) address to their own. My friend's laptop had been infected with malware, and their personal information was at risk.



Thankfully, it turned out that the said address ‘’ had been reported a couple of times to Binance, and their security systems caught the attack. My friend was able to get their laptop cleaned up and secured. However, this experience opened my eyes to just how vulnerable we all are to copy-paste malware attacks. It's a reminder that we should always be vigilant and cautious when copying and pasting information, especially when it comes to sensitive data like cryptocurrency addresses, bank account details, or login credentials. We should also be mindful of the applications and websites we use, ensuring that they are reputable and secure.

In conclusion, my experience with copy-paste malware was a wake-up call for me. This could affect professionals in different fields: software engineers and DevOps engineers could copy-paste malicious code orchestrated by nefarious players. It reminded me that we live in a digital age where our personal information is constantly at risk of being compromised. We must take steps to protect ourselves, such as using strong passwords, enabling two-factor authentication, and keeping our devices and software up to date.
## Simple Steps to Prevent Copy-Paste Malware from Infecting your computer or smartphone
Copy-paste malware can be a challenging threat to detect and prevent. However, there are steps you can take to reduce the risk of infection. Here are some simple steps to prevent copy-paste malware from infecting your system:
- Keep your antivirus software up to date and perform regular system scans.
- Only copy and paste content from trusted sources.
- Double-check content copied from trusted and untrusted sources ⚠️ 👀
- Disable the "Automatically download external images and style formats" option in your email settings.
- Use a firewall to block suspicious incoming connections.
- Educate yourself and your employees about the risks of copy-paste malware and how to avoid them.
By following these simple steps, you can significantly reduce the risk of copy-paste malware infecting your system and keep your data safe from cybercriminals.
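A habit that would have caught the attack described above is comparing the address you intended to send to with what actually landed in your clipboard before confirming a transfer. Here is a minimal sketch of the idea — the addresses below are made up purely for illustration and are not real wallets:

```python
def address_mismatch(intended: str, pasted: str) -> bool:
    """Return True if the pasted address is not the one you intended."""
    return intended.strip() != pasted.strip()

# Hypothetical addresses for illustration only -- not real wallets.
intended = "TYaSWxMp1eKqjB7rTciN8q8ZY4pL8otSzg"
clipboard = "TKfLxMp9eKqjB7rTciN8q8ZY4pL8otAbCd"  # silently swapped by malware

if address_mismatch(intended, clipboard):
    print("WARNING: clipboard address was changed -- do not send funds")
```

Even without code, manually checking the first and last few characters of an address before confirming a transfer costs seconds and can save your funds.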
| oruchepaschal |
1,423,169 | command line game | Dive into a gameplay like never before. summary The game is about akin to dungeons and... | 0 | 2023-04-02T07:59:54 | https://dev.to/symonkipkemei/command-line-game-813 | python, games, programming, beginners | Dive into a gameplay like never before.
### summary
The game is akin to Dungeons and Dragons. There are two doors: the left door and the right door. One of the rooms/caves is empty. If you search further/look around, you will encounter a sword.
The other room contains a dragon. If you choose to fight the dragon without a sword, chances of defeat are high.
### Give it a try
Would you like to give it a try?
https://github.com/symonkipkemei/command-line-game
### Pseudocode breakdown
Here is a summary in pseudocode of how the gameplay works. I will go into the details later on.
1. get the player name
2. display message
3. present them with two doors
4. if they choose the left door, they'll see an empty room, they can choose to look around, If they do so, they will find a sword. They can choose to take it or leave it.
5. if they choose the right door, they'll encounter a dragon, they have the choice to fight it. If they have the sword from the other room, then they will be able to defeat it and win the game.
6. In both options they have an option to return to the previous room.
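The pseudocode above can be sketched in Python. This is not the actual game code from the repository — just a compact model of the same decision flow, with the player's typed input replaced by a list of choices:

```python
def play(door, actions):
    """Resolve one play-through of the two-door dungeon.

    door: 'left' or 'right'; actions: the choices made, in order.
    Returns a short outcome string.
    """
    has_sword = False
    for action in actions:
        if door == "left":
            if action == "look around":
                has_sword = True          # the sword is hidden in the empty room
            elif action == "go back":
                door = "right"            # return and take the other door
        elif door == "right":
            if action == "fight":
                return "win" if has_sword else "defeat"
            elif action == "go back":
                door = "left"
    return "still exploring"

print(play("left", ["look around", "go back", "fight"]))  # win
print(play("right", ["fight"]))                           # defeat
```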
| symonkipkemei |
1,423,231 | Network Communication in Python: Sockets, Requests, and Urllib | Python provides several libraries for network communication, including sockets, requests, and urllib.... | 0 | 2023-04-02T08:54:51 | https://dev.to/zubairwazir/network-communication-in-python-sockets-requests-and-urllib-4545 | python, beginners, programming, tutorial | Python provides several libraries for network communication, including sockets, requests, and urllib. These libraries can be used to send and receive data over the internet, make HTTP requests, and more. In this tutorial, we'll explore these libraries and provide examples of how to retrieve the content of a web page using each one.
## Sockets
Sockets are a low-level interface to network communication. They allow you to directly send and receive data over a network connection. Sockets can be used to implement protocols like TCP, UDP, and more.
Here's how to use sockets to retrieve the content of a web page:
```python
import socket

HOST = 'www.example.com'
PORT = 80

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))
# Ask the server to close the connection after responding, so the recv()
# loop below terminates instead of blocking on an open keep-alive socket.
sock.sendall(b'GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n')

response = b''
while True:
    data = sock.recv(1024)
    if not data:
        break
    response += data

print(response.decode())
```
In this example, we create a socket object using the `socket.socket()` method and connect to the web server at port 80 using the `sock.connect()` method. We then send a GET request to the server using the `sock.sendall()` method and receive the response in chunks using a while loop that reads data from the socket using the `sock.recv()` method. Finally, we print the response content as a string using the `response.decode()` method.
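One thing the socket example glosses over is that what comes back is just raw bytes — parsing the HTTP status line, headers, and body is up to you. Here is a minimal sketch of that parsing, run on a hard-coded response for illustration:

```python
# Raw bytes as they might arrive from the socket (hard-coded for illustration).
raw = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: 13\r\n\r\n<html></html>"

# Headers and body are separated by a blank line (\r\n\r\n).
head, _, body = raw.partition(b"\r\n\r\n")
status_line, *header_lines = head.decode().split("\r\n")
version, status, reason = status_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines)

print(status)                   # 200
print(headers["Content-Type"])  # text/html
print(body.decode())            # <html></html>
```

Libraries like requests and urllib do exactly this bookkeeping for you, which is why the examples below are so much shorter.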
## Requests
Requests is a high-level library that provides an easier-to-use interface for making HTTP requests. Requests can be used to send GET, POST, PUT, DELETE, and other types of HTTP requests.
Here's how to use requests to retrieve the content of a web page:
```python
import requests

url = 'http://www.example.com'
response = requests.get(url)
print(response.text)
```
In this example, we use the `requests.get()` method to send a GET request to the specified URL and receive the response as a `Response` object. We can then access the content of the response using the `text` attribute, which returns the response content as a string.
## Urllib
Urllib is a library that provides several modules for working with URLs and making HTTP requests. It can be used to send GET, POST, PUT, DELETE, and other types of HTTP requests.
Here's how to use urllib to retrieve the content of a web page:
```python
import urllib.request

url = 'http://www.example.com'
response = urllib.request.urlopen(url)
print(response.read().decode())
```
In this example, we use the `urllib.request.urlopen()` method to send a GET request to the specified URL and receive the response as a file-like object. We can then read the content of the response using the `read()` method and decode it into a string using the `decode()` method.
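Beyond `urllib.request`, the urllib package also includes `urllib.parse` for taking URLs apart before (or instead of) fetching them. A quick example with an illustrative URL:

```python
from urllib.parse import urlparse, parse_qs

url = 'http://www.example.com/search?q=python&page=2'
parts = urlparse(url)

print(parts.scheme)           # http
print(parts.netloc)           # www.example.com
print(parts.path)             # /search
print(parse_qs(parts.query))  # {'q': ['python'], 'page': ['2']}
```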
## Conclusion
In this tutorial, we explained how to use Python's network communication libraries, including sockets, requests, and urllib. We provided examples of how to retrieve the content of a web page using each library. Sockets are a low-level interface to network communication, requests are a high-level library for making HTTP requests, and urllib is a library for working with URLs and making HTTP requests. The choice of which library to use depends on the specific requirements of your application. | zubairwazir |
1,423,238 | Let's build a Node.js project (REST API) | Backend lovers, this is for you! Today I am starting with this development series where I will be... | 0 | 2023-04-02T11:42:19 | https://dev.to/ananyacodes/lets-build-a-nodejs-project-rest-api-pnl | node, development, api, postman | ---
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/85ftcfjy501i4kv6g8rw.png
title: Let's build a Node.js project (REST API)
published: true
date: 2023-03-20 08:05:25 UTC
tags: nodejs, development, API, Postman
canonical_url:
---
Backend lovers, this is for you! Today I am starting with this development series where I will be talking about all things **BUILD**. This is the very first article in this series. We will be building a **node.js** project and learning how to develop a **REST API**. We will later test this API using **Postman**. I am using **VS Code** as IDE, but you can use your favorite text editor.
Let's begin...🏎

1. Let's create a folder called "node-demo" where we will create our node project.
2. Now run the command `npm init -y` in the terminal. This will create a package.json file that will contain all the information about the project like its version, dependencies, license, author, etc. Right now it will insert some default info for each prop since we passed the `-y` flag, but if you want you can change them later as per your need.

3. Now let's set up a server using `express` module. Run `npm install express` to install express. You will be able to see it in the dependencies object inside the package.json file after the installation is complete. There will be 2 more additions in the folder called `node_modules` and `package-lock.json` file.
4. Now create a file called `index.js` in the root folder. Here we will import our express package and start setting up the server. We are following the "commonJS" format hence we will use **require** instead of **import**. If you want to use import instead, you must specify the `"type":"module"` in your package.json file. Alternatively, you can rename the file index.js to `index.mjs`.

5. Let's call the express function to create a server object. We can call this `app`. Now using this object, we will set up our REST API. This server will be listening to the requests made by the user on some port on our `localhost`.
6. The first is a very simple `GET` request mechanism. This will simply send a response to the user when the request is made to the URL path - `/`. We are also making this server listen to the port `3000`. Now we can run the express server by writing the command `node index.js` in the terminal. If everything works, it will print a simple message in the terminal "server is listening..".

7. We can use Postman to test the server. Postman is an API testing platform. You can go and create a new workspace or just use the default "My Workspace" to make API calls. We will create a collection called "node-demo" and then add a request.

8. Since we are going to make requests to a server that is running on localhost, make sure that you use the Postman's Desktop Agent else you won't be able to make any requests. Now make a GET request to this URL: `http://localhost:3000` and click Send. You will see the response "Hello! I'm the server" below. This means that our server is working!!

9. Yay! We just made our first API request and it was successful. Let's move ahead. Now we can look into another type of request called POST.
10. We will now make a POST request where we will send some data. GET can also be used, but it's less secure since **it doesn't provide data encryption** as POST does. Now before we make changes to the index.js file I want you to install another quite handy dependency. It's called `nodemon` and it automatically refreshes the server whenever there is any change in the code. We will install it as a dev dependency since it's not meant for production. So run the command `npm install nodemon --save-dev` in the terminal. After this, we can create a script in the package.json file as `"dev":"nodemon index.js"`.

11. Below is the POST mechanism. Now make sure to use a middleware provided by express called "json" which is responsible for handling incoming requests with JSON payloads and is based on body-parser. Add this line to your index.js file: `app.use(express.json())`. Without it, your POST request would fail if you are trying to send some JSON as the body. Now, let's start the server again by running the script dev - `npm run dev` this time.

12. Let's make a POST request with Postman. This time we will have to send a body as well. We will send it as **raw -> JSON** type. After clicking Send you can see the response below.

13. Great! You have now learned 2 major request types: GET and POST. There are many other types as well like PUT, PATCH, DELETE, etc but for now, we will focus only on these 2. What about sending a GET request with some query params? For this, we will modify the same GET callback we defined earlier and tell it to look for the query object to send a response accordingly.

14. Now make a GET request to the same URL in Postman, but remember to pass query params. Set the key as "name" and the value as "Ananya". On clicking Send you should get the first response as defined in the callback.

15. Cool! Let's look into Path params now. What if you want to send some dynamic value to the URL maybe some kind of "id"? To access it on the server side we will have to do something like this:

16. Below is the Postman request with the Path variable "id".

Awesome! You just built your very first server with help of node.js and express. You also learned about Postman and how to test APIs with it. Hope this article helped you get a basic understanding of Node.js.

You can subscribe to my newsletter to get notifications for my future blog posts! Till then keep learning..😊 | ananyacodes |
1,423,252 | How To Deploy Jitsi Meet(video conference) on Ubuntu | Jitsi Meet is an open source video-conferencing application based on WebRTC. A Jitsi Meet... | 0 | 2023-04-02T10:20:06 | https://dev.to/syedasadrazadevops/how-to-deploy-jitsi-meet-on-ubuntu-ph9 | jitsi, meet, jitsimeet, opensource | ####**Jitsi Meet is an open source video-conferencing application based on WebRTC.**
A Jitsi Meet server provides multi-person video conference rooms that you can access using nothing more than your browser and provides comparable functionality to a Zoom or Skype conference call.
### Step 1 — System Hostname
```
sudo hostnamectl set-hostname jitsi.my_domain.com
```
Check that this was successful by running the following:
```
hostname
```
Next, you will set a local mapping of the server’s hostname.

```
nano /etc/hosts
```

Add this line:
```
127.0.0.1 <jitsi.my_domain.com>
```
### Step 2 — Configure the Prosody XMPP Server (Before Installing Jitsi)
**ERROR:** I got an error in the jitsi-meet install.
`Jitsi-meet : Depends: jitsi-meet-prosody (= 1.0.6776-1) but 1.0.6644-1 is to be installed`
**Solution:** Add the Prosody package repository.
To install the latest version, add the Prosody repository with the following command:
```
echo 'deb https://packages.prosody.im/debian focal main' | sudo tee /etc/apt/sources.list.d/prosody.list
```
Now, run the following command to download and import the Prosody public key.
```
wget https://prosody.im/files/prosody-debian-packages.key -O- | sudo apt-key add -
```
Next, update the local package index, then install Prosody and its dependencies:
```
sudo apt update
sudo apt install prosody
sudo apt install libunbound-dev liblua5.3-dev
luarocks install luaunbound
systemctl status prosody
```
### Step 3 — Install Jitsi
First, download the Jitsi GPG key with the wget downloading utility:
```
wget https://download.jitsi.org/jitsi-key.gpg.key
```
Next, add the GPG key you downloaded to apt’s keyring using the apt-key utility:
```
sudo apt-key add jitsi-key.gpg.key
```
Now, you will add the Jitsi repository to your server by creating a new sources file that contains the Jitsi repository. Open and create the new file:
`sudo nano /etc/apt/sources.list.d/jitsi-stable.list`. Add this line to the file for the Jitsi repository:
```
deb https://download.jitsi.org stable/
```
then install the jitsi-meet package:
```
sudo apt update
sudo apt install jitsi-meet
```
During the installation of jitsi-meet you will be prompted to enter the domain name.

After that, select the Let's Encrypt SSL option (Let's Encrypt is a non-profit certificate authority), or select **no** at the next option if you don't need it.

**_Your Jitsi Meet server is now set up_**
| syedasadrazadevops |
1,423,253 | Connectedminds | https://connectedminds.ai/ A mental health app that helps you discover contentment, purpose &... | 0 | 2023-04-02T09:45:13 | https://dev.to/cmworktemp/connectedminds-5991 | mobile, mentalhealth | https://connectedminds.ai/
**A mental health app that helps you discover contentment, purpose & connection.**
Connect with your mind and start your journey with our mental well-being app that helps you discover health, contentment, purpose, and connection. Connected minds is the number one global mental well-being platform for everyone who cares about their mental health. The platform offers immense mental well-being content in the form of Podcasts, Articles, Posts, and Courses curated by our carefully hand-picked, licensed Psychologists, Therapists, and Counselors. Subscribed users can access all this unlimited mental well-being content.
| cmworktemp |
1,423,296 | Create a Laravel CRUD Restful API and Secure it with Sanctum | Introducton Laravel is a popular PHP framework that allows developers to build web applications... | 0 | 2023-04-02T11:17:47 | https://dev.to/patricklusaya/create-a-laravel-crud-restful-api-and-secure-it-with-sanctum-3p2j | laravel, postgres, api, sanctum | **Introducton**
Laravel is a popular PHP framework that allows developers to build web applications quickly and efficiently. One of the most common use cases for Laravel is to create a RESTful API that can be used to communicate with other applications or services. In this tutorial, we'll show you how to build a Laravel CRUD RESTful API and secure it with Sanctum.
Sanctum is a Laravel package that provides a simple way to authenticate API requests using tokens. It works by creating a token for each authenticated user, which can then be used to make subsequent requests to the API. This makes it easy to secure your API and ensure that only authorized users have access to sensitive data. In this tutorial, we'll use Sanctum to secure our Laravel API and protect it from unauthorized access.
**Prerequisites**
Before we start, make sure you have the following:
- Laravel installed on your machine
- A basic understanding of Laravel
- A code editor of your choice
**Step 1: Set Up Laravel**
First, we'll create a new Laravel project. Open your terminal or command prompt and enter the following command: `composer create-project laravel/laravel my-api`
**Step 2: Set Up Database**
Next, we need to set up our database. In this tutorial I will be using PostgreSQL, but you can use any database that you are comfortable with. Head over to your database console and create a database by writing the command `CREATE DATABASE laravelapi;`. Open the .env file in the root of your project and update the following lines with your database credentials:
```
DB_CONNECTION=pgsql   # fill in according to your database
DB_HOST=127.0.0.1
DB_PORT=5432          # fill in according to your database
DB_DATABASE=laravelapi
DB_USERNAME=yourusername
DB_PASSWORD=yourpassword
```
**Step 3: Create the Model and Migration**
In Laravel, a model is a class that represents a database table. It provides an easy way to interact with the database by providing methods for querying and manipulating the table data.
A migration, on the other hand, is a PHP file that defines the changes to be made to the database schema. It provides a simple way to create, update or delete database tables, columns, indexes, and other database-related entities.
So, we will create a `Product` model and its migration. The migration file contains the necessary code to create the products table with the `id`, `name`, `description`, `slug`, `price` and `timestamps` columns.
To do this, run the following command from your project root directory: `php artisan make:model Product --migration`. Now head over to the database/migrations folder and open up the file ending in `create_products_table`. Inside this file you will notice the `up` and `down` functions.
When we run a migration using the `php artisan migrate` command, Laravel will execute the up function to apply the changes defined in the migration. If we need to undo those changes later, we can run the `php artisan migrate:rollback` command, which will execute the down function to undo the changes made by the migration.
Inside the `up` function, add the following code:
```
Schema::create('products', function (Blueprint $table) {
$table->id();
$table->string('name');
$table->string('description')->nullable();
$table->string('slug');
$table->decimal('price', 5,2);
$table->timestamps();
});
```
Here's a breakdown of what each line does:
- `Schema::create('products', function (Blueprint $table)` starts the process of creating a new database table called products. The function (Blueprint $table) is a callback that defines the columns and their data types.
- `$table->id();` creates an auto-incrementing id column, which serves as the primary key for the products table.
- `$table->string('name');` creates a name column of type string.
- `$table->string('description')->nullable();` creates a description column of type string that can be nullable (i.e. it can have a null value).
- `$table->string('slug');` creates a slug column of type string.
- `$table->decimal('price', 5, 2);` creates a price column of type decimal with 5 total digits and 2 decimal places.
- `$table->timestamps();` creates two columns, created_at and updated_at, which are used to track the creation and modification timestamps of each row in the products table.
Overall, this code creates a products table with five columns: id, name, description, slug, and price, along with the two timestamp columns.
In your `Product` model class, add this code:
```
protected $fillable = [
'name',
'slug',
'price',
'description'
];
```
In Laravel, the `$fillable` property is used to specify which attributes of a model are allowed to be mass-assigned. Mass assignment is a convenient way to create or update multiple model instances at once using an array of data.
By setting these attributes in the `$fillable` array, we're telling Laravel that it's safe to mass-assign these attributes using an array of data. This is an important security measure that prevents malicious users from assigning arbitrary attributes to our model, which could potentially compromise our application.
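The whitelisting behaviour of `$fillable` is easy to picture outside of Laravel. Here is a framework-agnostic Python sketch (purely illustrative, not Laravel's actual implementation) showing the idea: only whitelisted keys survive mass assignment.

```python
# The columns a client is allowed to mass-assign (mirrors $fillable).
FILLABLE = {"name", "slug", "price", "description"}

def fill(attributes: dict) -> dict:
    """Drop any key not in the whitelist before writing to the model."""
    return {k: v for k, v in attributes.items() if k in FILLABLE}

# A malicious extra field like is_admin is silently discarded:
assert fill({"name": "Desk", "price": 19.99, "is_admin": True}) == {
    "name": "Desk",
    "price": 19.99,
}
```

This is why forgetting to list a column in `$fillable` makes it silently disappear from `Model::create($request->all())` calls.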
**Step 4: Create the Controller**
Next, let's create a controller to handle our API requests. The primary responsibility of a controller is to receive requests from the client and perform the necessary actions to generate a response. This includes retrieving data from the database, processing data, and rendering views. Run the following command to create a controller: `php artisan make:controller ProductController`
**Step 5: Add API Methods to the Controller**
Now, let's add the methods to the controller that will handle the CRUD operations for our API. Open the ProductController.php file and add the following code:
```
/**
 * Display a listing of the resource.
 *
 * @return \Illuminate\Http\Response
 */
public function index()
{
    return Product::all();
}
```
This `index` function uses the predefined `all()` method on the `Product` model to fetch all products.
Then, add this code:
```
/**
 * Store a newly created resource in storage.
 *
 * @param \Illuminate\Http\Request $request
 * @return \Illuminate\Http\Response
 */
public function store(Request $request)
{
    $request->validate([
        'name' => 'required',
        'slug' => 'required',
        'description' => 'required',
        'price' => 'required'
    ]);

    return Product::create($request->all());
}
```
Here, the `store` method provides a way to create a new product in the database based on input data from the client, while also validating the input data to ensure that it meets certain requirements.
`public function store(Request $request)`: This is the method definition. The public keyword indicates that the method can be accessed from outside the class, and store is the name of the method. The $request parameter is an instance of the Request class, which contains data from the HTTP request.
`$request->validate([...]);`: This line uses the validate method on the $request object to validate the input data from the client. In this case, the validation rules require the name, slug, description, and price fields to be present and not empty.
`return Product::create($request->all());`: This line creates a new product in the database using the create method on the `Product` model. The create method builds a new instance of the model from the input data and saves it to the database. The all() method on the $request object returns an array of all input data from the client.
After that, add the following code to find Product by Id, Delete Product by ID and Update Product by ID:
```
/**
 * Display the specified resource.
 *
 * @param int $id
 * @return \Illuminate\Http\Response
 */
public function show($id)
{
    return Product::find($id);
}

/**
 * Update the specified resource in storage.
 *
 * @param \Illuminate\Http\Request $request
 * @param int $id
 * @return \Illuminate\Http\Response
 */
public function update(Request $request, $id)
{
    $product = Product::find($id);
    $product->update($request->all());
    return $product;
}

/**
 * Remove the specified resource from storage.
 *
 * @param int $id
 * @return \Illuminate\Http\Response
 */
public function destroy($id)
{
    return Product::destroy($id);
}
```
The `show` function retrieves a product by its ID, the `update` function updates a product by its ID, and the `destroy` function deletes a product by its ID. Each of these functions returns an HTTP response that can be handled by a client-side application or another API.
**Step 6: Create the public API Routes**
Now, let's create the routes for our API. Open the routes/api.php file and add the following code:
```
//these are public routes
Route::get('/products', [ProductController::class, 'index']);
Route::get('/products/{id}', [ProductController::class, 'show']);
```
The first line defines a `GET` route for the `index` method of the `ProductController` class. This route is accessible at the "/products" endpoint and will return a list of all products.
The second line defines a `GET` route for the `show` method of the `ProductController` class that expects an "id" parameter. This route is accessible at the "/products/{id}" endpoint and will return the single product with the specified ID.
Now, to secure some of our endpoints we will use Sanctum. We want users to be authenticated before they can create, update, or delete a product. Before anything else, let's do a quick setup of Sanctum in your project.
- Install Laravel Sanctum via Composer using the command: `composer require laravel/sanctum`
- Next, publish the Sanctum configuration and migration files using the command: `php artisan vendor:publish --provider="Laravel\Sanctum\SanctumServiceProvider"`. This will create a migration for the `personal_access_tokens` table in your database.
- Migrate using the command `php artisan migrate`.
- Now, you should add Sanctum's middleware to your api middleware group within your application's `app/Http/Kernel.php` file. Within the `middlewareGroups` array, replace the `api` entry with the following code:
```
'api' => [
\Laravel\Sanctum\Http\Middleware\EnsureFrontendRequestsAreStateful::class,
\Illuminate\Routing\Middleware\ThrottleRequests::class.':api',
\Illuminate\Routing\Middleware\SubstituteBindings::class,
],
```
**Step 7: Create User Model and Migration**
We need to have users who can be authenticated before accessing the secure endpoints. From your project's root directory run the command: `php artisan make:model User --migration`
Head over to the migrations folder, open the `create_users_table` file, and within its `up` function paste the following code:
```
Schema::create('users', function (Blueprint $table) {
$table->id();
$table->string('name');
$table->string('email')->unique();
$table->timestamp('email_verified_at')->nullable();
$table->string('password');
$table->rememberToken();
$table->timestamps();
});
```
This will create a users table with the columns id, name, email, email_verified_at, password, remember_token and the two timestamp columns.
Now, within the `User` model class, add:
```
protected $fillable = [
'name',
'email',
'password',
];
```
Then run the command `php artisan migrate` to create the users table in your database.
**Step 8: Create AuthController**
Create the controller with the command `php artisan make:controller AuthController`. The `AuthController` will handle the login, register and logout logic.
- Register logic
To register a user, add the following code to your controller:
```
public function register(Request $request)
{
    $fields = $request->validate([
        'name' => 'required|string',
        'email' => 'required|unique:users',
        'password' => 'required|confirmed'
    ]);

    $user = User::create([
        'name' => $fields['name'],
        'email' => $fields['email'],
        'password' => bcrypt($fields['password'])
    ]);

    $token = $user->createToken('mytoken')->plainTextToken;

    $response = [
        'user' => $user,
        'token' => $token
    ];

    return response($response, 201);
}
```
Code Explanation
The register function accepts an HTTP request object as a parameter, which is used to validate the user's input fields. The method then creates a new user in the database with the validated input fields and hashes the user's password using the bcrypt function.
The user is then assigned a token using Laravel's built-in Sanctum package, which is returned along with the user's details in a response object with a status code of 201.
The `createToken` method is called on the authenticated user object and takes a string argument that serves as a name for the token. In this case, the name is `mytoken`.
The `plainTextToken` method is then called on the returned token object to retrieve the plain text version of the token. This token can be used for subsequent requests that require user authentication.
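Under the hood, Sanctum hands the client a plain-text value of the form `<token id>|<random string>` and stores only a SHA-256 hash of the random part in the `personal_access_tokens` table. The following simplified Python sketch (an illustration of the idea, not Sanctum's actual code) shows why a leaked tokens table still does not reveal usable tokens:

```python
import hashlib
import secrets

def issue_token(token_id: int) -> tuple[str, str]:
    """Return (plain-text token for the client, hash kept in the DB)."""
    raw = secrets.token_hex(20)
    stored_hash = hashlib.sha256(raw.encode()).hexdigest()
    return f"{token_id}|{raw}", stored_hash

def check_token(presented: str, stored_hash: str) -> bool:
    """Re-hash the presented secret and compare against the stored hash."""
    _, _, raw = presented.partition("|")
    return hashlib.sha256(raw.encode()).hexdigest() == stored_hash

plain, stored = issue_token(1)
assert check_token(plain, stored)          # the real token passes
assert not check_token("1|forged", stored)  # a forged token fails
```

The id prefix lets the server look up the right row quickly before comparing hashes.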
- Login Logic
This is almost the same as register, with only a few differences. Add the following code to your `AuthController`:
```
public function login(Request $request)
{
    $fields = $request->validate([
        'email' => 'required',
        'password' => 'required'
    ]);

    $user = User::where('email', $fields['email'])->first();

    if (!$user || !Hash::check($fields['password'], $user->password)) {
        return response([
            'message' => 'Bad credentials'
        ], 401);
    }

    $token = $user->createToken('mytoken')->plainTextToken;

    $response = [
        'user' => $user,
        'token' => $token
    ];

    return response($response, 200);
}
```
Code Explanation
The function takes in a POST request with the email and password fields validated. It then checks whether a user with the provided email exists in the database and whether the provided password matches the hashed password stored there, using the Hash facade provided by Laravel.
If the user is not found or the password doesn't match, it returns a response with a message indicating bad credentials. If the user is found and the password matches, it generates a token for the user and returns a response with the user details and the token. The token can be used to authenticate subsequent requests to the server.
- Logout Logic
Add:
```
public function logout(Request $request){
    $request->user()->tokens()->delete();

    return [
        'message' => 'logged out'
    ];
}
```
This retrieves the authenticated user using `$request->user()` and then deletes the user's access tokens using the `delete()` method.
**Step 9: Create the protected API Routes**
A user must be authenticated to access these routes. Add the following code to the `routes/api.php` file:
```
// protected routes
Route::group(['middleware'=> ['auth:sanctum']], function(){
Route::post('/products',[ProductController::class,'store']);
Route::put('/products/{id}',[ProductController::class,'update']);
Route::delete('/products/{id}',[ProductController::class,'destroy']);
Route::post('/logout',[AuthController::class,'logout']);
});
```
`Route::group()` is a function that allows you to group multiple routes that share a common middleware. In this case, the middleware specified is `auth:sanctum`, which is used to protect the routes in the group and restrict access to authenticated users only.
When a user tries to access any route within the group, the `auth:sanctum` middleware will check if the user is authenticated by checking for the presence of a valid access token. If the user is not authenticated, they will be redirected to the login page or will receive an HTTP 401 Unauthorized status code.
By wrapping a set of routes within a `Route::group() `function with the specified middleware, it ensures that those routes are only accessible to authenticated users who have a valid access token.
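Conceptually, the `auth:sanctum` middleware acts as a gatekeeper that runs before every handler in the group. This framework-agnostic Python sketch (purely illustrative; Laravel's middleware pipeline is more involved) shows the pattern:

```python
def auth_sanctum(handler):
    """Run the handler only if the request carries a valid token."""
    def wrapped(request: dict):
        if not request.get("valid_token"):
            # Unauthenticated requests never reach the route handler
            return {"status": 401, "body": {"message": "Unauthenticated."}}
        return handler(request)
    return wrapped

@auth_sanctum
def store_product(request: dict):
    # Stand-in for ProductController::store
    return {"status": 201, "body": request["payload"]}

assert store_product({"valid_token": False})["status"] == 401
assert store_product({"valid_token": True, "payload": {"name": "Desk"}})["status"] == 201
```

Grouping routes under the middleware means the check is written once instead of being repeated inside every controller method.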
**Step 10: Test your endpoints with Postman**
Head over to [Postman](https://www.postman.com) and test your endpoints. But first, start your server with the command `php artisan serve`.
Since there are no products in our database yet, let's first register a user so we can create some.
Use the endpoint `http://localhost:8000/api/register`


To Login use the endpoint `http://localhost:8000/api/login`

The response will have an authentication token that will be used to access protected routes.

Access protected routes after Authentication

Access protected routes without Authentication

At this point you can perform all the CRUD operations using the RESTful API you just created.
Here is the complete code of everything we have done:
`Product Model`
```
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class Product extends Model
{
use HasFactory;
protected $fillable = [
'name',
'slug',
'price',
'description'
];
}
```
`User Model`
```
<?php
namespace App\Models;
// use Illuminate\Contracts\Auth\MustVerifyEmail;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Foundation\Auth\User as Authenticatable;
use Illuminate\Notifications\Notifiable;
use Laravel\Sanctum\HasApiTokens;
class User extends Authenticatable
{
use HasApiTokens, HasFactory, Notifiable;
/**
* The attributes that are mass assignable.
*
* @var array<int, string>
*/
protected $fillable = [
'name',
'email',
'password',
];
/**
* The attributes that should be hidden for serialization.
*
* @var array<int, string>
*/
protected $hidden = [
'password',
'remember_token',
];
/**
* The attributes that should be cast.
*
* @var array<string, string>
*/
protected $casts = [
'email_verified_at' => 'datetime',
];
}
```
`ProductController`
```
<?php

namespace App\Http\Controllers;

use App\Models\Product;
use Illuminate\Http\Request;

class ProductController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        return Product::all();
    }

    /**
     * Store a newly created resource in storage.
     *
     * @param \Illuminate\Http\Request $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
        $request->validate([
            'name' => 'required',
            'slug' => 'required',
            'description' => 'required',
            'price' => 'required'
        ]);

        return Product::create($request->all());
    }

    /**
     * Display the specified resource.
     *
     * @param int $id
     * @return \Illuminate\Http\Response
     */
    public function show($id)
    {
        return Product::find($id);
    }

    /**
     * Update the specified resource in storage.
     *
     * @param \Illuminate\Http\Request $request
     * @param int $id
     * @return \Illuminate\Http\Response
     */
    public function update(Request $request, $id)
    {
        $product = Product::find($id);
        $product->update($request->all());
        return $product;
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param int $id
     * @return \Illuminate\Http\Response
     */
    public function destroy($id)
    {
        return Product::destroy($id);
    }
}
```
`AuthController`
```
<?php

namespace App\Http\Controllers;

use App\Models\User;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Hash;

class AuthController extends Controller
{
    public function register(Request $request)
    {
        $fields = $request->validate([
            'name' => 'required|string',
            'email' => 'required|unique:users',
            'password' => 'required|confirmed'
        ]);

        $user = User::create([
            'name' => $fields['name'],
            'email' => $fields['email'],
            'password' => bcrypt($fields['password'])
        ]);

        $token = $user->createToken('mytoken')->plainTextToken;

        $response = [
            'user' => $user,
            'token' => $token
        ];

        return response($response, 201);
    }

    public function login(Request $request)
    {
        $fields = $request->validate([
            'email' => 'required',
            'password' => 'required'
        ]);

        $user = User::where('email', $fields['email'])->first();

        if (!$user || !Hash::check($fields['password'], $user->password)) {
            return response([
                'message' => 'Bad credentials'
            ], 401);
        }

        $token = $user->createToken('mytoken')->plainTextToken;

        $response = [
            'user' => $user,
            'token' => $token
        ];

        return response($response, 200);
    }

    public function logout(Request $request)
    {
        $request->user()->tokens()->delete();

        return response(null, 204);
    }
}
```
`api.php`
```
<?php
use App\Http\Controllers\AuthController;
use App\Http\Controllers\ProductController;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
/*
|--------------------------------------------------------------------------
| API Routes
|--------------------------------------------------------------------------
|
| Here is where you can register API routes for your application. These
| routes are loaded by the RouteServiceProvider within a group which
| is assigned the "api" middleware group. Enjoy building your API!
|
*/
// Route::resource('products', ProductController::class);
// public routes
Route::post('/register', [AuthController::class, 'register']);
Route::get('/products', [ProductController::class, 'index']);
Route::get('/products/{id}', [ProductController::class, 'show']);
Route::post('/login', [AuthController::class, 'login']);
// protected routes
Route::group(['middleware'=> ['auth:sanctum']], function(){
Route::post('/products',[ProductController::class,'store']);
Route::put('/products/{id}',[ProductController::class,'update']);
Route::delete('/products/{id}',[ProductController::class,'destroy']);
Route::post('/logout',[AuthController::class,'logout']);
});
Route::middleware('auth:sanctum')->get('/user', function (Request $request) {
return $request->user();
});
```
**Conclusion**
In this tutorial, we covered the basics of building a RESTful API with Laravel and Sanctum. We started by setting up our Laravel project and database, creating migration files, and defining our models and controllers. We also discussed how to handle requests and responses, including creating, retrieving, updating, and deleting resources. We explored the use of middleware for authentication and authorization, and how to create and use access tokens with Sanctum.
Overall, this tutorial provided a solid foundation for building a robust and secure API with Laravel and Sanctum.
That's all, Enjoy!
| patricklusaya |
1,423,425 | Laravel 10 jwt auth using tymon/jwt-auth | Firstly install Laravel then install the tymon package: composer require tymon/jwt-auth Add... | 0 | 2023-04-02T15:15:34 | https://dev.to/debo2696/laravel-10-jwt-auth-using-tymonjwt-auth-297g | laravel, jwt, auth, php | Firstly install Laravel then install the tymon package:
`composer require tymon/jwt-auth`
Add the following to `config/app.php`:
```php
'aliases' => Facade::defaultAliases()->merge([
'Jwt' => Tymon\JWTAuth\Providers\LaravelServiceProvider::class,
'JWTFactory' => Tymon\JWTAuth\Facades\JWTFactory::class,
'JWTAuth' => Tymon\JWTAuth\Facades\JWTAuth::class,
])->toArray(),
```
You can also add the following to the `providers` array:
```php
Tymon\JWTAuth\Providers\LaravelServiceProvider::class,
```
Now publish the necessary files:
`php artisan vendor:publish --provider="Tymon\JWTAuth\Providers\LaravelServiceProvider"`
Run the migrations:
`php artisan migrate`
Generate the JWT signing secret; it will be saved in `.env`:
`php artisan jwt:secret`
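That secret is what signs and verifies every token the app issues. A JWT is three unpadded base64url segments (header, payload, signature); in the default HS256 scheme the signature is an HMAC-SHA256 over the first two segments. Here is a minimal framework-agnostic Python sketch of the signing and verification steps (illustrative only, not the tymon/jwt-auth implementation):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signature = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{signature}"

def verify_hs256(token: str, secret: bytes) -> bool:
    header, body, signature = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(signature, expected)

token = sign_hs256({"sub": 1}, b"app-secret")
assert verify_hs256(token, b"app-secret")
assert not verify_hs256(token, b"wrong-secret")
```

This is why leaking the value produced by `jwt:secret` lets anyone forge valid tokens: the payload is only encoded, not encrypted, and the signature is the sole integrity check.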
In `config/auth.php`, add the following `api` entry to the `guards` array:
```php
'api' => [
'driver' => 'jwt',
'provider' => 'users',
],
```
Now create some routes in `routes/api.php`:
```php
Route::middleware(['api'])->group(function() {
Route::post('/login', [AuthController::class, 'login']);
Route::post('/register', [AuthController::class, 'register']);
Route::get('/getaccount', [AuthController::class, 'getaccount']);
});
```
Modify the `User` model to use the JWT functionality:
```php
<?php
namespace App\Models;
// use Illuminate\Contracts\Auth\MustVerifyEmail;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Foundation\Auth\User as Authenticatable;
use Illuminate\Notifications\Notifiable;
use Laravel\Sanctum\HasApiTokens;
use Tymon\JWTAuth\Contracts\JWTSubject;
class User extends Authenticatable implements JWTSubject
{
use HasApiTokens, HasFactory, Notifiable;
/**
* The attributes that are mass assignable.
*
* @var array<int, string>
*/
protected $fillable = [
'name',
'email',
'password',
];
/**
* The attributes that should be hidden for serialization.
*
* @var array<int, string>
*/
protected $hidden = [
'password',
'remember_token',
];
/**
* The attributes that should be cast.
*
* @var array<string, string>
*/
protected $casts = [
'email_verified_at' => 'datetime',
];
public function getJWTIdentifier()
{
return $this->getKey();
}
public function getJWTCustomClaims()
{
return [];
}
}
```
Lastly modify the AuthController:
```php
<?php
namespace App\Http\Controllers;
use App\Models\User;
use Illuminate\Support\Facades\Validator;
use Illuminate\Support\Facades\Hash;
use JWTAuth;
use Illuminate\Http\Request;
class AuthController extends Controller
{
//
public function __construct()
{
$this->middleware('auth:api', ['except' => ['login', 'register']]);//login, register methods won't go through the api guard
}
public function login(Request $request)
{
$validator = Validator::make($request->all(), [
'email' => 'required',
'password' => 'required',
]);
if ($validator->fails()) {
return response()->json($validator->errors(), 422);
}
if (! $token = auth()->attempt($validator->validated())) {
return response()->json(['error' => 'Unauthorized'], 401);
}
return $this->respondWithToken($token);
}
public function register(Request $request)
{
$validator = Validator::make($request->all(), [
'name' => 'required|string|between:2,100',
'email' => 'required|string|email|max:100|unique:users',
'password' => 'required|string|confirmed|min:6',
]);
if($validator->fails()){
return response()->json($validator->errors()->toJson(), 400);
}
$user = User::create([
'name' => $request->get('name'),
'email' => $request->get('email'),
'password' => Hash::make($request->get('password')),
]);
$token = JWTAuth::fromUser($user);
return response()->json([
'message' => 'User successfully registered',
'user' => $user,
'token' => $token,
], 200);
}
public function getaccount()
{
return response()->json(auth()->user());
}
public function logout()
{
auth()->logout();
return response()->json(['message' => 'Successfully logged out']);
}
public function refresh()
{
return $this->respondWithToken(auth()->refresh());
}
protected function respondWithToken($token)
{
return response()->json([
'access_token' => $token,
'token_type' => 'bearer',
'expires_in' => auth('api')->factory()->getTTL() * 60 //mention the guard name inside the auth fn
]);
}
}
```
login response:

register response:

getaccount reponse:

To learn more about the package, check the documentation:
https://jwt-auth.readthedocs.io/en/develop/auth-guard/
All of the above code has been tried and tested on Laravel 10.0.
Do drop a like if it has helped you. :smiley: | debo2696 |
1,423,471 | Tableau: Your Ultimate Guide to Data Visualization | Table of contents I. Introduction II. Getting Started with Tableau III. Creating... | 0 | 2023-04-02T20:15:28 | https://dev.to/bellatrick/tableau-your-ultimate-guide-to-data-visualization-4gki | datascience, visualization, tableau, analytics | ### Table of contents
I. Introduction
II. Getting Started with Tableau
III. Creating Visualizations
IV. Creating a dashboard
V. Sharing and Collaborating
VI. Conclusion
## Introduction
Tableau is one of the most popular data visualization platforms out there and for good reason. Tableau was created by a company called Tableau Software, which was founded in 2003 by Chris Stolte, Christian Chabot, and Pat Hanrahan. They developed Tableau as a data visualization tool to help people see and understand their data better. The first version of Tableau was released in 2004. Since then, Tableau has become a leading data visualization tool used by individuals and organizations worldwide.
Data analysts prefer to use Tableau because it has a simple-to-use platform for combining and displaying data without requiring any coding skills. Tableau has many helpful tools that make it quick and effortless to create data visualizations, and its user interface is straightforward to navigate.
By the end of this user guide, you will be able to navigate around the Tableau interface, create data visualizations and join imported data from multiple sources for data analysis.
## Getting started with Tableau
Tableau has both a desktop and a web interface, both of which can be used to create data visualizations that can then be shared with the public through Tableau Public. Because Tableau Public is free to use, there is a large community of Tableau users who share their visualizations on the platform for others in the community to interact with.
To get started navigate to [Tableau public](https://public.tableau.com/app/about) on the web and create an account.

Click on the "Sign up for Tableau Public" button to create a new account. After you have filled in your details and you have signed into your account, you will be navigated to the home page.

To get started, click on "create" on the homepage, then select "web authoring". This will take you to the workbook page where you can work with data and make visualizations.

### Getting sample data
To use the workbook page, you need a dataset. Here's how to get it:
1. Go back to the Tableau public home page.
2. Click "resources" on the navbar.
3. Select "Sample data".
4. Download the "Netflix Movies and TV Shows" dataset.

The sample data page can be found [here](https://public.tableau.com/app/resources/sample-data)
### Importing dataset into Tableau
On the workbook page, a small window will appear. In this window, click the button that says "upload file from computer", then choose the dataset you just downloaded and add it to the workbook. To inspect the tables in the dataset, click on the table icon located to the right of each table name; a new web page will open containing the details of the selected table.

### Connect multiple tables
Tableau makes it easy to connect different tables that share the same keys. You don't need to write any code; you just drag the tables you want to join together and choose the common key to merge them. You do not have to physically merge the tables to connect them: pointing out their shared keys is enough for Tableau to infer the connection.
Drag all the tables to the right side of the screen. If you inspect each table, you will see that they all have a column called "show_id". This column is important because it links all the tables together.

When the tables are on the right side of the screen, click on "update now" on the bottom right side of the screen to update their relationship.
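Conceptually, relating the tables on show_id behaves like a join keyed on that column. A small Python sketch with hypothetical rows illustrates what Tableau infers from the shared key:

```python
# Hypothetical rows standing in for two of the Netflix tables
shows = [
    {"show_id": "s1", "title": "Show A"},
    {"show_id": "s2", "title": "Show B"},
]
directors = [{"show_id": "s1", "director": "Director X"}]

# Relating on show_id behaves like a left join keyed on that column:
by_id = {d["show_id"]: d["director"] for d in directors}
joined = [{**s, "director": by_id.get(s["show_id"])} for s in shows]

assert joined[0]["director"] == "Director X"
assert joined[1]["director"] is None  # rows without a match stay, with a null
```

Unlike a physical join, Tableau keeps the tables separate and only performs this matching when a visualization actually needs fields from both.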
## Creating visualizations
Creating visuals is one of the most important tasks for data analysts working in Tableau. The job isn't just about computing, collecting, and cleaning data; knowing how to build visual presentations is vital for explaining what the data means to you and your audience. It essentially gives the data a voice to narrate its story, and mastering this skill will help you progress in your career as a data analyst.
To start creating visuals in Tableau, go to the bottom of the page and find the tab labeled "sheet 1". Click on this tab to go to the page where you can create your visualization. The image below shows you what your screen should look like.

### Creating a line chart
The dataset for this visualization was taken from Tableau's sample data store and has not yet been cleaned. Usually it's not recommended to clean data in Tableau, as it mainly focuses on data visualization; however, for this tutorial you will be able to modify your data on the worksheet.
We'll use the "Release year" and "show_id" columns in the "netflix_titles" table for this visualization. The "Release year" column currently has the wrong data type: it is a string, which is why it is prefixed with a green "abc" label indicating its type. To fix it, hover over "Release year" and click on the arrow that appears. Next, hover over "change data type" and select "date" from the dropdown menu. This will change the data type to date.
Drag the "Release year" to the columns panel on the right hand side of the page. Then drag the "show_id" to the rows panel. This will create a Gnatt chart, but it's not the one we want. Click on "show_id" button on the pane, select "Measure" and choose "count" from the menu that appears. This will count how many times a "show" appears in a given year. If you've done everything correctly, your screen should look like the image below.

Next, filter out the null show_id values. Drag the show_id column to the filter panel located in the middle column of the page. Click on the new filter button for "show_id" and select "show all values", then deselect "null" to exclude null values from the visualization.
### Creating a packed bubble chart
To make a packed bubble chart that counts and aggregates categories, create a new sheet by clicking the plus icon next to "sheet 1" at the bottom of the page. This gives you a blank sheet for a new chart. You can rename the sheet by double-clicking its current name at the bottom of the page, typing the new name, and pressing enter to save.
For this chart, drag the type column from the netflix_titles table into the rows pane, then drag the same column into the columns pane. Click on one of the two type pills to open the dropdown menu, hover over "Measure", and select count. This will count the movie types in the table. The second, unconverted column will serve as the label for the count values.
On the rightmost part of the page, click on the chart icon labeled "show me". This reveals the chart types available. Select the one with multiple circles to automatically create a packed bubble chart for the movie types. Null values are currently present in the column; you can remove them by adding the type column to the filter pane and deselecting the null value, as you did for the line chart above.
If all goes well, your chart should look similar to the image below.

You can continue experimenting with the other chart types available in the "show me" panel. Up next, you will create a simple dashboard with your charts.
## Creating a dashboard.
Tableau has a special page just for making dashboards. To find it, click on "dashboard" in the menu bar. When you click on the dashboard link, a new tab will appear at the bottom of the screen that says "dashboard 1". This shows you that you've created a new dashboard. You can give your dashboard a different name by double-clicking on the text.
### Dashboard tools
The left-sidebar on the dashboard interface contains the settings and toolbar needed to build the dashboard. Let's go through the interface.
On the top, there is a tab with dashboard and layout settings as seen below.

#### Device preview
The default view is the desktop view. The phone view automatically stacks the desktop layout's elements on top of each other to make the layout responsive.
#### Sheets
The sheets are all your previously created charts. They are automatically loaded on the dashboard. To add a sheet to the dashboard, double click on the name of the sheet or drag and drop the chart to the dashboard builder panel.
#### Objects
The dashboard can have extra features added to it for specific purposes. You can include a vertical or horizontal container to hold your charts, or images from outside sources. The "Ask Data" tool is powered by artificial intelligence and allows you to ask a question and receive an answer based on the data available on the dashboard.
#### Layout
The layout tab allows you to style your dashboard to your preference. Here, you can adjust the size of the dashboard and even the background color to suit the theme of your data or the company that owns the data.
#### Floating or Tiled layout
When creating a dashboard, how things are arranged is crucial. It can be frustrating when the layout isn't aligning the way you want it to be. Luckily, Tableau helps data analysts by providing two layout options: tiled or floating. This makes arranging things much easier. Floating layout allows you to freely position each view on the dashboard without being restricted to a fixed grid or alignment. This means you can adjust the size and position of each view as needed, allowing for more flexibility and creativity in your dashboard design. However, this can also make it more difficult to maintain a consistent layout and can sometimes result in views overlapping or being hidden behind other views.
Tiled layout, on the other hand, automatically arranges views in a grid-like fashion, with each view taking up an equal amount of space. This provides a more structured and organized look to your dashboard, making it easier to compare different views side-by-side. However, this can also be more limiting in terms of design options, as you are restricted to the predetermined grid.
Ultimately, the choice between floating and tiled layouts depends on your specific needs and preferences for your dashboard design.
Let's get started on creating a simple dashboard with only 4 charts.
### Adding charts to the layout
To make the charts on the page movable and adjustable, choose the floating layout located on the sidebar, on the bottom left of the dashboard page.
#### Increase the size of the chart
To increase the size of a chart, select it, then hover over its edges until you see a double-arrow icon. This indicates that the size can be adjusted.
#### Move the chart
To move a chart on a page, first select the chart. Then, move your mouse cursor to the top area of the chart until you see an icon with arrows pointing in all directions. This icon means that the chart can be moved.
Adjust and change the sizes of the chart till it looks good enough. Here is how my dashboard currently looks.

Great job on creating a Tableau dashboard from scratch! Now, let's learn how to share your visualizations with others, including the public, and work with other team members to collaborate on your project.
## Sharing and collaborating
Follow these steps to share your work on Tableau Public:
A. Share or publish your visualization:
* Open the workbook that contains the visualization you want to share.
* Click the "Server" menu in the top navigation bar, and select "Publish Workbook".
* In the "Publish Workbook" dialog box, select the project to which you want to publish the visualization, and click "Publish".
B. Set Permissions:
* Once the workbook is published, click the "Permissions" button to set the permissions for the visualization.
* In the "Permissions" dialog box, select the "Everyone" group, and choose the permission level you want to give them (e.g., "View").
* Click "Add" to save the permission changes.
C. Publish to Tableau Public:
* To make the visualization publicly accessible, publish it to Tableau Public.
* Click the "Server" menu in the top navigation bar, and select "Publish to Tableau Public".
* In the "Publish Workbook to Tableau Public" dialog box, enter a title and description for the visualization, and choose the category and tags that apply.
* Click "Publish" to upload the workbook to Tableau Public.
D. Share the link:
Once the visualization is uploaded, you can share it with others by sharing the URL.
To get the URL, open the visualization on Tableau Public, and copy the link from the browser address bar.
You can also share the link on social media or embed the visualization on a website.
### Collaborate
Tableau allows you to work as a team and collaborate on your visualization. This feature allows for a more efficient and team-oriented workflow. To get started, follow these steps;
Step 1: Invite collaborators
To invite collaborators to your project, click on "Share" in the top right corner of the screen. Enter the email addresses of the people you want to invite and select the level of access you want to give them (Editor or Viewer). You can also include a message to your collaborators.
Step 2: Collaborate on the project
Once your collaborators have accepted your invitation, they can access the project and work on it with you. You can collaborate on different parts of the project by creating and sharing worksheets, dashboards, and stories. You can also leave comments and annotations for your collaborators to see.
Step 3: Manage permissions
You can manage permissions for your collaborators by clicking on "Share" and selecting "Manage Permissions". Here you can change the level of access for each collaborator, remove collaborators from the project, and add new collaborators.
## Conclusion
In this user guide, you have hopefully learned what Tableau is, how to import data into a Tableau workbook, how to connect different tables, how to create visualizations, and how to collaborate with other people using the platform. Tableau is an amazing platform for creating data visualizations and I hope you have learned enough in this user guide to continue to build amazing visualizations in Tableau as you continue in your data analytics journey. Happy building!!!
| bellatrick |
1,423,480 | Debugging AWS Lambda Functions in GO | I've spent a fair amount of time on the interwebs reading various resources on the topic of debugging... | 0 | 2023-04-02T17:00:44 | https://dev.to/js10x/debugging-aws-lambda-functions-in-go-1cjk | programming, aws, go, cloudskills | I've spent a fair amount of time on the interwebs reading various resources on the topic of debugging AWS Lambda functions locally for testing. There's really not much helpful information out there. Most resources point you to overengineered solutions that involve way more than you need and the final result still leaves something to be desired in most cases.
In the AWS SDK for the Go runtime, Lambda functions support the ability to run as RPC servers, making them very easy to invoke in a multitude of different environments.
So, when we run our Lambda locally, it just serves and blocks like any other server-based system. From there, we only need an RPC client to send it a message.
A huge benefit to doing it this way is that if your Lambda uses any other AWS service clients (e.g. DynamoDB, S3, Secrets Manager), you can actually debug those live, deployed, cloud-based resources without doing anything else. **No 3rd party libraries, no frameworks, no containers**. The only catch is that your AWS shared credentials file has to be set up so that the session the Lambda creates can be authenticated.
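To make the mechanics concrete, here is a self-contained Go sketch of the idea (not the linked client itself). In the real aws-lambda-go runtime, the RPC server is enabled by the `_LAMBDA_SERVER_PORT` environment variable and the wire types live in `github.com/aws/aws-lambda-go/lambda/messages`; the `InvokeRequest`/`InvokeResponse` structs and the in-process server below are simplified stand-ins so the example runs with no dependencies:

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Simplified stand-ins for the real wire types in
// github.com/aws/aws-lambda-go/lambda/messages.
type InvokeRequest struct {
	Payload   []byte
	RequestId string
}

type InvokeResponse struct {
	Payload []byte
	Error   *string
}

// Function mimics the RPC service the Go Lambda runtime registers;
// the method name on the wire is "Function.Invoke".
type Function struct{}

func (f *Function) Invoke(req *InvokeRequest, resp *InvokeResponse) error {
	// Echo handler standing in for your real Lambda handler logic.
	resp.Payload = []byte(`{"echo":` + string(req.Payload) + `}`)
	return nil
}

// invokeLocal starts an in-process RPC "Lambda" and invokes it once,
// the way a debugging client would against a locally served function.
func invokeLocal(payload string) (string, error) {
	srv := rpc.NewServer()
	if err := srv.RegisterName("Function", new(Function)); err != nil {
		return "", err
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer ln.Close()
	go func() {
		// Serve a single connection, as the runtime's RPC server would.
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		srv.ServeConn(conn)
	}()

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		return "", err
	}
	defer client.Close()

	var resp InvokeResponse
	req := InvokeRequest{Payload: []byte(payload), RequestId: "local-1"}
	if err := client.Call("Function.Invoke", &req, &resp); err != nil {
		return "", err
	}
	return string(resp.Payload), nil
}

func main() {
	out, err := invokeLocal(`{"name":"test"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"echo":{"name":"test"}}
}
```

Against a real locally running Lambda you would keep only the client half: set `_LAMBDA_SERVER_PORT` before starting your function, import the `messages` package for the request/response types, and dial that port instead of spinning up the toy server.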
Check out the source for the RPC client, it's quite simple, and is just the right amount of engineering to solve the problem.
You're welcome, internet.
https://github.com/JS10X/aws-lambda-rpc-client
| js10x |
1,425,854 | Dual-boot Ubuntu-Windows 11 | Dual-boot First create a partition for installing there Ubuntu Instal the .iso from... | 0 | 2023-04-04T14:35:46 | https://dev.to/paugarcia32/dual-boot-ubuntu-windows-11-4n69 |
# Dual-boot
First create a partition for installing there Ubuntu
Download the .iso from the [Ubuntu website](https://ubuntu.com/download/desktop)
Install [rufus](https://rufus.ie/es/)
Write the Ubuntu ISO to a pendrive using the Rufus software
Restart the PC and set the USB as the first boot option
In the Ubuntu installer, configure everything as you want until the "Installation type" step, where you should check the "Something else" option
There, select the free space, create a new partition with the "+" button, and assign 2048 MB of space as a primary-type partition to use as the swap partition.
With the rest of the space, create a primary partition with Ext4 format and mount point "/"
Then continue with the installation
# Shared folder between systems
Create a new NTFS partition (on Ubuntu you can install GParted)
Mount the new partition
Create a folder in the root where you want to share files
```bash
sudo mount -t ntfs -o rw /dev/*partition* /folder/
```
| paugarcia32 | |
1,427,300 | Help! How do I merge multiple Tailwindcss output from two projects? Let me explain. | I need help as google, AI and my search on Dev.to has resulted in no findings. I'm somewhat new to... | 0 | 2023-04-05T15:22:43 | https://dev.to/dechamp/help-how-do-i-merge-multiple-tailwindcss-output-from-two-projects-let-me-explain-2lfp | help, tailwindcss | I need help as google, AI and my search on Dev.to has resulted in no findings.
I'm somewhat new to Tailwindcss. I rewrote my whole site in it and it's working beautifully. It's performant for what I need.
However, I ran into an issue. First let's understand how I'm using it so you can tell me where I went wrong and how to fix it :P
My project started off with React. So we'll call it the "App".
Overtime, the performance and seo just wasn't cutting it so I realized I needed to move my blog portion over to a static setup, so I chose Nextjs.
We'll call this part of the site, the "Blog".
When I built the blog, I wanted it to share the same look and feel without repeating my work. So I abstracted out the component-library.
For simplicity, it's the "library"
So I have tailwind in all three. This might have been my mistake but I've seen it done before so I know it works.
App: React/Webpack
Blog: Nextjs
Library: React/vite
Tailwind configs all match for each project.
So both the App and the Blog, have their own tailwindcss index file as well as the Library tailwind file. It seems buggy.
95% of my styles work but others seem to be lost and confused about which one overwrites which.
I may have solved most of the issue last night by removing `@tailwind base` from the apps since it's passed through the Library but not all of them.
A few takeaways.
- I should have considered this at the start but I think I was just running with it being new to it.
- This shows that perhaps I did not read the docs fully. RTFM
- Yes, I'm aware Nextjs has zones which would allow me to share the apps. I chose apache2 as my server because I've seen better load times for the App.
Let me know if I need to add more details like how my site is actually configured with screenshots and what not.
If you help me fix this I'll buy you a coffee no problem! :)
| dechamp |
1,429,448 | Trust issues: The Topic Of ChatGPT & Its Position In The World Of Coding | “That seemed like magic to me,” Pietro Schirano says. “Since it was so brilliant, it scared me a... | 0 | 2023-04-07T17:51:40 | https://pinjarirehan.medium.com/trust-issues-the-topic-of-chatgpt-its-position-in-the-world-of-coding-b68a98f8e429 | gpt3, chatgptreview, chatgpt4, chatgpt | ---
title: Trust issues: The Topic Of ChatGPT & Its Position In The World Of Coding
published: true
date: 2023-04-07 11:00:02 UTC
tags: chatgpt3,chatgptreview,chatgpt4,chatgpt
canonical_url: https://pinjarirehan.medium.com/trust-issues-the-topic-of-chatgpt-its-position-in-the-world-of-coding-b68a98f8e429
---

“That seemed like magic to me,” Pietro Schirano says. “Since it was so brilliant, it scared me a little.”
He’s discussing the most recent version of ChatGPT, an artificial intelligence (AI) platform.
You enter something in, and it generates a response for you.
It’s strange how human, friendly, and intelligent the replies appear to be.
GPT-4 is an AI model that sits below it.
- Everything you need to know about ChatGPT
- ChatGPT chatbot is banned in Italy.
Mr. Schirano, design lead at business financial company Brex, has been creating computer programs with ChatGPT.
“I want to make a Pong-like game,” he said, about Atari’s 1970s table tennis game.
“What is the ideal language to [use] so that a large number of people can test it?”
ChatGPT not only suggested a web programming language (JavaScript) but also generated code for a basic version of the game.
What an experienced developer would have taken half an hour to write took ChatGPT 40 seconds.
It was similarly effective for the popular arcade games Breakout and Asteroids.

_Pong is pictured here being played at a trade show in Cologne in 2019. ChatGPT immediately developed a basic version of the game._
On August 21, 2019, visitors play the classic game “Pong” during the Gamescom trade show in Cologne, western Germany.
There are countless code examples for these games offered online, but ChatGPT is not looking for a ready-made remedy.
While being created with massive amounts of web content, ChatGPT produces new text for you by predicting the correct answer.
“ChatGPT understood my question about how I might make this game more popular,” Mr. Schirano says.
“It has a lot of contextual awareness that someone who has never developed wouldn’t be able to obtain from Google.”
You’d have to use a search engine to figure out which coding language to use, how to use it, and how to complete any missing examples available.
ChatGPT could provide you with all of the game codes as well as guidelines suited to your specific needs.
“Today, even someone with no knowledge of what they’re doing can make a simple web app,” Mr. Schirano said.
Others say that because there are so many examples online, recreating these classic games was too simple.
Then Mr. Schirano’s colleague, the design manager at Brex Ammaar Reshi, pushed ChatGPT even further.
He requested it to rebuild an old PC game for which no example code is available on the internet.
Skyroads is a 3D game in which you direct a car onto a path while hopping over objects.
ChatGPT heard about the game through its Wikipedia page, and Mr. Reshi explained how it worked.
“There was a lot of back and forth with this one,” Mr. Reshi says.
“Maybe an hour of adjusting this and changing that.”
He had to test the game, tell ChatGPT how to change the game’s behavior, and then do it all over again.
Considering this, Mr. Reshi was able to create an online 3D game despite not knowing any JavaScript or having access to code examples for that game. | pinjarirehan |
1,437,704 | Deploying to Google Cloud Run with Github Actions: A Step-by-Step Guide | What is Google Cloud Run? Google Cloud Run is a serverless container platform that... | 0 | 2023-04-16T16:39:40 | https://dev.to/oluwafemiakind1/deploying-to-google-cloud-run-with-github-actions-a-step-by-step-guide-53nf | githubactions, devops, serverless, googlecloudrun |
## What is Google Cloud Run?
> Google Cloud Run is a serverless container platform that enables developers to run applications in a fully managed environment. It allows you to deploy stateless containers on a pay-as-you-go basis and auto-scales your application based on incoming traffic.
## What is Github Actions?
> GitHub Actions is a powerful workflow automation tool that allows developers to automate their development workflows. It integrates well with Google Cloud Run, making it easy to deploy applications from GitHub to Cloud Run.
In this article, I will be deploying a containerized web application to Google Cloud Run using GitHub Actions.
**NOTE: If you want to use Gitlab CI/CD instead of GitHub Actions, see my other article** [**here**](https://medium.com/@oluwafemiakinde/deploying-to-google-cloud-run-with-gitlab-ci-cd-a-step-by-step-guide-2c617e4ea2d4)**.**
Let’s continue….
## Prerequisites
Before we dive into the tutorial, make sure you have the following:
* A Google Cloud Platform account
* A GitHub account
* Docker installed on your local machine
## Step 1: Set up your project on Google Cloud
Before we can deploy our application to Google Cloud Run, we need to create a new project on Google Cloud Platform and enable the Cloud Run API. Here’s how to do it:
1. Go to the [Google Cloud Console](https://console.cloud.google.com/).
2. Click on the project dropdown menu and select “New Project”.
3. Give your project a name and click “Create”.
4. Once your project is created, click on the “Activate Cloud Shell” button on the top right corner of the page.
5. Run the following command to enable the Cloud Run API:
```
gcloud services enable run.googleapis.com
```
**An alternative way to enable Cloud Run API**
1. Go to the Google Cloud Console and select your project.
2. In the left navigation menu, click on “APIs & Services” and then “Dashboard.”
3. Click on the “+ ENABLE APIS AND SERVICES” button.
4. Search for “Cloud Run API” and click on it.
5. Click the “Enable” button.
## Step 2: Create a Dockerfile
Next, we need to create a Dockerfile for our application. This file will contain instructions on how to build a container image for our application.
Here’s an example Dockerfile for a Node.js application:
```
# Use the official Node.js image
FROM node:14-alpine

# Set the working directory
WORKDIR /app

# Copy the package.json and package-lock.json files
COPY package*.json ./

# Install the dependencies
RUN npm install --production

# Copy the rest of the application code
COPY . .

# Expose port 8080
EXPOSE 8080

# Start the application
CMD ["npm", "start"]
```
Save this file in the root directory of your project.
## Step 3: Build and test the container locally
Before deploying our container to Google Cloud Run, let’s build and test it locally. Run the following command to build the container image:
```
docker build -t <your-image-name> .
```
Replace `<your-image-name>` with a name for your container image. Once the build is complete, run the container with the following command:
```
docker run -p 8080:8080 <your-image-name>
```
This will start the container and map port 8080 on your local machine to port 8080 inside the container. Open your web browser and go to `http://localhost:8080` to test your application.
## Step 4: Set up GitHub Actions
GitHub Actions is a powerful tool that allows you to automate your software development workflows. In this step, we will be creating a GitHub Actions workflow to build and deploy our container to Google Cloud Run.
1. In your GitHub repository, click on the “Actions” tab.
2. Click on the “Set up a workflow yourself” button.
3. Replace the contents of the file with the following code:
```
name: "Deploy to Google Cloud Run"
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Set up Google Cloud SDK
uses: google-github-actions/setup-gcloud@master
with:
project\_id: <your-project-id>
service\_account\_key: ${{ secrets.GCP\_SA\_KEY }}
export\_default\_credentials: true
- name: Configure docker for GCP
run: gcloud auth configure-docker
- name: Build and push Docker image
uses: docker/build-push-action@v2
with:
context: .
push: true
tags: gcr.io/<your-project-id>/<your-image-name>:latest
build-args: |
HTTP\_PORT=8080
- name: Deploy to Cloud Run
uses: google-github-actions/deploy-cloudrun@main
with:
image: gcr.io/<your-project-id>/<your-image-name>:latest
service: <your-service-name>
region: <your-region>
platform: managed
allow-unauthenticated: true
env\_vars: |
FOO=bar
ZIP=zap
```
Replace `<your-project-id>`, `<your-image-name>`, `<your-service-name>`, and `<your-region>` with your own values.
You can see more [here](https://github.com/google-github-actions/deploy-cloudrun) on how to use the google cloud run github actions.
4\. Click on the "Start commit" button and commit the changes to the repository.
## Step 5: Deploy to Google Cloud Run
Once the GitHub Actions workflow completes successfully, your container should be deployed to Google Cloud Run. To verify that your application is running, go to the Google Cloud Console, select your project, and click on “Cloud Run” in the sidebar. You should see your service listed there.
Click on the service to view its details, including the URL for your application. Open this URL in your web browser to test your deployed application.
Congratulations! You have successfully deployed a containerized web application to Google Cloud Run using GitHub Actions.
If you liked this article, please leave a clap or even a comment and don’t forget to follow me to get updated when I publish another one. Thanks!
| oluwafemiakind1 |
1,438,558 | Nathaniel's theory | https://www.bankfanz.com/how-to-check-fidelity-bank-account-balance/ https://plaza.rakuten.co.jp/fid... | 0 | 2023-04-17T11:57:40 | https://dev.to/fidelity11/nathaniels-theory-2i9n | [https://www.bankfanz.com/how-to-check-fidelity-bank-account-balance/](https://www.bankfanz.com/how-to-check-fidelity-bank-account-balance/#utm_source=backlinks&utm_medium=search&utm_campaign=darry+ring+us&utm_content=Michelle)
https://plaza.rakuten.co.jp/fidelityaccount/diary/202304170000/
https://peatix.com/user/16897877/
https://www.provenexpert.com/fidelity-bank-code-to-check-balance6/
https://www.producthunt.com/@nathani03945187
https://www.divephotoguide.com/user/Fidelityaccountbalance11
https://vocal.media/authors/nathaniel-bassey-0s2lo0et0
https://www.lifeofpix.com/photographers/fidelityaccountbalance11/
https://seedandspark.com/user/fidelity-bank-code-to-check-balance-6
https://rosalind.info/users/Fidelityaccountbalance11/
http://phillipsservices.net/UserProfile/tabid/43/userId/213827/Default.aspx
https://app.roll20.net/users/11873822/fidelity-bank-code-to-check-b
https://speakerdeck.com/fidelityaccountbalance11
https://social.msdn.microsoft.com/profile/fidelity%20bank%20code%20to%20check%20balance/
https://camp-fire.jp/profile/Fidelityaccountbalance11
https://plazapublica.cdmx.gob.mx/profiles/Fidelity11/activity
https://www.metal-archives.com/users/Fidelityaccountbalance11
https://storium.com/user/fidelityaccountbalance11
https://trabajo.merca20.com/author/fidelityaccountbalance11/
https://www.intensedebate.com/people/fidelitybank11
https://www.mifare.net/support/forum/users/fidelityaccountbalance11/
https://pinshape.com/users/2615798-fidelity11
http://foxsheets.com/UserProfile/tabid/57/userId/128375/Default.aspx
https://www.kompasiana.com/fidelity11
https://leanin.org/circles/fidelity-bank-code-to-check-11
https://www.credly.com/users/fidelity-bank-code-to-check-balance.b7514343/badges
https://www.myminifactory.com/users/Fidelityaccountbalance11
https://www.sqlservercentral.com/forums/user/fidelityaccountbalance11
https://www.longisland.com/profile/Fidelityaccountbalance11
https://guides.co/a/nathaniel-bassey
https://myanimelist.net/profile/Fidelitybank11
| fidelity11 | |
1,438,922 | Reduce the environmental impact of YouTube videos on your website with thumbnails. | A few weeks ago, one of our clients (at Troopers) gave us the results of an audit... | 0 | 2023-04-17T17:24:20 | https://dev.to/gautron_marie/reduisez-limpact-environnemental-des-videos-youtube-sur-votre-site-web-avec-des-vignettes-1fb4 | A few weeks ago, one of our clients (at [Troopers](https://troopers.coop/)) shared the results of an eco-design audit with us. The audit showed that one of the audited pages weighed 20 MB, of which 9.7 MB was attributable to embedded YouTube players! On top of that, 51 of the page's 221 requests went to YouTube.com alone, even though the page only contained 4 embedded videos.
To fix this, we decided to slot a small quick win into our sprint: **reduce the weight of the page by shrinking the size of the YouTube videos.**

_Ce lecteur pèse lourd alors que la vidéo n'est même pas lancée, c'est triste._
## Reduce the size of your web page by displaying thumbnails for YouTube videos
Online videos, and especially YouTube's, are known to be real energy hogs. In fact, they account for around 20% of the total greenhouse gas (GHG) emissions attributable to digital technology, **i.e. 1% of global greenhouse gas emissions**, according to a study by The Shift Project in 2019.
One solution to reduce this impact is to **replace the YouTube player with a thumbnail.** The thumbnail can look like the player and invite the user to click to start playback. This avoids needlessly loading the video player on page load if the visitor decides not to watch the video.
With this approach, the web page is lighter and less energy-hungry. Indeed, thumbnails are static images that are generally quite small, which means they do not require much bandwidth to download. Moreover, thumbnails are often cached by users' browsers, so if a thumbnail has already been downloaded for a particular video, it will not be downloaded again when the user visits another page that contains the same thumbnail.
By contrast, **video players, like YouTube's, require a large amount of bandwidth and loading time**. They must not only download the video itself, but also load everything needed to play and control it, such as the play/pause buttons, the progress bar, the video quality options, and so on. As a result, thumbnails are an effective alternative to video players for reducing bandwidth consumption and page load time.
Besides being more environmentally friendly, replacing the YouTube player with a video thumbnail also means your users will **not be bothered by videos playing automatically**, which can quickly become annoying.
And on your side, no worries: you just have to keep an eye on the thumbnail size to guarantee an optimal user experience. Everyone wins!
## How to replace the YouTube player with a thumbnail?
### Retrieving the embed code from the Prismic headless CMS
For this client, we retrieve an embed code from the Prismic [headless CMS](https://troopers.coop/expertises/cms-headless).
So we already have the data we need for the thumbnail. If that is not the case in your project, you can fetch the thumbnail URL like this for YouTube: `https://img.youtube.com/vi/${videoId}/0.jpg`. (`0.jpg` is the thumbnail size you want to retrieve.)
### Step 1: modify the `YoutubePlayer` component
We are going to modify the `YoutubePlayer` component so that it displays either the thumbnail, or the video on click. In this project we use the react-youtube library to display the video but, depending on your project, you can simply use an HTML `<iframe>`.
```
import React, { useState } from 'react'
import YouTube from 'react-youtube'
import Thumbnail from './Thumbnail'
const YoutubePlayer = ({
videoId,
autoplay,
subtitles = 0,
thumbnailUrl,
thumbnailHeight,
title,
...props
}) => {
const [showVideo, setShowVideo] = useState(false)
const handleClick = () => {
setShowVideo(true)
}
if (showVideo) {
return (
<YouTube
key={videoId}
videoId={videoId}
opts={{
playerVars: {
autoplay: true,
cc_load_policy: subtitles,
cc_lang_pref: 'fr',
rel: '0',
},
}}
style={{ minHeight: thumbnailHeight }}
{...props}
/>
)
}
return (
// Here we display the thumbnail: Step 3
)
}
```
### Step 2: Create a `Thumbnail` component
The second step is to create a `Thumbnail` component that displays the video thumbnail and loads the video only when the user clicks play.
We also added a player icon to give the thumbnail the look of a video. Customize the svg however you like!
```
import React from 'react'
import styles from './YoutubePlayer.scss'
import cn from 'classnames'
const Thumbnail = ({ handleClick, thumbnailUrl, videoId, thumbnailHeight, title, ...props }) => {
return (
<button onClick={handleClick} {...props}>
<img
src={thumbnailUrl || `https://i.ytimg.com/vi/${videoId}/hqdefault.jpg`}
alt={title}
height={thumbnailHeight}
className={styles.Thumbnail}
/>
<div className={styles.playButton} title="Lire la vidéo">
<svg
version="1.1"
id="play"
x="0px"
y="0px"
height="70px"
width="70px"
viewBox="0 0 100 100"
enable-background="new 0 0 100 100"
>
<path
fill="black"
d="M38,69c-1,0.5-1.8,0-1.8-1.1V32.1c0-1.1,0.8-1.6,1.8-1.1l34,18c1,0.5,1,1.4,0,1.9L38,69z"
/>
</svg>
</div>
</button>
)
}
export default Thumbnail
```
### Step 3: Use the `Thumbnail` component
Finally, we use the `Thumbnail` component we just created in our `YoutubePlayer` component.
```
import React, { useState } from 'react'
import YouTube from 'react-youtube'
import Thumbnail from './Thumbnail'
const YoutubePlayer = ({
videoId,
autoplay,
subtitles = 0,
thumbnailUrl,
thumbnailHeight,
title,
...props
}) => {
const [showVideo, setShowVideo] = useState(false)
const handleClick = () => {
setShowVideo(true)
}
if (showVideo) {
return (
<YouTube
key={videoId}
videoId={videoId}
opts={{
playerVars: {
autoplay: true,
cc_load_policy: subtitles,
cc_lang_pref: 'fr',
rel: '0',
},
}}
style={{ minHeight: thumbnailHeight }}
{...props}
/>
)
}
return (
<Thumbnail
handleClick={handleClick}
thumbnailUrl={thumbnailUrl}
thumbnailHeight={thumbnailHeight}
title={title}
videoId={videoId}
{...props}
/>
)
}
```
### The result
And there you have it! The `Thumbnail` component loads the video thumbnail and only loads the video when the user clicks play.
Here is what it looks like:

Remember that famous page the audit showed weighed 20 MB? It now weighs 12 MB. **So we saved 8 MB on this page**! On top of that, **we reduced the number of requests on the page.** Previously, the browser made 221 requests to load all the page's content; today there are only 136!

## More tips to reduce the environmental impact of your videos
First of all, try to use a video player optimized for energy consumption, such as the [Plyr](https://github.com/sampotts/plyr) player. By reducing the amount of energy needed to stream videos, you can considerably reduce your application's environmental impact.
Also, if you can, use **alternative platforms to YouTube**, such as PeerTube. PeerTube is a decentralized, open-source platform that lets users create and manage their own PeerTube server, forming a network of autonomous servers that share video content via a peer-to-peer (P2P) protocol. Unlike YouTube, where every video is stored on Google's servers and streamed directly to users, PeerTube lets users **upload videos to local servers** and stream them using the P2P protocol, which reduces the load on central servers and the associated energy consumption.
If you use YouTube videos, you can also limit energy consumption by not displaying the video's iframe on your page. Instead, you can simply show a thumbnail that, when clicked, **opens a new window with the YouTube video**. [On our site, for our PeerTube videos](https://troopers.coop/cooperative), we went with this solution.
Also, instead of loading the video directly, you can use the YouTube API (or the PeerTube API in our case) to fetch information about the video, such as its title, description, and view count. **This lets you display this information without having to load the video itself**. A trick we also used for our own site!
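As a rough illustration of that last tip, here is a small sketch (in Go; the site's own stack is React, but the shape of the call is the same in any language). YouTube exposes a public oEmbed endpoint (`https://www.youtube.com/oembed?url=...&format=json`) whose small JSON response carries a video's title and thumbnail without ever loading the player; the payload below is canned so the example runs offline, and the struct only covers the subset of fields we care about:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// oEmbed is a subset of the JSON returned by YouTube's public oEmbed
// endpoint. Fetching this tiny document lets a page show a video's
// title and thumbnail without loading the player iframe.
type oEmbed struct {
	Title        string `json:"title"`
	AuthorName   string `json:"author_name"`
	ThumbnailURL string `json:"thumbnail_url"`
}

func parseOEmbed(data []byte) (oEmbed, error) {
	var o oEmbed
	err := json.Unmarshal(data, &o)
	return o, err
}

func main() {
	// Canned response standing in for a live HTTP call to the endpoint.
	sample := []byte(`{"title":"My video","author_name":"Troopers","thumbnail_url":"https://i.ytimg.com/vi/abc123/hqdefault.jpg"}`)
	o, err := parseOEmbed(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(o.Title, o.ThumbnailURL)
}
```

In a real page you would make one HTTP GET to the oEmbed endpoint at build time or on the server, then render the title and thumbnail statically.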
## Conclusion
By using thumbnails and loading videos only when the user needs them, you can not only reduce the size of your page but also cut the energy required to load your site.
By applying these best practices, you can help protect the environment while offering **an optimal user experience for your visitors, in line with sustainable IT.** | gautron_marie |
1,438,947 | Avalanche Effect and Proof of Work blockchains | In this article, we will explore the concept of the avalanche effect in cryptography, its... | 0 | 2023-04-17T18:02:35 | https://dev.to/vishakbhaskar/avalanche-effect-and-proof-of-work-blockchains-eg9 | blockchain, ethereum, web3, smartcontract | In this article, we will explore the concept of the avalanche effect in cryptography, its significance in securing data, and its application in blockchain technology, specifically in the case of Bitcoin. By the end of this article, you will have a good understanding of what the avalanche effect is, why it matters, and how it contributes to the security of blockchain systems. So, let's dive in!
The avalanche effect in cryptography is a property of cryptographic functions, where a small change in the input of the function results in a significant change in the output. In other words, if you make a small change to the input of a cryptographic function, the output will change so much that it appears completely unrelated to the original output.
The avalanche effect is important for cryptographic functions, such as hash functions and encryption algorithms, because it makes it much harder for an attacker to predict the output of the function or to find patterns in the output. If a cryptographic function did not exhibit the avalanche effect, an attacker might be able to deduce information about the input or the key used to encrypt the data by analyzing the output.
To understand it in more detail, we'll be looking at the SHA-256 hash function. It takes an input (message) and produces a fixed-length output, known as a hash or message digest. The SHA-256 algorithm produces a 256-bit (32-byte) hash value that is commonly represented as a hexadecimal string of 64 characters.

In this image you can see that even a small change in the input results in a completely different hash. You can play around with SHA256 hash function [here](https://emn178.github.io/online-tools/sha256.html).
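The same demonstration can be reproduced in a few lines of Python with the standard library's `hashlib`; the input strings here are arbitrary examples:

```python
import hashlib

# Two inputs that differ by a single character
h1 = hashlib.sha256(b"hello world").hexdigest()
h2 = hashlib.sha256(b"hello worlc").hexdigest()

print(h1)
print(h2)

# Count how many of the 64 hex characters differ
diff = sum(a != b for a, b in zip(h1, h2))
print(f"{diff} of 64 hex characters differ")
```

Despite the one-character difference in the input, the two digests look completely unrelated, which is exactly the avalanche effect.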
Bitcoin, too, uses the SHA-256 algorithm. To understand how the avalanche effect influences the difficulty of Bitcoin mining, it's important to first understand the mining process. Bitcoin mining involves trying to find a nonce that, when combined with other data from the Bitcoin network, results in a hash that meets a certain set of criteria. These criteria are known as the mining difficulty and are adjusted periodically by the Bitcoin network to ensure that new blocks are added to the blockchain at a roughly consistent rate.
The avalanche effect ensures that even a small change in the nonce results in a completely different hash. This means that miners cannot rely on incremental changes to the nonce to find a valid hash. Instead, they must try many different nonces until they find one that produces a valid hash. The more difficult the mining difficulty, the more nonces a miner will need to try before finding a valid hash.
In Bitcoin mining, the miner must perform a series of mathematical operations on the block header data, which includes the transactions being confirmed, the previous block's hash, and a random number called a nonce.
The resulting hash value is then compared to the target value. If it is less than or equal to the target, the miner has successfully mined a new block, and the block is added to the blockchain.
The number of zeroes in the target determines the difficulty of mining. The more zeroes there are, the harder it is to find a hash value that is less than or equal to the target. For example, if the target begins with four zeroes, it will be easier to find a valid hash value than if it begins with eight zeroes. As the difficulty increases, it becomes more computationally expensive to find a valid hash value, and miners must use more powerful hardware or join mining pools to compete effectively.

The miner uses a trial-and-error approach to find the right hash value. This involves generating and testing thousands of possible hash values every second until a value is found that is less than or equal to the target value. If the target hash starts with more zeroes, it is harder for the miner to find a matching hash value because they will have to generate and test more hashes until they find one that meets the criteria.
A miner's successful discovery of a hash that meets the target in Bitcoin mining proves that they have invested energy and computational power to find it.
The hash is the proof that work was done by the miner to find it.
**And that is why it is called Proof of Work!**
| vishakbhaskar |
1,438,997 | Sematic + Ray: The Best of Orchestration and Distributed Compute at your Fingertips | Finding Dynamic Combos Getting Machine Learning (ML) infrastructure right is really hard.... | 0 | 2023-04-17T19:56:08 | https://dev.to/sematic/sematic-ray-the-best-of-orchestration-and-distributed-compute-at-your-fingertips-328m | machinelearning, mlops, datascience, devops | ## Finding Dynamic Combos
Getting Machine Learning (ML) infrastructure right is really hard. One of the challenges for any ML project getting off the ground is finding the right tools for the job. The number of tools out there that target different parts of the ML lifecycle can easily feel overwhelming.

<figcaption align = "center"><em>A sampling of tools that help with ML development, created by the <a href="https://landscape.lfai.foundation/">Linux Foundation</a>.</em></figcaption>
Sometimes, two tools seem to “just fit” together, and you forget that you’re even working with multiple tools as the lines blur into a coherent experience. One example that every ML Engineer or Data Scientist is familiar with is [numpy](https://numpy.org/) and [pandas](https://pandas.pydata.org/). Numpy enables fast and powerful mathematical computations with arrays/matrices in Python. Pandas provides higher-level data structures for manipulating tabular data. While you can of course use one without (explicitly) using the other, they complement each other so well that they are often used together. Pandas works as a usability layer, while numpy supercharges it with compute efficiency.

<figcaption align = "center"><em>Pandas and numpy working together seamlessly.</em></figcaption>
At Sematic, we care **a lot** about usability. We aim to make your ML workflows as simple and intuitive as possible, while providing you with best-in-class lineage tracking, reproducibility guarantees, local/cloud parity, and more. You can chain together the different parts of your ML pipelines using Sematic, [and specify what kind of resources](https://docs.sematic.dev/public-api-reference/api#resource-requirements) you need in the cloud. But many parts of the modern ML lifecycle require more than one computing node–you need a cluster. For example, training the [original ResNet-50 on a single GPU takes 14 days](https://arxiv.org/abs/1709.05011). Leveraging cluster computing can cut this time drastically. Sematic needed a tool to help supercharge it with cluster computing resources, ideally in a way that “just fits” with another tool.
## Ray
[Ray](https://www.ray.io/) pitches itself as “an open-source unified compute framework that makes it easy to scale AI and Python workloads.” Ray can be broken down into three major pieces:
1. **Ray Core**: some primitives for distributed communication, defining workloads and logic to be executed by the distributed compute layer, and initializing computing resources to interact with the system.
2. **Ray-native domain libraries**: libraries provided “out of the box” with Ray for various parts of ML development, such as [hyperparameter tuning](https://docs.ray.io/en/latest/tune/index.html), [data processing](https://docs.ray.io/en/latest/data/dataset.html), and [training](https://docs.ray.io/en/latest/train/train.html).
3. **Ecosystem of integrations**: Ray integrates with many popular tools and frameworks within the broader ML landscape, such as Hugging Face, Spark, PyTorch, and many more.
With these pieces, Ray easily stands as a powerhouse for distributed computing within ML.
## Sematic + Ray

<figcaption align = "center"><em>How Ray and Sematic complement each other.</em></figcaption>
Sematic was designed to let you create end-to-end ML pipelines with minimal development overhead, while adding visualization, lineage tracking, reproducibility, and more. In the language of Sematic, your pipeline steps are Sematic Functions–perhaps one for data processing, one for training, one for evaluation, and so on. Then, within these Sematic Functions, you can use Ray to efficiently scale data processing beyond a single compute node.
That’s great as a conceptual model, but how does Sematic integrate with Ray in practice?
When you’re authoring a pipeline, using Ray within a Sematic Function is as easy as using the RayCluster context manager inside the function. This will spin up a Ray cluster on-demand and enter the ‘with’ context only once the cluster is ready for use. Your code can then use Ray just like it would in any other situation. When your code is done executing (either successfully or unsuccessfully), the Ray cluster will be cleaned up for you. The Ray cluster uses the same container image as your pipeline so that the same code and dependencies are guaranteed to be present on every node.

<figcaption align = "center"><em>Using Ray within Sematic.</em></figcaption>
If you’re familiar with Ray or Sematic, you likely know that both can be used locally as well as in the cloud. Sematic’s Ray integration is no exception! When you execute the code above locally, a local-process based Ray cluster will be created instead of one executing on Kubernetes. This enables rapid local development, where you can use all of your favorite debuggers and other tools until you’re ready to move execution to the cloud.
## Unlocking New Use Cases
This combination of Sematic + Ray can jumpstart your journey to a world-class ML platform. Using these tools together, your Sematic Functions can now do things such as:
- Do quick and efficient distributed training on a [PyTorch](https://pytorch.org/) image classifier using [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/) and Ray.
- Perform distributed Hyperparameter tuning of a [TensorFlow](https://www.tensorflow.org/) natural language model using [Ray Tune](https://docs.ray.io/en/latest/tune/index.html).
- Do distributed data processing and ingest with [Ray Datasets](https://docs.ray.io/en/latest/data/dataset.html).
And you can do all these things while taking advantage of Sematic’s lineage tracking, visualization and orchestration capabilities.
## A Look Behind the Scenes
When you use RayCluster as above, there’s a lot going on behind the scenes. Sematic uses [KubeRay](https://docs.ray.io/en/latest/cluster/kubernetes/index.html), a tool developed by the maintainers of Ray to manage Ray clusters within Kubernetes. Your code execution will result in calls to the Sematic server, which will in turn publish information about the required cluster to KubeRay. KubeRay will then create and manage the new Ray cluster for you.
Since Sematic knows all about your code and what container image it’s using, it can ensure that KubeRay uses that same image for the Ray head and workers. This means that you don’t have to worry about any new dependency management when using Ray from Sematic – any code that can be used from your Sematic Functions can be used from Ray, even without using Ray’s [Runtime environments](https://docs.ray.io/en/latest/ray-core/handling-dependencies.html#runtime-environments)!

<figcaption align = "center"><em>Architecture of Ray + Sematic</em></figcaption>
## Learning More
If you want to know more about Sematic’s Ray integration, you can check out [our docs](https://docs.sematic.dev/integrations/ray). If you’re looking for something more hands on, check out one of our examples doing distributed training and evaluation using Ray from Sematic. One uses PyTorch Lightning to [do distributed training of a ResNet model](https://github.com/sematic-ai/sematic/blob/main/sematic/examples/lightning_resnet/README.md), and another uses Ray’s AIR APIs (including Ray Datasets) to [do distributed training of a simple image classifier](https://github.com/sematic-ai/sematic/blob/main/sematic/examples/cifar_classifier/README.md) on the [CIFAR10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). You can also join our Discord if you’d like to ask some questions. We’re always happy to help!
_Sematic’s Ray integration is part of Sematic’s paid “Enterprise Edition”. Get in touch if you’d like to use it! Rather play around with Sematic for free first? Most of it is [free and open-source!](https://github.com/sematic-ai/sematic)_ | augray |
1,439,024 | 40 Career advancement: Protect yourself from the whisperer. | Hi everyone, the word is admittedly rarely used, yet it keeps coming up again and again in posts... | 22,304 | 2023-04-17T20:25:14 | https://dev.to/amustafa16421/40-berufliches-vorwartskommen-schutze-dich-vor-dem-einflusterer-3g4g | deutsch, career, motivation, discuss | Hi everyone,
the word is admittedly rarely used, yet it keeps coming up again and again in posts about work and conflict. It refers to _people who covertly talk you into something_. Why covertly? Because their intentions are usually harmful to the listener.
Lies, "criticism," and unfair assessments can be slipped in at the workplace among colleagues, where you may have to keep up a certain appearance,
without it being immediately obvious. Many people are initially unsure whether they are being bullied or whether the critical colleague is simply a bit rough or harsh in their wording. Is a colleague making fun of you, or did their humor simply go over the line?
What matters here is to pay attention to whether this happens regularly.
1. Have you observed this kind of whispering at your workplace?
2. Have you ever seen a colleague's competence being called into question again and again?
3. How did you deal with it at your workplace?
Best regards,
Mustafa
Get in touch via email: mustafa.kevin.dwenger@posteo.de
Subject: Exchange | amustafa16421 |
1,439,180 | Introduction to testing | All programmers do testing the very first time they write code in their lives. After you add or... | 0 | 2023-04-17T23:01:14 | https://tahazsh.com/blog/introduction-to-testing | webdev, javascript, programming, testing | All programmers do testing the very first time they write code in their lives. After you add or modify some line of code, you head to the browser to see the changes. Sometimes, you want to check the new feature you added, and other times you just want to see if something broke after you modified something in your code.
That kind of testing is called *manual testing*.
If something is called *manual*, it means there should be an automated version. And there is, and it's called *automated testing*.
With automated tests, you don't need to go manually testing your changes in the browser most of the time; there should be a command you run to do all the testing for you.
It's easier said than done. That command is something you need to implement; and here comes the other aspect of learning programming, which is learning to write tests.
## Functional vs. Non-functional tests
In this article I'm talking about functional tests. Functional tests are the tests that ensure that your code is working as expected. It's testing the behavior of your software—does it do what it should do?
Non-functional tests, on the other hand, are for testing the other aspects of the software, such as testing performance and security.
## Great benefits of writing automated tests
The two biggest benefits of writing automated tests are: detecting bugs and refactoring.
Detecting bugs is basically checking if you broke the code after certain changes—the same reason you do manual testing.
Refactoring is also checking if you broke the code but after improving the code. There are many times where you think about improving the design of the code, but you are reluctant to do that because you might break something. With good tests, you don't have to worry about this because you will know if you broke something while refactoring it.
So a common thing developers with tests would do is writing the code the fastest way possible just to make the tests pass, and then refactoring it to make it look better.
## Types of tests
If you had ever read anything about testing, it's likely that you have come across these terms: unit testing, integration testing, end-to-end testing, acceptance testing, and the list goes on and on.
Interestingly, each of these types tends to have a different definition. Explaining each one in detail will require a whole article for each one. But in my experience, understanding the following three types of tests the way I (and many others) look at them will save you a lot of time and confusion in the future.
## Unit tests
*Unit tests* are the most basic type of tests. They form the majority of the tests in your code. Some see units as the low-level stuff in your code, like objects and functions. Others, including me, see each unit as a single test that tests some specific behavior in the software. That test might use multiple objects and functions to achieve some goal.
An example would be a test that tests the parsing of a CSV string. That test might use multiple objects and functions to achieve that, but it's still a single test that tests a behavior of my software.
## Integration tests
*Integration tests* test the integration between two standalone parts of the system. Some look at integration tests as a way to test the integration of multiple objects and functions in the same system. Others, including me, look at it as the integration between the software's code and other services.
An example would be testing the integration between your app and a third-party payment service. In this case, your app works alone, and the payment service works alone, but does the integration between them work as expected in the context of your app?
## End-to-End tests
If unit tests are for testing the internals of your software, and integration tests are for testing the integration with other parties, how can you test if your whole app works as a whole?
That's what *end-to-end* tests are for. They are the slowest kind of tests. They should run the same way your user would use the app: go to a specific url with some input, and then see something on the page (or get a specific response from that endpoint).
There are tools for that kind of testing. These tools can literally open a new instance of the browser, go to a specific url, and inspect the results on the screen—you can literally see that process happening in front of you, unless you are using headless browser testing.
## The speed of tests
The speed of your tests are based on the type of tests you are running and how they are implemented. They can be as fast as a few hundred milliseconds or as slow as several hours.
Unit tests are usually the fastest because they don't need to wait for a response from some external server or run the app in the browser.
Integration tests can be as fast as unit tests or can be slower. This depends on how you implement them: if you are mocking the third-party service (I'll talk about mocks in a little bit), then they will be fast. If you don't mock them, then the speed depends on the performance of that third-party interaction.
End-to-end tests are the slowest because they need to run as if the app is in production. You need to wait for requests and third-party services to complete before seeing the results of your tests.
A good practice is to split your test suites in types so you can run each one separately. A general rule of thumb is to run unit tests on every change because they are the fastest; run integration tests multiple times in an hour; run end-to-end tests a couple times per day to make sure that the whole system works as expected.
## Mocking
Mocking is a very big topic if you want to get into its details. But in its essence, mocking is replacing an element in your code (like an object or a function) with a fake version.
There are multiple reasons why you would need to do that. One reason is performance. If your tests need to talk to an external service, you can replace that external service with a mock so you don't have to make an actual request, and instead use the mock to get some response.
Some people use mocking for isolating the thing they are testing from the rest of the app—this is called the *Mockist* testing style. Some people don't like this approach, but it's a valid one used by many great developers.
As I mentioned above, getting into the details of mocking is beyond the scope of this article. So all you need to know is that mocking is a fake version of some element in your tests—most testing tools support creating them.
## Test-driven development
You probably heard the term *TDD* (test-driven development). TDD is just one way to write and run your tests. As its name suggests, your code is driven by tests, which means you write tests first and then the code to make them pass.
It might feel awkward at first, but it's actually a good thing to do for two main reasons. Writing the test first ensures you have a test for each code you implement—if you don't do this, you might forget to write the test after writing the actual code, or might miss some cases.
Another reason TDD is good is because it forces you to think how your code will work and look like before implementing it, which would help you make better upfront design decisions—some find this true and some don't, but personally I find it useful in that regard.
TDD might not be for everyone. You don't have to use it if you don't like it. I think of it as one style of writing tests.
## Tools for writing tests
I don't think the tool is the most important thing here. The JavaScript ecosystem has a good number of testing tools, like [Jest](https://jestjs.io/), [Mocha](https://mochajs.org/), or [Vitest](https://vitest.dev/), which is what I use these days.
The above tools are mainly used for unit and integration testing. At the time of this writing, [Cypress](https://www.cypress.io/) and [Playwright](https://playwright.dev/) are the most popular tools for end-to-end testing.
Any of these tools would do the work. The important thing here is to learn how to write good tests—and that only comes with practice.
## Code coverage
Code coverage is the percentage of how much your code is covered with tests. Some aim for 100% coverage. I find this almost impossible to have; instead, you should aim to cover the critical, major parts of the behavior of your software.
For example, you don't have to test all the getters and setters of an object if they work as simple getters and setters. Instead, test the thing that really matters to how your software pieces should behave. For example, test a setter of an object if it runs some validation before setting a value.
So the question of how much coverage I should have in my tests is not easy to answer. It usually depends on what phase the software is in and what the software is really trying to do.
## A simple example
My goal in this article is not to show you how to write tests (I might have more articles on these in the future), but rather to show you the *whats* and the *whys* of testing.
Having said that, I think it would be a good idea to show you a quick example of what testing looks like in the code.
In this simple example, I'll write a function that adds two numbers (the classical example of testing), and I'll write a test for it.
I'll use [Vitest](https://vitest.dev/) in this example, but you can use whatever tool you like.
You can check their [docs](https://vitest.dev/guide/) to see how to install it. After you install it, create a new test file called `calculator.test.js`.
```js
// calculator.test.js
import { describe, it, assert } from 'vitest'
import { add } from './calculator'
describe('calculator', () => {
it('add two numbers', () => {
assert.equal(add(2, 2), 4)
})
})
```
This code tests a function called `add` in a module called `calculator`. That function takes two arguments to add. So, it tests if adding 2 and 2 returns 4.
To make it pass, you have to create the *calculator* module—see how I wrote the test before the code, that's basically what TDD is.
```js
// calculator.js
export function add(a, b) {
return a + b
}
```
Now, if you run the test it should pass.
You can check the docs of the tool to see how to run the tests. In my example with Vitest, I had to add a new script to *package.json* called `test`. And when I run `npm run test`, it would run all of my tests.
## Conclusion
Testing is an important part of writing good code. With it, you can detect bugs and refactor your code. Not only that, but it also helps you think about how your software should behave, especially when you use TDD.
Learning how to write good tests only comes with practice. It's okay if you find it difficult to test something at first, but if you keep practicing, you will eventually find your own testing style and what works and what doesn't.
---
Thanks for Reading! Follow me on [Twitter](https://twitter.com/tahazsh) for web dev tips. Let's be friends! | tahazsh |
1,439,214 | openGauss Application Scenarios | Transaction applications Applications need to process highly concurrent online transactions... | 0 | 2023-04-18T01:41:45 | https://dev.to/liyang0608/opengauss-application-scenarios-1od | **Transaction applications**
Applications need to process highly concurrent online transactions containing a large volume of data, such as e-commerce, finance, O2O, telecom customer relationship management (CRM), and billing.
**IoT data**
In IoT scenarios, such as industrial monitoring, remote control, smart cities, smart homes, and IoV (Internet of Vehicles), challenges come from a large number of sensors and monitoring devices, high sampling frequency, additional storage modes, and concurrent operation and analysis.
1,439,380 | Making a Speedrun Timer: Chapter 4 | Another migration.. I'm sorry 😅 I know, it's the last thing you want to see. But I'm... | 0 | 2023-04-18T07:38:36 | https://dev.to/kevthedev/making-a-speedrun-timer-chapter-4-3fi6 | javascript, webdev, vue, devjournal | ## Another migration.. I'm sorry 😅
I know, it's the last thing you want to see. But I'm hoping this will be the last major one! Since migrating to an Electron.js in the previous [post](https://dev.to/kevthedev/making-a-speedrun-timer-chapter-3-f8k), our code has essentially been broken up into two parts:
1. Node.js code
2. Chromium browser code
The chromium code houses the frontend responsible for displaying our timer (the Vue.js stuff). The Node.js code is responsible for our communication to the operating system. However, Node.js handles the importing of modules differently from the V8 engine that powers the chromium browser. Not only that, but a lot of APIs are different between the two.
As a result of this migration, our code is starting to become a bit messy and all over the place. Furthermore, it would be nice to share code between the Node.js API and the Chromium browser API without having to worry about the entanglement of importing strategies. Therefore, I think it would be wise to migrate our app to [TypeScript](https://www.typescriptlang.org/)!
## Migrating To TypeScript
Once again, I don't really want to go over _EVERY_ single change that went into this. You'll have to checkout the [GitHub repo](https://github.com/kdevcse/SpeedrunTimerClient) for that.
### The Obvious Changes
First thing we want to do is change all of our `.js` file extensions to `.ts`. Next, we need to run `npm i --save-dev typescript ts-node` to install the packages we need.
These changes might seem obvious, but I think the next few changes might be more helpful to those migrating from a Vite + Vue 3 + Electron.js vanilla JavaScript app to a TypeScript version.
### The Major Changes
Quite a few changes occur in our `forge.config.ts` file. We'll be switching a lot of code over to module imports:
```TypeScript
// forge.config.ts
// Taken from https://github.com/caoxiemeihao/electron-forge-vite/blob/main/vite-typescript/tmpl/forge.config.ts
import type { ForgeConfig } from '@electron-forge/shared-types';
import { MakerSquirrel } from '@electron-forge/maker-squirrel';
import { MakerZIP } from '@electron-forge/maker-zip';
import { MakerDeb } from '@electron-forge/maker-deb';
import { MakerRpm } from '@electron-forge/maker-rpm';
import { VitePlugin } from '@electron-forge/plugin-vite';
const config: ForgeConfig = {
packagerConfig: {},
rebuildConfig: {},
makers: [new MakerSquirrel({}), new MakerZIP({}, ['darwin']), new MakerRpm({}), new MakerDeb({})],
plugins: [
new VitePlugin(
{
// `build` can specify multiple entry builds, which can be Main process, Preload scripts, Worker process, etc.
// If you are familiar with Vite configuration, it will look really familiar.
build: [
{
// `entry` is just an alias for `build.lib.entry` in the corresponding file of `config`.
entry: 'src/main/main.ts',
config: 'vite.main.config.mts',
},
{
entry: 'src/main/preload.ts',
config: 'vite.preload.config.mts',
},
],
renderer: [
{
name: 'main_window',
config: 'vite.renderer.config.mts',
},
],
}),
],
};
export default config;
```
Another major change we needed to make was exposing certain types at the top of our `main.ts` file:
```TypeScript
// This allows TypeScript to pick up the magic constants that's auto-generated by Forge's Vite
// plugin that tells the Electron app where to look for the Vite-bundled app code (depending on
// whether you're running in development or production).
declare const MAIN_WINDOW_VITE_DEV_SERVER_URL: string;
declare const MAIN_WINDOW_VITE_NAME: string;
```
Lastly, we just want to redirect our renderer reference in our `index.html` file
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<title>Speedrun Timer</title>
</head>
<body>
<div id="app"></div>
<script type="module" src="/src/renderer/renderer.ts"></script>
</body>
</html>
```
### The Final Changes
Beyond that, most of the remaining code changes modify the Node.js code to use `import` statements instead of `require`, adds declarative types to our code, and explicitly informs the `vue` files to use TypeScript in the script code via `<script setup lang="ts">`.
Here's our `tsconfig.json` for reference:
```json
{
"compilerOptions": {
"module": "NodeNext",
"moduleResolution": "node",
"target": "ESNext",
"outDir": "../../dist",
"strict": true,
"sourceMap": true,
"experimentalDecorators": true,
"esModuleInterop": true,
"declaration": true,
"skipLibCheck": true
},
"include": ["**/*.ts", "vite.renderer.config.mts", "vite.renderer.config.mts", "vite.preload.config.mts", "vite.preload.config.mts", "vite.main.config.mts", "vite.main.config.mts"]
}
```
## Conclusion
It was pretty hard to figure out how to migrate this project TypeScript, as it didn't seem like there was a lot of documentation on my specific configuration. Hopefully somebody finds this useful. I'd like to shoutout this [repo](https://github.com/caoxiemeihao/electron-forge-vite) for guiding me on how to make this happen.
If you'd like a detailed list of changes, you can look [here](https://github.com/kdevcse/SpeedrunTimerClient/commit/14dc2d848b3858e0acf2983ede9e1b997a2ce989#diff-fe372c44570e4c8f3bc21566f850113f849b912635a5d80b219bca6ce33fc779).
UPDATE: In a [follow-up change](https://github.com/kdevcse/SpeedrunTimerClient/commit/661f0cdceab9ff06cbfa5726fecc7a1d7c9e4d21), we fixed some IDE issues and added a linter.
Next chapter we'll look to finally write new stuff instead of migrating everything! | kevthedev |
1,439,404 | Exception Handling | The persistence module handles exceptions by using the Postgres error reporting infrastructure... | 0 | 2023-04-18T06:30:25 | https://dev.to/jerrywang1983/exception-handling-4coh | opengauss | The persistence module handles exceptions by using the Postgres error reporting infrastructure (ereport). An error message is recorded in the system log for each error condition. In addition, the error is reported to the envelope using Postgres’s built-in error reporting infrastructure.
The following exceptions are reported by this module –
Table 1 Exception Handling
 | jerrywang1983 |
1,439,491 | What is Machine Learning (ML)? | What is Machine Learning (ML)? Machine Learning is a specific subset of Artificial... | 0 | 2023-04-18T08:03:44 | https://dev.to/shivamchamoli18/what-is-machine-learning-ml-4b2l | python, infosectrain, machinelearning | ## **What is Machine Learning (ML)?**
Machine Learning is a specific subset of Artificial Intelligence (AI) that uses statistical models and algorithms to enable systems to learn and improve performance from experience without being explicitly programmed automatically. It involves building and training algorithms and models that can make decisions based on patterns and trends in the data to perform tasks such as classification, regression, clustering, and recommendation.

## **Top Machine Learning (ML) Libraries**
Below is the list of top Python libraries for Machine Learning:
⦁ Keras

It is a high-level open-source library that provides a user-friendly interface for building and training neural networks and allows easy prototyping and experimentation.
⦁ Matplotlib

It is a data visualization library for Python, often used in Machine Learning to create graphs, bar charts, error bars, and other visual representations of data to better understand trends, patterns, and relationships.
⦁ NumPy

It is a Python library that provides efficient and powerful mathematical operations on large, multi-dimensional arrays and matrices, essential for handling large datasets. It is used for data analysis, scientific computing, and Machine Learning applications.
⦁ NLTK

NLTK (Natural Language Toolkit) is a popular Python library for natural language processing tasks such as tokenization, stemming, tagging, and sentiment analysis. It is widely used in research and industry to process and analyze large amounts of text data.
⦁ Pandas

It is a popular data manipulation library in Python for Machine Learning. It provides high-performance data structures and tools for data analysis, cleaning, and preparation, which is essential for ML tasks.
⦁ PyTorch

It is a popular open-source ML library for building and training deep learning models. Its dynamic computation graph and simple APIs are perfect for research, prototyping, and production deployment across domains.
⦁ TensorFlow

It is an open-source ML library used to build and train Machine Learning models for various applications, such as image and speech recognition, reinforcement learning, and natural language processing.
⦁ Theano

It is a Python library for deep learning research and development. It enables efficient mathematical computations and supports various types of neural network architectures.
⦁ Scikit-learn

It is a popular Python library for ML tasks such as classification, regression, clustering, and dimensionality reduction. It provides a simple and efficient toolset for data analysis and modeling.
⦁ SciPy

It is an open-source Python library for scientific computing and technical computing. It provides several modules for optimization, integration, linear algebra, statistics, and more.
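To make the list above concrete, here is a minimal sketch of the kind of array-based numerical work these libraries share, using NumPy (assuming it is installed) to fit a least-squares line to sample data:

```python
import numpy as np

# Fit a straight line y = m*x + b with NumPy's least-squares polynomial fit.
# scikit-learn, SciPy, and pandas build similar workflows on top of arrays
# like these; the data here is illustrative.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0               # data generated from y = 2x + 1

m, b = np.polyfit(x, y, deg=1)  # degree-1 polynomial = a line
print(round(m, 6), round(b, 6))
```

Because the sample data is exactly linear, the fit recovers the slope and intercept it was generated from.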
Check out the related blogs:
- [Different Machine Learning Algorithms for Data Science](https://www.infosectrain.com/blog/different-machine-learning-algorithms-for-data-science/)
- [The Popular Artificial Intelligence and Machine Learning Trends of 2022](https://www.infosectrain.com/blog/the-popular-artificial-intelligence-and-machine-learning-trends-of-2022/)
- [Introduction to Deep Learning and Neural Network](https://www.infosectrain.com/blog/introduction-to-deep-learning-and-neural-networks/)
- [Top Data Science Skills in 2023](https://www.infosectrain.com/blog/top-data-science-skills/)
- [How AI and ML are Used in Cybersecurity?](https://www.infosectrain.com/blog/how-ai-and-ml-are-used-in-cybersecurity/)
## **How can InfosecTrain help?**
Machine Learning is currently one of the most trending skills, as it has numerous applications in various industries. The demand for professionals with ML expertise has grown dramatically in recent years, and it is expected to grow in the future. To start a career in Machine Learning, you can join [InfosecTrain](https://www.infosectrain.com/)’s [Machine Learning](https://www.infosectrain.com/courses/machine-learning-certification-training/) and [Machine Learning with Python](https://www.infosectrain.com/courses/machine-learning-with-python-training/) certification training courses and learn with certified and highly skilled trainers having in-depth knowledge of the subject. | shivamchamoli18 |
1,439,628 | Navigating the Legal Landscape of London: Finding the Right Commercial Solicitor for Your Business Needs | Introduction: When it comes to running a successful business in London, having access to... | 0 | 2023-04-18T10:14:12 | https://dev.to/waterfrontlaw/navigating-the-legal-landscape-of-london-finding-the-right-commercial-solicitor-for-your-business-needs-3f3f | law, legal | ## Introduction:
When it comes to running a successful business in London, having access to reliable legal advice is crucial. The city is home to numerous commercial solicitors, [technology solicitors](https://www.waterfront.law/sector/technology/), commercial lawyers, redundancy lawyers, settlement agreement lawyers, data protection law firms, and copyright lawyers. However, finding the right legal partner for your business needs can be challenging. In this article, we will explore the different types of legal support available in London and provide tips for finding the right commercial solicitor to help your business succeed.
## Commercial Solicitors in London:
Commercial solicitors specialize in providing legal advice to businesses. They can help with a range of legal issues, including drafting contracts, negotiating deals, and resolving disputes. When searching for a commercial solicitor in London, consider factors such as their experience, expertise, and reputation. Look for solicitors who have worked with businesses similar to yours and have a track record of success.
## Technology Solicitors:
Technology solicitors specialize in providing legal advice to companies in the technology sector. They can help with issues such as data protection, intellectual property, and software licensing. When looking for a technology solicitor in London, consider their experience in the industry and their understanding of the unique legal issues facing tech companies.
## Commercial Lawyers in London:
[Commercial lawyers](https://www.waterfront.law/expertise/corporate/) provide legal advice to businesses of all sizes and across all sectors. They can help with issues such as mergers and acquisitions, employment law, and regulatory compliance. When searching for a commercial lawyer in London, consider their expertise in the relevant area of law and their experience working with businesses similar to yours.
## Redundancy Lawyers in London:
Redundancy lawyers specialize in providing legal advice to employers and employees in redundancy situations. They can help with issues such as redundancy selection, consultation, and the negotiation of settlement agreements. When looking for a [redundancy lawyer London](https://www.waterfront.law/expertise/redundancies/), consider their experience in handling redundancy cases and their understanding of employment law.
## Settlement Agreement Lawyers:
Settlement agreement lawyers specialize in negotiating and drafting settlement agreements. These agreements are used to resolve disputes between employers and employees, and they often involve the payment of compensation to the employee. When looking for a [settlement agreement lawyer](https://www.waterfront.law/expertise/settlement-agreements/) in London, consider their experience in negotiating these agreements and their understanding of employment law.
## Data Protection Law Firms:
Data protection law firms specialize in providing legal advice on issues related to data protection and privacy. They can help with issues such as data breaches, GDPR compliance, and the drafting of privacy policies. When looking for a [data protection law firm](https://www.waterfront.law/expertise/privacy-and-data-protection/) in London, consider their expertise in this area of law and their experience working with businesses similar to yours.
## Copyright Lawyers in the UK:
[Copyright lawyers Uk](https://www.waterfront.law/expertise/copyright-database-solicitors/) specialize in providing legal advice on issues related to copyright law. They can help with issues such as copyright infringement, licensing agreements, and copyright registration. When looking for a copyright lawyer in the UK, consider their expertise in this area of law and their experience working with businesses similar to yours.
## Conclusion:
Finding the right legal partner for your business can be challenging, but with the right research and considerations, you can find [Commercial Solicitors London](https://www.waterfront.law/) that meet your specific needs. Whether you're looking for a commercial solicitor, technology solicitor, commercial lawyer, redundancy lawyer, settlement agreement lawyer, data protection law firm, or copyright lawyer, consider their expertise, experience, and reputation before making a decision. With the right legal support, you can navigate the complex legal landscape of London with confidence and ensure the success of your business.
1,439,671 | What is a FinTech API? | According to Tipalti, the global financial sector will be worth 26.5 trillion by the end of 2022. The... | 0 | 2023-04-19T05:34:26 | https://dev.to/ragavi_document360/what-is-a-fintech-api-l4b | fintechapi, api, apidocumentation | According to Tipalti, the global financial sector will be worth 26.5 trillion by the end of 2022. The rapid growth of the worldwide financial sector and the push for digital transformation in businesses are undoubtedly factors contributing to the proliferation of FinTech in recent years. Legacy financial institutions can digitally transform by offering their data to third parties for subscription fees. Startups are now seeing the opportunity to create personalized experiences for users by providing new functionality that leverages institutional data that was unavailable before.
## What is a FinTech API?
Let’s break it down. The Oxford dictionary defines FinTech as “computer programs and other technology used to support or enable banking and financial services.” An API is an abbreviation for Application Programming Interface. APIs define the rules software components follow to interact and communicate programmatically. FinTech APIs are a specific type of API that allows businesses to integrate banking and financial services components into applications.
You have likely interacted with a FinTech API without knowing it. For example, if you have ordered takeout, the app you used to place an order likely (almost certainly) used a payment processing API to validate and process your payment. Another example could be a personal finance application that uses an Open Banking API to retrieve your account balances and analyze your expenses.
## What are BaaS and open banking?
FinTech is closely related to another concept, Banking as a Service, or the BaaS model. In the BaaS model, a BaaS platform exposes APIs for FinTech businesses, digital banks, or third-party providers to access the information they need to integrate financial components into their applications. The third party pays the platform a fee in exchange for access to data and functionality. This practice is known as Open Banking.
After subscribing to a BaaS platform, a FinTech business builds new functionality “on top” of an existing financial institution’s platform. New functionality includes new banking products or financial data aggregation from many accounts.
FinTech APIs allow financial institutions to make their data available without revealing how their internal systems operate. A benefit to subscribers is their third-party applications can leverage the financial institution’s existing security and compliance standards.
## The types of FinTech APIs
There is a broad range of financial services. And so, many FinTech APIs are available. The most common FinTech API types are financial data providers/aggregators, payment processors, investment brokers, regulatory tech (RegTech), and KYC (Know Your Consumer) APIs.
### Financial data providers and aggregator APIs
These APIs provide financial data to third-party applications, including accounts and transactions, customer profile data, and account statements.
Think of **data providers** as traditional banks. Traditional banks only allow you to access data for that bank. Almost all banks have APIs, including Citibank, Discover, Wells Fargo, and Synchrony.
**Data aggregator** APIs, on the other hand, are more flexible because they allow access to data from many banks. Aggregators let developers combine personal banking, investment, and debt management into the same interface. One of the top financial data aggregators is Plaid.
Since financial data providers are limited to one bank, they have fewer use cases than non-bank aggregators. However, data providers (traditional banks) are known for their robust security.
### Payment processor APIs
Payment processing APIs are among the fastest-growing types of FinTech APIs. A payment API allows applications to connect to a payment platform to validate and process payment transactions. You can use these APIs to create an interface with a payment solution without needing to create one. Examples of payment processor APIs include Stripe, Square, Paypal, and Adyen.
A key reason businesses use payment processors to process their transactions is security. They can build applications that piggyback off an existing payment platform’s infrastructure for authentication and fraud detection. Businesses use payment APIs because, without them, they would need to properly store and secure cardholder data to protect it from breaches. Payment technology requires PCI compliance (Payment Card Industry Data Security Standard) and maintenance of PCI security standards over time. Not to mention, payment processors also offer tokenization and P2PE (point-to-point encryption).
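Many payment platforms authenticate their webhook callbacks by attaching an HMAC of the payload, which the merchant's server recomputes with a shared secret. A generic sketch using only Python's standard library; the secret and payload shape here are illustrative, not any specific vendor's API:

```python
import hashlib
import hmac

# Shared secret, issued by the payment platform (illustrative value).
SECRET = b"whsec_demo_secret"

def sign_payload(payload: bytes) -> str:
    """HMAC-SHA256 signature the platform would attach to a webhook."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_payload(payload: bytes, signature: str) -> bool:
    """Recompute the HMAC on the receiving side and compare.

    compare_digest does a constant-time comparison, avoiding timing
    side channels.
    """
    return hmac.compare_digest(sign_payload(payload), signature)

payload = b'{"event": "payment.succeeded", "amount": 1999}'
sig = sign_payload(payload)
print(verify_payload(payload, sig))           # True
print(verify_payload(b'{"amount": 1}', sig))  # False: tampered payload
```

Real platforms usually also include a timestamp in the signed material to prevent replay attacks.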
### Investment APIs
There are two primary types of investment APIs: **brokerage** and **stock market** APIs.
First, let’s discuss brokerage APIs. Traditionally, brokerages were walled gardens. There was no way for a third party to build off a brokerage’s infrastructure to access user data and create new functionality. Investment brokers are financial institutions that expose their data to third-party applications through APIs. Brokerages allow applications to use APIs to buy and sell securities on behalf of users. Examples of brokerage companies offering APIs are Interactive Brokers and Binance.
The next type of investment API is the stock market API. This type of API does not offer the ability to buy and sell securities. Instead, it focuses on providing rich stock data to users so they can make informed investment decisions. Examples include Yahoo Finance, Alpha Vantage, and Quotient. CoinAPI offers market data for the cryptocurrency space.
### RegTech APIs
Regulatory technology, referred to as RegTech, is a technology aimed at helping businesses maintain regulatory compliance. Before RegTech, analyzing and synthesizing regulatory documentation into actionable obligations was a manual process. RegTech is seeing massive growth due to increased regulations, particularly in the financial sector, which kickstarted after the 2008 financial crisis as governments attempted to regain the public’s trust. RegTech is an attempt to help businesses deal with the burden of compliance.
In the past, compliance workers needed to sift through regulatory documents from many separate regulatory agencies to monitor updates that may affect compliance. Documents included regulatory websites, press releases, and RSS feeds for each regulatory website. Compliance workers needed to synthesize the data into an actionable plan to meet new regulations.
Developers can use RegTech APIs to build interfaces that only display relevant regulatory information to the user. Not only does RegTech allow you to collect regulatory information, but it turns data into actionable compliance obligations. Artificial Intelligence (AI) is central to this capability and is at the heart of RegTech. A prominent RegTech platform built around AI is Ascent.
### KYC (Know Your Customer) APIs
According to Swift, a leading provider of financial messaging services, “Know Your Customer (KYC) standards are designed to protect financial institutions against fraud, corruption, money laundering, and terrorist financing.”
Businesses use KYC APIs to verify the identity of users, monitor user activities, and verify payment sources to prevent fraud and corruption. Features include analyzing digital footprints, verifying documents, and performing anti-money laundering checks. Some APIs, like those offered by Onfido, offer verification using biometrics, video, and e-signatures.
Also Read: [What is Open API? Advantages, Disadvantages & Examples](https://document360.com/blog/open-api/)
## Advantages of using FinTech APIs
### Reduced cost and increased development speed
FinTech APIs reduce development costs and increase development speed because you no longer need to build financial components like payment gateways into applications. Instead, you can access a BaaS platform’s data and functionality through an API. Development speed is also faster because you do not need to debug services offered by BaaS platforms.
### Improved customer experience
[API developers ](https://document360.com/blog/api-developer-portal-examples/)can use FinTech APIs to improve the customer experience by focusing on the core features that make their applications unique. Instead of reinventing the wheel, they can build functionality on top of an existing BaaS platform to provide personalized experiences to users.
Also, check out our article on [How to create an enchanting API developer experience with the documentation](https://document360.com/blog/api-developer-experience/)
### Leverage security infrastructure
FinTech APIs allow developers to leverage a BaaS platform’s existing security infrastructure. In the realm of payment processor APIs, for example, you can leverage authentication, fraud detection, cardholder data storage, PCI compliance, and P2PE encryption.
### Automate compliance
RegTech APIs allow you to automate compliance. They can analyze regulatory information from multiple sources to produce actionable obligations specific to your business.
### Prevent fraud
KYC “Know Your Customer” APIs specialize in verifying the identity of users to protect financial institutions from fraud and other illegal activities. In addition, they verify payment sources and ID documents.
## The Best FinTech Platforms by Use Case
### Financial Data Providers and Aggregator APIs
**Plaid API – Best for Connecting Multiple Accounts**
The Plaid API is a financial data aggregator that allows you to connect accounts from multiple financial institutions into one interface.
Plaid's API lets you analyze users’ financial data and perform tasks like identity verification and accessing financial transactions and account balances.
As an aggregator, Plaid allows you to gain insights and spot patterns in a user’s preferences and behavior. Using this data, you can create personalized experiences to market, sell and support the user more effectively. A bonus is that Plaid, like other aggregators, provides authentication for all accounts accessed through the platform.
**Citibank API – Best bank-specific provider**
The Citibank API is a financial data provider, not an aggregator. As a provider, Citibank only supports transactions related to that bank. Citibank is a good choice if you only need to access Citibank customers’ data.
The Citibank API allows you to access a user’s accounts and transactions, profile data, and retrieve customer statements.
### Payment Processor Platforms
**Stripe – Best for eCommerce**
According to the Forbes article “The 11 Biggest Fintech Companies In America 2019”, the payment processor Stripe is the largest FinTech in the United States and is worth 22.5 billion USD. Stripe focuses on eCommerce and POS transactions to a lesser extent. Stripe is both a payment processor and a payment gateway.
Stripe is known for its excellent developer resources, including thorough documentation, use cases, and code tutorials. Stripe supports credit and debit card transactions, Google Pay, Apple Pay, account routing by currency, ACH, and invoicing.
One disadvantage (for some) is that Stripe requires you to use their payment gateway, unlike some more flexible payment gateway providers.
**Square – Best full-service payment platform**
Square's features make it suited for both eCommerce and brick-and-mortar businesses. If you need to support brick-and-mortar, you may choose Square over Stripe.
Square offers a host of APIs to access different functionalities within Square’s platform. Even though the platform is large and complex, you can find the APIs you need by using their API explorer to filter by component.
Square has more features than Stripe and is considered a “full service” payment platform. In addition to processing card transactions, Square supports loyalty programs, marketing, loans, inventory management, and more.
[Click here to read more ](https://document360.com/blog/fintech-api/)
| ragavi_document360 |
1,439,685 | Javascript Object #14 | In the Post we are going to some latest ECMA method introduced in Javasciript. ... | 0 | 2023-04-18T12:00:25 | https://dev.to/asir-sam/javascript-object-13-1ai9 | javascript, webdev, beginners, programming | In the Post we are going to some latest ECMA method introduced in Javasciript.
## **Object.values()**
To access the values of an Object we usually use a **for...in** loop to iterate over it and access the elements inside the Object. But there is a problem with **for...in**, as we have already seen in a past post: it also iterates over the inherited enumerable properties of an Object. That is not fair, and we have sorted it out with the **Object.hasOwnProperty()** method. That's a good way, but what if we can do it in an easier way?
> ES2017 introduced **Object.values()** to access the values of an Object's own enumerable properties.
let's see this with an example,
```javascript
const person = {
  firstName: 'John',
  lastName: 'Doe',
  age: 25
};

for (const key in person) {
  if (person.hasOwnProperty(key)) {
    const value = person[key];
    console.log(value);
  }
}
```
OUTPUT
```javascript
John
Doe
25
```
Let's now see this with the **Object.values()**,
The Syntax is,
```javascript
Object.values(obj)
```
```javascript
const person = {
  firstName: 'John',
  lastName: 'Doe',
  age: 25
};

const profile = Object.values(person);
console.log(profile);
```
OUTPUT
```javascript
[ 'John', 'Doe', 25 ]
```
**Object.values()** accepts an object and returns the values of its own enumerable properties as an array.
## **Object.entries()**
Like Object.values(), ES2017 introduced **Object.entries()**.
**Object.entries()** is similar to Object.values(), but it returns the own enumerable _string-keyed_ properties of an Object as an array of **[key, value]** pairs.
```javascript
Object.entries(obj)
```
```javascript
const ssn = Symbol('ssn');

const person = {
  firstName: 'John',
  lastName: 'Doe',
  age: 25,
  [ssn]: '123-345-789'
};

const kv = Object.entries(person);
console.log(kv);
```
OUTPUT
```javascript
[
  ['firstName', 'John'],
  ['lastName', 'Doe'],
  ['age', 25]
]
```
The firstName, lastName, and age are own enumerable string-keyed properties of the person object; therefore, they are included in the result.
The ssn is not a string-keyed property of the person object (its key is a Symbol), so it is not included in the result.
That's all for now, hope you learnt something. Please leave comments suggesting how I can do better; that motivates me a lot to do more in JavaScript.
Thanks for your time in between breaks,
Sam
| asir-sam |
1,439,692 | Cracking the Code: Digital Signatures and Bitcoin Transactions | In the modern digital age, security is of the utmost importance. One way to ensure the security of... | 0 | 2023-04-18T12:11:46 | https://blog.learnhub.africa/2023/04/18/cracking-the-code-digital-signatures-and-bitcoin-transactions/ | blockchain, cybersecurity, bitcoin, cryptocurrency | <h3 class="graf graf--h3 graf--empty"></h3>
<p class="graf graf--p">In the modern digital age, security is of the utmost importance. One way to ensure the security of digital documents is through digital signatures, and Bitcoin is creating a more advanced security struture.</p>
<p class="graf graf--p">In this article, we will explore what a digital signature is, what cryptographic primitives are, introduce bitcoin transactions, how they use signatures, and how these topics are related.</p>
<h3 class="graf graf--h3">What is a Digital Signature?</h3>
<p class="graf graf--p">A digital signature is a mathematical technique used to validate the authenticity and integrity of digital documents.</p>
<p class="graf graf--p">It provides a way to ensure that a message or file was not altered in transit and that the sender is who they claim to be.</p>
<p class="graf graf--p">Digital signatures use a combination of hashing and public-key cryptography to create a unique code that the owner of a private key can only create.</p>
<p class="graf graf--p">When a digital document is signed, the document is first hashed, creating a unique code that represents the document’s contents.</p>
<p class="graf graf--p">The signer then encrypts this code using their private key, creating the digital signature. When the recipient receives the document and signature, they can hash it themselves to create a code and decrypt it using the signer’s public key.</p>
<p class="graf graf--p">The document is considered valid if the decrypted signature matches the hash code.</p>
<p class="graf graf--p"><strong class="markup--strong markup--p-strong">Example</strong></p>
<p class="graf graf--p">Alice wants to send a contract to Bob. Alice creates a digital signature by hashing the contract using a cryptographic hash function and then encrypting the hash with her private key.</p>
<p class="graf graf--p">She sends the contract and the signature to Bob. When Bob receives the contract and signature, he can hash the contract himself and decrypt the signature using Alice’s public key.</p>
<p class="graf graf--p">If the decrypted hash matches the hash of the contract that Bob computed, he can be confident that the contract was not tampered with and that Alice is the one who signed it.</p>
<h3 class="graf graf--h3">What are Cryptographic Primitives?</h3>
<p class="graf graf--p">Cryptographic primitives are the building blocks of modern cryptography. These primitives are the mathematical functions and algorithms to create digital signatures and other cryptographic tools.</p>
<p class="graf graf--p">The most commonly used cryptographic primitives include hash functions, symmetric-key encryption, and public-key cryptography.</p>
<p class="graf graf--p">Hash functions take an input message of any length and output a fixed-size hash code. These codes are unique to the input and cannot be reversed.</p>
<p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="https://blog.learnhub.africa/2023/04/05/understanding-bitcoin-mining-from-solo-mining-to-p2p-mining-pools/" target="_blank" rel="noreferrer nofollow noopener" data-href="https://blog.learnhub.africa/2023/04/05/understanding-bitcoin-mining-from-solo-mining-to-p2p-mining-pools/">Symmetric-key encryption uses a single secret key to both encrypt and decrypt messages.</a></p>
<p class="graf graf--p">Public-key cryptography, also known as asymmetric cryptography, uses public and private keys to encrypt and decrypt messages.</p>
<p class="graf graf--p"><strong class="markup--strong markup--p-strong">Example</strong></p>
<p class="graf graf--p">Hash functions are used in many security applications, such as password storage. When you create an account on a website, it stores a hash of your password instead of storing it.</p>
<p class="graf graf--p">When you log in, the website hashes the password you entered and compares it to the stored hash. If the hashes match, you are authenticated.</p>
<p class="graf graf--p">This way, even if someone gains access to the website’s database, they cannot easily obtain the passwords because they only have access to the hashes.</p>
<h3 class="graf graf--h3">Bitcoin Transactions</h3>
<p class="graf graf--p">Bitcoin is a decentralized digital currency that uses a distributed ledger called a blockchain to record transactions. In Bitcoin, transactions are validated and added to the blockchain by a network of users called miners.</p>
<p class="graf graf--p">Transactions in Bitcoin are composed of inputs and outputs. Inputs represent funds that the sender is spending, while outputs represent the recipient’s address and the amount of funds being sent.</p>
<h3 class="graf graf--h3">Bitcoin Signatures: Pros and Cons</h3>
<p class="graf graf--p">Bitcoin signatures are an essential part of the Bitcoin protocol. They are used to ensure the authenticity and integrity of transactions on the Bitcoin network.</p>
<p class="graf graf--p">This section will discuss the pros and cons of Bitcoin signatures and why they are important.</p>
<p class="graf graf--p">Pros of Bitcoin Signatures:</p>
<ul class="postList">
<li class="graf graf--li">Security: Bitcoin signatures use public-key cryptography, which is extremely secure. Only the person who owns the private key can authorize a transaction. This makes it difficult for someone to steal Bitcoins or forge a transaction.</li>
<li class="graf graf--li">Non-Repudiation: Once a transaction is signed and broadcasted to the Bitcoin network, it cannot be repudiated. This means the sender cannot deny that they sent the transaction, and the receiver cannot deny that they received it.</li>
<li class="graf graf--li">Efficiency: Bitcoin signatures are fast and efficient. They can be verified quickly and do not require extensive computation or processing power.</li>
</ul>
<h3 class="graf graf--h3">Cons of Bitcoin Signatures:</h3>
<ul class="postList">
<li class="graf graf--li">Complexity: Bitcoin signatures require a deep understanding of public-key cryptography. This can make it difficult for non-experts to understand and use the Bitcoin network.</li>
<li class="graf graf--li">Privacy: Bitcoin signatures are not completely private. While they do not reveal the private key, they do reveal the public key, which can be used to track transactions.</li>
<li class="graf graf--li">Malleability: Bitcoin signatures are vulnerable to transaction malleability. This means that the signature can be changed without invalidating the transaction. While this does not compromise the security of the transaction, it can cause issues with transaction tracking and accounting.</li>
</ul>
<h3 class="graf graf--h3">Why are Bitcoin Signatures Important?</h3>
<p class="graf graf--p">Bitcoin signatures are important because they ensure the authenticity and integrity of transactions on the Bitcoin network. Without signatures, anyone could create a transaction claiming to transfer Bitcoins from one address to another.</p>
<p class="graf graf--p">Signatures ensure that only the private key owner can authorize a transaction, preventing fraud and theft.</p>
<p class="graf graf--p">In the Bitcoin code base, signatures are implemented using the <a class="markup--anchor markup--p-anchor" href="https://blog.learnhub.africa/2023/04/05/understanding-bitcoin-mining-from-solo-mining-to-p2p-mining-pools/" target="_blank" rel="noreferrer nofollow noopener" data-href="https://blog.learnhub.africa/2023/04/05/understanding-bitcoin-mining-from-solo-mining-to-p2p-mining-pools/">Elliptic Curve Digital Signature Algorithm (ECDSA).</a></p>
<p class="graf graf--p">When users create a transaction, they sign it with their private key using ECDSA. The signature is then broadcasted to the Bitcoin network along with the transaction.</p>
<p class="graf graf--p">When a miner verifies the transaction, they use the sender’s public key to verify the signature. If the signature is valid, the transaction is added to the blockchain.</p>
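<p class="graf graf--p">To make the sign-with-the-private-key, verify-with-the-public-key idea concrete, here is a toy "textbook RSA" signature in Python. This is an illustration only: the numbers are far too small to be secure, and it is not the ECDSA scheme Bitcoin actually uses.</p>

```python
# Toy "textbook RSA" signature with tiny numbers -- illustration only.
# NOT secure, and NOT the ECDSA scheme Bitcoin actually uses.
n = 61 * 53   # public modulus (p * q)
e = 17        # public exponent
d = 413       # private exponent: (e * d) % lcm(60, 52) == 1

def sign(message_hash: int, private_key: int) -> int:
    # Only the holder of the private key can produce this value.
    return pow(message_hash, private_key, n)

def verify(message_hash: int, signature: int, public_key: int) -> bool:
    # Anyone can check the signature using only the public key.
    return pow(signature, public_key, n) == message_hash

h = 42                       # stand-in for a transaction hash (must be < n)
sig = sign(h, d)
print(verify(h, sig, e))     # True: signature matches the message
print(verify(h + 1, sig, e)) # False: any tampering invalidates it
```

<p class="graf graf--p">The same asymmetry holds in Bitcoin: signing requires the private key, while verification needs only the public key.</p>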
<h3 class="graf graf--h3">How Bitcoin Transactions Use Signatures</h3>
<p class="graf graf--p">Bitcoin transactions use digital signatures to ensure the authenticity and integrity of each transaction. When a Bitcoin transaction is created, the sender signs it with their private key, creating a digital signature.</p>
<p class="graf graf--p">This signature is added to the transaction data to prove the sender owns the funds spent.</p>
<p class="graf graf--p">When a miner validates a transaction, they verify the digital signature using the sender’s public key. If the signature is valid, the miner can confirm that the sender owns the funds being spent and that the transaction is authentic.</p>
<p class="graf graf--p">In addition to signature verification, miners also use proof-of-work to validate transactions and add them to the blockchain. Proof-of-work is a computational puzzle that miners must solve to add a block of transactions to the blockchain.</p>
<p class="graf graf--p">This process ensures that transactions are validated by a distributed network of users, making it difficult for any single user to manipulate the blockchain.</p>
<h3 class="graf graf--h3">Conclusion</h3>
<p class="graf graf--p">Digital signatures, cryptographic primitives, and Bitcoin transactions are all important components of modern cryptography and digital security.</p>
<p class="graf graf--p">Digital signatures provide a way to ensure the authenticity and integrity of digital documents, while cryptographic primitives provide the building blocks for these signatures and other cryptographic tools.</p>
<p class="graf graf--p">Bitcoin transactions use digital signatures to validate the authenticity and ownership of funds being spent, and miners use proof-of-work to validate transactions and add them to the blockchain.</p>
<p class="graf graf--p">As digital security becomes increasingly important in our modern world, these technologies will continue to play a vital role in keeping our data and transactions safe.</p>
<h4>Learn More</h4>
<ul>
<li><a href="https://www.geeksforgeeks.org/what-is-cryptographic-primitive-in-blockchain/">What is Cryptographic Primitive in Blockchain?</a></li>
</ul> | scofieldidehen |
1,439,921 | What is the OSI Reference Model? | Open System Interconnection - OSI The OSI Model is a reference model used to... | 0 | 2023-04-18T15:08:00 | https://dev.to/aybikedalbul/what-is-the-osi-reference-model-4o68 | network, security, osimodel |

## Open System Interconnection - OSI
---
The OSI Model is a reference model used to standardize the communication processes of computer networks.
With the emergence of the concept of networking, many protocols were developed, but these protocols were developed by each manufacturer according to their own products, rather than adhering to a specific standard. This eventually led to communication issues between hardware produced by different manufacturers.
- For instance, in the 1970s, IBM developed protocols called System Network Architecture (SNA) for its own hardware.
- Similarly, Digital Equipment Corporation also created protocols named DECNet for its own products.
However, the lack of a common point of reference due to different protocols posed a significant problem for communication between different hardware during that time.
In order to address this issue, the OSI reference model was introduced by ISO (International Organization for Standardization) in 1984, which provided a standardized framework for communication independent of hardware and underlying network infrastructure.
When data is being transmitted between two network devices based on the OSI reference model, it starts from the top layer and progresses downwards through the lower layers. At each layer, specific information pertaining to that layer is added. This process is called *Encapsulation*. When the data reaches the physical layer, it gains access to the physical layer of the other device. At this point, the reverse process occurs, where the information added at each layer is extracted as the data progresses from the lowest layer to the top layer. This process is called *De-encapsulation*.

**The OSI model divides the data communication processes of computer networks into 7 layers:**
**1- Physical Layer :**
The Physical Layer enables direct transmission of data bits (1s and 0s) between devices in physical form, such as electrical signals, radio waves, and optical signals. It also detects and corrects transmission errors at the Physical Layer.
For example, in electrical medium, signals are expressed in voltage levels. The Physical Layer performs Line Coding process to convert data bits into appropriate voltage levels, generating the signal.
Similarly, in optical transmission media, signals are represented by light waves. Again, Line Coding process converts data bits into appropriate light signals, generating the signal at this layer.

**2- Data Link Layer:**
Data packets are divided into frames by the Data Link Layer and each frame is transmitted to the destination device in the network. Frames are numbered and organized to ensure the correct sequencing of data for the receiving device. The Data Link Layer enables direct transmission between devices using the physical address (MAC address) of the destination device.
Furthermore, the Data Link Layer detects and, if necessary, corrects errors in data packets using error detection methods such as special algorithms or cyclic redundancy checks (CRC).
In summary, the Data Link Layer is responsible for ensuring the security and correct sequencing of data during transmission, making it a crucial layer for reliable data communication.

**3- Network Layer:**
The Network Layer determines how data packets are transmitted from source to destination. Data packets are directed by the Network Layer from the source device to the destination device. In other words, the Network Layer enables routing of data packets between multiple network segments.
This layer works with protocols such as IP (Internet Protocol) to select the path for data packets from source to destination and make routing decisions.
In summary, the Network Layer manages the routing, addressing, communication between different networks, traffic management, and accessibility of the destination device for data packets.

**4- Transport Layer :**
The Transport Layer, as the name implies, is responsible for the task of data transfer. This layer ensures reliable and accurate data transmission between the source application layer and the destination application layer. It manages data packets, controls data flow, and establishes and terminates communication connections.
> This layer uses protocols to manage data flow and ensure reliable communication between devices in a network, from source to destination. Examples of commonly used protocols include TCP and UDP.
> Each protocol is chosen based on the requirements of the application.

**5- Session Layer:**
The Session Layer provides session management between devices in a network. By establishing and terminating sessions between two devices, it regulates communication. The Session Layer ensures a reliable and consistent session during data exchange by performing tasks such as session initiation, management, and termination in the communication process.

**6- Presentation Layer:**
The layer where operations such as encryption and compression are performed is the Presentation Layer. The purpose of this layer, which is the 6th layer, is to convert data coming from the application layer (7th layer) into a readable format. This way, the data is transmitted in a format that the target device can understand. The operations performed in this layer also ensure data security.

**7- Application Layer:**
The Application Layer provides an interface for data communication of user applications and manages the operation of application-level protocols. Applications such as web browsers, email clients, Domain Name System (DNS), and File Transfer Protocol (FTP) operate at this layer.

Thank you for reading.
| aybikedalbul |
1,439,923 | Reasons Behind the Open Letter on the Temporary Suspension of AI Training | Elon Musk and more than 5500 people (as of press time), including AI experts, have signed an open... | 0 | 2023-04-18T15:16:34 | https://dev.to/erzawansyah/reasons-behind-the-open-letter-on-the-temporary-suspension-of-ai-training-4fe8 | ai, news, gpt3, openai |

Elon Musk and more than 5500 people (as of press time), including AI experts, have signed an open letter calling for a pause in advanced AI development.
The letter, issued by the Future of Life Institute, asks AI labs to stop training on any technology more powerful than OpenAI's GPT-4. The letter warns: this could "represent a major change in the history of life on Earth."
Prominent AI researchers and tech leaders, such as Elon Musk, are unanimous in asking AI labs to temporarily stop training AI systems for at least 6 months.
This pause is meant to establish protocols on safety and ethical issues. If a voluntary pause cannot be enforced, they suggest government intervention is necessary.
In the open letter, several concerns were expressed, especially that AI systems are becoming so competitive with humans.
With the rapid development of AI systems as it is now, the concern that AI will take over jobs, be used to spread propaganda, and outsmart and replace humans, is considered a problem that will be faced in the next few years.
With such a huge impact on the history of life on Earth, advanced AI should indeed be managed and developed by commensurate resources.
Unfortunately, this level of planning and management is not happening. Instead, there is a tendency towards an "out-of-control race" to develop and deploy increasingly powerful digital minds that no one - not even the creators themselves - can understand.
As such, this letter proposes that AI Labs & experts use the pause to develop safety protocols, overseen by independent outside experts.
## Bill Gates: pausing AI won't "solve the challenge"
Calls to halt the development of artificial intelligence will not "solve the challenges" ahead, Bill Gates said in an interview with Reuters, April 5, 2023.
Bill Gates said, "it would be better to focus on [how best to use AI developments](https://sediksi.com/cara-menggunakan-chatgpt-openai/), because it is difficult to understand how pauses can work globally."
This opposing response from Bill Gates is certainly justified from the point of view of his business interests. Since Microsoft has been trying to outpace its competitors through its multi-billion dollar investment in ChatGPT owner OpenAI, this call for a pause certainly has implications for its business progress.
"I don't really understand who they say can stop, and will every country in the world agree to stop, and why should it stop," he said.
## Goldman Sachs just released their AI report on economic growth
Goldman Sachs is an American multinational investment banking and financial services company headquartered in New York City. The company offers investment management, securities, asset management, prime brokerage, and underwriting services.
They estimate that 300 million jobs will be lost to automation. Here are the major takeaways across several fields of work that everyone should know about:

## Labor market disruption
Generative AI has huge macroeconomic effects:
- 2/3 of current jobs exposed to some AI automation
- 1/4 of current jobs could be replaced by generative AI
- 300 million jobs will be automated from extrapolated estimates

## Productivity boom

Displacement of workers from automation has historically been offset by:
- New job creation
- Significant labor cost savings
- Higher productivity for non-displaced workers
It is noted that AI could increase annual US labor productivity growth by 1.5% over a 10-year period.
## Investment in AI
Management teams of public companies are increasingly citing AI in earnings calls (teleconferences, or webcasts, where public companies discuss the financial results of the reporting period). These indications of interest predict a major increase in corporate capital investment.
By 2021, global private investment in AI had reached $94 billion, five times the level of five years earlier.

2/3 of US jobs are exposed to automation and 1/4 of current tasks could be automated by AI in the US.
High exposure to white-collar jobs:
- Administration (46%)
- Legal (44%)
Low exposure to blue-collar jobs:
- Construction (6%)
- Maintenance (4%)

## Replacing vs complementing the future of work
Most jobs are only partially exposed to automation. They are therefore more likely to complement rather than be replaced by AI.
For current US jobs:
- 7% → replaced
- 63% → complemented
- 30% → unaffected
So, what do you think?
---
> This article is translated using DeepL Translator from the Indonesian article which you can access [here](https://sediksi.com/surat-terbuka-penghentian-sementara-pelatihan-ai)
| erzawansyah |
1,440,030 | Building a prototype with Chunk | Written by Nefe James✏️ Across the tech ecosystem, developers are constantly building new tools and... | 0 | 2023-04-18T17:46:48 | https://blog.logrocket.com/building-prototype-with-chunk | react, webdev |

**Written by [Nefe James](https://blog.logrocket.com/author/nefejames/)✏️**
Across the tech ecosystem, developers are constantly building new tools and technologies to make life easier for us. We use many such tools today, significantly improving our productivity and efficiency.
In this article, we will learn about another excellent dev tool called Chunk. Chunk was [recently released](https://twitter.com/deletethistheo/status/1557046835193290752) and comes with benefits for developers, particularly those who write server-side code. Let’s get started.
_Jump ahead:_
* [What is Chunk?](#chunk)
* [What is a chunk?](#what-chunk)
* [Exploring the Chunk platform](#exploring-chunk-platform)
* [The user profile page](#user-profile-page)
* [The Chunk editor](#editor)
* [Applications of Chunk](#applications-chunk)
* [Creating a simple products API with Chunk](#creating-simple-products-api-with-chunk)
* [Consuming the products API](#consuming-products-api)
## What is Chunk? <a name="chunk">
[Chunk.run](https://chunk.run/) is an online platform that allows us to write server-side code without deploying it. Instead of deploying, Chunk runs the code for us on its platform; hence the name Chunk.run. Chunk gives us different methods for running code. We can do it manually, use [webhooks](https://blog.logrocket.com/receive-webhooks-python-flask-or-django/), or schedule it using [cron jobs](https://blog.logrocket.com/setting-up-cron-jobs-in-deno/).
A great thing about Chunk is that its easy-to-use interface takes away the complexity of writing and running server-side code. 
### What is a chunk? <a name="what-chunk">
To use the Chunk platform properly, we must understand what a chunk is. A chunk is a project on Chunk that contains some code and exports a single function called `run()`. The `run()` function acts as the entry point to the chunk. From there, a chunk can take in JSON as `input` and return JSON as its `output`.
This is a simple chunk that sends back some JSON data. Here’s a code snippet of what a chunk looks like:
```javascript
export const run: RunFunction = async (context: RunContext) => {
return { message: `Hello world!` };
};
```

## Exploring the Chunk platform <a name="exploring-chunk-platform">
Let’s have a quick rundown of the Chunk platform. Its UI comprises a profile page, an online editor, and an explore section where we can review the chunks of other developers.
### The user profile page <a name="user-profile-page">
The user profile page allows us to see and access all the chunks created on our profile page or dashboard. The image below shows us what the user profile page looks like: 
### The Chunk editor <a name="editor">
The Chunk.run editor is broken into three parts: the editor on the left, the Response section on the top right, and the Logs section on the bottom right. The left side of the editor has two tab sections: the `code` section for writing code and the `env` section for environment variables.
> Note, if `envs` are present, we will need to use [Deno](https://blog.logrocket.com/getting-started-deno-fresh/) to access them in the code section
In the top right corner of the editor, there is a link we can use to access the data a chunk returns to us. The image below shows us the editor: 
## Applications of Chunk <a name="applications-chunk">
Chunk is a platform for [running server-side code](https://blog.logrocket.com/improve-app-performance-react-server-side-rendering/). This means there are so many possibilities for its applications that are only limited by our imagination or the problem we are trying to solve. We can use Chunk for web scraping, creating bots, developing APIs and utilities, setting up automated processes, and much more. Chunk provides a few starter templates that cover some specific use cases, and we can use these templates to get started quickly. It also offers several community chunks, which are projects that other developers have created.
> Note that if we want our chunks to be added to the community, we must change the visibility settings in the online editor.
## Creating a simple products API with Chunk <a name="creating-simple-products-api-with-chunk">
We’ve learned a bit about Chunk and its applications. Now, let’s use it to create a simple ecommerce products API that we will later consume. Here’s a sample of the product data we will create:
```javascript
const products = [
{
id: 1,
title: "Laptop Backpack",
price: 109.95,
description: "A cool laptop backpack",
image: "https://fakestoreapi.com/img/81fPKd-2AYL._AC_SL1500_.jpg",
rating: 3.9,
},
....
]
```
Follow along by getting the complete data for the API from this [GitHub gist](https://gist.github.com/nefejames/bbeda8f257d7fa94b3e863926e730875), or use the chunk I created [here](https://chunk.run/c/product-api). Here’s the complete code for the products API:
```javascript
export const run: RunFunction = async () => {
const products = [
{
id: 1,
title: "Laptop Backpack",
price: 109.95,
description: "A cool laptop backpack",
image: "https://fakestoreapi.com/img/81fPKd-2AYL._AC_SL1500_.jpg",
rating: 3.9,
},
....
];
return { products };
};
```
That’s it! With that, we have a working API.
### Consuming the products API <a name="consuming-products-api">
This products API chunk returns an array of products. We can access this data through the chunk’s link. Here's [mine](https://live.chunk.run/product-api). Now, let’s consume the API. Enter the following code:
```javascript
import { useEffect, useState } from "react";

function App() {
  const [products, setProducts] = useState([]);

  useEffect(() => {
    async function fetchData() {
      const res = await fetch("https://live.chunk.run/product-api");
      const data = await res.json();
      setProducts(data.products);
    }
    fetchData();
  }, []);

  return (
    <main>
      {products.map((product) => (
        <div className="card" key={product.id}>
          <div className="img-box">
            <img src={product.image} alt={product.title} />
          </div>
          <p className="prod-title">{product.title}</p>
          <p className="prod-description">{product.description}</p>
          <ul>
            <li>Price: {product.price}</li>
            <li>Rating: {product.rating}</li>
          </ul>
        </div>
      ))}
    </main>
  );
}

export default App;
```
In the code above, we used the [`useEffect` Hook](https://blog.logrocket.com/useeffect-hook-complete-guide/) to fetch the data from the chunk API, we stored the data in the `products` state, mapped through the array, and displayed the different products in the UI. Here’s the full project:
See the Pen [logrocket chunk products api](https://codepen.io/nefejames/pen/GRXNoVN) by Emadamerho Nefe ([@nefejames](https://codepen.io/nefejames)) on [CodePen](https://codepen.io).
## Conclusion
Chunk is an excellent tool for creating simple projects and demos. It has an easy-to-use interface and removes the complexities of running server-side code. No complex setup or installations are required on our end.
As of this writing, the only issue I spotted with Chunk is the poor approach to product education, as no documentation exists. This makes it difficult to truly understand what Chunk is, how it works, its inner workings, and how to use it best. Chunk currently expects the user to find their way. It makes some assumptions about the technical knowledge users should have, and ultimately the responsibility rests on the user to learn how to use Chunk properly. Explore some of the [chunks that the Chunk community has built here.](https://chunk.run/explore)
---
##Cut through the noise of traditional React error reporting with LogRocket
[LogRocket](https://lp.logrocket.com/blg/react-signup-issue-free) is a React analytics solution that shields you from the hundreds of false-positive errors alerts to just a few truly important items. LogRocket tells you the most impactful bugs and UX issues actually impacting users in your React applications.
[](https://lp.logrocket.com/blg/react-signup-general)
[LogRocket](https://lp.logrocket.com/blg/react-signup-general) automatically aggregates client side errors, React error boundaries, Redux state, slow component load times, JS exceptions, frontend performance metrics, and user interactions. Then LogRocket uses machine learning to notify you of the most impactful problems affecting the most users and provides the context you need to fix it.
Focus on the React bugs that matter — [try LogRocket today](https://lp.logrocket.com/blg/react-signup-general). | mangelosanto |
1,440,148 | Deploying a Laravel 10.7 and Vue 3 project from GitHub to Hostinger | Shared hosting is kind of overlooked these days in favor of platforms like Vercel and Netlify that do... | 0 | 2023-04-18T18:42:30 | https://dev.to/divewitholeg/deploying-a-laravel-107-and-vue-3-project-from-github-to-hostinger-4lko | laravel, vue, php |

Shared hosting is kind of overlooked these days in favor of platforms like Vercel and Netlify that do the courtesy of building and instantly deploying your project. Just push the OK button and watch your website get published online.
I decided to check how fast is a Git deployment with Hostinger and was astonished with the speed and convenience of deployment of my freshly started Laravel 10.7 / Vue 3 blog project.
Upon purchase of https://oleg-blog.tech domain name for a ridiculous price of 1.41 EUR, installing SSL certificate, establishing an e-mail account me@oleg-blog.tech and MySQL database (all complimentary with my Premium Hosting plan), I have started bash terminal on my Linux Ubuntu 22.04 machine to initiate a new Laravel project:
```bash
composer create-project laravel/laravel larablog
```
I will not go through the whole procedure here since it's not a Laravel tutorial. However, I would like to point out that the documentation at https://bootcamp.laravel.com/inertia/installation is thorough and so easy to read that it takes no more than 20 minutes before you run `php artisan serve` in your terminal and see the starter page greeting you at localhost:8000.
Since we are using Vue 3 in this project, there are still a few points I would like to draw your attention to. First, you will need to install Laravel Breeze to use the framework. There is also a Breeze installation for React available from the same source. You can enter the following into your terminal:
```bash
composer require laravel/breeze --dev
php artisan breeze:install vue
```
Next, you need to start your local development environment for the node files:
```bash
npm run dev
```
Last, you will need to create a MySQL database in your local environment and enter all the details for the database, the e-mail SMTP server, and any other information you might need into the `.env` file. After that, you can run the following command from your bash terminal:
```bash
php artisan migrate
```
Once you have made sure that your project works in the local environment and that you can register in your database and log in, it's time to create a private repository on GitHub and commit your code to it. Please note that it is crucial, for your code to work later on the Hostinger platform, to remove the `.env`, `/public/*`, `/storage/*`, and `vendor` entries from `.gitignore` before you make a commit.
Also, since your Laravel project serves all of its files from the `/public` folder, you will need to create an additional `.htaccess` file in the root folder with the following code inside:
```apache
RewriteEngine On
RewriteBase /
# Serve files from the root directory
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ public/$1 [L]
```
After making a Git commit and performing file synchronization it was time to get back to the Hostinger account, find Git from the Advanced menu and follow the instruction steps from the provided tutorial https://support.hostinger.com/en/articles/1583302-how-to-deploy-a-git-repository to deploy the repo from under the intended domain.
Before the actual deployment you will need to copy the RSA key and paste it in your GitHub settings in order for the hosting provider to be able to access private repository on GitHub. I also chose to add a webhook directly to my project so that I don’t need to manually update Hostinger every time I add new features to my project. Now, with the webhook added and the project being successfully published, there is still one last step to be taken.
The database migration that I initially ran in my project locally before committing it to GitHub will not work on the database on Hostinger. I needed to use SSH to access the Hostinger server and run the `php artisan migrate` command from the terminal there. In order to access the server, copy the code from the dashboard menu under `Advanced` → `SSH Access` and paste it into your terminal. Voilà, now I am free to develop my blog with instant deployment, using a shared hosting service with the convenience of edge computing.
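For reference, that SSH step looks roughly like the following. Every value here is a placeholder: copy your real connection string from `Advanced` → `SSH Access`, and adjust the project path to your own domain.

```bash
# Placeholders only -- copy the real connection string from hPanel
ssh -p 65002 u123456789@123.45.67.89

# On the server: move into the deployed project and run the migration
cd domains/oleg-blog.tech/public_html   # assumed directory layout
php artisan migrate --force             # --force skips the production prompt
```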
In conclusion, with a few extra minutes invested in setting up the project, it is possible to have a domain, hosting, database and an e-mail service ready almost at the speed that cloud computing services offer. For a startup, a blogger or a small company this might be an ideal solution to bring out their ideas, products and services directly to their customer and save some money on the way. | divewitholeg |
1,440,176 | Merging PDFs in NodeJs | public static async mergeProposalPDFs(frontURL: string, backURL: string): Promise<Buffer> { ... | 0 | 2023-04-18T19:54:04 | https://dev.to/vjnvisakh/merging-pdfs-in-nodejs-53he |

```typescript
// Requires the pdf-lib package (npm install pdf-lib); `fetch` is global in Node 18+.
import { PDFDocument } from "pdf-lib";

// Wrapped in a class so the snippet compiles; the original class name isn't shown.
export class ProposalService {
  public static async mergeProposalPDFs(frontURL: string, backURL: string): Promise<Buffer> {
    const pdfDoc = await PDFDocument.create();

    // Load the first PDF and copy all of its pages into the new document
    const pdf1Data = await fetch(frontURL).then((res) => res.arrayBuffer());
    const pdf1 = await PDFDocument.load(pdf1Data);
    const copiedPages1 = await pdfDoc.copyPages(pdf1, pdf1.getPageIndices());
    copiedPages1.forEach((page) => pdfDoc.addPage(page));

    // Load the second PDF and append its pages after the first
    const pdf2Data = await fetch(backURL).then((res) => res.arrayBuffer());
    const pdf2 = await PDFDocument.load(pdf2Data);
    const copiedPages2 = await pdfDoc.copyPages(pdf2, pdf2.getPageIndices());
    copiedPages2.forEach((page) => pdfDoc.addPage(page));

    // Save the merged PDF to a new buffer
    const mergedPdfFile = await pdfDoc.save();
    return Buffer.from(mergedPdfFile);
  }
}
```
The function takes in two PDF URLs and returns a buffer. In our case, we wanted it to be uploaded further to S3. | vjnvisakh |
1,440,200 | Announcing Public Sharing: Share Your Drafts with the World on Contenda! | Calling all content creators, developer advocates, and storytelling savants! We’re thrilled to... | 0 | 2023-04-18T21:00:33 | https://blog.brainstory.ai/public-sharing/ | sharing, devrel, edgefunctions, astro |

---
title: Announcing Public Sharing: Share Your Drafts with the World on Contenda!
published: true
date: 2023-04-18 00:00:00 UTC
tags: sharing, devrel, edgefunctions, astro
canonical_url: https://blog.brainstory.ai/public-sharing/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w6fhi2iwb8ar6cqtjqof.png
---
Calling all content creators, developer advocates, and storytelling savants!
We’re thrilled to announce that Contenda now allows you to share your generated blog posts publicly with the world! Previously, you would generate your content and it would be tied exclusively to your account (which is still true), but now you have the ability to share posts publicly with your teams, editors, friends, and enemies.
## How does it work?
We’ve got [a whooole separate blog about how the feature was built](https://dev.to/contenda/using-netlify-on-demand-builders-astro-edge-functions-to-make-a-blog-sharing-generator-20lf)! But, outside of that, what you need to know:
1. You create content and edit it on the Contenda platform, all willy-nilly
2. You’re logged in, and you want to share your blog publicly before publishing it
3. You hit the “share” button, which generates a temporary public URL of your blog post

4. You share that thang to your heart’s content
<video controls autoplay>
  <source src="https://contenda.co//blogimages/sharepear.mp4" type="video/mp4">
</video>
## Wow! I love it, let me at it!
Calm down, you media maestro, the feature is already live! If you haven’t yet, you can sign up for [an early access account on Contenda here](https://signup.contenda.co/). We can’t wait to see what you make!
If you have any questions or feedback about this feature, please feel free [to reach out to our team on Discord](https://discord.gg/bYda4pQz2v) or [sign up for our email list](https://contenda.ck.page/3fcf617c2e).
Happy shipping! | cassidoo |
1,440,207 | What is Open Source Software? A Beginner's Guide | Introduction If you have been in the tech industry for a while you may have heard about open source... | 0 | 2023-04-18T21:25:42 | https://www.learn-dev-tools.blog/what-does-open-source-software-mean-a-beginners-guide/ | opensource, beginners, software, codenewbie |

<!-- wp:heading -->
<h2>Introduction</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>If you have been in the tech industry for a while, you may have heard about open source or open source software. When we talk about open source, you should think of open collaboration, open exchange, transparent participation, and community-oriented development.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>In this article, you will learn more about open source software, including the difference between open source and other forms of software, the key features of OSS, the pros and cons of open source, how to start contributing to open source, and some examples of OSS.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>What is Open Source?</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Open source designates any program whose source code can be modified and shared by anyone because it’s openly accessible.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>What is Source Code?</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Source code is a set of instructions written in the form of functions, descriptions, methods, calls or other operational statements by a programmer using a computer programming language. Source code can be open, meaning it is available to anyone to edit and redistribute, or closed. Source code is the part that most software or computer users never see.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":341,"sizeSlug":"large","linkDestination":"none"} -->
<figure class="wp-block-image size-large"><img src="https://www.learn-dev-tools.blog/wp-content/uploads/2023/03/source-code-medusa-js-github-1024x567.png" alt="Example of a Source code taken from Medusa's GitHub Repository " class="wp-image-341"/><figcaption class="wp-element-caption">Example of a Source code taken from Medusa's GitHub Repository </figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p></p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>What Does Open Source Software(OSS) Mean?</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Open source software (OSS) is non-proprietary software whose code is publicly available for anyone to inspect, comment on, modify, improve and redistribute. Open source software is created and maintained by a community of developers, so even if you are new to computer programming, you can begin your journey as an open source contributor.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>I have known about open source since 2017 and made my first contributions in 2020, to the <a href="https://github.com/apache" target="_blank" rel="noreferrer noopener">Apache Software Foundation</a> GitHub and the <a href="https://github.com/mozilla-mobile" target="_blank" rel="noreferrer noopener">Mozilla Mobile GitHub</a>. </p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>What is the Difference between Open source Software and other Types of Software?</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Do not confuse open source with free software, closed software and freeware. This section highlights the differences between these types of software.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Open Source vs Free Software</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Open source software is software available for anyone to edit, improve and redistribute; its source code is accessible to anyone. Open source promotes collaboration and the sharing of source code, and it is an effective way for organizations, developers and individuals to work together. Note that not all open source software is free, and some open source licenses are restrictive. <strong>WordPress</strong> is an example of open source software.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Free software describes software whose source code is free to use, edit and redistribute without any restrictions. With free software, freedom is valued above everything else, and all free software is open source software. <strong>Ares is an example.</strong></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>In short, open source and free software are distinct: some open source software is restrictive, and while all free software is open source software, not all open source software is free software. </p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Open Source Software vs Freeware</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p><strong><a href="https://www.learn-dev-tools.blog/what-is-a-free-tool-all-you-should-know/">Freeware</a></strong> is any software that is free to use and available to the general public. If you are a developer, freeware can be frustrating to use because its source code is closed, so you cannot edit the software to fit your needs. You can only use the features it offers, even if they are limited. Google Chrome is an example of freeware.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>With open source software, on the other hand, you can edit, improve, and add features to fit your needs. An example is Chromium.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Freeware source code is closed: only the developer or organization who created the software can view it, while open source code can be viewed, edited and redistributed by anyone. Freeware remains under a restrictive copyright, while open source software is released under a license that grants these freedoms to everyone.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Open Source vs Proprietary Software</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>You do not need an authenticated license to use open source software; many projects are distributed under licenses such as the GNU General Public License and can be installed freely on any computer system. Open source software is developed and maintained through open collaboration.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Proprietary software is software with protected (closed) source code. It cannot be installed on a computer without a valid license, so users need a valid, authenticated license to use it. It is also copyrighted like freeware, but the difference is that you do not need to pay to use freeware. An example is Windows.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>Key features of Open Source Software</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>In this section, I will elaborate on the key features and values of open source software.</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><!-- wp:list-item -->
<li><strong>Transparency: </strong>Open source allows you to track and understand any changes made to software without relying on the software manufacturer. </li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Collaboration</strong>: The main goal of the open source initiative is to allow anyone to freely participate in and improve the software. By working together, we can identify and solve problems more easily.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Meritocracy</strong>: Diverse perspectives help decision-makers identify the best ideas, which can determine the success of a project.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Community</strong>: When many people gather to help and share ideas, it creates a community where members can support each other.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Review</strong>: Since the source code is freely available to the public, experienced community members can check it, propose ideas and improve the software.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Reliability</strong>: Many collaborators can update and test the code to find and fix bugs. Open source projects do not depend on just one, two or three maintainers but rather have a large community that supports them.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Flexibility</strong>: Flexibility refers to the fact that you can use open source code to address problems that are unique to your business. Plus you can rely on community help to improve an existing solution or implement a new one.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Lower cost:</strong> Most of the time, open source is free to use and download. You may only need to pay for extras such as hardened security or interoperability.</li>
<!-- /wp:list-item --></ol>
<!-- /wp:list -->
<!-- wp:heading -->
<h2>What are the Benefits of Open Source?</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Choosing to use or contribute to open source software offers many benefits. Some of them include:</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Open source is free</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>You have more control over the software. Compared to closed software, you can edit open source software to fit your needs.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>It's good for anyone beginning computer programming. As a beginner, you can become a better programmer by sharing your work and contributing to other projects</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Open source can enhance software security, as anyone can quickly spot and fix a security issue that the original programmer may have missed</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Open source projects tend to be more stable than other types of software, since there is a large community behind them ready to improve them.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>It promotes collaboration through the creation of communities.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Open source is flexible</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>They often have good <strong>documentation</strong></li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:heading -->
<h2>What are the Disadvantages of Open Source Software?</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Not everyone embraces the open source philosophy. Some organizations still argue that open source software is more vulnerable because its source code is open to anyone.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>I have been in the software community for 3 to 4 years now, and I have noticed that open source projects tend to be more secure, since issues, security-related or not, can easily be identified and fixed by community members. However, I have also noticed some drawbacks of open source.</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Open source software can be difficult to set up, especially for beginners. Some of it may lack a user-friendly interface, forcing you to use a <strong>Terminal</strong></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Liability problems. Open source rarely comes with warranty or liability protections</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Compatibility issues. </li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Initially there is no cost, but it can be costly in the long term, especially if you need immediate support. There are also training costs if you incorporate the software into your team.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Not all open source software comes with a user manual. Although you can seek help from the community, it won't always be available immediately, unlike commercial software where support is available 24/7</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:heading -->
<h2>How can you Start Contributing to Open Source?</h2>
<!-- /wp:heading -->
<!-- wp:image {"id":342,"sizeSlug":"large","linkDestination":"none"} -->
<figure class="wp-block-image size-large"><img src="https://www.learn-dev-tools.blog/wp-content/uploads/2023/03/HOW-TO-CONTRIBUTE-TO-OSS-1024x576.webp" alt="how to contribute to OSS" class="wp-image-342"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>It’s quite simple to contribute to open source software. You can do so by identifying and fixing bugs, adding features, improving documentation and <strong><a href="https://www.learn-dev-tools.blog/what-is-content-writing-an-easy-beginners-guide/">writing content</a></strong> in the form of blog posts, technical tutorials or videos. You can also create your own open source project to promote open source.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>To contribute to existing open source software, you should head to <strong><a href="http://github.com" target="_blank" rel="noreferrer noopener">GitHub</a>:</strong></p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Find a project and fork it</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Clone the Project to your Local machine</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Create a new branch</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Make changes</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Commit your changes</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Push your changes to your remote repository</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Finally, create a pull request</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
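The steps above boil down to a handful of Git commands. Below is a minimal sketch of the clone-branch-commit-push flow; the repository path, branch name and file contents are placeholders invented for illustration (a local scratch repository stands in for a real GitHub fork, so the commands can be run safely end to end):

```shell
# A local bare repository stands in for "your fork on GitHub" (placeholder path)
workdir=$(mktemp -d)
git init --bare "$workdir/fork.git"

# Clone the (forked) project to your local machine
git clone "$workdir/fork.git" "$workdir/project"
cd "$workdir/project"
git config user.name "Demo User"           # placeholder identity for this demo
git config user.email "demo@example.com"

# Create a new branch for your contribution
git checkout -b fix-typo-in-readme

# Make changes and commit them
echo "Fixed a typo" > README.md
git add README.md
git commit -m "docs: fix typo in README"

# Push your changes to your remote repository
git push -u origin fix-typo-in-readme
# On GitHub, you would now open a pull request from this branch.
```

With a real project, you would replace the scratch path with the HTTPS or SSH URL of your fork and then open the pull request from GitHub's web interface.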
<!-- wp:paragraph -->
<p>If you don’t have any project ideas, some open source projects you can contribute to include:</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li><a href="https://github.com/medusajs/medusa" target="_blank" rel="noreferrer noopener">Medusa</a> </li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><a href="https://github.com/ToolJet/ToolJet" target="_blank" rel="noreferrer noopener">Tooljet</a></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><a href="https://github.com/doczjs/docz" target="_blank" rel="noreferrer noopener">Docz</a></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><a href="https://github.com/boxyhq/jackson" target="_blank" rel="noreferrer noopener">SAML Jackson</a></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><a href="https://github.com/chatwoot/chatwoot" target="_blank" rel="noreferrer noopener">Chatwoot</a></li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:heading -->
<h2>Open Source License</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>An open source license governs how individuals other than the original developer can use, modify and redistribute the software. It keeps the software transparent and allows it to be changed by community members or third parties. Licenses approved by the <a href="https://opensource.org/licenses/" target="_blank" rel="noreferrer noopener">Open Source Initiative</a> include:</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li><a href="https://opensource.org/license/apache-2-0/" target="_blank" rel="noreferrer noopener">Apache License, Version 2.0</a></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><a href="https://opensource.org/license/epl-2-0/" target="_blank" rel="noreferrer noopener">Eclipse Public License version 2.0</a></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><a href="https://opensource.org/license/cddl-1-0/" target="_blank" rel="noreferrer noopener">Common Development and Distribution License 1.0</a></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><a href="https://opensource.org/license/lgpl-2-0/" target="_blank" rel="noreferrer noopener">GNU Library General Public License version 2</a></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><a href="https://opensource.org/license/mpl-2-0/" target="_blank" rel="noreferrer noopener">Mozilla Public License 2.0 (MPL-2.0)</a></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><a href="https://opensource.org/license/bsd-2-clause/" target="_blank" rel="noreferrer noopener">The 2-Clause BSD License</a></li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:heading -->
<h2>Example of Open Source Software</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>You might have used open source software without realising it. There are many examples out there; some popular and widely used open source software includes:</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><!-- wp:list-item -->
<li>WordPress, an open source content management system (CMS)</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong><a href="https://www.learn-dev-tools.blog/what-is-the-flutter-framework-a-beginners-introduction-to-flutter/">Flutter</a></strong> for web, desktop and mobile development</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong><a href="https://www.learn-dev-tools.blog/what-is-a-css-framework-all-you-need-to-know/">CSS Frameworks</a></strong> Like Bootstrap for website layout</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>VLC Media Player for video and audio playback</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>LibreOffice for word processing</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>The open source Linux operating system</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Python</strong> is open source too.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Open source reporting tools include Metabase and BIRT</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Open source PDF viewers include Qoppa PDF Studio, pdfFiller, and Soda PDF</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Some Java open source projects are Arduino, Mindustry, and Signal Android</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>There are also open source development tools like Eclipse IDE and Apache Cordova</li>
<!-- /wp:list-item --></ol>
<!-- /wp:list -->
<!-- wp:heading -->
<h2>FAQ</h2>
<!-- /wp:heading -->
<!-- wp:heading {"level":3} -->
<h3>Is Open Source Software Free?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Most open source software is free but there are some exceptions. Some open source licenses are restrictive and do not allow the creation of a modified version. An example is Open Source Watcom</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Where can I find OSS?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>You can find and download open source projects on SourceForge, OSDN, FossHub, GitHub, and F-Droid</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Why Should you Contribute to OSS?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>The most important advantage of contributing to open source is that you can find and collaborate with other developers hence improving your developer network.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Why is Open Source Important?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Open source drives innovation through collaboration between organizations and developers</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>What are Some Open Source Competitions?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p><strong>Open Source Competitions</strong> are contests where you can learn to participate in open source projects, demonstrate your coding skills, and get feedback. Examples of Open source competitions include Open Source Contest, <a href="https://summerofcode.withgoogle.com/" target="_blank" rel="noreferrer noopener">Google Summer of Code (GSoC)</a>, <a href="https://www.outreachy.org/" target="_blank" rel="noreferrer noopener">Outreachy</a>, <a href="https://osoc.be/" target="_blank" rel="noreferrer noopener">Open Summer of Code</a>, <a href="https://www.fsf.org/volunteer/internships" target="_blank" rel="noreferrer noopener">Free Software Foundation (FSF) Internship Program</a>, <a href="https://gssoc.girlscript.tech/" target="_blank" rel="noreferrer noopener">GirlScript Summer of Code (GSSoC)</a></p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>What is Closed Source Software?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Closed Software is software whose source code is only available to the author. So only the author can modify and copy it.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>What is the Open Source Initiative?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>The Open Source Initiative is a non-profit organisation whose main aim is to educate about and advocate for the benefits of open source</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>Conclusion</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Open source software is available for anyone to edit, use and redistribute. Some of its key features include transparency, reliability, an active community and flexibility. However, OSS can be costly in the long run, so if you are planning to incorporate OSS into your business, you need to include additional or urgent costs in your project plan, including the cost of training staff members to use the software. Some popular open source projects include Apache, WordPress, and Mozilla Firefox.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>If you like this kind of content, feel free to share or subscribe to our newsletter below to get fresh content like this periodically. But if you have any questions, you can reach me via <a href="https://twitter.com/KevinegunNzapdi" target="_blank" rel="noreferrer noopener">my Twitter account</a>. I will be glad to hear from you.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p></p>
<!-- /wp:paragraph --> | learndevts |
1,440,358 | Programming logic is not hard! | Many people think it is difficult because it requires advanced mathematical skills or complex knowledge... | 0 | 2023-04-19T00:06:28 | https://dev.to/ymands/a-logica-de-programacao-nao-e-dificil-3ho6 | braziliandevs, webdev, programming, softwaredeveloper |
Many people think programming logic is difficult because it requires advanced mathematical skills or complex technical knowledge. In reality, it is not hard to understand and can be learned by anyone willing to dedicate time and effort.
Programming logic is the ability to develop solutions to problems using a programming language. It involves thinking systematically and solving problems logically; it is the foundation of all programming skills and a fundamental ability every programmer must have.
**How do you master programming logic?**
- Learn to think in steps: Programming logic requires a systematic, structured approach to solving problems. Break a problem into smaller parts and solve each part separately. By splitting a problem into smaller steps, you make it more manageable and easier to solve.
- Practice, practice, practice: As with any skill, practice is essential to mastering programming logic. Solve as many problems as you can and try approaching them in different ways. The more problems you solve, the more natural thinking in terms of programming logic becomes.
- Understand the basic concepts: Before diving into more complex problems, it is essential to have a solid grasp of basic programming concepts such as variables, loops, conditionals and functions. These concepts are the foundation for developing more complex solutions.
Programming logic is a fundamental skill every programmer must have. Although it may seem difficult at first, it can be learned with time and effort. By learning to think in steps, practicing regularly, understanding the basic concepts and collaborating with other programmers, you will be on the right track to mastering programming logic. And never forget the effect of practice: practice makes perfect, and the more you practice, the more natural great results become. | ymands |
1,440,380 | How to overcome the stigma of not having a degree in the software industry. | Overcoming the stigma of not having a degree in the software industry can be challenging, but it's... | 0 | 2023-04-19T00:57:29 | https://dev.to/sukr92329/how-to-overcome-the-stigma-of-not-having-a-degree-in-the-software-industry-1pg6 | Overcoming the stigma of not having a degree in the software industry can be challenging, but it's important to focus on demonstrating your skills and expertise through projects, networking, and job performance. It's also important to be transparent about your education and experience, and to highlight your motivation, passion, and dedication to the field. Collaborating with others, learning from a [full stack web developer course](https://www.geekster.in/full-stack-web-development-program), and staying up-to-date with the latest technologies and trends can also help you gain respect and recognition in the industry. | sukr92329 |
1,440,406 | Upgrading the XM Cloud Introduction repository to Docker Compose v2 | In case you haven’t heard, Docker has slowly been pushing people to migrate the latest version of... | 0 | 2023-04-19T01:40:10 | https://robearlam.com/blog/docker-compose-v2 | docker, dockercompose, sitecore, xmcloud | ---
title: Upgrading the XM Cloud Introduction repository to Docker Compose v2
published: true
date: 2023-04-19 00:00:00 UTC
tags: docker, dockercompose, sitecore, xmcloud
canonical_url: https://robearlam.com/blog/docker-compose-v2
---
In case you haven’t heard, Docker has slowly been pushing people to migrate the latest version of Docker Compose, which is `v2`. This has culminated in the end of life for `v1`, which means that the functionality will be removed from all Docker Desktop versions as of Jun 2023.
Our team at Sitecore is responsible for the [XM Cloud Introduction](https://github.com/Sitecore/XM-Cloud-Introduction/) project, which currently uses Docker Compose for its local developer experience, so we needed to update to the latest version to ensure we could continue to work in this way. This post will cover the steps we needed to complete to migrate the solution to `v2`.
Overall, the process was fairly simple, with most of the time being taken up by testing. In the end, it boiled down to a couple of tasks that I needed to complete:
- Updating our YAML definitions.
- Updated our PowerShell scripts.
I also used this as an opportunity to tidy up a few things with the docker-related files to clean up a few warnings that were being thrown after we move to `v2`.
I’ll talk through the different changes we needed to make below, but you can see the pull request that contained the changes that were required [here](https://github.com/Sitecore/XM-Cloud-Introduction/pull/253).
## Updating the YAML definitions
The first thing we needed to do was to remove the top-level `version` parameter from our main `docker-compose.yml` file as it is no longer required when running `v2`. After completing this we then needed to update how we were scaling some of our containers. We use containers to perform build actions, but they aren’t actually used at runtime, previously this was achieved by using a `scale` parameter like this:
```
service-name:
scale: 0
```
This syntax is invalid when running Docker Compose v2 though, and you should now use the `deploy.replicas` property instead, so this involved changing all instances of this to the following syntax:
```
service-name:
deploy:
replicas: 0
```
The final change we needed to make was to one of our EntryPoint values, the previous version used the following value for the EntryPoint of the CM container:
```
entrypoint: powershell -Command "& C:\tools\entrypoints\iis\Development.ps1"
```
When attempting to use that with `v2` though, the container failed to start as the backslashes were being stripped out. The simple fix for this was to double escape those slashes, so the new version looks like this:
```
entrypoint: powershell -Command "& C:\\tools\\entrypoints\\iis\\Development.ps1"
```
Those were all of the changes I needed to make to our docker-compose yaml definitions, now there are other changes that have been implemented by this forced migration but they didn’t affect our solution. You can read about the full set of changes on the [docker site](https://docs.docker.com/compose/)
## Updating the PowerShell scripts
Once we had our yml definitions updated, the last thing we needed for compatibility was to update the PowerShell scripts used to run the solution locally. The only change here was to update how the Docker Compose commands were issued. We have a few different ways that our solution can be run, but they all executed the containers using a command similar to this:
```
docker-compose up -d
```
We had to change this slightly to remove the hyphen from the `docker-compose` command, forcing anyone running the repo to use the v2 format, so the above command became
```
docker compose up -d
```
## Final tidy up
After changing to run the solution using `v2` I noticed it was throwing some extra warnings about unpopulated environment variables. After looking into it, these variables were no longer actually used in our solution, so this proved to be a good time to tidy those up as well. The final task was to update the `README` for the repository to state that it was now using Docker Compose v2.
## Conclusion
Overall, migrating to Docker Compose v2 was fairly straightforward for our solution and didn’t involve too many changes, though it may be more complex for you depending on which v1 features you’re leveraging. | robearlam |
1,440,607 | AI Generated Images of rich as poor | 1. Elon Musk Elon Reeve Musk FRS is a business magnate and investor. He is the founder,... | 0 | 2023-04-19T16:08:00 | https://dev.to/jon_snow789/ai-generated-images-of-rich-as-poor-1i32 | ai, webdev, javascript, productivity | ### 1. Elon Musk
Elon Reeve Musk FRS is a business magnate and investor. He is the founder, CEO and chief engineer of SpaceX; angel investor, CEO and product architect of Tesla, Inc.; owner and CEO of Twitter; founder of the Boring Company; co-founder of Neuralink and OpenAI; and president of the philanthropic Musk Foundation.

---
See what Elon Musk says about **ChatGPT**
{% youtube j7Co4-jw2rE %}
---
---
### 2. Bill Gates
William Henry Gates III is an American business magnate, philanthropist and investor. He is best known for co-founding software giant Microsoft, along with his late childhood friend Paul Allen.

---
### 3. Mark Zuckerberg
Mark Elliot Zuckerberg is an American business magnate, internet entrepreneur, and philanthropist. He is known for co-founding the social media website Facebook and its parent company Meta Platforms, of which he is the executive chairman, chief executive officer, and controlling shareholder.

---
### 4. Jeff Bezos
Jeffrey Preston Bezos is an American entrepreneur, media proprietor, investor, and commercial astronaut. He is the founder, executive chairman, and former president and CEO of Amazon, the world's largest e-commerce and cloud computing company.

---
### 5. Warren Buffett
Warren Edward Buffett is an American business magnate, investor, and philanthropist. He is currently the chairman and CEO of Berkshire Hathaway.

---
### 6. Mukesh Ambani
Mukesh Dhirubhai Ambani is an Indian billionaire heir to the fortune of Reliance Industries. He is the eldest son of Dhirubhai Ambani and is currently the chairman and managing director of Reliance Industries, a Fortune Global 500 company and India's most valuable company by market value.

---
### 7. Donald Trump
Donald John Trump is an American politician, media personality, and businessman who served as the 45th president of the United States from 2017 to 2021. Trump graduated from the Wharton School with a bachelor's degree in 1968.

---
### Credit [Artist Gokul Pillai](https://www.instagram.com/withgokul/)
---
---
### Best Posts
1. [How to create a Scroll to top button in React](https://democoding.netlify.app/post/how-to-create-a-scroll-to-top-button-in-react)
2. [CSS 3D Isometric Social Media Menu Hover Effects](https://democoding.netlify.app/post/css-3d-isometric-social-media-menu-hover-effects)
3. [Input Box Shake on Invalid Input](https://democoding.netlify.app/post/shake-on-invalid-input)
4. [Beautiful SVG Cat Animation in React](https://democoding.netlify.app/post/beautiful-svg-cat-animation-in-react)
---
---
{% instagram CqObFa6PR0l %}
---
{% link https://dev.to/jon_snow789/11-free-tools-to-ignite-your-coding-journey-4p4p %}
---
---
### Use Our RSS Feed
```bash
https://dev.to/feed/jon_snow789
```
| jon_snow789 |
1,440,813 | SSO - Build vs Buy: Making the Right Choice for Your Business | Single Sign-On (SSO) - Build vs. Buy is the decision-making process businesses undergo to determine... | 0 | 2023-04-19T10:12:28 | https://ssojet.com/blog/sso-build-vs-buy-making-the-right-choice-for-your-business/ | cybersecurity, authentication, sso, security | **Single Sign-On (SSO) - Build vs. Buy is the decision-making process businesses undergo to determine whether to create a custom solution or purchase a ready-made one. This article explores the benefits and drawbacks of each approach, as well as key factors to consider when making this decision.**
Single Sign-On (SSO) is a popular authentication mechanism that enables users to access multiple applications with a single set of login credentials. SSO solutions have become increasingly popular in recent years, as they offer several benefits, including improved security, increased user convenience, and reduced costs.
When it comes to implementing an SSO solution, organizations have two main options: building an SSO solution in-house or buying an SSO solution from a third-party vendor. Both options have their advantages and disadvantages, and organizations need to carefully evaluate their requirements, resources, and expertise before deciding which approach to take.
In this blog post, we will explore the advantages of buying an SSO solution over building an SSO solution in-house. We will examine how buying an SSO solution can offer faster time-to-market, lower costs, expertise, scalability, and integration.
Let’s dive in and explore the benefits of buying an SSO solution in more detail.
## **What is SSO?**
SSO stands for Single Sign-On, which is an authentication mechanism that enables users to access multiple applications or systems using a single set of login credentials. With SSO, users do not need to remember multiple login credentials for different applications, which improves user convenience and productivity.
In an SSO setup, a user’s login credentials are verified by a central authentication server, which then grants access to all the authorized applications or systems. This eliminates the need for users to log in separately to each application or system, saving time and reducing the risk of password fatigue, which can lead to weak or compromised passwords.
SSO can be implemented using various authentication protocols such as OAuth, OpenID Connect, SAML, and Kerberos. These protocols enable secure communication between the authentication server and the application or system, ensuring that only authorized users can access the resources.
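As a toy illustration of the core idea (not any real protocol; the names and the HMAC signing scheme are invented for the example, and real SSO protocols like SAML or OIDC use standardized token formats and asymmetric signatures), a central authentication server can sign an assertion that each application then verifies:

```python
import hashlib
import hmac
import json

# Shared secret between the (hypothetical) central auth server and an app.
SECRET = b"demo-shared-secret"  # invented for the example

def issue_assertion(user):
    """Central auth server: sign a claim that `user` has logged in."""
    payload = json.dumps({"user": user})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "|" + sig

def verify_assertion(token):
    """Application: accept the claim only if the signature checks out."""
    payload, sig = token.rsplit("|", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(payload)["user"]
    return None

token = issue_assertion("alice")
print(verify_assertion(token))        # alice
print(verify_assertion(token + "x"))  # None (tampered token is rejected)
```

The user logs in once with the central server; every application trusts the resulting assertion because it can verify who signed it.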
SSO solutions have become increasingly popular in recent years, as they offer several benefits, including improved security, increased user convenience, and reduced costs. Organizations can either build an SSO solution in-house or buy an SSO solution from a third-party vendor, depending on their requirements and resources.
### **SSO: Build vs Buy**
Single Sign-On (SSO) is a popular method of authentication that allows users to access multiple applications with a single set of credentials. It eliminates the need for users to remember multiple passwords and improves security by reducing the risk of credential theft. When it comes to implementing SSO, organizations have two options: build it in-house or buy a third-party solution. In this blog, we will explore the pros and cons of each approach to help you decide which option is right for your organization.
## **Build SSO In-House**
Building an SSO solution in-house requires a significant investment of time, resources, and expertise. The process involves developing custom code, designing authentication protocols, and integrating with existing applications. Here are some of the pros and cons of building SSO in-house:
**Pros:**
**1. Customization:** Building an SSO solution in-house allows organizations to tailor the solution to their specific needs. They can design the authentication protocols, user interface, and integration points based on their unique requirements.
**2. Control:** Organizations have complete control over the SSO solution when they build it in-house. They can modify and update the solution as needed without relying on third-party vendors.
**3. Flexibility:** An in-house SSO solution can be updated and modified as needed to meet changing business needs or to accommodate new applications or systems. This flexibility can be particularly valuable for organizations that are experiencing rapid growth or change.
**4. Security:** An in-house SSO solution can be designed with security as a top priority. By controlling the design and implementation of the solution, the organization can ensure that the solution is as secure as possible, minimizing the risk of data breaches or other security incidents.
**Cons:**
**1. Expertise:** Building an SSO solution requires specialized expertise in security, authentication protocols, and application integration. Organizations may need to hire or train staff with these skills.
**2. Time:** Developing an SSO solution in-house can take months or even years, depending on the complexity of the solution and the resources available.
**3. Maintenance:** Once the SSO solution is built, organizations are responsible for maintaining it, including updates, bug fixes, and security patches.
**4. Cost:** Building an SSO solution in-house requires a significant upfront investment, and ongoing maintenance can make it even more expensive in the long run.
## **Buy a Third-Party SSO Solution**
Alternatively, organizations can buy a third-party SSO solution. These solutions are typically cloud-based and offer a range of features and integrations. Here are some of the pros and cons of buying a third-party SSO solution:
**Pros:**
**1. Time-to-Market:** Buying a third-party SSO solution can significantly reduce the time-to-market compared to building a solution in-house. These solutions are typically pre-built and ready to integrate with existing applications.
**2. Expertise:** Third-party SSO vendors have expertise in security, authentication protocols, and application integration. Organizations can rely on their expertise instead of building it in-house.
**3. Maintenance:** Third-party SSO vendors are responsible for maintaining the solution, including updates, bug fixes, and security patches.
**4. Saves time and resources:** Implementing a third-party SSO solution can save an organization time and resources as it eliminates the need to develop and maintain a custom-built SSO system.
**5. Improved security:** Third-party SSO solutions are usually designed to provide a high level of security, such as encryption and multi-factor authentication, which can enhance security and reduce the risk of data breaches.
**6. Integration with multiple applications:** A third-party SSO solution can integrate with a wide range of applications, including cloud-based and on-premises applications.
**Cons:**
**1. Limited Customization:** While third-party SSO solutions offer a range of features, organizations may be limited in their ability to customize the solution to their specific needs.
**2. Dependence on the third-party vendor:** When using a third-party SSO solution, organizations are reliant on the vendor for ongoing support, maintenance, and upgrades.
**3. Data privacy concerns:** When using a third-party SSO solution, an organization’s sensitive data may be stored in a third-party system, leading to potential data privacy concerns.
### **Advantages of buying an SSO solution over building SSO in-house**
Buying a Single Sign-On (SSO) solution from a third-party vendor can offer several advantages over building an SSO solution in-house. Here are some of the key advantages of buying an SSO solution:
**1. Faster Time-to-Market:** Implementing an SSO solution can be a time-consuming process, requiring specialized skills and expertise. Building an SSO solution in-house can take several months or even years, depending on the complexity of the solution and the resources available. On the other hand, buying an SSO solution from a third-party vendor can significantly reduce the time-to-market. These solutions are typically pre-built and ready to integrate with existing applications, which means that organizations can implement SSO more quickly and efficiently.
**2. Lower Costs:** Building an SSO solution in-house can be expensive, requiring significant investment in resources, expertise, and infrastructure. Organizations need to hire or train staff with specialized skills, purchase software and hardware, and invest in ongoing maintenance and support. In contrast, buying an SSO solution from a third-party vendor can be more cost-effective in the long run. These solutions typically offer a subscription-based pricing model, which means that organizations can avoid upfront costs and only pay for what they use. They can also avoid ongoing licensing fees and vendor lock-in.
**3. Expertise:** Implementing an SSO solution requires specialized expertise in security, authentication protocols, and application integration. Building an SSO solution in-house can be challenging, particularly for organizations without this expertise. On the other hand, third-party SSO vendors have expertise in these areas and can provide organizations with the support and guidance they need. They can help organizations choose the right authentication protocols, ensure compliance with industry regulations, and provide ongoing maintenance and support.
**4. Scalability:** SSO solutions need to be scalable to meet the changing needs of organizations. Building an SSO solution in-house can be challenging to scale, particularly for organizations with limited resources. Third-party SSO vendors, on the other hand, have the infrastructure and resources to scale SSO solutions quickly and efficiently. They can provide organizations with the scalability they need to meet their changing requirements.
**5. Integration:** Implementing an SSO solution requires integration with existing applications, which can be a complex and time-consuming process. Building an SSO solution in-house requires expertise in application integration, and organizations need to ensure that their SSO solution integrates seamlessly with their existing applications. On the other hand, third-party SSO vendors have experience in integrating with a wide range of applications, and their solutions are typically pre-built to integrate with popular applications. This means that organizations can implement SSO more quickly and efficiently.
In conclusion, while building an SSO solution in-house may offer some advantages in terms of customization and control, buying an SSO solution from a third-party vendor can offer several advantages, including faster time-to-market, lower costs, expertise, scalability, and integration. Organizations should carefully evaluate their requirements, resources, and expertise before deciding which approach to take.
## **SSO with SSOJet**
Integrating SSOJet into your B2B SaaS product can provide your customers with a simplified and personalized Single Sign-On (SSO) experience. With just a few lines of code, your customers can configure their SSO solution on their own, without requiring extensive technical knowledge or additional support from your team.
SSOJet supports commonly used authentication protocols like OIDC and SAML, which allows for easy integration with Identity Providers (IdPs). In addition, SSOJet also enables social login SSO, which allows users to authenticate using their social media credentials.
To provide a seamless SSO experience, SSOJet offers customizable login boxes that can be embedded into your SaaS offering. These login boxes are designed to reduce in-app friction and enable users to authenticate smoothly, gaining quick access to your application. By leveraging SSOJet’s SSO components, you can customize the login box’s appearance, making it consistent with your brand’s identity and providing a personalized user experience.
SSOJet’s end-to-end SSO solution also ensures the security of user data by supporting multi-factor authentication, single logout, and adaptive authentication. This comprehensive solution minimizes IT overhead by enabling centralized user management and access control.
Overall, integrating SSOJet into your B2B SaaS product can help streamline the user authentication process, enhance user experience, and ensure the security of user data. With a customizable login box and a comprehensive SSO solution, SSOJet provides a complete end-to-end SSO solution for modern SaaS applications.
## **Conclusion:**
Both building an SSO solution in-house and buying a third-party solution have their pros and cons. Organizations need to evaluate their specific requirements, resources, and expertise before deciding which approach to take. In general, building an SSO solution in-house is best suited for organizations with specialized requirements and the resources and expertise to develop and maintain the solution. Buying a third-party SSO solution is best suited for organizations that require a fast time-to-market and a range of pre-built features and integrations. Whatever approach you choose, implementing an SSO solution can significantly improve security and user experience. | andrew89 |
1,440,951 | PostmanAPI Workshop SCET | The workshop started with an introduction to API and the importance of API development. Joel then... | 0 | 2023-04-19T12:38:49 | https://dev.to/weebkira/postmanapi-workshop-scet-3jjg | postmanstudent, postmanapi | The workshop started with an introduction to API and the importance of API development. Joel then provided a detailed overview of Postman and its features, including how to create requests, set up environments, and automate tests.
Attendees had the opportunity to get hands-on experience with Postman by creating their own API requests and tests. Joel also demonstrated how to use Postman to monitor API performance and troubleshoot errors.
Overall, the workshop was a great success, providing attendees with valuable insights into the world of API development and testing using Postman. Joel Jaison's expertise and engaging teaching style made the workshop both informative and enjoyable. | weebkira |
1,440,965 | Introducing your new team lead…Kedro | This post explains how Kedro can guide an analytics team to follow best practices and avoid technical... | 0 | 2023-04-19T12:59:06 | https://kedro.org/blog/introducing-your-new-team-lead-kedro | python, datascience, kedro, opensource |
This post explains how Kedro can guide an analytics team to follow best practices and avoid technical debt.
In a recent article, I explained that [following software principles can help you create a well-ordered analytics project](https://towardsdatascience.com/five-software-engineering-principles-for-collaborative-data-science-ab26667a311) to share, extend and reuse in the future. In this post we'll review how you can benefit from using Kedro as a toolbox to apply best practices to data science code.
## How data science projects fail
As data scientists, we aspire to unlock valuable insights by building
well-engineered prototypes that we can take forward into production.
Instead, there is a tendency for us to make poor engineering decisions
in the face of tight deadlines or write code of dubious quality through
a lack of expertise.
The result is [technical debt](https://www.splunk.com/en_us/data-insider/what-is-tech-debt.html) and prototype code that is difficult to understand,
maintain, extend, and fix. Projects that once looked promising fail to transition past the experimental stage into production.
> "A cycle of quick and exciting research leads to high expectations of
> great improvement, followed by a long series of delays and
> disappointments where frustrating integration work fails to recreate
> those elusive improvements, made all the worse by the feeling of sunk
> costs and a need to justify the time spent."
>
> Joe Plattenburg, Data Scientist at Root Insurance
## How to write well-engineered data science code
When you start to cut code on a prototype, you may not prioritize
maintainability and consistency. Adopting a team culture and way of
working to minimize technical debt can make the difference between
success and failure.
Some of the most valuable techniques a data scientist can pick up are
those that generations of software engineers already use, such as the
following guidelines:
**Use a standard and logical project structure**: It is easier to
understand a project, and share it with others, if you follow a standard
structure.
**Don't use hardcoded values**: Instead, use precisely named constants
and put them all into a single configuration file so you can find and
update them easily.
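For example, a sketch of this idea using a single JSON configuration file (the file name and parameter values are made up for illustration):

```python
import json
from pathlib import Path

# Instead of scattering magic numbers through the code, keep them in one
# config file that the whole project reads from.
Path("parameters.json").write_text(json.dumps(
    {"test_size": 0.33, "random_state": 3, "target_column": "price"},
    indent=2,
))

# Anywhere in the project, load the constants from the single source of truth:
params = json.loads(Path("parameters.json").read_text())
print(params["test_size"])  # 0.33
```

When a value changes, you update it in one place rather than hunting through the codebase.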
**Refactor your code**: In data science terms, it often makes sense to
use a Jupyter notebook for experimentation. But once your experiment is
done, it's time to clean up the code to remove elements that make it
unmaintainable, and to remove accidental complexity. Refactor the code
into Python functions and packages to form a pipeline that can be
routinely tested to ensure repeatable behaviour.
> "Testing after each change means that when I make a mistake, I only
> have a small change to consider in order to spot the error, which
> makes it far easier to find and fix."
>
> Martin Fowler, Author of Refactoring: Improving the Design of Existing
> Code
**Make code reusable by making it readable**: Write your pipelines as a
series of small functions that do just one task, with single return
paths and a limited number of arguments.
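A minimal illustration of this style, with made-up names and data:

```python
# Each function does one task, takes few arguments, and has a single
# return path, so each step can be tested in isolation.

def clean(records):
    """Drop records that are missing the value field."""
    return [r for r in records if r.get("value") is not None]

def total(records):
    """Sum the value field across the remaining records."""
    return sum(r["value"] for r in records)

def run_pipeline(records):
    """Compose the small steps into a readable pipeline."""
    return total(clean(records))

raw = [{"value": 2.0}, {"value": None}, {"value": 3.5}]
print(run_pipeline(raw))  # 5.5
```

Small, single-purpose functions like these are exactly what slot naturally into a pipeline later on.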
Many data scientists say they've learned from their colleagues through
pair programming, code reviews and in-house mentoring that enables them
to build expertise suitable to their roles and requirements.
We see Kedro as the always-available team lead that steers the direction
of the analytics project from the outset and encourages use of a
well-organized folder structure, software design that supports regular
testing, and a culture of writing readable, clean code.
## What is Kedro?
Kedro is an open-source toolbox for production-ready data science. The
framework was born at QuantumBlack to solve the challenges faced
regularly in data science projects and promote teamwork through
standardised team workflows. It is now hosted by the [LF AI & Data
Foundation](https://lfaidata.foundation/) as an incubating project.
{% embed https://youtu.be/yEQqf3XUvzk %}
### Kedro = Consistent project structure
Kedro is built on the learnings of [Cookie Cutter Data Science](https://drivendata.github.io/cookiecutter-data-science/). It helps you to standardise how configuration, source
code, tests, documentation, and notebooks are organised with an
adaptable project template. If your team needs to build with multiple
projects that have similar structure, you can also create your own
Cookie Cutter project templates with Kedro starters.
### Kedro = Maintainable code
Kedro helps you refactor your business logic and data processing into
Python modules and packages to form pipelines, so you can keep your
notebooks clean and tidy.
[Kedro-Viz](https://demo.kedro.org) then visualises the pipelines to help you navigate them.
> "People started from scratch each time, the same pitfalls were
> experienced independently, reproducibility was time consuming and only
> members of the original project team really understood each
> codebase...
>
> We needed to enforce consistency and software engineering best
> practices across our own work. Kedro gave us the super-power to move
> people from project to project and it was game-changing. After working
> with Kedro once, you can land in another project and know how the
> codebase is structured, where everything is and most importantly how
> you can help".
>
> Joel Schwarzmann, Principal Product Manager, QuantumBlack Labs, [blog
> post](https://medium.com/towards-data-science/five-software-engineering-principles-for-collaborative-data-science-ab26667a311)
### Kedro = Code quality
Kedro makes it easy to avoid common code smells such as hard-coded
constants and magic numbers. The configuration library enables your code
to be reusable through data, model, and logging configuration. An
ever-expanding data catalog supports multiple formats of data access.
Kedro also makes it easy to keep your code quality up to standard, through
support for black, isort, and flake8 for code linting and formatting,
pytest for testing, and Sphinx for documentation.
### Kedro = Standardisation
Kedro integrates with standard data science tools, such as TensorFlow,
scikit-learn, or Jupyter notebooks for experimentation, and commonly
used routes to deployment such as Databricks.
## Summary
Kedro is an open-source Python toolbox that applies software engineering
principles to data science code. It makes it easier for a team to apply
software engineering principles to data science code, which reduces the
time spent rewriting data science experiments so that they are fit for
production.
**When you follow established best practice, you have a better chance of
success.**
Software engineering principles only work if the entire team follows
them. A tool like Kedro can guide you just like an experienced technical
lead, making it second nature to use established best practices, and
supporting a culture and set of processes based upon software
engineering.
**Look forward to greater collaboration and productivity with Kedro in
your team!**
## Find out more about Kedro
There are many ways to learn more about Kedro:
- Join our [Slack organisation](https://slack.kedro.org/) to reach out to us directly if you have a question or want to stay up to date with our news. There's an [archive of past conversations on Slack](https://www.linen.dev/s/kedro) too.
- [Read our docs](https://docs.kedro.org/) or look at the [Kedro source code on GitHub](https://github.com/kedro-org/kedro).
- Check out our "[Crash course in Kedro](https://www.youtube.com/watch?v=NU7LmDZGb6E)" video on YouTube.
Look out for an upcoming training session tailored to help your team get
on-board with Kedro.
| stichbury |
1,441,073 | Introduction to Web Animations with GSAP | GSAP, or GreenSock Animation Platform, is a powerful JavaScript animation library that is used by web... | 0 | 2023-04-19T20:33:56 | https://dev.to/topboyasante/introduction-to-web-animations-with-gsap-645 | gsap, webdev, javascript, react | GSAP, or GreenSock Animation Platform, is a powerful JavaScript animation library that is used by web developers and designers to create interactive and engaging websites. GSAP is known for its versatility, speed, and ease of use, making it a popular choice among developers worldwide.
Today, we're going to learn the basics of GSAP, and get our hands dirty with some animations.
## Prerequisites
I recently asked a fellow developer for his secret to learning GSAP. "It's just CSS!", he said.
For one to get good at GSAP, a solid proficiency in CSS is needed. The stronger your CSS, the more complex animations you can make!
We will be doing this tutorial with React, and so a good knowledge of it will be needed as well.
## Setting up GSAP in your React Project
Setting up GSAP in a React file involves a few steps, but it is a straightforward process.
1. Install GSAP into your React app:
This is done by running `npm install gsap` from your terminal.
2. Import the GSAP module:
In your React file where you want to use GSAP, import the gsap module at the top of the file:
`import { gsap } from "gsap";`
We will be making a really simple animation like the one shown below:

## Building the Components:
This page has two components: A Navigation Bar and a large text.
Here's what the navigation bar looks like in react:

And the Large Text:

## Adding GSAP
The next thing to do is to add your animations to your component. When working with GSAP, I discovered that GSAP can read and execute animations even when they're not written in the component being targeted. Cool, right?
With that in mind, let's add our animations to our Large Text component. We'll do that in a useEffect Hook:

The first part animates the navigation bar by moving it from a position 30% off the top of the screen to its original position at the top of the screen over a duration of 1 second.
The second part animates the large text by moving it from a position 500 pixels below its original position to its original position over a duration of 1.8 seconds. The ease option is used to add a smoothing effect to the animation, making it look more natural.
The complete code is shown below:

## Understanding the Animation: The GSAP Object
The `gsap` object is the main object provided by the GSAP library. It contains all of the methods and properties needed to create powerful animations such as `gsap.to()`, `gsap.from()`, and `gsap.fromTo()`, and is the primary way to interact with the GSAP library in your JavaScript code.
`gsap.fromTo()` is a method provided by the GSAP library that allows you to define the starting and ending states of an animation in a single call. This method is a convenient way to create more complex animations that require multiple changes to an element's properties.
The `fromTo()` method takes three arguments:
- The target element(s) to animate, specified as a CSS selector or DOM element.
From our animation, our target elements are our navigation bar element with a class of ".navbar", and our large text, with a class of ".header".
- An object defining the starting properties of the animation, which in our case would be:
`{y:"-30vh"}` for our navigation bar, and:
`{y:500}` for our large text.
- An object defining the ending properties of the animation, which in our case would be:
`{ y:0, duration:1,}` for our navigation bar, and:
`{ y: 0,ease: "power3.out"}` for our large text.
**gsap.timeline()**
`gsap.timeline()` is a method provided by the GSAP library that creates a timeline instance for sequencing multiple animations. The timeline allows you to control the timing and sequencing of multiple animations as a group, and you can add, remove or adjust individual animations within the timeline.
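As a sketch, the two tweens could be sequenced on a timeline so the large text animates only after the navbar finishes (using the `.navbar` and `.header` selectors from the example):

```js
const tl = gsap.timeline();

tl.fromTo(".navbar", { y: "-30vh" }, { y: 0, duration: 1 })
  .fromTo(".header", { y: 500 }, { y: 0, duration: 1.8, ease: "power3.out" });
```

By default, each animation added to a timeline starts when the previous one ends, which is what makes timelines convenient for sequencing.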
## Conclusion
GSAP is a powerful JavaScript library that enables web developers to create complex and engaging animations and interactions. It offers a range of animation tools and features, such as timeline controls and easing functions, and is easy to learn and use. Whether you're a beginner or an experienced developer, GSAP is a versatile and reliable animation library that can enhance the user experience of your web projects.
| topboyasante |
1,441,075 | Five Minute Maintainers: Meet Seth Michael Larson! | Our new five minute maintainer series shares the journeys and opinions of the maintainers behind some... | 0 | 2023-04-19T14:17:54 | https://community.tidelift.com/tidelift/five-minute-maintainers-meet-seth-michael-larson-3oeh | opensource, python, github | Our new five minute maintainer series shares the journeys and opinions of the maintainers behind **some of the most widely used projects in open source.** We ask five big questions in five minutes, with the goal of elevating the voices of open source maintainers who create the open source projects we all rely on.
{% embed https://www.youtube.com/watch?v=Yz4LiydGctg %}
**Introducing Seth Michael Larson!** Seth maintains urllib3 and a host of useful packages within the Python ecosystem. Watch to learn more about Seth and his journey in open source!
**Are you an open source maintainer who would like to be featured in our Five Minute Maintainer series?** Reach out to Lyn Muldrow, Senior Maintainer Advocate, at lyn@tidelift.com.
| lynmuldrow |
1,441,170 | Cross Account MSK Connectivity using AWS PrivateLink | Sometime back, a couple of months ago actually, I got into a call, and someone presented me with an... | 0 | 2023-04-19T15:00:00 | https://dev.to/kingastra/cross-account-msk-connectivity-using-aws-privatelink-1lb3 | aws, security, kafka, bigdata | Sometime back, a couple of months ago actually, I got into a call, and someone presented me with an interesting problem. In my line of work, I do more of risk and compliance, and that usually means that I get to know about a lot of stuff. I love working with AWS services, and when I got into this call and saw the architecture diagrams, my eyes lit up, this is exactly what I needed. I quickly opened my bookmarked AWS well architected framework and was ready to roll.
**The Problem.**
The team was working on a PoC architecture that involved AWS Managed Apache Kafka. The proposed solution required the MSKs to be set up in three regions, with one region acting as a source for the two remaining regions. Sounds simple enough. I had not worked with AWS MSKs before and I did not know if this was really accurate. On top of that, I had to ensure that the solution adheres to domestic and international privacy laws such as GDPR, and even FedRAMP. This was an interesting problem. So I ended the call and got into another call with one of my teammates who is much more experienced as a security architect; she always has the answers. We brainstormed for a bit and got some really interesting stuff. The whole point of sharing this is the hope that someone out there finds it useful someday.
**What Exactly is AWS MSK?**
“…Apache Kafka is an open-source, distributed event streaming platform commonly used for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. However, Apache Kafka clusters can be challenging to set up, scale, and manage in production. When you run Apache Kafka on your own, you need to provision servers, configure Apache Kafka manually, replace servers when they fail, orchestrate server patches and upgrades, architect the cluster for high availability, ensure data is durably stored and secured, set up monitoring and alarms, and carefully plan scaling events to support load changes…” This is the definition of AWS MSK from the AWS Big Data [Blog](https://aws.amazon.com/blogs/big-data/how-goldman-sachs-migrated-from-their-on-premises-apache-kafka-cluster-to-amazon-msk/). In essence, MSK allows one to ingest and process log and event streams, form real-time, centralized, and privately accessible data buses, and generally power event-driven systems.
**Our Solution to the challenge.**
In the end, we came across a blog post that described How Goldman Sachs had done an implementation of the MSK in their environment. Earlier, I had thought that simply doing a VPC peering and the magic of lambdas would solve our problem. My colleague analyzed this approach, however, and concluded that this was a bad practice since VPC peering is more suited for environments that have a high degree of trust between the parties that are peering the VPCs.
This is mostly because after a VPC peering connection is established, the peered networks have broad access and trust between them, with resources in either VPC able to initiate a connection; this is from the AWS Big Data [blog](https://aws.amazon.com/blogs/big-data/how-goldman-sachs-migrated-from-their-on-premises-apache-kafka-cluster-to-amazon-msk/). This sounds bad, right? A potential recipe for GDPR violations, perhaps? We are responsible for implementing fine-grained network access controls with SGs to make sure that only specific resources intended to be reachable are accessible over a VPC peering connection. There were other considerations against establishing a peering, and we just had to abandon this approach. Read the AWS [Blog](https://aws.amazon.com/blogs/big-data/how-goldman-sachs-migrated-from-their-on-premises-apache-kafka-cluster-to-amazon-msk/) to find out more.
To eliminate this overhead, the [publication](https://aws.amazon.com/blogs/big-data/how-goldman-sachs-migrated-from-their-on-premises-apache-kafka-cluster-to-amazon-msk/) on how Goldman Sachs built cross-account connectivity to their Amazon MSK clusters with AWS PrivateLink came in handy. This was extremely useful and gave us an easier alternative to implement this MSK broker in a compliant and secure manner. Since then, I have read quite a lot about MSKs, and I feel like an MSK expert! I am kidding, I prefer the security side of things! That's it, that's the story, for now.

| kingastra |
1,441,182 | What is Object Reference in JavaScript | If you are yet to get a full grasp of object reference in JavaScript, this article will break it down... | 0 | 2023-04-20T12:03:56 | https://dev.to/odudev/what-is-object-reference-in-javascript-43oc | object, javascript, oop, coding | If you are yet to get a full grasp of object reference in JavaScript, this article will break it down for you. Objects are regularly used by developers, and thus their usage needs to be properly understood.
## Primitive data types
Primitive data types (undefined, null, string, number, boolean and symbol) are usually passed as values. In contrast, objects are passed by reference. This subtle difference can have a big effect on your code if misunderstood at any point.
Let's quickly have a look at how strings are passed for example.

Looking at the image, the variables `fName` and `firstName` both consist of a string `John`. This also means that `fName` and `firstName` are equal. Looking at this in code:
```
let fName = "John"
let firstName = "John"
console.log (fName === firstName) // true
```
Therefore, we can conclude that primitive data types are value-based: if the values are equal, then the variables are also equal. We used strings as an example in the above code block; you can also try other primitive data types.
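To see the "passed as values" behaviour in action, here is a small sketch (the variable names reuse the earlier example): copying a primitive and then reassigning the copy leaves the original untouched.

```javascript
let fName = "John";
let firstName = fName; // the value is copied, not referenced
firstName = "Jane";    // reassigning the copy...

console.log(fName);     // John  (...does not affect the original)
console.log(firstName); // Jane
```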
## Object reference
As opposed to primitive data types, objects are reference-based, which means that objects are assigned to variables by reference.

The object properties are stored in the computer memory. These properties can only be accessed through a reference. You can view reference as the door to a stored property in the computer memory. From the image, one would think that `obj1` should be equal to `obj2`. Let's look at this in a code:
```
const obj1 = {
fName : "Doe"
}
const obj2 = {
fName : "Doe"
}
console.log (obj1 === obj2) // false
```
The above code snippet gives a `false` boolean result, because the reference to the object properties are different. You can also see this scenario as two persons with the same name "John" with a lot of physical resemblance but they are from different family background (reference).
If you change the value of `obj1`, then you realize that `obj2` doesn't change:
```
const obj1 = {
fName : "Doe"
}
const obj2 = {
fName : "Doe"
}
obj1.fName = "Afees"
console.log ( obj1.fName ) // Afees
console.log ( obj2.fName ) // Doe
```
## Object reference copy
If you then go ahead to copy a reference from one variable to another, then you have also copied the object properties.

In code:
```
const obj1 = {
fName : "Doe"
}
const obj2 = obj1 // obj1 reference copied to obj2
console.log (obj1 === obj2) // true
```
By changing the properties of `obj1`, you have also changed the properties of `obj2`:
```
const obj1 = {
fName : "Doe"
}
const obj2 = obj1 // obj1 reference copied to obj2
obj1.fName = "Afees"
console.log ( obj1.fName ) // Afees
console.log ( obj2.fName ) // Afees
```
Finally, variables might be equal or unequal depending on whether they share the same reference or not. However, the property values stored in memory (if they are the same) will always be equal when compared. For example:
```
const obj1 = {
fName : "Doe"
}
const obj2 = {
fName : "Doe"
}
console.log ( obj1 === obj2 ) // false
console.log ( obj1.fName === obj2.fName) // true
```
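The same reference behaviour applies when objects are passed to functions. A minimal sketch (the `rename` function is my own illustration, not one of the article's examples): mutations made inside the function are visible to the caller, because both share one reference.

```javascript
// Passing an object to a function passes its reference,
// so mutations inside the function affect the caller's object.
function rename(person) {
  person.fName = "Afees";
}

const obj1 = { fName: "Doe" };
rename(obj1);
console.log(obj1.fName); // Afees
```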
## Wrapping up
In this article, we have explicitly captured the concept of reference and reference copy for JavaScript objects. The more you use objects in your code, the better your understanding of object reference. Happy coding!!!
| odudev |
1,441,215 | A Beginners Guide on How to connect a Raspberry Pi Simulator to a Device on IoT Hub | Internet of Things IoT, Internet of Things, is the ability of connected devices to communicate with... | 0 | 2023-04-28T15:09:20 | https://dev.to/yemmyoye/a-beginners-guide-on-how-to-connect-a-raspberry-pi-simulator-to-a-device-on-iot-hub-36bm | iot, iothub, raspberrypisimulator, azureiothub | **Internet of Things**
IoT, Internet of Things, is the ability of connected devices to communicate with each other.
Azure IoT Hub is a bidirectional messaging centre hosted in the cloud, where messages from an IoT application and its attached IoT devices are collected over the internet. The internet can be used to connect any device that is compatible with the required sensors and software.
Let's move on to our task of the day!
**STEPS**
(1) Log in to your Azure Portal with your azure account

(2) Type IoT Hub in the Search box and click on the IoT hub displayed

(3) Click on Create

(4) Complete the following details as outlined:
**Project Details**
**Subscription:** Azure Pass Subscription(as the case may be)
**Resource Group:** Create a resource group. I will name it AprilWorkload

**Instance Details:** I will use YemmyIoT
**Region:** Select the Region where you want the IoT Hub to reside. Here, I will select East US

Leave the **Tier** and **Daily Message Limit** at default,
i.e. **Tier:** Standard
**Daily Message Limit:** 400,000

(5) Click on Review and Create

(6) Click on Create and wait for it to be fully deployed


(7) Go To Resource

At this point, note the following:
IoT Hub Usage:
Message Used Today: 0
IoT Device: 0
These indicate that no messages have been sent and no devices have been connected

(8) On the IoT Hub created, click on Devices and then click Add Device


Under the Device ID, create a name for the device. I will name it Abby
Leave other parameters at default and click on Save

(9) Click on IoT device **Abby** just added to the IoT Hub

(10) Go to the Primary Connection String, click on the eye symbol to view the Connection String and then copy the String

(11) At this point we will need to connect a Raspberry Pi Simulator. Go to your web browser and search for it.
Select "Raspberry Pi Azure IoT Web Simulator".

A Raspberry Pi Simulator enables you to test out your projects without the use of a physical Raspberry Pi. This is because the Raspberry Pi and other electronic components in your project are digitally emulated. It is economical, and it also saves time and effort.
(12) Go to the Raspberry Pi Simulator opened on your browser, and replace the placeholder in **Line 15** with the Primary Connection String you copied in Step 10 above



(13) Click on Run so that the Raspberry Pi Simulator starts sending messages to the IoT Hub

The red light of the Simulator will start blinking, indicating that messages are being sent to the Azure IoT Hub
(14) To confirm the messages sent as evidence that the Raspberry Pi has been connected to the Azure IoT Hub, go back to the Overview page on the IoT Hub and check the IoT Hub Usage

Message Used Today: 239
IoT Device: 1
Voila! Messages have been sent. The Raspberry Pi Simulator has been connected to the Azure IoT Hub
| yemmyoye |
1,441,455 | From Idea to Deployment: How I Built Devlinks | I recently finished building devlinks, a developer focused link aggregator similar to Linktree and... | 0 | 2023-04-23T13:19:46 | https://dev.to/killswitchh/from-idea-to-deployment-how-i-built-devlinks-4no | svelte, hackathon, programming, webdev |
I recently finished building [devlinks](https://www.devlink.pro/), a developer focused link aggregator similar to Linktree and Beacons. I would like to share the process of how I converted this idea I had into a fully functional website within a month. I will try to keep this as beginner friendly as possible so everyone despite their proficiency in coding can follow along.
## The Idea
Honestly, this is the hardest phase for any project. The process for it is very simple: start by finding a problem that you are facing, and then ideate on what you could do to solve it.
With software engineering, you are never solving a problem for the first time. Break down your problem into smaller sub-problems, and chances are someone has already encountered each sub-problem and knows their way around it.
## The Problem
I'm active on GitHub and Leetcode, and I wanted to showcase all of my profiles in my resume. I started to question: is there some product that can help me put all of my links in a single page, which I can then link in my resume?
There were a bunch of websites that did the same, like Linktree, Beacons and Ko-fi. But all of these offerings missed something: they all catered to a common audience and were not specific to any group of people.
Whenever you widen the scope of anything, you won't be able to delve deep into it. I wanted something like Linktree, but focused on developers. I wanted to show more than just the link name on the landing page. I wanted the visitor to see more details about my GitHub and other developer-related socials.
Now that I have a defined problem statement, I can start working towards a solution.
## Development
Development of any product has a few stages
1. Feature Extraction
2. Design / Architecture
3. Implementation
4. Deployment
## Feature Extraction
Any part of application development starts by breaking the product into a set of features. Here are the features I wanted in my product.
```
1. User should be able to login and create a profile
2. They should be able to create and update a link page with their desired name
3. They should be able to add links to the page.
4. They should be able to view their created page in `<website_name>/<page_name>`.
5. If a link added is a developer related link (Github, Stack Overflow, Leetcode), the page should display additional information.
```
Now we have an overall idea of what we want in our application.
## Architecture
This stage of an application is where you decide what you build the application with and how the components of your applications are going to work together to solve your problem.
From the feature extraction step, I was able to isolate a few components for my application
1. A Database to store the user and link details
2. A back-end server that can talk to the database.
3. A customer facing front-end application to provide the user with an interface to talk to the backend.
### Database
We have some data to store, how we want to store the data is the next question we have to answer.
Since I didn't have many relations in my feature, I decided to go with a document based DB. But going ahead with a relational Database also wouldn't be a bad choice here.
Now that we decided the type of database, we should pick a database provider. I picked [Mongo Atlas](https://www.mongodb.com/atlas/database) because I already have an account there. But there are other viable options like [Planet Scale DB](https://planetscale.com/) or [Supabase](https://supabase.com/) which have a generous free tier offering.
We now want an ORM to talk to the database for us. [Prisma](https://www.prisma.io/) has been around this space for a while and has very robust integrations with a lot of database providers, including MongoDB, so it was a no-brainer to pick it.
### Framework
Now that the database is defined, we need a backend and a front-end to talk to the database. I'd like to point you to another [article](https://dev.to/killswitchh/building-an-app-using-5-javascript-frameworks-introduction-phc) I wrote which has me trying out different frame works.
I was extremely impressed by [SvelteKit](https://kit.svelte.dev/) while trying out different frameworks and wanted to build all of my apps using it. Try it out if you haven't, and give the creators some love.
SvelteKit is a framework which supports server-side rendering, so I used the server part of the application to integrate with the database and the client side of the application to interact with the user.
## Implementation
Great, now we have a backend layer and an ORM to talk to the database; all that's left is to set up the frontend interface for the same and start building.
Lets tackle our feature list one by one.
For the authentication, I used Supabase. I wanted users to login via Github and Supabase has a really good SvelteKit integration to handle the same.
The implementation part will have a lot of places where you question your sanity / your ability to code. These are phases every developer goes through and the best ones don't give up.
Here are a few learnings I thought will be useful for every developer
1. Be willing to unlearn
2. Deleting code is almost always better.
3. First, make it work then make it better.
## Deployment
I have the app ready and running locally; I want to set up deployment so that the site is live and available for anyone to use.
We have a lot of cloud providers who facilitate deployment. AWS, GCP, Azure have their own services that allow developers to deploy their site. We also have services like Netlify and Vercel that integrate with these cloud providers simplifying the deployment process.
[Vercel](https://vercel.com/) was my choice for this, since they have built-in support for SvelteKit integration, and also offer a generous free tier for all personal projects. I linked the deployment to the domain I bought from [hostinger.com](https://www.hostinger.com/)
And there we go! That's how [devlinks](https://www.devlink.pro/) was made. It so happened that Svelte was organizing a [hackathon](https://hack.sveltesociety.dev/) at the same time, so I used this as an entry for the same.
| killswitchh |
1,441,682 | Day 7. Singly Linked List | 876. Middle of the Linked List Given the head of a singly linked list, return the middle... | 0 | 2023-04-20T03:01:00 | https://dev.to/sosunnyproject/day-7-singly-linked-list-2om | leetcode, javascript |
## [876. Middle of the Linked List](https://leetcode.com/problems/middle-of-the-linked-list)
Given the head of a singly linked list, return the middle node of the linked list.
If there are two middle nodes, return the second middle node.
```js
Input: head = [1,2,3,4,5]
Output: [3,4,5]
Explanation: The middle node of the list is node 3.
Input: head = [1,2,3,4,5,6]
Output: [4,5,6]
Explanation: Since the list has two middle nodes
with values 3 and 4, we return the second one.
```
### Pseudo Code
- something I didn't know: `head.next` is the next node, which carries the rest of the list, not just the next value.
- `head = [1,2,3,4,5], head.val = 1, head.next=[2,3,4,5]`
```js
var middleNode = function(head) {
// check the total length of initial ListNode
// middle index is totalLen/2
// get the ListNode from middle to the end
// console.log(head, head.val, head.next)
// head: val ~ rest
// head.val : val only
// head.next: rest only (no val included)
}
var checkLength = function(head) {
// init counter
// until current value is null OR
// until there's no more next, increment the counter
// return the counter
}
```
### My Attempt
**Things about ListNode (different from Array)**
1. You cannot access the length of the entire nodelist, so we need to traverse the entire ListNode and increment the counter
2. Be mindful about the LinkedList definition given in the problem set
3. Be mindful of while loop condition(when to stop the counter), especially inside `checking the length function`.
```js
// Definition for singly-linked list.
function ListNode(val, next) {
this.val = (val===undefined ? 0 : val)
this.next = (next===undefined ? null : next)
}
```
```js
/**
* @param {ListNode} head
* @return {ListNode}
*/
var middleNode = function(head) {
let totalLen = checkLength(head);
let mid = ~~(totalLen/2);
// ~~: same as Math.floor, but efficient
let counter = 0;
while(counter < mid) {
head = head.next; // update to next ListNode
counter += 1;
}
return head;
};
var checkLength = function(head) {
let counter = 1;
// start with 1 because
// while loop ends before counting the last element,
// which has null as next
while(head.next !== null) {
counter += 1;
head = head.next;
}
return counter;
}
```
### Improve
- So I kind of got caught up with the fact that this is `singly linked list` and didn't think too much about `two pointers theme`.
- There seems to be a much simpler way to solve this.
1. Array
2. two pointers: slow and fast
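A sketch of the slow/fast two-pointer idea mentioned above (the function name `middleNodeTwoPointers` is mine): the fast pointer advances two nodes for every one node of the slow pointer, so when fast runs off the end, slow sits on the middle node, with no separate length-counting pass needed.

```javascript
// Definition for singly-linked list, reused from the problem statement.
function ListNode(val, next) {
    this.val = (val === undefined ? 0 : val);
    this.next = (next === undefined ? null : next);
}

var middleNodeTwoPointers = function(head) {
    let slow = head;
    let fast = head;
    // fast moves two steps for every one step of slow
    while (fast !== null && fast.next !== null) {
        slow = slow.next;
        fast = fast.next.next;
    }
    return slow; // the second middle node when the length is even
};
```

For `[1,2,3,4,5]` this returns the node with value 3, and for `[1,2,3,4,5,6]` the node with value 4, matching the problem's "second middle" requirement.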
### Relevant problem sets
[2095. Delete the Middle Node of a Linked List](https://leetcode.com/problems/delete-the-middle-node-of-a-linked-list/description/)
[2130. Maximum Twin Sum of a Linked List](https://leetcode.com/problems/maximum-twin-sum-of-a-linked-list/description/) | sosunnyproject |
1,441,903 | How to integrate ChatGPT in the .NET project: A step-by-step guide | ChatGPT is a comprehensive language model trained by brilliant minds at OpenAI. It uses deep learning... | 0 | 2023-04-20T08:37:50 | https://www.ifourtechnolab.com/blog/how-to-integrate-chatgpt-in-the-net-project-a-step-by-step-guide | webdev, chatgpt, dotnet, beginners | ChatGPT is a comprehensive language model trained by brilliant minds at OpenAI. It uses deep learning to generate human-like responses to natural language inputs. One area where ChatGPT has the potential to provide significant benefits is its integration with .NET, a popular software framework used for building applications for Windows, mobile, web, and gaming platforms. This powerful combination can enable software companies to tap into the remarkable capabilities of AI and provide customers with an unparalleled user experience. The possibilities are endless, from enhancing customer support to automating tasks, and even enabling more intuitive interactions with software applications.
Artificial intelligence has emerged as a new means for fostering customer experience and corporate efficiency. Businesses have begun using Chatbots to improve customer service and give clients an easy way to interact with a firm without any human support. The ChatGPT language model was built with the same logic to respond the user queries in conversational English.
As we've seen a number of platforms, such as DOTNET, React.js, and Angular, acquiring popularity in the market, now it's time to evaluate ChatGPT's compatibility with them.
In this blog, we will examine the integration of ChatGPT with the .NET framework and how it may help in simplifying [.NET software development](https://www.ifourtechnolab.com/dot-net-development-company).
## Benefits of integrating ChatGPT into your .NET project
Integrating ChatGPT into your .NET project can offer several benefits, including:
**1. Increased Efficiency:** With its Artificial intelligence, you can automate repetitive tasks, such as answering FAQs, handling users with L2 support, and so on. Thus, it helps your team to focus on more complex tasks.
**2. Improved Customer Support:** ChatGPT enables the product support team to provide instant and accurate responses to customer queries and improve customer satisfaction.
**3. Enhanced Personalization:** By integrating ChatGPT into your .NET project, you can provide personalized experiences to your customers, based on their preferences and behaviors.
The below figure highlights the main advantages of integrating ChatGPT with .NET.

**4. Cost Savings:** Automating customer support tasks with ChatGPT can help reduce costs associated with manual labor and errors.
**5. Improved Data Analytics:** This AI platform can collect and analyze customer data, providing valuable insights into customer behavior and preferences, which can inform business decisions.
**6. Multilingual Support:** It can provide support in multiple languages, allowing you to serve customers from different regions and cultures.
**7. 24/7 Availability:** ChatGPT can provide 24/7 support to your customers, improving their experience and increasing customer loyalty.
Thus, integrating ChatGPT into the .NET project is the best option for [custom software development companies](https://www.ifourtechnolab.com/custom-software-development-company) to boost their development process. In addition, it can help improve customer experience, increase efficiency, and provide valuable insights into customer behavior, while reducing costs associated with manual labor.
## Tools used in this process
- Visual Studio
- .NET Framework
- ChatGPT API
## ChatGPT integration in the .NET development project
The ChatGPT language model from OpenAI was designed to respond to input in natural language and produce writing that resembles that of a person. Using transformer-based language modeling, ChatGPT was trained on a vast corpus of online content, including books, articles, news, and web pages.
You may build sophisticated chatbots, text analyzers, and even code generators using ChatGPT to enhance your apps with natural language processing capabilities.
It is capable of:
- Creating textual documents like emails
- Scripting Python
- Creating conversational AI
- Answering questions
- Adding a natural language user interface to your software
- Acting as an instructor who teaches a multitude of subjects
- Identifying relevant documents from a list of inquiries
- Constructing characters for video games, among other things
- Creating code samples
- Summarizing text
- Even modernizing existing business applications
There are .NET libraries specifically designed to work with the OpenAI APIs. By using these, you can simplify the integration process and allow apps to fully utilize ChatGPT's capabilities.
### Read More: [Best Tips and Practices for .NET Performance Optimization and Scalability](https://www.ifourtechnolab.com/blog/best-tips-and-practices-for-net-performance-optimization-and-scalability)
### Create an account on OpenAI Chat
We need to sign up for an OpenAI account before we can begin using ChatGPT.
Given below are the steps to register for a ChatGPT account.
Step – 1: Go to the OpenAI website (https://chat.openai.com/auth/login)
Step – 2: You can sign up and log in from the above website
Step – 3: You may create an account using your phone number, email address, Google or Microsoft account, or you can sign in if you already have one.

Step – 4: You will be sent to a screen where you must enter your password when the account creation is complete.
### Want to [hire .NET developers](https://www.ifourtechnolab.com/hire-dot-net-developer) for your project?
Step – 5: Use a secure password, and make sure your email address is legitimate to prevent misunderstandings.
Step – 6: Now that you have logged into ChatGPT, you may examine the ChatGPT performance and chat samples.
### Create an OpenAI key
- Create an account on the OpenAI platform to generate your own OpenAI key.
To use the API, you may require an API key from OpenAI.

When you get the API key, you can use it to authenticate API requests.
Next, you need to create a new secret key as shown in the following figure.

Once the secret key has been generated, you can review the details of your secret key, including the name, private key, date it was produced, and the last time it was used as shown in the following figure.

### Create an ASP.NET application
To create the ChatGPT demo application, first, you need to create a new C# project in Visual Studio.
### Installation of OpenAI API
Installing the OpenAI C# SDK is the first step towards integrating ChatGPT. To do this with the NuGet package management, use the following command in the Package Manager Console:
```
Install-Package OpenAI
```
### Initialize the SDK
You must initialize the OpenAI C# SDK after installing it by providing your OpenAI API key. You may do this by creating an instance of the OpenAIClient class and passing it your API key as a parameter.
```
using OpenAI_API;
var openAi = new OpenAIAPI("YOUR_API_KEY");
```
## Start to add ChatGPT code in the application
The Chat API is accessed via OpenAIAPI.Chat. The ChatGPT API is now set up, so we can add code to the application. All you need to do is swap out YOUR_API_KEY for your OpenAI API key.
There are two ways to use the Chat Endpoint,
- Using simplified conversations
- With the full Request/Response methods.
### Using simplified conversations
The Conversation Class enables you to easily interact with ChatGPT by adding messages to a conversation and asking ChatGPT to respond.
```
using System;
using System.Threading.Tasks;
using OpenAI_API;
class Program
{
static async Task Main(string[] args)
{
// Instances of the APIAuthentication class can be created using your API key.
var authentication = new APIAuthentication("YOUR_API_KEY");
// APIAuthentication object used to create an instance of the OpenAIAPI class
var api = new OpenAIAPI(authentication);
// ChatGPT lets you start a new chat.
var conversation = api.Chat.CreateConversation();
// Add user input and receive a reply from ChatGPT
conversation.AppendUserInput("YOUR_INPUT_HERE");
var response = await conversation.GetResponseFromChatbot();
Console.WriteLine(response);
// Before closing the terminal window, await user input.
Console.ReadLine();
}
}
```
To test this code, replace "YOUR_INPUT_HERE" with your input text.
### Using the full Request/Response methods
Using OpenAIAPI.Chat, you have access to all of the Chat API's controls and functions through CreateChatCompletionAsync(). Use this function to retrieve a ChatResult, which mostly contains metadata.
If you only need the response text, you may use ToString() to obtain it. Check out this example where the ChatGPTTurbo model is used.
```
async Task<ChatResult> CreateChatCompletionAsync(ChatRequest request);
// for example
var result = await api.Chat.CreateChatCompletionAsync(new ChatRequest()
{
Model = Model.ChatGPTTurbo,
Temperature = 0.1,
MaxTokens = 50,
Messages = new ChatMessage[] {
new ChatMessage(ChatMessageRole.User, "Hello!")
}
});
// or
var result = await api.Chat.CreateChatCompletionAsync("Hello!");
var reply = result.Choices[0].Message;
Console.WriteLine($"{reply.Role}: {reply.Content.Trim()}");
// or
Console.WriteLine(result);
```
### Read More: [ChatGPT won't replace custom Add-in developers: 10 reasons why?](https://www.ifourtechnolab.com/blog/chatgpt-won-t-replace-custom-add-in-developers-10-reasons-why)
### Azure OpenAI
When using the Azure OpenAI Service, you must include both your model deployment ID and the name of your Azure OpenAI resource.
Configuration should look something like this for the Azure service:
```
OpenAIAPI api = OpenAIAPI.ForAzure("YourResourceName", "deploymentId", "api-key");
```
After that, you may utilize the API object as usual. Any of the other methods mentioned in the Authentication section above may also be specified as the APIAuthentication. This library does not yet support the AD-Flow; it only supports the API-key flow.
This brings us to the end of this blog. We hope this article has helped you understand the best practices for .NET integration with ChatGPT.
#### Conclusion
This article has gone through the ways in which ChatGPT can be integrated with the .NET framework and how it helps in simplifying [custom software development](https://www.ifourtechnolab.us/custom-software-development-company-usa). By following these easy steps, you should be able to integrate ChatGPT into your C# code and produce text completions using the model. Just keep in mind that the code above is an example; you may change it to suit your requirements.
The ChatGPT model is powerful, with many cutting-edge features, such as the ability to control completion length, temperature, and top-p. These variables can be modified to enhance the completions the model generates. You may also utilize other models from the OpenAI API as per your needs.
| ifourtechnolab |
1,441,937 | Enhanced Security in Node.js v20: The New Permission Model | As Node.js continues to grow in popularity, security has become an increasingly important aspect of... | 0 | 2023-04-22T07:31:46 | https://dev.to/andreasbergstrom/introducing-the-nodejs-permission-model-enhanced-security-and-granular-control-3md0 | node | As Node.js continues to grow in popularity, security has become an increasingly important aspect of application development. With the release of Node version 20, a new experimental security feature has been introduced: the Permission Model. In this blog post, we will dive into the Permission Model, its capabilities, and how it can help you create more secure Node.js applications.
For a web service or background worker that mostly relies on network I/O, it makes little sense to let it run unrestricted, as most tend to do. While this might be an already contained problem in production, where it is (hopefully) running in its own container, it is a whole other issue when running on your local machine, where any 3rd-party library (that you are knowingly or unknowingly using) has access to your entire filesystem.
While this can and should already be solved by running node in a restricted user shell, most do not. And besides file system access, we also want control over how and when child processes and worker threads are created, as well as over any native addons our node code uses.
## What is the Node.js Permission Model?
The Permission Model is an experimental mechanism that allows developers to restrict access to specific resources during execution. This feature aims to provide more control over various aspects of your Node.js applications, including:
- Restricting access to the file system (read and write)
- Restricting access to child_process
- Restricting access to worker_threads
- Restricting access to native addons
## Getting Started with the Permission Model
To start using the Permission Model, you will need to enable it with the --experimental-permission flag when running your Node.js application. This flag will restrict access to the file system, spawn processes, and use node:worker_threads by default.
```bash
node --experimental-permission index.js
```

<figcaption>When enabling the permission model, node won't even be able to read the file you specifically asked it unless you also grant it a matching fs-read permission.</figcaption>
## Granular Control with Flags
The Permission Model comes with several flags that allow you to grant specific permissions for your application:
`--allow-fs-read` and `--allow-fs-write`: Grant read and write access to the file system.
Example: Allow read and write access to the entire file system.
```bash
node --experimental-permission --allow-fs-read=* --allow-fs-write=* index.js
```
However, testing this on zsh and macOS at the moment will just result in a:
`zsh: no matches found: --allow-fs-read=*`
Instead I had to use `--allow-fs-read=/` to make it function as intended.
Example: Allow write access to the /tmp/ folder and read access to the /home/index.js file.
```bash
node --experimental-permission --allow-fs-write=/tmp/ --allow-fs-read=/home/index.js index.js
```
`--allow-child-process`: Grant access to the child_process module.
```bash
node --experimental-permission --allow-child-process index.js
```
`--allow-worker`: Grant access to worker_threads.
```bash
node --experimental-permission --allow-worker index.js
```
## Checking Permissions at Runtime
When the Permission Model is enabled, you can use the new permission property of the process object to check if a certain permission has been granted at runtime:
```javascript
process.permission.has('fs.write'); // true
process.permission.has('fs.write', '/home/nodejs/protected-folder'); // true
```
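Since `process.permission` is only defined when the model is enabled, code that should run both with and without the flag can check for it defensively. A hedged sketch (the `canWrite` helper and path are my own, not part of the Node API):

```javascript
// Hypothetical helper: check write permission for a path, treating
// "Permission Model not enabled" as full access, because in that case
// process.permission is undefined and Node grants unrestricted access.
function canWrite(path) {
  if (typeof process.permission === "undefined") {
    return true; // model not enabled: no restrictions apply
  }
  return process.permission.has("fs.write", path);
}

console.log(canWrite("/tmp/example.txt"));
```

Run without `--experimental-permission` this reports `true`; run with the flag, it reflects whatever `--allow-fs-write` grants were passed.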
## Comparing with Deno
With the introduction of the Permission Model in Node.js version 20, it's only natural to draw comparisons with Deno, which has built-in permissions from the get-go. In this section, we'll explore the similarities and differences between the Node.js Permission Model and Deno permissions, and how they impact security in application development.
### Node.js Permission Model
As we've previously discussed, the Permission Model in Node.js is an experimental feature that grants developers granular control over access to resources such as the file system, child_process, worker_threads, and native addons. With various flags like --allow-fs-read, --allow-fs-write, --allow-child-process, and --allow-worker, developers can specify paths, use wildcard patterns, and check permissions at runtime.
### Deno Permissions
Deno, a runtime for JavaScript and TypeScript, was designed with security in mind from the beginning. As a result, it has a built-in permission system that enables developers to control access to various resources. The permissions in Deno are explicitly granted through flags, such as:
`--allow-read`: Grant read access to the file system.
`--allow-write`: Grant write access to the file system.
`--allow-net`: Grant network access.
`--allow-env`: Grant access to environment variables.
`--allow-plugin`: Grant permission to load plugins.
`--allow-hrtime`: Grant permission to use high-resolution time measurement.
### Comparing Node.js Permission Model and Deno Permissions
**Maturity**: The Node.js Permission Model is still an experimental feature, and its implementation may change in future releases. On the other hand, Deno permissions have been a part of the runtime since its inception and are more mature and stable.
**Security Focus**: Deno was designed with a strong focus on security, which is evident in its default permissions. By default, Deno scripts run in a sandbox without access to the file system, network, or environment variables. In contrast, Node.js has historically granted applications more access by default, and the Permission Model is an effort to mitigate potential security risks.
**Permission Types**: While both Node.js and Deno provide granular control over access to resources, Deno permissions are more extensive, covering areas such as network access, environment variables, and plugin loading. Node.js, on the other hand, currently focuses on the file system, child_process, worker_threads, and native addons.
**Checking Permissions**: Both Node.js and Deno allow developers to check permissions at runtime. In Node.js, you can use the `process.permission` property, while in Deno, you can use the `Deno.permissions.query()` method.
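As a rough sketch of that difference — `describePermissionApi` is a made-up helper, and it assumes the documented shapes of `process.permission.has()` and `Deno.permissions.query()` — the two runtime checks can be wrapped behind one function that works in either runtime:

```javascript
// Reports which permission API is available and what it says about
// read access. Returns a plain string so it is easy to log or assert on.
async function describePermissionApi() {
  if (typeof Deno !== 'undefined') {
    // Deno: permissions are queried asynchronously by name.
    const status = await Deno.permissions.query({ name: 'read' });
    return `deno:${status.state}`; // "granted", "prompt", or "denied"
  }
  if (typeof process !== 'undefined' && process.permission) {
    // Node.js with --experimental-permission enabled.
    return `node:${process.permission.has('fs.read')}`;
  }
  return 'no-permission-model';
}

describePermissionApi().then((result) => console.log(result));
```

Under plain Node.js without the flag this falls through to `'no-permission-model'`, which reflects the "secure by default vs. open by default" contrast discussed above.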
The Node.js Permission Model brings more control and security to your applications by allowing you to restrict access to specific resources during execution. While this feature is still experimental and may change in future releases, it shows the commitment of the Node.js community to create more secure applications.
### Further reading
https://nodejs.org/en/blog/announcements/v20-release-announce
https://nodejs.org/api/permissions.html#process-based-permissions | andreasbergstrom |
1,442,011 | DataOps 2023: An Agile Future of Data Science | DataOps is a methodology that focuses on the management and operations of data science projects. As... | 0 | 2023-04-20T09:45:53 | https://dev.to/tanvisinghania/dataops-2023-an-agile-future-of-data-science-2gfc | dataops, datascience, bigdata | DataOps is a methodology that focuses on the management and operations of data science projects. As organizations are collecting more data than ever before, it is becoming increasingly important to manage and analyze it effectively, and that's where DataOps comes in.
DataOps is a relatively new approach to data management that is gaining a lot of attention in the field of data science. It combines agile development and data science principles to streamline the data management process. Its clear goal is to improve the speed and efficiency of data analysis projects by automating and streamlining the data pipeline.
**Why is DataOps Important?**
With the growing amount of data being generated, it is becoming increasingly challenging for organizations to manage and analyze it effectively.
1. DataOps allows data scientists to focus on what they do best - analyzing data - while leaving the more mundane tasks to automated processes.
2. It assists in improving the accuracy and consistency of data analysis by implementing standard procedures and checks.
3. It plays an important role in developing machine learning models. It allows data scientists to quickly test and iterate on different models, which can lead to more accurate and effective models.
4. DataOps can make model deployment easier.
5. It can help to improve the speed and efficiency of data analysis projects, while also improving the accuracy and consistency of data analysis.
6. It is a powerful tool for data science professionals looking to advance their careers by enhancing the quality and speed of their work.
7. DataOps automates and streamlines the data pipeline, which can include things such as data quality checks, data integration, and model deployment.
**How do Data Science Professionals Benefit from DataOps?**
The benefits of DataOps for data science professionals are significant.
• DataOps can help them improve their skills in data management and analysis, which adds more value to the organizations.
• It helps data science professionals work more efficiently and effectively, which can lead to better job satisfaction and career opportunities.
• By working closely together across teams, they can ensure that data is accurate and reliable and analysis is conducted in a timely manner. This proves beneficial for both teams and can lead to better outcomes for the organization.
DataOps is an incredible methodology that can help data science professionals improve the quality and speed of their work, which can lead to better career opportunities.
## **Top Data Science Certification**
If you are interested in pursuing a career in data science, it is critical to possess a good understanding of DataOps. And one way to gain this knowledge is through top data science certifications. There are several certifications available that can help you gain the skills you need to work effectively in DataOps.

The most-trusted name in the field of Data Science Certification is the **Certified Data Scientist Program (CDSP™)** offered by the **[United States Data Science Institute (USDSI®)](https://www.usdsi.org/)**. The certification is highly respected in the industry and empowers you with a competitive edge in your job search.
The **[CDSP™ certification](https://www.usdsi.org/data-science-certifications/certified-data-science-professional)** is a rigorous program that covers a wide range of data science topics, including data management, machine learning, and statistical analysis. It is designed for beginners and fresh graduates, possibly with no background in data science, who aspire to build their data science skills, and it enables them to demonstrate to potential employers a deep commitment to their envisioned career.
USDSI® is not just about the said certification. It also caters to mid-level data science professionals, seasoned data science officials aspiring to upskill, and non-industry professionals wishing to switch to a rewarding data science career trajectory. **[Certified Lead Data Scientist (CLDS™)](https://www.usdsi.org/data-science-certifications/certified-lead-data-scientist)** and **[Certified Senior Data Scientist (CSDS™)](https://www.usdsi.org/data-science-certifications/certified-senior-data-scientist)** are two of the other coveted programs that garner worldwide attention with massive enrolments.
By offering such graded certifications in data science, USDSI® has earned a reputation in the worldwide market as a leading and most-trusted provider of **Data Science Certifications**. These certifications are geared not only to set you on a successful career path but also to help you land a substantial data science role. No doubt, beginning with world leaders in charting an amplified career in data science is a future-proof decision to make today!
## Conclusion
Beginning right has always been a struggle! With essential USDSI® certifications, you are set for a heightened career trajectory in data science. As per the US BLS, data scientist jobs are predicted to grow by 36% over the decade from 2021 to 2031. DataOps is, no doubt, an agile and futuristic sub-domain of data science that is gaining a lot of attention in the field. It is a methodology that combines the principles of Agile development and data science to streamline the data management process. By utilizing DataOps, data science professionals can work more efficiently, improve their data management skills, and make themselves more valuable to organizations.
| tanvisinghania |
1,442,097 | The key things you need to successfully break into tech. | It can be difficult to break into the tech industry, especially for individuals who are just starting... | 0 | 2023-04-20T10:47:29 | https://dev.to/chidiebereomasi/the-key-things-you-need-to-successfully-break-into-tech-5gjp | webdev, javascript, programming, productivity | It can be difficult to break into the tech industry, especially for individuals who are just starting out. However, it is possible to find employment in the tech sector if you have the correct attitude, abilities, and resources. We'll talk about the requirements in this article if you want to succeed in the tech industry.
**1. A Passion for Technology**
You must truly love technology if you want to succeed in the tech business. That means being curious about how technology works, how it can be applied to solve problems, and how it can improve people's lives. A genuine passion for technology will keep you motivated and focused.
**2. Technical Skills**
Anyone looking to enter the IT sector needs to possess strong technical skills. Programming languages, algorithms, data structures, and software development techniques must all be thoroughly understood. Additionally, it's critical to keep up with the most recent developments in the field's trends and technologies. Technical skills can be learned independently, online, in boot camps, or as part of a computer science degree.
**3. A Strong Portfolio**
A portfolio is an assortment of your finest work that demonstrates your talents and expertise. You may differentiate yourself from the competition and show that you have the problem-solving skills needed for the job with a great portfolio. Personal projects, contributions to open-source projects, as well as work you've done for clients or prior employers, can all be included in your portfolio.
**4. Networking Skills**
Networking is crucial for getting a job in the technology sector. You need to network with other businesspeople in the field, go to tech events, and take part in online forums. You can find out about employment openings, receive references, and gain expert advice by networking.
**5. Soft Skills**
Non-technical abilities known as "soft skills" are crucial for breaking into the tech business. These include communication, problem-solving, time management, and working as part of a team — abilities essential for collaborating with other professionals in the industry.
**6. A Growth Mindset**
And finally, you need a growth mindset if you want to succeed in the tech business. This means you should be willing to try new things, accept challenges, and view failure as a chance to grow. Your ability to adapt and stay motivated will help you overcome the obstacles you'll encounter as you pursue a career in technology.
In conclusion, a combination of technical and non-technical abilities is needed to succeed in the tech industry. You must possess a solid portfolio, networking skills, a passion for technology, technical proficiency, soft abilities, and a growth mentality. You can enter the technology sector and have a fulfilling career by honing these abilities and being dedicated to your objectives. | chidiebereomasi |
1,442,103 | 𝐏𝐲𝐭𝐡𝐨𝐧 𝐑𝐨𝐚𝐝𝐦𝐚𝐩 🛣 | Beginner --->>> Expert Python is a powerful and versatile programming language that has... | 0 | 2023-04-20T11:03:18 | https://dev.to/cybertech/-483i | Beginner --->>> Expert
Python is a powerful and versatile programming language that has become popular for its readability, ease of use, and wide range of applications. Whether a beginner or an experienced programmer, many resources are available to help you learn and improve your Python skills.
 | cybertech | |
1,443,447 | CloudTrail in AWS | As more and more businesses are migrating their infrastructure and applications to the cloud, the... | 0 | 2023-04-21T11:44:39 | https://dev.to/shivamchamoli18/cloudtrail-in-aws-5044 | cloudcomputing, cloudsecurity, aws, infosectrain | As more and more businesses are migrating their infrastructure and applications to the cloud, the need for security and compliance has become paramount. To meet these requirements, AWS offers a service called CloudTrail, which provides a detailed record of all API calls made within an AWS account. This article explores its key features and benefits.

## **What is AWS CloudTrail?**
AWS CloudTrail is a service that provides a record of all API calls made within an AWS account, enabling logging, monitoring, troubleshooting, and compliance.
CloudTrail provides a comprehensive audit trail of all the API activity within an AWS account, including who performed what action when the action was performed, and what resources were affected by the action. This information can be used to monitor and detect security breaches, troubleshoot operational issues, and demonstrate compliance with industry and regulatory standards.
## **How does AWS CloudTrail work?**
AWS CloudTrail records and logs all API calls and related events within an AWS account. These logs can be stored in Amazon S3 or sent to Amazon CloudWatch Logs, and can be used for auditing, compliance, and security purposes. CloudTrail can also be integrated with other AWS services to trigger alerts and automate actions based on certain events, and its logs can be analyzed and visualized with other AWS tools.
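To make the log format concrete: a CloudTrail log file delivered to S3 is a JSON document with a top-level `Records` array, where each record carries fields such as `eventTime`, `eventSource`, `eventName`, and `userIdentity` (per the documented CloudTrail record format). The sample event and the `summarizeTrail` helper below are made up for illustration:

```javascript
// Hedged sketch: summarize the Records array of a CloudTrail log file,
// reducing each event to "when, who, and what" for quick review.
function summarizeTrail(records) {
  return records.map((r) => ({
    time: r.eventTime,
    actor: r.userIdentity && r.userIdentity.arn,
    action: `${r.eventSource}:${r.eventName}`,
  }));
}

// Minimal stand-in for a delivered log file.
const sample = {
  Records: [
    {
      eventTime: '2023-04-20T10:00:00Z',
      eventSource: 's3.amazonaws.com',
      eventName: 'PutObject',
      userIdentity: { arn: 'arn:aws:iam::123456789012:user/alice' },
    },
  ],
};

console.log(summarizeTrail(sample.Records));
```

The same few fields are what you would typically pivot on when answering "who performed what action, and when" from a real trail.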
## **Features of AWS CloudTrail:**
The following are some features of AWS CloudTrail:
- Provides a thorough audit trail of all API calls made within an AWS account
- Enables monitoring, troubleshooting, and compliance
- Logs can be stored in Amazon S3 and can be easily accessed and analyzed using various tools
- Highly scalable and customizable to meet the needs of any organization
- Can be integrated with other AWS services, such as CloudWatch and Lambda, to trigger automated actions based on certain events
- Can be secured through features such as encryption at rest using AWS KMS and access control using AWS IAM
- Can track changes to AWS resources using CloudTrail Insights, enabling users to identify and respond to unauthorized modifications
- Provides visibility into account activity across multiple AWS regions and accounts
## **Benefits of AWS CloudTrail:**
The following are some benefits of AWS CloudTrail:
- Provides a comprehensive audit trail of all activity within an AWS account
- Enables monitoring, troubleshooting, and compliance with AWS infrastructure
- Supports integration with other AWS services, such as CloudWatch and Lambda, for automation
- Scalable and customizable to meet the needs of any organization
- Logs can be stored in Amazon S3 for easy access and analysis
- Logs can be encrypted and access restricted for security
- Can track changes to AWS resources using CloudTrail Insights
## **Final words:**
AWS CloudTrail is a powerful tool for monitoring and auditing activity within an AWS account. Its detailed logging capabilities provide valuable insights and help ensure the security and compliance of cloud-based applications and infrastructure. [InfosecTrain](https://www.infosectrain.com/)'s [AWS](https://www.infosectrain.com/cloud/) training course provides a comprehensive understanding of how to enable, configure and analyze CloudTrail logs for AWS security. | shivamchamoli18 |
1,442,209 | Errors while Upgrading from springboot 2.5.8 to 2.6.0 | Hi guys, I'm working on a springboot project. it's working perfectly fine when i do a mvn clean... | 0 | 2023-04-20T14:12:45 | https://dev.to/amadou_badiane_0e4ebc2675/errors-while-upgrading-from-springboot-258-to-260-3l6c | java, spring, springboot | Hi guys,
I'm working on a Spring Boot project. It's working perfectly fine when I do a `mvn clean verify` with Spring Boot 2.5.8. But as soon as I upgrade to 2.6.0 in my pom.xml, it crashes during the tests (skipping tests with `-DskipTests` works fine in 2.6.0). Can you help me with that, please? (My goal is to upgrade to Spring Boot 3, but I'm stuck at 2.6.0.) Here is the error I'm getting:
```
# Created at 2023-04-20T16:09:58.241
System.exit() or native command error interrupted process checker.
java.lang.IllegalStateException: error [STOPPED] to read process 4920
at org.apache.maven.surefire.booter.PpidChecker.checkProcessInfo(PpidChecker.java:145)
at org.apache.maven.surefire.booter.PpidChecker.isProcessAlive(PpidChecker.java:116)
at org.apache.maven.surefire.booter.ForkedBooter$2.run(ForkedBooter.java:214)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
```
| amadou_badiane_0e4ebc2675 |