id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,743,608 | Building Serverless REST APIs on AWS: A Step-by-Step Guide with Lambda, API Gateway, and DynamoDB | Hello Devs, Today we are going to see how we can use AWS serverless infrastructure to create... | 0 | 2024-01-29T14:55:19 | https://dev.to/harsh_gajjar/building-serverless-rest-apis-on-aws-a-step-by-step-guide-with-lambda-api-gateway-and-dynamodb-12ce | aws, restapi, lambda, dynamodb |
Hello **Devs**,
Today we are going to see how we can use AWS serverless infrastructure to create REST APIs. This is just a basic demo: not something you would use in a production project, but fine for mini projects. It also stays within the free tier 🤟🏻
Let's start with a basic intro of all the services we are going to use.
## 1. AWS lambda function
- AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). Serverless computing allows you to run code without provisioning or managing servers. With AWS Lambda, you can execute your code in response to specific events, such as changes to data in an Amazon S3 bucket, updates to a DynamoDB table, or an HTTP request via API Gateway.
More information : [here](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html).
## 2. API Gateway
- Amazon API Gateway is a fully managed service provided by Amazon Web Services (AWS) that allows you to create, deploy, and manage APIs (Application Programming Interfaces) at scale. It acts as a gateway between your applications and the backend services, enabling you to create RESTful or WebSocket APIs.
More information : [here](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html)
## 3. DynamoDB
- Amazon DynamoDB is a fully managed NoSQL database service provided by Amazon Web Services (AWS). It is designed to provide fast and predictable performance with seamless scalability. DynamoDB is suitable for a variety of use cases, ranging from simple key-value stores to more complex applications with high read and write throughput requirements.
More information : [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html)
## Now let's start by doing 🤞🏼

**STEPS**
1. Create a Permission Policy
2. Create Execution Role for Lambda function
3. Create Function
4. Create a REST API using API Gateway
5. Create Resources on Your REST API
6. Create a DynamoDB table
7. Write and understand Code
8. Test the integration of API Gateway, Lambda, and DynamoDB.
**Optional**
9. Deploy (not on your domain because idk how to yet😉)
## **1. Create a Permission Policy**
- Go to your AWS console and search for **IAM**

- Click on **Policies**

- Create **New Policy**
- Select JSON and paste the following code
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:UpdateItem"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "",
      "Resource": "*",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow"
    }
  ]
}
```
- So what does this policy say? It grants whichever **service** it is attached to **access** to **DynamoDB** and permission to **create logs**.
- Click next and give it a name. I am naming it **lambda-apigateway-policy**.
- You can also add tags if you want.
## 2. Create Execution Role for Lambda function
- Click on **roles**

- Click on create new role
- Select **AWS service** as the trusted entity and **Lambda** as the service use case, then click next

- Click next and **search** the Policy we created earlier

- Click next and name the role anything easy for you to remember. I am naming it **lambda-apigateway-role**.
## 3. Create Function
- Search **lambda**
- Click Functions and create.

- Give the function a name. I am naming it **LambdaFunctionOverHttps**
- Set the **runtime environment** to **Node.js 18.x**
- Select the **role** we created earlier

- Leave everything else at the **default** settings and create the Lambda function
- We will write the function's code last, so that we can try to understand it a little first.
## 4. Create a REST API using API Gateway
- Search **API Gateway**
- In the APIs section, create a new API of type **REST API** (**not the private one**)
- Select **New API** and name it. I am naming it **DynamoDBOperations**
- Leave everything else at the **default** settings

## 5. Create Resources on Your REST API

- Name it. I am naming it **getallusers**.

- Click on **getallusers**, make sure it's **highlighted**, then click Create method.

- In create method select the method type as **GET**.
- Also make sure you select the **lambda function** we created earlier and enable **<u>Lambda proxy integration</u>**.
- With **Lambda Proxy Integration**, the Lambda function receives the entire request object from the API Gateway, including information such as headers, query parameters, path parameters, and the request body. The Lambda function then processes the request and returns a response object, which includes the status code, headers, and the response body.

- Now we will create another **endpoint**, or say **resource**, with four methods: **GET, POST, PUT, DELETE**. I am naming it **user**. Make sure **Lambda proxy integration is enabled** for each method; it will look something like this.

## 6. Create a DynamoDB table
- Search **DynamoDB**.
- Click on **Tables**.
- Create a table with whichever name you want. I am naming it **Users**
- Set **Partition key** as **<u>id</u>** with type **<u>String</u>**.
- Leave everything **default** and click on create table.

## 7. Write and understand Code
- You can find **code** here : [repo](https://github.com/harshgajjar02/RESTAPIs-lambda-apigateway-dynamodb.git)
- Let's start by installing the Node modules for the AWS SDK (Software Development Kit)
- We need the following modules:
1. **@aws-sdk/lib-dynamodb**. More details [here](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-lib-dynamodb/) and [here](https://www.npmjs.com/package/@aws-sdk/lib-dynamodb)
2. **@aws-sdk/client-dynamodb**. More details [here](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/dynamodb/) and [here](https://www.npmjs.com/package/@aws-sdk/client-dynamodb).
**Let's start understanding the code**
- First we **import** what we need:
```js
import { DynamoDBDocumentClient, PutCommand, GetCommand, UpdateCommand, DeleteCommand, ScanCommand } from "@aws-sdk/lib-dynamodb";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
const region = "us-east-1";
const ddbClient = new DynamoDBClient({ region: region });
const ddbDocClient = DynamoDBDocumentClient.from(ddbClient);
const tablename = "Users";
```
- First we create **ddbClient**, a **DynamoDBClient** configured with the specified **AWS region**. This **client** is used to interact with **DynamoDB**.
- The **DynamoDBDocumentClient** is a higher-level client provided by the AWS SDK for JavaScript. It simplifies working with DynamoDB by providing a more JavaScript-friendly API, and it is built on top of the lower-level **DynamoDBClient**. It is created from the previously created **DynamoDBClient** instance.
- Make sure to set the region in which you created all your resources, and also your table name.
```js
export const handler = async (event) => {
  // ...
};
```
- This block is the **Lambda function's** entry point, in which we can respond to a specific event.
- **event** is the **object** that API Gateway passes to the Lambda function; because we enabled **Lambda proxy** integration while creating the methods, it contains the full request details.
- If you want to see what it contains, just return the event in the response; we will do that later.
- Here we have specified two paths, **getallusersPath** and **userpath**, matching our endpoints.
- We call a specific **JavaScript function** based on the **httpMethod** and the **endpoint path**.
- We get both of these from the **event object**.
- Try returning this event object as a response once; the whole concept will become much clearer.
```js
export const handler = async (event) => {
  let response;
  const getallusersPath = "/getallusers";
  const userpath = "/user";
  const body = JSON.parse(event.body);
  // event.payload.TableName = tablename;
  switch (true) {
    case event.httpMethod === "GET" && event.path === getallusersPath:
      response = getallusers();
      break;
    case event.httpMethod === "GET" && event.path === userpath:
      response = getSingleuser(event.queryStringParameters.id);
      // response = buildResponse(200, "hello there");
      break;
    case event.httpMethod === "POST" && event.path === userpath:
      // response = buildResponse(200, event.queryStringParameters.id);
      response = saveUser(body);
      break;
    case event.httpMethod === "PUT" && event.path === userpath:
      response = updateUser(body);
      break;
    case event.httpMethod === "DELETE" && event.path === userpath:
      // response = buildResponse(200, bod1.payload);
      response = deleteUser(body.payload);
      // response = buildResponse(200, "hello there");
      break;
    default:
      response = buildResponse(404, "404 Not Found");
  }
  return response;
};
```
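The handler above (and the functions below) rely on a **buildResponse** helper that the repo defines but that is not shown here. A minimal version consistent with Lambda proxy integration, which expects a statusCode, headers, and a string body, might look like this; treat it as my sketch, not necessarily the repo's exact code:

```javascript
// Sketch of a buildResponse helper for Lambda proxy integration:
// proxy responses need a statusCode and a string body.
function buildResponse(statusCode, body) {
  return {
    statusCode: statusCode,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  };
}

console.log(buildResponse(200, { ok: true }).statusCode); // 200
```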
- I will explain one function; the rest follow the same pattern.
- What we are doing here is creating a command object for the required operation and then sending that object to the client we created earlier, so that it can execute our query.
- Let's take the **saveUser** function as an example, which saves a user in **DynamoDB**.
- The request body has to follow a specific structure, shown below.
- The id is user-specified right now, but it should really be generated **dynamically**. Sorry for that 😶
```json
{
  "payload": {
    "Item": {
      "id": "1255",
      "username": "Bruce Wayne",
      "description": "I love Vadapav :)"
    }
  }
}
```
- Anything we add under the **Item** key will be added to the database.
- The **PutCommand** object below has three keys: the **TableName**, the **Item** to be added, and a **ConditionExpression** defining the condition that if the **<u>id</u>** is already present, the item is not added and DynamoDB raises a conditional-check error.
- **DynamoDB** supports plenty of such conditions.
More information: [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html)
```js
async function saveUser(body) {
  if (!body.payload) {
    return buildResponse(400, "payload not found");
  } else {
    const command = new PutCommand({
      TableName: tablename,
      Item: body.payload.Item,
      ConditionExpression: "attribute_not_exists(id)",
    });
    try {
      let response = await ddbDocClient.send(command);
      return buildResponse(200, response);
    } catch (error) {
      return buildResponse(500, error);
    }
  }
}
```
- Each command object has a specific format that must be followed most of the time.
- The **update** command is a bit harder to follow. Because this is a **NoSQL** database with no fixed schema, any record can have different attributes, so I made the query dynamic: it automatically handles any number of attributes (or say columns) to update.
```js
const updateCommand = new UpdateCommand({
  TableName: "YourTableName",
  Key: {
    id: "1234ABCD", // Assuming "id" is the primary key
  },
  UpdateExpression: "SET firstName = :newFirstName, lastName = :newLastName, age = :newAge",
  ExpressionAttributeValues: {
    ":newFirstName": "John",
    ":newLastName": "Doe",
    ":newAge": 30,
  },
  ReturnValues: "ALL_NEW",
});
```
- **UpdateExpression** lists all the attributes one wants to update.
- **ExpressionAttributeValues** contains the new values for those attributes.
- **ReturnValues: "ALL_NEW"** makes the response include the newly updated record.
- If you want to understand this in more detail, go through the documentation or videos, but make sure any video is recent, because AWS changes many things and you will get errors otherwise. I recommend the documentation much more.
- That is all; now we will test it.
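Before testing, here is a sketch of how such a dynamic update can be assembled from whatever attributes the request carries. The helper name and the placeholder scheme are my own, not taken from the repo:

```javascript
// Hypothetical helper: turn an arbitrary set of attributes into the input
// for an UpdateCommand, so any number of attributes can be updated
// without a fixed schema.
function buildUpdateInput(tableName, id, attributes) {
  const names = Object.keys(attributes);
  return {
    TableName: tableName,
    Key: { id: id },
    // Builds "SET #a0 = :v0, #a1 = :v1, ..." for however many attributes
    UpdateExpression:
      "SET " + names.map((_, i) => `#a${i} = :v${i}`).join(", "),
    // Name placeholders avoid clashes with DynamoDB reserved words
    ExpressionAttributeNames: Object.fromEntries(
      names.map((n, i) => [`#a${i}`, n])
    ),
    ExpressionAttributeValues: Object.fromEntries(
      names.map((n, i) => [`:v${i}`, attributes[n]])
    ),
    ReturnValues: "ALL_NEW",
  };
}

const input = buildUpdateInput("Users", "1200", { username: "Sachin Tendulkar" });
console.log(input.UpdateExpression); // SET #a0 = :v0
```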
## 8. Test the integration of API Gateway, Lambda, and DynamoDB
- Copy the code and paste it into **index.mjs** of the Lambda function.

- If you get an error because of the node modules, upload a zip file containing all the files instead.
- **Go to API Gateway and click on the resource method of a specific endpoint; let us start with getallusers.**
- Make sure of your resource path.
- Mine is somewhat different because I nested all the above paths under DynamoDBManager.
- So my path is **/DynamoDBManager/getallusers** (yours should be **/getallusers**).
- My **index.mjs** also uses this path, so make sure to update the code to match your resource.

- Click on **Test** section

- Click on **Test** and you will get a **response** along with some logs.
- You won't get any data yet, as your table doesn't have any records.

- Now let's create some **records**.
- Click on **POST** under the /user resource and go to the Test section.
- In the request body, paste the following **JSON**.
- Here **id** is mandatory; the other **attributes** can be anything.
```json
{
  "payload": {
    "Item": {
      "id": "1255",
      "username": "MS dhoni",
      "description": "Best of all time"
    }
  }
}
```

- You will get the following response if the **id** already **exists**.
- Here you can see a **ConditionalCheckFailedException**, because our code has the condition that the id should not already exist in the table.

- **GET the user with the id we created**
- For the **GET** request we pass the id as a **query string** parameter, which is matched against the table's id.

- You will get the following response if the **id** is not found.

- **UPDATE the user with the id we created**
- For an **update**, use the following JSON structure:
```json
{
  "payload": {
    "id": "1200",
    "attributes": {
      "username": "Sachin Tendulkar"
    }
  }
}
```
- Here the attribute name should match the original one exactly; the value can be whatever you want to update it to.

- You can **explore** what errors you can have.
## 9. Deploy
- Go to **API Gateway**, select your **API**, and then click **Deploy**.
- Select **New stage** and name it **dev** or anything you like.
- Now select the **method** and you will get the invoke link.

- You can use **Postman** with these links.
- By deploying, you get a **link** which can be accessed publicly.
- I recommend not sharing it yet.
## 🍰 Conclusion 🍧
- If you reached here, then thank you for reading my naive blog.
- It took me 2 days to create this because I used the old AWS SDK on the Node 20 runtime 😂 and got many errors.
- Also, I used the same Lambda function for all the APIs, which is not good at all.
- I also exposed the ID to the user, which is bad practice; I will fix this in the next project, in which I will try to integrate AWS Cognito with this setup.
- DynamoDB has no built-in auto-incrementing attribute, so as a part of exploration you can modify the code to generate unique ids (for example, UUIDs) instead.
- Avoid hardcoding variables like the DB name and AWS region; consider using environment variables or AWS Systems Manager Parameter Store to manage such configuration.
- This is an overview project to get familiar with AWS services.
- If I did something wrong, tell me so I can avoid it later.
- console.log("See you later 😉");
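Picking up the point about hardcoded variables: the values at the top of **index.mjs** could be read from environment variables instead (set under the Lambda function's Configuration > Environment variables). The variable names here are my own choice:

```javascript
// Sketch: read configuration from environment variables with fallbacks,
// instead of hardcoding the table name and region.
const tablename = process.env.TABLE_NAME ?? "Users";
const region = process.env.AWS_REGION_NAME ?? "us-east-1";
console.log(tablename, region);
```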
| harsh_gajjar |
1,743,704 | Unleashing the Power of Google Cloud Platform Compute Services: A Comprehensive Overview and Use Cases | Introduction: Google Cloud Platform (GCP) offers a robust suite of compute services that... | 0 | 2024-01-28T14:34:44 | https://dev.to/abdulrazzak_trabulsi/unleashing-the-power-of-google-cloud-platform-compute-services-a-comprehensive-overview-and-use-cases-1gi4 | cloudcomputing, gcp, aws, devops | ## Introduction:
Google Cloud Platform (GCP) offers a robust suite of compute services that cater to a wide range of application deployment scenarios. In this article, we will explore five key compute services provided by GCP: Cloud Run, App Engine, Google Kubernetes Engine (GKE), Cloud Functions, and Compute Engine. We'll delve into the unique features of each service and present real-world use cases to highlight their versatility and effectiveness.
## 1. Cloud Run:
Serverless Containers for Any Environment
**Overview:**
Cloud Run is a fully managed compute platform that enables developers to deploy containerized applications effortlessly. It abstracts away infrastructure management, allowing developers to focus solely on their code.
**Use Cases:**
- Microservices Architecture:
Cloud Run is ideal for deploying microservices, enabling independent development, scaling, and deployment of services.
- Event-Driven Workloads:
Handle bursty workloads efficiently by automatically scaling up or down based on incoming requests.
## 2. App Engine:
Platform as a Service (PaaS) Simplified
**Overview:**
App Engine is a fully managed platform that abstracts away infrastructure, allowing developers to focus on building scalable applications. It supports multiple programming languages and automatically handles application scaling.
**Use Cases:**
- Web Applications:
Quickly deploy and scale web applications without managing the underlying infrastructure.
- API Backends:
Easily build and deploy API backends that scale based on demand.
## 3. Google Kubernetes Engine (GKE):
Orchestration for Scalability
**Overview:**
GKE is a managed Kubernetes service that simplifies the deployment, management, and scaling of containerized applications using Kubernetes.
**Use Cases:**
- Containerized Workloads:
Run containerized applications at scale while benefiting from the flexibility and power of Kubernetes.
- Hybrid and Multi-Cloud Deployments:
Seamlessly manage and orchestrate containers across on-premises and multi-cloud environments.
## 4. Cloud Functions:
Event-Driven Serverless Functions
**Overview:**
Cloud Functions allows developers to build and deploy serverless functions that automatically scale in response to events.
**Use Cases:**
- Event-Driven Automation:
Execute code in response to events such as file uploads, database changes, or HTTP requests.
- Real-Time Data Processing:
Process streaming data in real-time without worrying about server provisioning.
## 5. Compute Engine:
Infrastructure as a Service (IaaS) with Customization
**Overview:**
Compute Engine provides virtual machines (VMs) that can be customized for various workloads, offering full control over the infrastructure.
**Use Cases:**
- High-Performance Computing (HPC):
Run computationally intensive workloads requiring high computational power and specific configurations.
- Legacy Application Migration:
Lift and shift existing applications to the cloud with full control over the underlying infrastructure.
## Conclusion:
Google Cloud Platform's compute services offer a comprehensive set of solutions for diverse application deployment scenarios. Whether you're looking for serverless simplicity, container orchestration, or full infrastructure control, GCP's compute services provide the flexibility and scalability needed to meet your unique requirements. By leveraging these services, businesses can achieve faster time-to-market, improved resource utilization, and enhanced overall agility in the cloud computing landscape. | abdulrazzak_trabulsi |
1,744,958 | Navigating the Process: Charting Your Course to Custom Software Solutions | In the world of computers today, regular software might not be the perfect fit for your business.... | 0 | 2024-01-29T16:04:00 | https://dev.to/marufhossain/navigating-the-process-charting-your-course-to-custom-software-solutions-1hh9 | In the world of computers today, regular software might not be the perfect fit for your business. It's like trying to fit a square block into a round hole. That's where custom software comes in—it's like having tools made just for you. But starting this journey might seem a bit scary. Don't worry! This guide will help you understand and handle the process, making sure your custom software journey goes smoothly.
**Charting Your Course: Defining Your Needs**
Before setting sail, chart your course. Clearly define your needs and goals. What problem are you trying to solve? What functionalities are essential? Who are your target users? The more detailed your roadmap, the easier it is for your [software development team](https://www.clickittech.com/developer/software-development-team/) to translate your vision into reality.
**Assembling Your Crew: Choosing the Right Team**
Your software development team is your trusty crew, navigating the technical currents and code krakens. Research, interview, and compare potential partners. Look for teams with expertise in your specific needs, a proven track record of success, and a collaborative, communicative approach. Remember, the right team feels like a seamless extension of your own, not a hired hand.
**Setting Sail: Planning and Development**
With your crew assembled, it's time to hoist the sails! The development process involves collaboration and iteration. Define key milestones, communication channels, and project management tools. Be prepared to provide feedback and adapt to changing needs. Remember, agility is key – embrace the unexpected winds and adjust your course as necessary.
**Testing the Waters: Quality Assurance and Refinement**
Before launching your software into the digital ocean, test it rigorously. Conduct thorough quality assurance checks to identify and eliminate bugs. Involve your target users in beta testing to gather real-world feedback. This iterative process ensures your software is polished, user-friendly, and meets your initial goals.
**Deployment and Beyond: Launching and Maintaining Your Solution**
With a seaworthy vessel, it's launch time! Deploy your software smoothly, ensuring accessibility and user onboarding. But the journey doesn't end there. Implement ongoing maintenance and updates to keep your software secure, efficient, and aligned with evolving needs. Remember, your software is a living entity, and a dedicated team is essential for its long-term health and success.
**Navigating the Currents: Common Challenges and Tips**
Unforeseen storms can arise on any voyage. Budget constraints, changing priorities, and communication breakdowns can threaten your course. Stay calm, communicate openly with your team, and adapt your approach as needed. Remember, flexibility and resilience are your anchors in turbulent times.
**Reaching Your Destination: Success and Beyond**
With careful planning, a skilled crew, and a spirit of collaboration, you'll reach your destination – a custom software solution that empowers your business and delights your users. But the journey doesn't end there. Continue to learn, adapt, and innovate. The digital landscape is constantly evolving, and your custom software should be too.
Developing custom software solutions is an exciting adventure, a chance to craft a digital tool that perfectly aligns with your vision. By following this guide, choosing the right team, and embracing a collaborative spirit, you can navigate the process with confidence and arrive at a destination that exceeds expectations. So, set your sails, chart your course, and embark on your software development journey – the digital world awaits your unique masterpiece!
**SEO Optimization:**
* Title: Navigating the Process: A Guide to Developing Custom Software Solutions
* Keywords: custom software development, software development process, software development team, software development tips, software development challenges
* Meta Description: Chart your course to success with this comprehensive guide to developing custom software solutions. Learn the process, choose the right team, and navigate challenges to create a tailor-made digital tool that empowers your business.
* Internal Links: Link to relevant articles on your website, such as "Choosing the Right Software Development Team" or "Common Challenges in Software Development."
This article provides a comprehensive overview of the custom software development process, including key stages, challenges, and tips for success. Remember to tailor the content to your specific audience and website's branding for optimal SEO results.
| marufhossain | |
1,743,711 | Understanding SQL: The Language of Relational Databases | Introduction Structured Query Language, commonly known as SQL, is a cornerstone in the world of data... | 26,219 | 2024-01-31T15:43:00 | https://dev.to/bshadmehr/sql-and-ddl-understanding-the-backbone-of-database-structure-and-manipulation-56mo | sql, programming, database | **Introduction**
Structured Query Language, commonly known as SQL, is a cornerstone in the world of data management and database systems. It's the standard language used for accessing and manipulating data stored in relational database management systems (RDBMS). This article aims to provide a comprehensive overview of SQL, exploring its functionalities, syntax, and pivotal role in database interactions.
**What is SQL?**
SQL is a domain-specific language used in programming and designed for managing and manipulating data held in a relational database. It is not only a tool for querying data but also for defining the structure of the database, modifying data, and setting permissions. SQL has become a standard for database management, recognized and implemented by most relational database systems.
**Core Components of SQL**
SQL can be divided into several components, each serving a distinct function:
1. **Data Query Language (DQL)**: The component of SQL used to query the database for specific information. The primary command used in DQL is SELECT.
2. **Data Manipulation Language (DML)**: This part of SQL involves commands that manipulate data in existing tables. The most common DML commands are INSERT (to add new records), UPDATE (to modify existing records), and DELETE (to remove records).
3. **Data Definition Language (DDL)**: DDL involves commands that define the structure of the database itself. This includes commands like CREATE (to create new tables or databases), ALTER (to modify existing database structures), and DROP (to delete tables or databases).
4. **Data Control Language (DCL)**: DCL includes commands related to the rights and permissions in the database system, like GRANT (to give access privileges) and REVOKE (to remove access privileges).
5. **Transaction Control Language (TCL)**: TCL commands are used to manage transactions within the database. This includes COMMIT (to save the work done), ROLLBACK (to undo transactions not yet committed), and SAVEPOINT (to create points within groups of transactions in case of a rollback).
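One short statement per component makes the split concrete (table, column, and role names are hypothetical):

```sql
-- Illustrative statements for each SQL component:
CREATE TABLE employees (id INT PRIMARY KEY, name VARCHAR(100));  -- DDL
INSERT INTO employees (id, name) VALUES (1, 'Ada');              -- DML
SELECT name FROM employees WHERE id = 1;                         -- DQL
GRANT SELECT ON employees TO analyst_role;                       -- DCL
COMMIT;                                                          -- TCL
```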
**SQL Syntax**
SQL syntax is the set of rules that defines the combinations of symbols and keywords that can be used in SQL statements. It is relatively straightforward, making it accessible for users ranging from novice programmers to advanced database administrators. A typical SQL statement might look something like this:
```sql
SELECT column1, column2 FROM table_name WHERE condition;
```
**Importance of SQL in Database Management**
1. **Universal Language for RDBMS**: SQL is the standard language for all relational database systems, making it an essential skill for database professionals.
2. **Data Manipulation and Retrieval**: It provides powerful tools for data retrieval, manipulation, and transformation.
3. **Data Integrity and Security**: SQL allows for setting up rules that ensure data integrity and security within the database.
4. **Interactive and Scripted Use**: SQL can be used interactively or scripted in stored procedures, offering flexibility in database management and automation.
5. **Cross-Platform Support**: Being a standard, it is supported across various database platforms, ensuring portability of skills and solutions.
**Conclusion**
SQL is an indispensable tool in the realm of database management. Its comprehensive functionality for querying, manipulating, and managing data makes it a fundamental skill for anyone working with relational databases. The versatility and standardization of SQL underscore its importance in a wide array of applications, from simple data retrieval to complex database management tasks. As data continues to play an ever-increasing role in decision-making and operations across industries, the utility and relevance of SQL remain paramount.
---
This article provides an insightful understanding of SQL, its components, functionalities, and significance in the management of relational databases. It aims to highlight the integral role of SQL in modern data management, underlining its importance as a standard language for database interactions. | bshadmehr |
1,743,717 | Mistral AI API | Mistral AI API from mistralai.client import MistralClient from mistralai.models.chat_completion... | 0 | 2024-01-28T14:48:18 | https://dev.to/jhparmar/mistral-ai-api-41c | **Mistral AI API**
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage
import os
api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-small" # Use "mistral-tiny" for "Mistral-7B-v0.2"
client = MistralClient(api_key=api_key)
messages = [
ChatMessage(role="user", content="Give me a meal plan for today")
]
# No streaming
chat_response = client.chat(
model=model,
messages=messages,
)
print(chat_response.choices[0].message.content)
# With streaming
for chunk in client.chat_stream(model=model, messages=messages):
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="")
import gradio as gr
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage
import os
def chat_with_mistral(user_input):
api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-small" # Use "Mistral-7B-v0.2" for "mistral-tiny"
client = MistralClient(api_key=api_key)
messages = [ChatMessage(role="user", content=user_input)]
chat_response = client.chat(model=model, messages=messages)
return chat_response.choices[0].message.content
iface = gr.Interface(
fn=chat_with_mistral,
inputs=gr.components.Textbox(label="Enter Your Message"),
outputs=gr.components.Markdown(label="Chatbot Response"),
title="Mistral AI Chatbot",
description="Interact with the Mistral API via this chatbot. Enter a message and get a response.",
examples=[["Give me a meal plan for today"]],
allow_flagging="never"
)
iface.launch()
| jhparmar | |
1,743,775 | AI What is Grok? | What is Grok? Grok: Elon Musk’s new LLM, by xAI company. Subscription: Twitter Premium+ at... | 0 | 2024-01-28T15:34:44 | https://dev.to/jhparmar/aiwhat-is-grok-28b2 | - **What is Grok?**
- Grok: Elon Musk’s new LLM, by xAI company.
- Subscription: Twitter Premium+ at $16/month.
- Versions: Grok-0 (33B), Grok-1 (advanced).
- Competition: Beats GPT-3.5 on benchmarks.
- Access: Real-time, via Twitter.
- Style: Sarcasm, humor, swears.
- Uniqueness: No cliché disclaimers.
- Origin: Named for sci-fi “Stranger in a Strange Land”.
- Development: Fast, well-funded.
- Integration: Twitter, for instant updates.
- Audience: Non-pros, X subscribers.
- Persona: Targets AI assistants’ market.
- Appeal: Sci-fi vibes, non-doomsday.
| jhparmar | |
**ChatGPT Vision API – Video**

```python
import openai
import cv2
import base64

client = openai.OpenAI()

# Read the video and collect its frames as base64-encoded JPEGs
video = cv2.VideoCapture("video.mp4")
base64Frames = []
while video.isOpened():
    success, frame = video.read()
    if not success:
        break
    _, buffer = cv2.imencode(".jpg", frame)
    base64Frames.append(base64.b64encode(buffer).decode("utf-8"))
video.release()

# Send the first 5 frames to the model
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{"role": "user", "content": [{"image": frame} for frame in base64Frames[0:5]]}]
)
print(response.choices[0].message.content)
```
| jhparmar | |
1,762,444 | Bromantane | Common mistakes in programming include: Not testing code thoroughly: Failure to test code properly... | 0 | 2024-02-15T17:35:09 | https://dev.to/wetav/bromantane-10oj | Common mistakes in programming include:
Not testing code thoroughly: Failure to test code properly can lead to bugs and errors that may not be caught until much later, causing delays and frustration.
Poor code organization: Lack of proper structure and organization in code can make it difficult to understand, maintain, and debug.
Ignoring error handling: Neglecting to handle errors and exceptions can result in unexpected program behavior or crashes.
Overlooking performance optimization: Failing to optimize code for efficiency can lead to slow execution times and inefficient resource usage.
Inadequate documentation: Insufficient documentation makes it challenging for others (and even oneself) to understand and use the code effectively.
Copying and pasting without understanding: Blindly copying code from the internet or other sources without understanding it can introduce vulnerabilities and unexpected behaviors.
Not using version control: Working without version control systems like Git can make it difficult to track changes and collaborate effectively on projects.
Ignoring security considerations: Neglecting security practices leaves systems vulnerable to attacks and compromises sensitive data.
Relying too heavily on comments: While comments are helpful, relying solely on them instead of writing clear and self-explanatory code can lead to confusion and misunderstandings.
Failing to refactor code: Not refactoring code regularly can result in accumulation of technical debt, making it harder to maintain and extend the codebase over time.
| wetav | |
1,743,787 | ChatGPT Text to Speech API | ChatGPT Text to Speech API import openai client = openai.OpenAI() speech_file_path =... | 0 | 2024-01-28T15:42:36 | https://dev.to/jhparmar/chatgpt-text-to-speech-api-1onj | **ChatGPT Text to Speech API**
```python
import openai

client = openai.OpenAI()

# Write the generated speech to an MP3 file
speech_file_path = "speech.mp3"
response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hi Everyone, This is Mervin Praison",
)
response.stream_to_file(speech_file_path)
```
| jhparmar | |
1,743,831 | Access EC2 Instances Privately Using AWS Systems Manager | When it comes to managing and accessing EC2 instances on the AWS platform, security is of utmost... | 0 | 2024-01-28T16:29:46 | https://dev.to/samsumansarkar/access-ec2-instances-privately-using-aws-systems-manager-57og | When it comes to managing and accessing EC2 instances on the AWS platform, security is of utmost importance. The traditional method of connecting to instances via SSH or RDP may pose security risks, especially when instances are exposed to the public internet. To address this concern, AWS Systems Manager provides a secure and convenient way to access EC2 instances privately, without the need for public IP addresses or open ports.
## What is AWS Systems Manager?
AWS Systems Manager is a management service that helps you automate operational tasks across your AWS resources. It provides a unified user interface, allowing you to view and manage resources, automate operational tasks, and collect and analyze operational data.
## Private Access to EC2 Instances
By leveraging AWS Systems Manager, you can establish private connectivity to your EC2 instances using the Session Manager feature. This feature allows you to securely access instances without the need for public IP addresses or inbound security group rules.
The Session Manager works by establishing a secure WebSocket connection between your local machine and the EC2 instance. This connection is facilitated by the AWS Systems Manager agent, which is pre-installed on Amazon Linux 2 and Windows Server 2016 and later AMIs.
## Benefits of Using AWS Systems Manager for Private Access
1. Enhanced Security: With private access, you eliminate the need to expose your instances to the public internet, reducing the risk of unauthorized access and potential security breaches.
2. Simplified Access Management: AWS Systems Manager integrates with AWS Identity and Access Management (IAM), allowing you to control access to EC2 instances using IAM policies. This provides a centralized and granular approach to managing user permissions.
3. Auditability and Compliance: All session activities are logged and can be easily audited, providing a comprehensive trail of who accessed which instance and when. This helps meet compliance requirements and enhances accountability.
4. No Need for Bastion Hosts or VPNs: With private access through Systems Manager, you can eliminate the need for bastion hosts or VPN connections, simplifying your network architecture and reducing operational overhead.
## Setting Up Private Access to EC2 Instances
Setting up private access to EC2 instances using AWS Systems Manager involves a few simple steps:
1. Ensure that your EC2 instances are running the required version of the AWS Systems Manager agent. This agent is pre-installed on Amazon Linux 2 and Windows Server 2016 and later AMIs. For other instances, you can manually install the agent.
2. Configure the necessary IAM policies to grant users or roles access to the Systems Manager service and the specific EC2 instances they need to manage.
3. Install the AWS CLI (Command Line Interface) on your local machine if you haven’t already. This will allow you to interact with AWS Systems Manager from the command line.
4. Use the AWS CLI or the AWS Management Console to start a session with your EC2 instance. The Systems Manager console provides a user-friendly interface to initiate sessions, while the CLI offers more flexibility and scripting capabilities.
Once connected, you can securely manage and troubleshoot your EC2 instances using familiar command-line tools or GUI-based tools like PowerShell or Remote Desktop.
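The steps above can be condensed into a couple of CLI calls. A minimal sketch (the instance ID is a placeholder, and starting a session from the CLI also requires the Session Manager plugin for the AWS CLI to be installed):

```shell
# Verify the instance is registered with Systems Manager
aws ssm describe-instance-information

# Open an interactive shell on a private instance (no SSH, no public IP)
aws ssm start-session --target i-0123456789abcdef0
```

Type `exit` to end the session; the activity is logged like any other Session Manager session.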
## Conclusion
AWS Systems Manager provides a secure and convenient way to access EC2 instances privately, without the need for public IP addresses or open ports. By leveraging the Session Manager feature, you can enhance security, simplify access management, ensure auditability and compliance, and eliminate the need for bastion hosts or VPNs. With a few simple steps, you can set up private access to your EC2 instances and confidently manage your resources on the AWS platform. | samsumansarkar | |
1,743,858 | A comprehensive guide on how to create a weather widget in Next.js | Introduction Websites and programs no longer function without weather widgets, which... | 0 | 2024-02-06T17:22:49 | https://danrez.hashnode.dev/a-comprehensive-guide-on-how-to-create-a-weather-widget-in-nextjs | ---
title: A comprehensive guide on how to create a weather widget in Next.js
published: true
date: 2024-01-28 14:42:51 UTC
tags:
canonical_url: https://danrez.hashnode.dev/a-comprehensive-guide-on-how-to-create-a-weather-widget-in-nextjs
---
## Introduction
Weather widgets have become an essential part of websites and apps, giving users instant access to weather data. In this detailed tutorial, we will learn how to use the Next.js framework to build a functional and adaptable weather widget. The demand for attractive and useful widgets that can be seamlessly integrated into dynamic web applications keeps growing.
### **Overview of Weather Widgets**
Widgets that display the current weather and forecasts on a webpage are known as weather widgets. For websites that focus on geography or travel, they not only improve the user experience but also offer functionality.
### **Importance of Weather Widgets in Web Development**
There are more practical uses for weather widgets in web development than just making the site look nicer. It provides helpful, location-based data, which adds to a user-centric experience. Weather widgets improve the functionality of web apps in many different sectors, making them more useful for travel planning, daily forecast checking, and general news consumption.
### **Introduction to Next.js for Building Dynamic Web Applications**
As a React-based framework, [Next.js](https://nextjs.org/) makes it easy for developers to create powerful and efficient online apps. Because of its modular design and server-side rendering capabilities, it is perfect for making widgets that can respond to user input. Our weather widget now works flawlessly for users on any device, regardless of network circumstances, thanks to Next.js.
## **Setting Up the Next.js Project**
Navigate to your desired project location in a terminal and run the following command, using either **npx** or **yarn**, to set up the project:
```
npx create-next-app weather-widget
# or
yarn create next-app weather-widget
```

Next, when prompted, make use of the parameters that are listed below:

After it finishes installing, use the following commands to open the project in VS Code:
```
cd weather-widget
code .
```
## Creating a weather widget component
Make a new folder named **components** in your project root folder. In that folder, create a file called **weatherWidget.tsx** and add the following code to it:
```
import React, { useState, useEffect } from 'react';
import styles from '../styles/widget.module.css';

interface WeatherWidgetProps {
  city?: string;
  coordinates?: { lat: number; lon: number };
}

interface WeatherData {
  name: string;
  main: {
    temp: number;
    feels_like: number;
  };
  weather: {
    description: string;
    icon: string;
  }[];
}

const WeatherWidget: React.FC<WeatherWidgetProps> = ({ city, coordinates }) => {
  const [weatherData, setWeatherData] = useState<WeatherData | null>(null);

  useEffect(() => {
    const fetchData = async () => {
      try {
        let query = '';
        if (city) {
          query = `q=${city}`;
        } else if (coordinates) {
          query = `lat=${coordinates.lat}&lon=${coordinates.lon}`;
        } else {
          console.error('Please provide either city or coordinates.');
          return;
        }
        const response = await fetch(`/api/weather?${query}`);
        const data: WeatherData = await response.json();
        setWeatherData(data);
      } catch (error) {
        console.error('Error fetching weather data:', error);
      }
    };
    fetchData();
  }, [city, coordinates]);

  console.log(weatherData);

  return (
    <div className={styles.weatherWidget}>
      {!weatherData ? (
        <div>Loading weather ...</div>
      ) : (
        <>
          <h2>{weatherData.name}</h2>
          <p className={styles.weather}>{weatherData.weather[0].description}</p>
          <div className={styles.currentWeather}>
            <img
              src={`https://openweathermap.org/img/wn/${weatherData.weather[0].icon}@2x.png`}
              alt={weatherData.weather[0].description}
            />
            <div>{Math.round(weatherData.main.temp)}°C</div>
          </div>
          <p className={styles.feelsLike}>
            Feels like: {Math.round(weatherData.main.feels_like)}°C
          </p>
        </>
      )}
    </div>
  );
};

export default WeatherWidget;
```
The above`WeatherWidget` React component fetches weather data from an API based on the provided city or coordinates using `useState` and `useEffect` hooks. It dynamically renders weather details, including city name, description, and temperature, with a loading message displayed while data is being fetched. Styled using CSS modules, it ensures a clean and modular structure for presenting weather information in a visually appealing manner.
## Setting OpenWeatherMap API Key
OpenWeatherMap's free API tier gives you 60 calls per minute and 1,000,000 calls per month, which is more than enough for this tutorial.
Head over to [openweathermap.org](http://openweathermap.org) and sign up for an account to receive your free key.
After setting up your account:
Copy your API key from the [API keys](https://home.openweathermap.org/api_keys) page. Then create a new file called **.env.local** in the root folder of your weather widget project and add the following code to it, replacing `[YOUR_API_KEY_HERE]` with the API key you got from OpenWeatherMap:
```
# .env.local
OPENWEATHERMAP_API_KEY=[YOUR_API_KEY_HERE]
```

## Creating weather data API
In the `pages/api` folder, create a file called `weather.ts` and add the following code:
```
// pages/api/weather.ts
import type { NextApiRequest, NextApiResponse } from 'next';

const apiKey = process.env.OPENWEATHERMAP_API_KEY;
const apiUrl = 'https://api.openweathermap.org/data/2.5/weather';

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  const { q, lat, lon } = req.query;

  if (!apiKey) {
    res.status(500).json({ error: 'API key not set' });
    return;
  }

  if (!q && (!lat || !lon)) {
    res
      .status(400)
      .json({ error: 'Please provide either city or coordinates.' });
    return;
  }

  try {
    const query = q ? `q=${q}` : `lat=${lat}&lon=${lon}`;
    const response = await fetch(
      `${apiUrl}?${query}&appid=${apiKey}&units=metric`
    );
    const data = await response.json();
    res.status(200).json(data);
  } catch (error: unknown) {
    if (error instanceof Error) {
      res.status(500).json({ error: error.message });
    } else {
      res.status(500).json({
        error: 'Server error while trying to fetch weather data',
      });
    }
  }
}
```
This Next.js API route handles requests for weather data from the OpenWeatherMap API. Key aspects:
1. **Import statements:** pulls in the `NextApiRequest`/`NextApiResponse` types used to type the handler.
2. **Request handling:** reads `q`, `lat`, and `lon` from the query string and validates that either a city or a coordinate pair was supplied.
3. **API call and response:** builds the OpenWeatherMap query (in metric units) and awaits the result.
4. **Response to frontend:** forwards the weather data to the client as JSON with a 200 status.
5. **Error handling:** returns a 400 for missing parameters and a 500 for a missing API key or any fetch failure.
## Styling the weather widget
Make a new file in the **styles** folder and name it **widget.module.css** (the name the component imports). Then add the following CSS code to it:
```
.weatherWidget {
  border: 2px solid #4CAF50; /* Green border */
  border-radius: 8px;
  max-width: 300px;
  padding: 15px;
  margin: 15px;
  text-align: center;
  background-color: #f2f2f2; /* Light gray background */
}

.currentWeather {
  margin: 0 auto;
  display: flex;
  flex-direction: column; /* Change to column for a cleaner look */
  justify-content: center;
  align-items: center;
}

.currentWeather div {
  margin-top: 10px; /* Add some spacing between elements */
  font-size: 2rem; /* Adjust font size */
  color: #333; /* Darker text color */
}

.feelsLike {
  font-size: 1rem; /* Adjust font size */
  font-style: italic;
  color: #666; /* Lighter text color */
}

.weather {
  font-weight: normal; /* Remove bold weight */
  color: #4CAF50; /* Green text color */
}
```
## Updating index page
Next, we update the **index.tsx** page to use our new widget. Just copy and paste the following code into **pages/index.tsx** to replace the original boilerplate:
```
import React from 'react';
import WeatherWidget from '../components/weatherWidget';

const Home: React.FC = () => {
  return (
    <div className="App">
      {/* Example using city name */}
      <WeatherWidget city="Montreal" />

      {/* Example using coordinates */}
      {/* <WeatherWidget coordinates={{ lon: -73.5878, lat: 45.5088 }} /> */}
    </div>
  );
};

export default Home;
```
Run the application using the following command and open the link [http://localhost:3000](http://localhost:3000) to see it in the browser:
```
npm run dev
```
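With the dev server running, the API route can also be sanity-checked directly from a terminal. A quick check (the city name is just an example):

```shell
# Hits the Next.js API route directly; expects the OpenWeatherMap JSON back
curl "http://localhost:3000/api/weather?q=Montreal"
```

If you get `{"error":"API key not set"}`, the `.env.local` file was not picked up; restart the dev server after creating it.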

## Conclusion
In conclusion, this comprehensive instruction showed how to use Next.js to create a dynamic and attractive weather widget. This guide gives developers a solid foundation to construct a feature-rich component that gets real-time weather data and provides a seamless user experience.
It walks you through setting up a project, creating the weather component, and integrating API calls with Next.js API routes. State management with _useState_ and data fetching in _useEffect_ keep the widget responsive and interactive. | danmusembi |
1,743,867 | Approach to Claim your Unclaimed Dividends and Unclaimed Shares | With the rise of online trading and mobile technology, it’s easier than ever to set up an investment... | 0 | 2024-01-28T17:44:16 | https://dev.to/iepfrecoveryshares/approach-to-claim-your-unclaimed-dividends-and-unclaimed-shares-52oe | With the rise of online trading and mobile technology, it’s easier than ever to set up an investment account online. If you have bought mutual funds or stocks in India, chances are that your investment has grown faster than you have had the time to claim your dividends and share certificates. You’re not alone – there are billions of rupees just sitting out there in investor accounts, just waiting to be claimed by their rightful owners! That’s where the process of claiming your **unclaimed dividen[](https://infinysolutions.com/unclaimed-dividends-unclaimed-shares/)ds** and shares in India comes in, with government bodies like the Investor Education and Protection Fund of India taking care of this process on your behalf. Here’s everything you need to know about claiming your unclaimed dividends and shares in India.
## Know about IEPF
IEPF is an autonomous body that was set up as a trust by the Indian government under the auspices of the Ministry of Corporate Affairs. It has been established for the protection of investors, with a mandate to safeguard their dividends, shares, and matured deposits. The fund is administered with a focus on investor education, awareness, and grievance redressal.
The IEPF is primarily funded by the securities market participants such as brokers, depositories, clearing corporations, and custodians. The fund is committed to protecting investors from getting cheated or taken advantage of when they are investing in shares, stocks, and other securities markets.
There are various types of funds you can transfer to IEPF:
- Unclaimed dividends
- **[Unclaimed shares](https://infinysolutions.com/unclaimed-dividends-unclaimed-shares/)**
- Unrealized matured time deposit proceeds of a company
- Unclaimed investments concerning debentures
## Guidelines to Recover Shares from IEPF Authority
Any person whose unclaimed dividends, shares, or other amounts (including application money due for refund, matured deposits or debentures, interest thereon, redemption proceeds of preference shares, and sale proceeds of fractional shares) have been transferred by a company to the IEPF Authority can apply to the **[IEPF Authority to claim shares](https://infinysolutions.com/claim-of-shares-dividends-from-iepf/)** and seek a refund of the transferred amount from the IEPF.
To recover your share of any company investments transferred to the Investor Education and Protection Fund, you must fill out **[Form IEPF 5](https://infinysolutions.com/claim-of-shares-dividends-from-iepf/)** and submit it to the nodal officer of the company.
There can be only one claim per company per year. If a claim is rejected, the applicant must wait until the following year.
IEPF 5 forms are filled only through OTP verification from the applicant’s phone number and email address. Both phone number and email address must be active and must be accessible till the claim is retrieved in the Demat account.
The website for the IEPF also provides for the checking of unclaimed investments for current or prospective investors.
The IEPF authority has made its best efforts to ensure the interests of the investor, and before any claim to the investor is released, the E-verification report, as well as a document describing the proceedings, must be sent to the IEPF authority.
## Procedure to Claim an Unclaimed Dividend/Share
**Step 1.** Download Form IEPF-5 from the IEPF website for filing the refund claim. Follow the instructions given by IEPF carefully before filling out the form. Information needed to fill out the IEPF-5 form:
- The applicant’s general details
- The details of the organisation or company from which the amount is due, including the CIN number
- Information about the claimed shares or dividends
- Aadhaar number, or passport/OCI/PIO card number
- Details of the bank account that is linked to Aadhaar
- Demat account number
**Step 2.** Once the form is filled out, save it on your computer, then submit it by following the instructions given in the upload link on the website. Once it is successfully uploaded, you will be notified of the document’s current status with an SRN number. Keep the SRN of the form for your records.
**Step 3.** Take a printout of the duly filled IEPF-5 form.
**Step 4.** Submit the following in person at the company’s registered office: the original indemnity bond, the acknowledgement receipt for the e-form submission, and a self-certified copy of Form IEPF-5 with the other documents indicated on it. The documents you need to include:
- Copy of Form IEPF-5 with the applicant’s signature
- Non-judicial stamp paper with the signature of the applicant
- Original stamped receipt with the signature of the applicant
- Original share certificates or a copy of the transaction statement
- Copy of the Demat account’s client master list
- Aadhaar card
- Proof of entitlement, i.e. interest warrant application number, etc.
- Passport, Overseas Citizen of India (OCI), or Person of Indian Origin (PIO) card in the case of NRIs and foreigners
**Step 5.** Claim forms completed in their entirety will be verified by the company concerned, and based on the company’s verification report, a refund will be issued by the IEPF Authority to the claimant’s Aadhaar-linked bank account through electronic transfer.
**Step 6.** There is no fee for filling out the IEPF-5 form.
## Conclusion
To summarize, it is possible for you to claim your unclaimed dividends and shares in India, but it will require some patience. The first step is to gather information on the company that issued the dividend or share. The second step is to fill out a form that can be downloaded from the website of the Ministry of Corporate Affairs. You’ll need to provide details such as your name, address, contact number, bank account number, PAN card number, and Aadhaar card number.
IEPF, Share certificates, unclaimed dividends, Unclaimed dividends and shares, Unclaimed Investments concerning debentures, unclaimed shares
Blog Source :- [https://infinysolutions.com/approach-to-claim-your-unclaimed-dividends-and-unclaimed-shares/](https://infinysolutions.com/approach-to-claim-your-unclaimed-dividends-and-unclaimed-shares/) | iepfrecoveryshares | |
1,744,213 | Classifying and Extracting Data using Amazon Textract | In this blog, we will review how Mortgage Loan data can be extracted and classified using Amazon... | 0 | 2024-02-13T03:22:19 | https://dev.classmethod.jp/articles/classifying-and-extracting-data-using-amazon-textract/ | amazontextract, aws, nlp | In this blog, we will review how Mortgage Loan data can be extracted and classified using Amazon Textract.
Mortgage loan applications, typically consist of so many pages of various documentation. All of these papers must be classified and the data on each form retrieved before applications can be assessed. This isn't as simple as it seems! Aside from having various data structures in each document, the same data piece may have multiple names on different papers, such as SSN, Social Security Number, or Tax ID. These three all refer to the same piece of information.
### Summary
[Amazon Textract](https://aws.amazon.com/textract/) has an Analyze Lending API for evaluating and categorizing the documents contained in mortgage loan application packages, as well as extracting the data they contain. The new API can assist in processing applications quicker and with minimal errors, therefore improving the end-customer experience and lowering operational costs.
The API also can identify signatures and determine which papers have signatures and which do not. It also generates a summary of the papers in a mortgage application package and highlights significant documents such as bank statements. A set of machine learning (ML) models powers the new workflow. When the mortgage application package is uploaded, the workflow identifies the documents in the package before sending them to the appropriate ML model for data extraction depending on their categorization.
### Amazon Textract Demo
Although the new API is meant for lenders to use in their business process workflows and apps, anybody may test it out using the Amazon Textract interface. This allows you to examine how the API categorizes documents and extracts the data items included inside them.
- **Open the Amazon Textract console**: the list of supported regions will be displayed so that you can choose your preferred region.


- Expand the Analyze Lending option and select the demo

The demo console immediately analyzes a set of test files, and the result of the output is shown above.
In the console, it displays that one document has a signature, showing that a signature was detected on page 2.

It indicates that it is a check with a signature. Because signature detection is a time-consuming operation, having the document automatically labeled when one is identified saves a substantial amount of time.

Also, a document is labeled Unclassified, because the document type could not be classified.

The identity document of the customer is also crucial for documentation, It shows a confidence score of 100%. The identity document information is displayed with each confidence score respectively.

### Conclusion
Until recently, classifying and extracting data from mortgage loan applications was a labor-intensive activity. Many customers have since adopted a hybrid approach that includes technology such as Amazon Textract.
| olawde |
1,744,220 | 🦿🛴Smarcity garbage reporting automation w/ ollama | 💡 About Recently I saw a pile of garbage on the sidewalk next to a street I'm living... | 25,929 | 2024-01-31T22:15:41 | https://dev.to/adriens/smarcity-garbage-reporting-automation-w-ollama-3eg9 | ai, opensource, automation, dataengineering | ## 💡 About
Recently I saw a pile of garbage on the sidewalk next to a street I'm living in:

Generally (it was not the first time), I apply the following process:
1. **📸 Take a photo**
2. **📝** Send a mail in which I **explain** what's wrong
... but this time **I wondered if one could automate a kind of "sidewalk cleanup status reporting", I mean like a batch process.**
This is what triggered this reflection and pitch with the following stack:
- [`ollama`](https://ollama.ai/) : from cli, on my personal workstation (core i5/8 Go RAM)
- [`bakllava`](https://ollama.ai/library/bakllava), a _"multimodal model consisting of the [`Mistral 7B`](https://ollama.ai/library/mistral) base model augmented with the LLaVA architecture."_
I initially had two main ideas (each with a wide range of possible customizations).
ℹ️ Notice that I have designed the thing so the terminal that shoots photos does not require a lot of power, but rather **relies on a remote asynchronous analysis system... to keep it as affordable as possible.**
### 🚶🛴 Streets cleaning status reporting w/ "drone like cleaning reporter agent"
- **Walk** (or bike or 🛴) along a street
- **Take photo each `n` meters** in "batch mode" (or photo-shoot only when I see anything abnormal)
### 📍 Specific spot monitoring
Sometimes, people tend to put garbage on **very specific public places that you really want to stay clean** (for health, commercial or any other reason)...
For this case, we just have to
- **Schedule a photo-shot** so you can be aware of the status of this specific spot
## 🍿 Pitch
Enough talk, let's see what it looks like:
{% youtube smtsyhVE2Lk %}
## 🔭 Real life implementation
To implement this at scale, we could:
1. **Take photoshot** with GPS enabled device
2. **Consider image compression** before sending it into the pipeline
3. **Upload the photo** on a remote place (so any low tech device can do the job from almost anywhere)
4. **Poll the raw incoming photo**, then process each photo:
  1. **Extract metadata** (GPS coordinates, timestamp, ...) by using [`exif`](https://pypi.org/project/exif/)
  2. **Automate photo shot caption** with [`ollama-python`](https://github.com/ollama/ollama-python)
  3. **Push data** (original source image, GPS, timestamp) to a common place ([Apache Kafka](https://kafka.apache.org/), ...)
  4. **Consume the data** in third-party software: [OpenSearch](https://opensearch.org/), [Apache Spark](https://spark.apache.org/), or [Apache Pinot](https://pinot.apache.org/) for analysis/data science, GIS systems (so you can put reports on a map), or any ticket management system
  5. **Use analytics** so a human can take the final decision for intervention
  6. ☝️ Last but not least: **re-inject the decision data** into the system so we can create and maintain a dedicated decision-making dataset (**AI photo caption and the final decision**).
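To make the "push data in a common place" step concrete, here is a minimal sketch of the record a reporter agent could emit. Everything here is illustrative (coordinates, file name, caption), and the `ollama-python` call is left in comments since it needs a running Ollama server:

```python
from datetime import datetime, timezone


def build_report(photo_path: str, gps: tuple, caption: str) -> dict:
    """Compose the record pushed downstream (Kafka topic, ticket system, ...)."""
    lat, lon = gps
    return {
        "photo": photo_path,
        "lat": lat,
        "lon": lon,
        "caption": caption,
        "reported_at": datetime.now(timezone.utc).isoformat(),
    }


# The caption itself would come from ollama-python, e.g.:
#   import ollama
#   result = ollama.generate(model="bakllava",
#                            prompt="Is there garbage on this sidewalk?",
#                            images=["photo.jpg"])
#   caption = result["response"]

record = build_report("photo.jpg", (-22.2758, 166.4580), "pile of garbage on the sidewalk")
print(record["lat"], record["caption"])
```

Keeping the record small and self-describing like this is what lets a low-power terminal hand off all the heavy analysis to the remote side.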
## 🤗 Further, faster, stronger
Putting it all together:
- 🗸 **AI** initial scene analysis
- 🗸 **Human explanations** (why he chose the action)
- 🗸 **Human intervention (_"should we send someone to fix it?"_) [`MoSCoW`](https://en.wikipedia.org/wiki/MoSCoW_method) score**:
- 0️⃣ : **Won't**
- 1️⃣ : **Could**
- 2️⃣ : **Should**
- 3️⃣ : **Must**
- 🗸 **Human cleaner feedback loop**: how long did it take to clean it up (can be seen as a complexity score)
Then save & [share it as a proper & public HuggingFace dataset](https://huggingface.co/docs/datasets/index) may also benefit to:
1. **Create** dataset
2. **Train** a model
3. **Share** the [model](https://huggingface.co/docs/hub/models)...
⚡ Even further, once the dataset is released, we could **produce & share some synthetic data** to build models **sooner, and with higher quality.**
It may also be interesting to create a **team of AI agents** (the reporter, the analyst & decision provider) to **help in decision making**, for example by using [`crewAI`](https://github.com/joaomdmoura/crewAI). | adriens |
1,744,448 | Jest: Exceeded Timeout of 5000 ms for a Test | Encountering this error during test runs led me to the decision to address it by adding a timeout... | 0 | 2024-01-29T09:04:51 | https://dev.to/thearkein/jest-exceeded-timeout-of-5000-ms-for-a-test-4n06 | testing, jest, javascript, beginners | Encountering this error during test runs led me to the decision to address it by adding a timeout value to the test case. After experimenting with a few values, 7000 ms emerged as the optimal choice, and hoorah! 🎉 The test passed.
On another day, rerunning the tests revealed that the same test now failed with a timeout error:
```
Thrown: "Exceeded timeout of 7000 ms for a test.
Add a timeout value to this test to increase the timeout if it's a long-running test. See https://jestjs.io/docs/api#testname-fn-timeout."
```
Additionally, a couple more errors appeared for other tests:
```
Thrown: "Exceeded timeout of 5000 ms for a test.
Add a timeout value to this test to increase the timeout if it's a long-running test. See https://jestjs.io/docs/api#testname-fn-timeout."
```
Now, one test failed for a 7000 ms timeout and two others for a 5000 ms timeout. After trial-testing with different values, settling on 10000, 7000, 8000 seemed to solve the issue! 🎉🎉
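For reference, those per-test values are applied through Jest's optional third argument, and a single project-wide default can live in the config instead. A sketch with illustrative numbers (file and test names are made up):

```javascript
// jest.config.js — one global default instead of scattered per-test values
module.exports = {
  testTimeout: 10000, // illustrative; pick a value that holds on the slowest machine
};

// An individual test can still override it via the third argument:
// test('slow integration test', async () => { /* ... */ }, 15000);
```

Centralizing the default at least keeps the trial-and-error in one place instead of spread across test files.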
But wait! Had I truly conquered the battle against Jest test timeouts? Not quite. I found myself grappling with recurring timeouts, repeatedly running manual trials and tweaking values in specific test cases. The frustration didn't end there: the tests sometimes needed different timeout values in different environments, such as my peers' machines. This inconsistency in test outcomes not only slowed down code review and delivery, it also proved to be a bottleneck for CI. It kept me from pushing code smoothly, and that became painfully annoying.
So, what's the key to finally triumphing over these pesky Jest test timeouts? The adventure continues in my next note post, where I explore overall concept of [Configuring Jest test timeouts](#). Discover the strategies that saved the day 😉. | thearkein |
1,744,459 | Leetcode: 2210. Count Hills and Valleys in an Array | Introduction In this series of "Leetcode" posts I will publish solutions to leetcode... | 0 | 2024-01-29T09:19:03 | https://dev.to/marcelos/2210-count-hills-and-valleys-in-an-array-3j07 | java, leetcode, algorithms | # Introduction
In this series of "Leetcode" posts I will publish solutions to leetcode problems. It is true that you can find most/lots of leetcode solutions on the web, but I will try to post my solutions to problems that are interesting or to problems for which the solutions out there are not well explained and deserve a better explanation.
The aim is to share knowledge, and that people who are studying, preparing for interviews, or just practicing, can become better at solving problems. Please feel free to comment if you have a suggestion or a different approach!
# Problem link
[Count hills and valleys](https://leetcode.com/problems/count-hills-and-valleys-in-an-array/description/)
# Problem statement
You are given an array and you have to find hills and valleys.
The idea is simple yet a bit tricky. I will just copy/paste from Leetcode, which is already quite verbose but helps to fully understand the problem.
Example:
```
Input: nums = [2,4,1,1,6,5]
Output: 3
Explanation:
At index 0: There is no non-equal neighbor of 2 on the left, so index 0 is neither a hill nor a valley.
At index 1: The closest non-equal neighbors of 4 are 2 and 1. Since 4 > 2 and 4 > 1, index 1 is a hill.
At index 2: The closest non-equal neighbors of 1 are 4 and 6. Since 1 < 4 and 1 < 6, index 2 is a valley.
At index 3: The closest non-equal neighbors of 1 are 4 and 6. Since 1 < 4 and 1 < 6, index 3 is a valley, but note that it is part of the same valley as index 2.
At index 4: The closest non-equal neighbors of 6 are 1 and 5. Since 6 > 1 and 6 > 5, index 4 is a hill.
At index 5: There is no non-equal neighbor of 5 on the right, so index 5 is neither a hill nor a valley.
There are 3 hills and valleys so we return 3.
```
# Solution
This problem, despite being tagged "easy", took me some time to solve.
I had to sleep on it to find an extremely quick solution the next day :)
Sometimes you just gotta rest.
## Important considerations
* One of the most challenging parts of this problem is the fact that you can have duplicates:
1,2,2,2,1 -> This has one hill
This means that you have to "ignore" adjacent equal values and only consider the values that are different
* Another important constraint is that at the beginning of the array you don't know whether you are going up
or down.
## Approach 1
* At any point in time, you will be either "Up" or "Down"
* Except at the beginning, as you don't have a previous value for the first element of the array
* If you are going **UP** and then go down, meaning you find that the next value is less than the current value
, then that means you have found a Hill.
* If you are going **DOWN** and then go up, meaning you find that the next value is larger than the current value
, then that means you have found a Valley.
```java
public int countHillValleyWithFlag(int[] nums) {
int valleys = 0;
int hills = 0;
int index = 1;
// Find first non-repeating value
while (index < nums.length && nums[index] == nums[index - 1]) {
index++;
}
if (index == nums.length) return 0;
boolean isUp = nums[index] > nums[index - 1];
while (index < nums.length) {
int cur = nums[index];
int prev = nums[index - 1];
if (cur > prev) {
if (!isUp) {
// direction flipped from down to up: we just left a valley
valleys++;
isUp = true;
}
} else if (cur < prev) {
if (isUp) {
// direction flipped from up to down: we just left a hill
hills++;
isUp = false;
}
}
index++;
}
return valleys + hills;
}
```
* We have to find the first non-repeating value, because while adjacent values are equal we cannot tell whether we are going UP or DOWN.
* Once we know whether we are going UP or DOWN, we can check:
  * If current > previous && !isUp: we were going DOWN and are now going up, so we have found a valley: \_/
  * If current < previous && isUp: we were going UP and are now going down, so we have found a hill: /^\
## Approach 2
* Notice that a hill or valley has neighbouring values that are different from the current value.
* **Key fact**: the surrounding values must either both be greater than, or both be less than, the value in the middle.
* Picture this hill: 10, 30, 20
* Picture this valley: 30, 10, 50
* We have to find those neighbouring/surrounding values, and if they satisfy the statements above,
then we have found a valley/hill.
```java
public int countHillValleyNeighbours(int[] nums) {
int prev = nums[0]; // closest non-equal value to the left
int hills = 0;
int valleys = 0;
for (int i = 1; i < nums.length - 1; i++) {
int cur = nums[i];
int next = nums[i + 1];
// only decide when the right neighbour differs; inside a plateau keep the same prev
if (next != cur) {
if (prev > cur && next > cur) valleys++;
else if (prev < cur && next < cur) hills++;
prev = cur;
}
}
return hills + valleys;
}
```
* We have to skip equal adjacent values, for instance:
1,2,3,3,3,4,5
* At index 4, we cannot say "3" is the previous value; remember that we are looking for
values/neighbours that are different from the actual value.
* We basically want to simplify it to this:
1,2,3,4,5
* So the actual neighbours of 3 are 2 and 4.
* To do this we just have to make sure that the "next" value is different from the "current" one,
and only then replace the old previous value with the current one.
## Complexity
### Time complexity
* In both approaches we just have to go through the array one time.
Therefore:
* O(n): linear to the input size
### Space complexity
* We don't make use of any other data structure. We simply use some variables.
Therefore:
* O(1): constant space, as it does not depend on the input size
## Final comments
This was a tricky task but fun to do.
It is a good idea to analyze the requirements in concrete terms.
In this case: what exactly makes a hill or a valley?
If you can put that into words, then you already have an algorithm to solve it.
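As a final sanity check, here is a small self-contained harness running condensed versions of both approaches on Leetcode's example above (both should agree on 3; the flag approach's counters are named after the shape each direction change actually detects):

```java
// Quick sanity check: both approaches, run on Leetcode's example, must agree on 3.
public class Main {

    static int countWithFlag(int[] nums) {
        int valleys = 0, hills = 0, index = 1;
        // skip leading duplicates so the initial direction is known
        while (index < nums.length && nums[index] == nums[index - 1]) index++;
        if (index == nums.length) return 0;
        boolean isUp = nums[index] > nums[index - 1];
        for (; index < nums.length; index++) {
            if (nums[index] > nums[index - 1] && !isUp) { valleys++; isUp = true; }
            else if (nums[index] < nums[index - 1] && isUp) { hills++; isUp = false; }
        }
        return valleys + hills;
    }

    static int countWithNeighbours(int[] nums) {
        int prev = nums[0], hills = 0, valleys = 0;
        for (int i = 1; i < nums.length - 1; i++) {
            if (nums[i + 1] != nums[i]) { // ignore plateaus
                if (prev > nums[i] && nums[i + 1] > nums[i]) valleys++;
                else if (prev < nums[i] && nums[i + 1] < nums[i]) hills++;
                prev = nums[i];
            }
        }
        return hills + valleys;
    }

    public static void main(String[] args) {
        int[] nums = {2, 4, 1, 1, 6, 5};
        System.out.println("flag approach: " + countWithFlag(nums));             // 3
        System.out.println("neighbours approach: " + countWithNeighbours(nums)); // 3
    }
}
```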
| marcelos |
1,744,487 | Admin Dashboard | A post by Sona | 0 | 2024-01-29T09:35:21 | https://dev.to/codingmadeeasy/admin-dashboard-57ph | webdev, javascript, programming, tutorial | {% embed https://www.youtube.com/watch?v=gbdBxBDrX4o %} | codingmadeeasy |
1,744,498 | Past Resumes: Revealing the Surprising Advantages of Nagpur Recruitment Agency | Nagpur, the "Orange City" of India, pulsates with the vibrant energy of commerce and ambition.... | 0 | 2024-01-29T09:43:13 | https://dev.to/allianceinternational/past-resumes-revealing-the-surprising-advantages-of-nagpur-recruitment-agency-4dem | manpowerservicesinnagpur, recruitmentagencyinnagpur, manpowerconsultancynagpur, recruitmentconsultantsinnagpur | Nagpur, the "Orange City" of India, pulsates with the vibrant energy of commerce and ambition. Aspiring professionals navigate its bustling bazaars, eager to carve their niches in its diverse industries. Yet, landing that dream job in Nagpur's dynamic landscape can feel like navigating a labyrinthine spice market—aromatic with possibilities but potentially overwhelming. Fear not, intrepid job seekers! A hidden gem lies within the city, ready to streamline your journey and unlock unexpected benefits beyond resumes: [Recruitment agency in Nagpur](https://www.allianceinternational.co.in/recruitment-consultants-nagpur/).
These aren't just resume shufflers churning out generic positions; they're your career architects, cultural translators, and growth catalysts, ready to transform your job search from a solitary trek into a rewarding expedition with:
## 1. Precision Talent Matchmaking:
Forget casting a wide net and hauling in irrelevant catches. Recruitment agencies in Nagpur leverage their extensive networks, deep understanding of the local talent pool, and industry expertise to pinpoint the perfect candidates for specific roles. Imagine landing the exact skillsets and cultural fit you seek the first time, every time.
## 2. Interview Efficiency:
Bid farewell to marathon interview sessions. Our manpower services in Nagpur take the lead in pre-screening candidates, skill assessments, and conducting preliminary interviews. This guarantees that only the most qualified individuals make it to your desk. Experience a streamlined hiring process that not only saves you valuable hours but also minimizes wasted resources, courtesy of our specialized manpower services in Nagpur.
## 3. Cost-Effective Advantage:
Traditional recruitment can drain your coffers. Agencies offer cost-effective solutions, with lower fees compared to in-house recruitment teams and access to a readily available local talent pool, minimizing travel and relocation expenses. They work within your budget, not against it.
## 4. Compliance Confidence:
Navigating HR regulations can be a legal minefield. Agencies act as your legal navigators, ensuring all your hiring practices comply with local labor laws and avoiding potential fines and legal tangles. This peace of mind allows you to focus on growing your business, not battling paperwork blizzards.
## 5. Workforce Optimization Savvy:
Agencies go beyond simply filling positions. They offer comprehensive workforce management solutions, including payroll processing, employee training, and performance evaluations. This frees up your team to focus on core activities while they handle the administrative burden, maximizing your team's efficiency and return on investment.
## 6. Reduced Turnover Risk:
High turnover can spell disaster for any business. Agencies help you retain top talent by implementing employee engagement programs, conducting exit interviews to understand pain points, and providing career development guidance. This fosters a happy and productive work environment, minimizing turnover costs and maximizing employee loyalty.
## 7. Employer Branding Architects:
In a competitive market, attracting top talent requires a strong employer brand. Agencies understand Nagpur's job market trends and employer perceptions. They help you craft a compelling company narrative tailored to resonate with Nagpur's aspirations, making you an irresistible magnet for the city's best professionals. This attracts high-quality talent without expensive branding campaigns.
## 8. Skill-Bridging Builders:
Need to bridge the skill gap within your team? Agencies offer customized training programs, upskilling your existing workforce and ensuring they stay ahead of the curve in Nagpur's ever-evolving industries. This minimizes the need for external recruitment and increases the overall productivity of your existing team.
## 9. Future-Proofing Your Workforce:
Technology evolves at breakneck speed. Agencies stay ahead of the curve, identifying emerging skillsets and trends and helping you build a future-proof workforce equipped to handle the challenges of tomorrow. This proactive approach saves you the time and expense of constantly retraining your team for new technologies.
## 10. Time-saving superheroes:
Forget spending countless hours on recruitment tasks. Agencies handle the entire process, from initial screening to onboarding, freeing up your valuable time to focus on strategic initiatives and business growth. Imagine reclaiming hours spent on HR and channeling them into driving your Nagpur venture forward.
## Unpacking the Unexpected:
But the benefits of manpower consultancy in Nagpur extend far beyond time savings and perfect matches. They unlock unexpected advantages that empower both employers and employees:
**Career Coaching and Confidence Building:** Agencies offer valuable resources like resume workshops, mock interviews, and career coaching. This empowers job seekers to showcase their strengths, navigate interviews confidently, and land their dream jobs.
**Networking Opportunities:** Agencies act as bridges between companies and qualified candidates, often hosting career fairs and networking events. This exposes job seekers to hidden opportunities and allows employers to connect with a diverse talent pool.
**Industry Insights and Market Trends:** Agencies possess in-depth knowledge of Nagpur's specific industries, salary trends, and emerging job markets. This empowers both employers and employees to make informed career decisions and navigate the dynamic landscape effectively.
**Cultural Understanding and Workplace Harmony:** Agencies bridge the cultural gap between employers and employees, ensuring a smooth transition and a thriving work environment where diverse skillsets and perspectives contribute to collective success.
## Conclusion
In conclusion, [Alliance International](https://www.allianceinternational.co.in/) stands as a beacon of innovation in the realm of recruitment, unraveling the hidden advantages that go beyond conventional expectations. As a distinguished recruitment agency in Nagpur, our commitment extends beyond mere job placements. We redefine success stories by offering tailored solutions, industry insight, and a commitment to excellence.
For a recruitment experience that transcends traditional boundaries, trust Alliance International to propel your hiring endeavors to new heights. [Contact us](https://www.allianceinternational.co.in/contact-us/) today and discover the extraordinary benefits that await your organization on the path to unparalleled talent acquisition.
| allianceinternational |
1,744,514 | Gemini: ChatSession with Kendo Conversational UI and Angular | I continued experimenting with Gemini, and after showing @Jörgen de Groot my demo of my first chat... | 0 | 2024-01-29T09:59:12 | https://www.danywalls.com/gemini-chatsession-with-kendo-conversational-ui-and-angular | angular, kendo, javascript, ia | I continued experimenting with Gemini, and after showing @[Jörgen de Groot](@Alt148) my demo of my first chat using [Gemini and the Conversational UI](https://www.danywalls.com/create-your-personalized-gemini-chat-with-kendo-ui-conversational-ui-and-angular), he asked how to maintain the chat history with Gemini to avoid sending the initial prompt and to preserve the context.
> This is the second part of the article "[Create Your Personalized Gemini Chat with Conversational UI and Angular](https://www.danywalls.com/create-your-personalized-gemini-chat-with-kendo-ui-conversational-ui-and-angular)"
This is an important consideration because sending the prompt repeatedly incurs an expense. Additionally, the first version does not support maintaining the initial context or preserving the conversation history.
However, Gemini offers a chat feature that collects our questions and responses, enabling interactive and incremental answers within the same context. This is perfect for our Kendo ChatBot, so let's implement these changes.
## The Chat Session
In the first version, we directly used the `model` to generate content. This time, we will employ the `startChat` method with the model to obtain a `ChatSession` object, which offers a history and initial context with the `prompt`.
The Gemini model lets us initiate a `ChatSession`, where we establish our initial prompt and conversation using `startChat`. The `ChatSession` object provides a `sendMessage` method, which enables us to supply only the user's next message.
First, declare a new object `chatSession` with the initial history, which should include the initial `prompt` and the initial answer, for example:
```typescript
#chatSession = this.#model.startChat({
history: [
{
role: 'user',
parts: this.#prompt,
},
{
role: 'model',
parts: "Yes, I'm a Angular expert with Kendo UI",
},
],
generationConfig: {
maxOutputTokens: 100,
},
});
```
Our next step is to use the `chatSession` instead of directly sending the parts and `user` role to the `model` each time:
```typescript
const result = await this.#model.generateContent({
contents: [{ role: 'user', parts }],
});
```
Replace the `model` with the `chatSession` and utilize the `sendMessage` method:
```typescript
const result = await this.#chatSession.sendMessage(
textInput.message.text,
);
```
Done! 🎉 Our chatbot now supports history and continuous interaction, without sending the full prompt every time, saving our tokens 😊😁
Check out the demo: 👇

## Recap
Yes, it was quite easy to add history support to our chat, saving tokens and providing a significantly better experience for users interacting with our chat.
We learned how to improve the functionality of a Gemini Chatbot by maintaining chat history and preserving context, thus avoiding the repeated sending of initial prompts.
The chat feature collects questions and responses, enabling interactive and incremental answers through `ChatSession`. It provides a better user experience and also saves tokens by not sending the full prompt every time. 💰🎉
> Source Code [https://github.com/danywalls/conversational-with-gemini/tree/feature/turn-to-chat](https://github.com/danywalls/conversational-with-gemini/tree/feature/turn-to-chat) | danywalls |
1,744,535 | Interview with Mark Richards: GSAS 2023 Insights | We had an interview with Mark Richards at GSAS 2023! If you are in the software development... | 0 | 2024-03-04T15:57:04 | https://apiumhub.com/tech-blog-barcelona/interview-with-mark-richards-gsas/ | architecture | ---
title: Interview with Mark Richards: GSAS 2023 Insights
published: true
date: 2024-01-29 08:14:02 UTC
tags: Softwarearchitecture
canonical_url: https://apiumhub.com/tech-blog-barcelona/interview-with-mark-richards-gsas/
---
We had an interview with Mark Richards at GSAS 2023!
If you are in the software development industry, you have probably heard of Mark Richards. Mark Richards is an experienced, hands-on software architect involved in the architecture, design, and implementation of microservices architectures, service-oriented architectures, and distributed systems and also the founder of developertoarchitect.com.
Mark Richards is the author of numerous technical books and videos from O’Reilly, including several books on Microservices, the Software Architecture Fundamentals video series, Enterprise Messaging video series, Java Message Service, 2nd Edition, and a contributing author to 97 Things Every Software Architect Should Know.
Last year in October, Mark participated at the [Global Software Architecture Summit](https://gsas.io/), an event organized by [Apiumhub](https://apiumhub.com/), as a speaker to present his talk “[Effective Microservices: Lessons Learned](https://www.youtube.com/watch?v=TxHBcjv-Eac)” and his workshop “Building Distributed Systems using Transactional Sagas” together with Neal Ford.
Apiumhub had the opportunity to have an interview with Mark Richards between breaks to learn more about him, his interests, and the topics he would like to learn next year. Keep reading to learn more about this interview with Mark Richards!
## Interview with Mark Richards
### What are your thoughts on this year’s Global Software Architecture Summit?
It just keeps getting better! I love the current theme of this edition which is to leverage modern practices in software architecture to become more effective, more efficient, but most importantly to enjoy what we do. I love that theme, this is a chance to bring architects and developers together to be able to talk about software architecture.
This is my third year speaking at the Global Software Architecture Summit, and I love seeing the energy and the growth. The attendees are fantastic, as well as the speakers. So far, it has been a fantastic experience!
### What are the current essential practices in software architecture for this year?
This year, the essential practices in software architecture span a lot of different dimensions. The first is measurements and fitness functions: being able to validate our architectures so we know that the changes we make are not impacting the structural integrity of those systems. Also, the use of architecture decision records (ADRs), which has come up quite a bit at this year's conference. This is an essential practice not only for communication but also as a form of collaboration and a great form of very brief documentation.
I think another essential practice that has also come out this year in the conference is that of collaboration, avoiding the ivory tower architect antipattern and learning as an architect how to collaborate more with developers and also business stakeholders. These are a few of the handful of essential practices in software architecture this year.
### Which area of software architecture are you interested in exploring in the upcoming year?
Next year, I'm interested in exploring ways of controlling architecture flexibility and change. Residuality Theory is currently on my mind and something I intend to do a lot of exploration on in the coming years, so if you aren't familiar with Residuality Theory and its application to software architecture, you can certainly Google it. I am also very fascinated with, of course, the typical answer, which is the use of AI within software architecture and how we can leverage it to help validate a software architecture and maybe also find certain patterns and anti-patterns. These are just a few of the things that I'm anxious to embark on next year in software architecture.
<iframe title="GSAS 2023: Interview with Mark Richards #GSAS23" width="1140" height="641" src="https://www.youtube.com/embed/fUd2_TkCORY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
Interested in watching more videos from GSAS speakers like this interview with Mark Richards? Visit our [YouTube channel.](https://www.youtube.com/@Apiumhub/videos)
### GSAS 2024: Call for papers
Good news! Apiumhub is already organizing the fourth edition of [GSAS](https://gsas.io/). This year, the event will take place on October 14-16 at the Axa Auditorium in Barcelona and will focus on AI in software architecture, a topic that's gaining a lot of attention. Many industry leaders have already confirmed their participation as speakers including Mark Richards, Neal Ford, Eoin Woods, Vlad Khononov, Luca Mezzalira, Andrew Harmel-Law, and Christian Ciceri.
Are you interested in becoming part of GSAS? The call for papers is already open! You may submit three types of proposals: talk, workshop, or open space. Feel free to submit your proposal [here](https://docs.google.com/forms/d/e/1FAIpQLSf2PeISgsRvU1G3Q0zOAsxlGqO017lpK_Dp0EO-k0xsAYijlg/viewform). We are looking forward to hearing from you! | apium_hub |
1,744,625 | The New Computer: Use Serverless to Build Your First AI-OS App | There is no denying some really interesting and groundbreaking things are cooking over at... | 0 | 2024-02-01T12:46:55 | https://dev.to/dawiddahl/the-new-computer-use-serverless-to-build-your-first-ai-os-app-409 | ai, serverless, chatgpt, programming | There is no denying some really interesting and groundbreaking things are cooking over at OpenAI.
Why do I say that? The reason is that in recent months they have started to release some things many people didn't fully expect. I believe this is a sign that internally, OpenAI is currently executing on an overarching plan that over the coming years will change the digital landscape completely.
What are some of these things they have released? Examples include `GPTs`, `GPT Actions`, and most recently: `GPT @-mentions`.

This is simply a way to reference your GPTs—AI chatbots that you can customize on your own—in your current ChatGPT conversation.
Well, you might say, that doesn't sound like such a big deal? And why is so much time being spent on these GPTs? Are they even any good?
Let me show you why it is a big deal.
## The Dawn Of A New Computer
Back in the day, there was a little company called Microsoft that revolutionized personal computing. Founded in 1975 by Bill Gates and Paul Allen, Microsoft achieved its big break with MS-DOS, an operating system developed for the IBM PC in 1981. This success paved the way for Windows, which became the dominant operating system worldwide.
Seizing the opportunity in the flourishing era of personal computing, Microsoft's strategic innovations and market adaptability turned it into a tech juggernaut.
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3bqfxh5ji1q3fmfu7j4.png" alt="OpenAI hypothetical LLM OS">
<figcaption>Image of an hypothetical LLM OS by [Andrej Karpathy](https://twitter.com/karpathy?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor), working at OpenAI.</figcaption>
</figure>
I believe that what Microsoft did with the release of Windows 1.0 back in 1985, is what OpenAI is gearing up to do in 2024 and beyond: creating a new kind of AI-OS for the next generation of personal computers. This could be as pivotal for our digital interactions as when Bill and Paul revolutionized computing with Windows.
## GPTs as AI-OS Apps
So essentially, these GPTs are to the AI-OS, what traditional applications were to Windows; instead of launching an app on your computer, you will be orchestrating AI agents to perform actions on your behalf. And it will be so much more fun and engaging than pressing down 👇🏻 keys on a board or other pieces of plastic.
Instead of being on your own as in the days of PC's past, as I described in a [previous article](https://dev.to/dawiddahl/meet-your-future-co-workers-the-rise-of-ai-agents-in-the-office-441m), you will instead be collaborating directly with a host of artificially intelligent beings, not at all unlike how Luke Skywalker is dealing with C-3PO in Star Wars.

But how do you actually create one of these new AI-OS apps? In the next section, I'll guide you through the process using AI to help you build a ([AI Application Value Level 2](https://dev.to/dawiddahl/climbing-the-ai-application-value-ladder-4cf0#value-level-2-function-calling)) `GPT Action`, using serverless functions technology.
The most common way of building a `GPT Action` today, if you look at ChatGPT-related YouTube content, is Zapier: a no-code platform that lets you perform actions like sending email or updating your calendar. By using serverless functions instead, you actually won't need to pay Zapier a subscription fee every month!
>ℹ️ 1. Even though being a developer helps when building serverless functions, with the help of AI (and a little grit), it's not strictly necessary, as you can learn as you go.
>
>ℹ️ 2. Even though it is called “serverless”, that doesn’t mean there is no server. It just means that we don’t use our own local server; we use some other company’s server in the ☁️.
## Using Serverless Functions to Create Your First Proto AI-OS-App
So what shall we build? As a proof-of-concept, let's go for an AI-OS app that should, on the server, generate some ASCII art of a cow that says something. Like this:
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/89ats08pjltcpt7jxph3.png" alt="ASCII art cow">
<figcaption>To create the ASCII art, we'll use `cowsay` on the server, which is an [external library](https://www.npmjs.com/package/cowsay) designed for this cowsome purpose.</figcaption>
</figure>
Then that art should be sent from the server back to the AI-OS app (our GPT), which will then create a beautiful painting drawing inspiration from this ASCII art.
>You will need 1) a [ChatGPT](https://chat.openai.com/) Plus or Teams account, 2) a free [Vercel](https://vercel.com/) account and 3) a free [GitHub](https://github.com) account to build along with me.
### Step 1: Set Up The ☁️ Environment
Open up ChatGPT and ask it to generate a serverless function on Vercel.
To get started, use this prompt:
"_Could you carefully guide me through creating a serverless function with Vercel using Node, starting by setting up a Next.js project using create-next-app, then writing a basic serverless function in TypeScript, and finally deploying it via the Vercel CLI? Please also explain step-by-step how we link the Vercel project to GitHub_."
If you prefer a written guide, you can use [this](https://vercel.com/docs/functions/serverless-functions/quickstart). To see or clone my finished serverless function repository on GitHub, click [here](https://github.com/dawid-dahl-umain/gpt-functions-cowsay/tree/main).
>Vercel's Hobby Plan offers _free_ serverless functions for small projects, with up to 10-second runtime and ample monthly capacity of 100 GB-hours. That means a simple function can be run around 700,000 times a month, for free! No need to pay Zapier every month.
>
>More info on pricing [here](https://vercel.com/docs/accounts/plans/hobby).
### Step 2: Create The Function 🛠️
Now you should have a Next project. Inside the `app` folder, there is an `api` folder. Inside that folder, create a new folder and call it something that should be thought of as a spell 🪄✨ we use to activate our function. Let's go for `gpt-functions-cowsay`, or whatever you'd like. Remember this spell name, we will need it later.
Next, in this spell folder, create a file called `route.ts`. The folder structure will thus be: `app/api/gpt-functions-cowsay/route.ts`.
>If at any point you feel lost, no worries! Just ask ChatGPT for clarification or help.
Now, request ChatGPT to write the server-side code if it didn't already, to generate a cowsay and return the result. Use this prompt to get started:
"_I need help creating a simple serverless function in Next.js that uses the 'cowsay' package. The function should take text from a URL search parameter, make a cow say it, and return this along with the request. Can you guide me through the steps, including necessary TypeScript code, to set up this function?_"
If the AI does its job, the code for the function will end up something like this.
```typescript
import { NextResponse } from "next/server";
import { say } from "cowsay";
import type { NextRequest } from "next/server";
export const GET = (request: NextRequest) => {
const cowsayText = request.nextUrl.searchParams.get("cowsay") || "";
return NextResponse.json(
{
cowsay: say({ text: cowsayText }),
},
{
status: 200,
}
);
};
```
Paste this code into the `route.ts` file in the spell folder (`gpt-functions-cowsay`).
Please note that although this function performs a simple task, in reality, within this server environment, you now wield the **full power of software engineering**.
That's right. Unlike with Zapier where you are restricted to follow their rules, in here, you can build _any_ tool you want. And through the `Actions` input the GPTs creation editor, you can hand this tool over to the AI for it to use on your behalf.
Take a moment and just reflect on the vast possibilities. **The sk-AI is the limit!**
### Step 3: Create An OpenAPI Spec 📄
Now, the way we make our GPT aware of our new function so it can use it, is to hand it something called an OpenAPI specification.
>Yes, that was not a typo. While OpenAI is the company, [OpenAPI](https://swagger.io/specification/) is a rulebook for how computer programs talk to each other (APIs).
If you are not a developer, you will have no idea how to write such a specification. But fear not, you can use another GPT called [ActionsGPT](https://chat.openai.com/g/g-TYEliDU6A-actionsgpt) to do it for you.

- In the configuration tab of the GPT creator, click the "Create new action" button.
- In a separate ChatGPT thread, @-mention `ActionsGPT`.

- Ask it: "_I have set up a serverless function in Vercel. What should I do now to get an OpenAPI specification from you?_" You could hand it some of the code too.
- `ActionsGPT` will tell you to hand it some information.
- You will give it something like this. (The Base URL you get from your Vercel project.) The prompt doesn't have to be exact, just get the URLs and the `GET` or `POST` right and describe what your function does. Use this prompt to get started:
>"_Endpoint URL(s): gpt-functions-cowsay
HTTP Methods: GET
Base URL: gpt-functions-cowsay.vercel.app_
>
>_When given an input called cowsay, it will take it and make a cowsay out of it. Then it will return the cowsay._"
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f9lnghot7bkt1xl8d0kn.png" alt="abra cadabra spell function invocation">
<figcaption>In Aramaic, "_avra kehdabra_" means "_I will create as I speak_". If `gpt-functions-cowsay` is the _kadabra_, `GET` is the _abra_. Using them both together will cast the function's magic! ✨</figcaption>
</figure>
- `ActionsGPT` will then generate the OpenAPI spec for you.
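For the cowsay endpoint built above, the spec should come out roughly like this (a hand-written sketch, not ActionsGPT's verbatim output; the descriptions are my own wording):

```yaml
openapi: 3.1.0
info:
  title: Cowsay Generator
  version: 1.0.0
servers:
  - url: https://gpt-functions-cowsay.vercel.app
paths:
  /api/gpt-functions-cowsay:
    get:
      operationId: getCowsay
      summary: Turns the given text into ASCII cowsay art
      parameters:
        - name: cowsay
          in: query
          required: true
          schema:
            type: string
          description: The text the cow should say
      responses:
        "200":
          description: The generated cowsay art
          content:
            application/json:
              schema:
                type: object
                properties:
                  cowsay:
                    type: string
```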
### Step 4: Launch Your GPT! 🚀
Finally, paste the OpenAI specification into the Schema input of the GPT Actions editor. Like this:

If you encounter errors, consult `ActionsGPT` with your serverless function code at hand. Iteration is key in when building with AI.
> Use this free [privacy policy generator](https://app.freeprivacypolicy.com/wizard/privacy-policy) to create a policy for the GPT action, in case you want your GPT to be public.
### Step 5: You're done! ✅
That's it! If OpenAI allows you to save this GPT, that means you did it - you just built your first simple AI-OS app! 👏🏻
This might've seemed daunting, especially for non-developers. And don't worry if you couldn't get it to work on your first try. Because remember, adding an action to a GPT is a [Level 2](https://dev.to/dawiddahl/climbing-the-ai-application-value-ladder-4cf0#value-level-2-function-calling) task in AI software development — it's supposed to be a bit on the tougher side! But also more rewarding and fun to build, if you ask me.

## Conclusion
In this guide, you've learned how to create an AI-OS app using serverless technology with our `cowsay` example. This introductory project showcases the potential for building some truly innovative AI applications.
If you didn't follow along and build it with me, here is the cowtastic [Cowsay Creator](https://chat.openai.com/g/g-z1SLp6C5w-cowsay-creator) in action!
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nsx2su80wztzhsrtiubd.png" alt="gpt allow button for actions">
<figcaption>It is all right to press "_Allow_" here. You can check the [Github repo](https://github.com/dawid-dahl-umain/gpt-functions-cowsay) to verify that apart from bad cow art, nothing else bad happens in our serverless action.</figcaption>
</figure>
OpenAI's latest developments hint at a major shift, similar to when Windows first changed computing. We're seeing the start of a new AI-OS that could change everything, indicating that a future with C-3PO-like companions might be closer than we anticipate.
And while our `Cowsay Creator` GPT was just for fun and practice, by exploring this, you're already a part of the emerging AI-OS future. Who knows what actually valuable AI-OS apps you'll create next!
---
Dawid Dahl is a full-stack developer at [UMAIN](https://www.umain.com/) | [ARC](https://arc.inc/). In his free time, he enjoys metaphysical ontology, analog synthesizers, consciousness, Huayan and Madhyamika Prasangika philosophy, and being with friends and family.
>For those keen to dive deeper into `function calling` with LLMs, in [this article](https://dev.to/dawiddahl/function-calling-the-most-significant-ai-feature-since-chatgpt-itself-81m) I offer another thorough exploration of the topic. | dawiddahl |
1,744,635 | Car Transportation Services in Noida | Car transportation services in Noida offer a range of solutions for vehicle relocation. These... | 0 | 2024-01-29T11:06:23 | https://dev.to/dtcexpresspackers/car-transportation-services-in-noida-2dfp | transport | **[Car transportation services in Noida](https://dtcexpress.in/carrier/car-transport-services-noida/)** offer a range of solutions for vehicle relocation. These services typically include packers movers, container facility, safe packing, and shipping for all types of vehicles. The cost of the service depends on factors such as the type of vehicle, distance, and urgency of transport. Several companies in Noida provide car transport services, including Dtc Express Packers and Movers, Top Car Shifting Service in Noida , Car Movers in Noida, Professional Car Carrier service, car Carriers, Dtc Express Packers and Movers, All India Car Transport Company, affordable Car Transport Company, trusted Car Transport Noida. Safe Car Carrier, and Car Transport. These companies offer various features & have different ratings, so it's advisable to compare their services and choose the one that best suits your needs.
| dtcexpresspackers |
1,744,644 | DFRobot Excels At Bett London 2024: Leading STEM Innovations | DFRobot, a global STEM education leader, showcased cutting-edge STEM solutions at Bett 2024,... | 0 | 2024-01-29T11:16:08 | https://dev.to/grossben794/dfrobot-excels-at-bett-london-2024-leading-stem-innovations-3hi2 |

DFRobot, a global STEM education leader, showcased cutting-edge STEM solutions at Bett 2024, including the groundbreaking 'Carbon-Neutral and Sustainable City' [IoT solution](https://www.techdogs.com/td-articles/curtain-raisers/a-complete-guide-to-industrial-iot-solutions). Standout exhibits featured Boson Lamp for hands-on lamp scenario interactions without coding, Maqueen Smart Construction Vehicle streamlining tasks via QR codes and visual sensors, and Unihiker, a user-friendly Python learning and AI development computer. Unihiker's versatility in intelligent vehicle systems and fruit classification projects was demonstrated. The Lark Weather Station, providing real-time weather data, was exhibited connected to Unihiker, offering educational insights into atmospheric conditions. DFRobot encourages personalized discussions at Stand NJ72 to guide schools in optimizing their STEM resources.
More Information : https://www.techdogs.com/td-articles/curtain-raisers/a-complete-guide-to-industrial-iot-solutions | grossben794 | |
1,744,658 | Mahaveerbook - India’s safest Online Cricket ID Provider. | Welcome to Mahaveerbook : India’s safest Online Cricket ID Provider. In India, Mahaveerbook has... | 0 | 2024-01-29T11:33:41 | https://dev.to/onlinebetting34/mahaveerbook-indias-safest-online-cricket-id-provider-3kjp | onlinecricketid, cricketbettingid, cricketbettingidprovider, onlinebettingid | Welcome to Mahaveerbook : India’s safest Online Cricket ID Provider.
In India, Mahaveerbook has established itself as the leading online cricket ID provider, offering cricket fans a safe and secure platform. In today's digital age, where online cricket communities are growing, having a reliable and trustworthy platform to connect with other fans and receive the latest developments is important. Mahaveerbook separates itself from the competition by focusing on client protection and privacy, offering unique features, and providing a seamless user experience. This article delves into the significance of online cricket ID providers in 2024, explores the features and benefits of Mahaveerbook's service, highlights its emphasis on security and user privacy, discusses the user experience and interface, showcases testimonials from satisfied users, and provides insights into Mahaveerbook's future developments and expansion plans in the field of online cricket.
The Importance of Online Cricket ID Providers
Facilitating Access to Cricket-related Information and Services
[Online cricket ID](https://mahaveerbook.com/) providers like Mahaveerbook offer a one-stop platform for accessing a wealth of cricket-related information and services. From player statistics and team updates to match schedules and ticket bookings, these platforms make it easy for cricket fans to stay up-to-date with the latest happenings in the cricketing world.
Enhancing the Overall Cricket Experience for Fans
Gone are the days of scrambling for television remote controls or refreshing multiple websites to get the latest cricket updates. With an online cricket ID provider like Mahaveerbook, fans can have a seamless and immersive cricket experience. From live scores and highlights to expert analysis and interactive forums, these platforms bring the excitement of the game right to your fingertips.
Features and Benefits of Mahaveerbook
User-friendly Interface and Seamless Navigation
Mahaveerbook prides itself on its intuitive user interface, making it easy for fans of all ages and technical backgrounds to navigate through the platform effortlessly. No more head-scratching or frustration - just pure cricket enjoyment.
Live Score Updates and Match Highlights
Mahaveerbook's live score updates and match highlights feature ensure that you never miss a ball, a wicket, or a fantastic boundary. Stay as close to the action as if you were sitting in the grandstand, minus the high-priced stadium snacks. Whether you're following your favourite team or just keeping up with the latest cricket news, Mahaveerbook keeps you informed in real time. It's similar to carrying around a pocket-sized personal scorecard.
Comprehensive Cricket Profiles and Personalized Settings
Mahaveerbook offers comprehensive cricket profiles for players, teams, and tournaments on the [Betting ID App](https://mahaveerbook.com/). Whether you're a fan of a particular player or want to keep track of upcoming matches, you can customize your settings to receive tailored updates and notifications, ensuring that you never miss a moment of cricketing action.
Real-time Updates and Notifications
Staying up to date with live scores, wickets, and match progress has never been easier. Mahaveerbook provides real-time updates and notifications, so you're always in the know, even if you're on the go.
Data security with encrypted technology
With cyber threats lurking around every corner, Mahaveerbook prioritises high-level security. To protect users' personal information, we implement rigorous data security protocols and encryption technology. Our cutting-edge security infrastructure protects your data, giving you peace of mind as you enjoy your online cricket experience.
Some other Popular Betting
Fairexch9 Com New ID
[Fairexch9.com New ID](https://mahaveerbook.com/fairexch9-com-new-id) is here to make the process of obtaining a new identification document as simple and safe as possible. Whether you require a driver's licence, a passport, or another type of identity, we have you covered. We prioritise your happiness and safety. Our skilled professionals work hard to verify that each ID we give is legitimate, correct, and in accordance with applicable requirements. You can rely on our competence in document processing and verification to achieve remarkable outcomes.
Lotusbook247 new id
[Lotusbook247 ID](https://mahaveerbook.com/lotusbook247-new-id) is your entry point into the betting world, where you may test your talents in a variety of sports, like cricket, hockey, football, horse racing, and others. Other possibilities for casino games betting include Baccarat, Roulette, Blackjack, Teen Patti, poker, and live dealer games. You must complete the Lotusbook247 ID Registration process in order to receive a unique identification number known as your Cricket betting ID or sports betting ID, which is the key to entering the betting Platform with expert guidance and becoming a winner.
Tigerexchange new id
Tiger Exchange ID stands out as a critical component, particularly for cricket, casino, and poker players. It is packed with great features that will let you access the world of casino games, live dealer games, slot games, card games, roulette, teen patti, lucky7, blackjack, Baccarat, and a variety of other gaming possibilities for making money through betting and casino games. Betting will be easier and safer with your privacy secured. If you want to get the most out of your cricket betting, choose [Tiger Exchange ID](https://mahaveerbook.com/tigerexchange-new-id).
Conclusion
Mahaveerbook has established itself as the safest and greatest **online betting ID** provider in 2024, catering to the needs of cricket players around the globe. Mahaveerbook offers cricket fans a faultless and engaging experience because of its strong security measures, user-friendly layout, and a number of intriguing features. As the online cricket community expands, Mahaveerbook is committed to developing its services, broadening its reach, and providing an unparalleled platform for cricket fans to interact, engage, and keep informed about their beloved sport. Cricket lovers may feel confident about their online cricket trip with Mahaveerbook since they know they are part of a safe and vibrant community.
| onlinebetting34 |
1,744,740 | Xây Nhà Trọn Gói TPHCM | Công Ty CP Thiết Kế Nội Thất và Xây Dựng Mai Việt là doanh nghiệp chuyên về tư vấn, thiết kế kiến... | 0 | 2024-01-29T13:16:37 | https://dev.to/xaynhatrongoitphcm/xay-nha-tron-goi-tphcm-250p | beginners, webdev, tutorial, programming | Mai Viet Interior Design and Construction JSC (Công Ty CP Thiết Kế Nội Thất và Xây Dựng Mai Việt) is a company specializing in consulting, architectural design, and turnkey home construction in Ho Chi Minh City, with a team of dedicated architects and engineers.
Focused on customer satisfaction, the company has extensive experience across many projects, from homes and offices to hotels, and always strives to deliver absolute customer satisfaction, which is our pride.
Address: 190/34 Bùi Văn Ngữ, Hiệp Thành Ward, District 12, Ho Chi Minh City
Phone: 0978626365
Email: kientrucmaiviet@gmail.com
Tags: #xaynhatrongoitphcm #xaynhatrongoi #giaxaynhatrongoi #xaydungnhatrongoitphcmmaiviet #kientrucmaiviet
Website: https://maiviet.vn/dich-vu/xay-nha-tron-goi-hcm.html
Google Site: https://sites.google.com/view/xay-nha-tron-goitphcm/xaynhatrongoitphcm
Social:
https://webanketa.com/forms/6grkje1q60qk8d9r6rtp6d9r/
https://www.sbnation.com/users/xaynhatrongoitphcm1
https://influence.co/xaynhatrongoitphcm1
https://www.ourbeagleworld.com/members/xaynhatrongoitphcm.256769/
https://dev.to/xaynhatrongoitphcm | xaynhatrongoitphcm |
1,744,826 | Networking Is Just Making Friends | Let’s get down to business. We’re chatting with the infinite source of wisdom that is Shruti Kapoor,... | 0 | 2024-01-29T14:22:28 | https://dev.to/tdesseyn/networking-is-just-making-friends-1j0e | career, watercooler | Let’s get down to business. We’re chatting with the infinite source of wisdom that is Shruti Kapoor, Staff Engineer at Slack and formerly Paypal. What are we talking about? Jobs, duh. Catch the whole convo [here](https://www.linkedin.com/video/live/urn:li:ugcPost:7155981782943678464/).
**TL;DR: Networking isn’t just cheesy, hand-shake business connections, it’s about making friends. And you’re probably not doing as much as you should to prep for your next interview.**
We’ve got good news and bad news to start. And both are the same news. You never know when your next layoff will happen. The bad side is that even if your business had its most successful year yet, outside of spending a couple grand talking to whoever replaced Miss Cleo, you can’t predict when the next layoff will happen. The good news in this is that you get to make moves now to fully prepare yourself for when the day comes.
So what does prepping for career doomsday look like? We always talk about keeping that resume extra crispy and keeping an active log of what you’re actually doing/accomplishing at work. But we haven’t really touched on networking inside your company. If you weren’t logging on every day or sharing an office with someone, would there be a space where you regularly connect with your coworkers? I mean you already know them, so why not give them an add on LinkedIn and stay in touch. And on the note of networking— why does it always feel so cringey? Like it’s something your dad taught you and you’re supposed to be wearing khaki pants and talking about ‘hitting the range’ this weekend. Let’s take a moment to get rid of that mental image and replace it with one of just making friends.
That’s all networking is at the end of the day. And, soft shoutout to my introvert friends, if making friends seems too difficult, try approaching people in settings where there’s an easy and obvious subject. At a conference (cough cough) that could be approaching a speaker about questions you had on their topic. In an online space, you can comment about my corgi literally any time. Every compliment goes straight to his ears, apparently. You get the point. It doesn’t have to be some awkward convo where you just say “stocks.” back and forth. Go make some job friends so they’re there when you need them most.
Okay, now some quick interview advice. As much information as you can get up front the better. This may look like reaching out to the recruiter that contacted you and asking what the interview will be about. Good recruiters have already invested their time in you, and they’ll want you to succeed. Getting info can also look like asking questions to the hiring manager ahead of the interview. Ask for an outline of what will be discussed. For specifics, go back up and watch the full show. Also don’t forget to bring a list of questions with you. It seems simple enough, but one of the biggest determining factors on candidates getting hired is how interested they seem in the position. We’re flooded with similar talent and experience, so going one step beyond makes all the difference. And if you’re going after roles in areas you’re already passionate about or a product you love, this shouldn’t be a problem.
Final housekeeping note: here are some upcoming Gun.io events that you don’t want to miss out on, because we really do want to hang out:
January 29-31 - THAT Conference in Round Rock, TX
Not only are we hosting the opening happy hour, but our very own Dev Advocate (me) will be giving a talk titled "Why Building Community & Content Can Launch Your Career."
February 8 - Gun.io X Couchbase in Nashville, TN
We're joining our friends at Couchbase for an evening of tech talks, networking, and, of course, plenty of food and drinks. If you'll be in town and want to join, shoot us a message and we'll send you more details! | tdesseyn |
1,744,845 | How to connect to Mongo database using Mongo shell on Ubuntu? | Introduction MongoDB, a widely used NoSQL database, provides developers with a flexible... | 0 | 2024-01-29T20:56:33 | https://dev.to/particle4dev/how-to-connect-to-mongo-database-using-mongo-shell-2145 | mongodb, terminal, ubuntu, mongodbcrashcourse | # Introduction
MongoDB, a widely used NoSQL database, provides developers with a flexible and scalable solution for managing large datasets. Connecting to a MongoDB database using the Mongo Shell is an essential skill for both developers and administrators. In this article, we will guide you through the step-by-step process of connecting to a MongoDB database using the Mongo Shell.
## Prerequisites
To utilize the MongoDB Shell, you need a MongoDB deployment to connect to.
- For a free cloud-hosted deployment, consider using [MongoDB Atlas](https://www.mongodb.com/cloud/atlas).
- If you want to run a local MongoDB deployment, refer to the [Install MongoDB](https://www.mongodb.com/docs/manual/installation/) documentation.
**Supported MongoDB Versions:**
You can use the MongoDB Shell to connect to MongoDB version 4.2 or greater.
## Step 1: Install mongosh on Ubuntu 22.04 (Jammy)
1. Install gnupg and its required libraries:
```sh
sudo apt-get install gnupg
```
2. Import the MongoDB public GPG key:
```sh
wget -qO- https://www.mongodb.org/static/pgp/server-7.0.asc | sudo tee /etc/apt/trusted.gpg.d/server-7.0.asc
```
3. Create the `/etc/apt/sources.list.d/mongodb-org-7.0.list` file for Ubuntu 22.04 (Jammy):
```sh
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
```
4. Reload the local package database:
```sh
sudo apt-get update
```
5. Install the `mongosh` package:
```sh
sudo apt-get install -y mongodb-mongosh
```
6. Confirm that `mongosh` installed successfully:
```sh
mongosh --version
```
## Step 2: Connect to MongoDB Atlas with mongosh
To establish your connection, run the `mongosh` command with your connection string and options.
The command includes the following elements:
- Your cluster name
- A hash
- A flag for the API version
- A flag for the username you want to use to connect
It resembles the following string:
```sh
mongosh "mongodb+srv://YOUR_CLUSTER_NAME.YOUR_HASH.mongodb.net/" --apiVersion YOUR_API_VERSION --username YOUR_USERNAME
# for example
mongosh "mongodb+srv://db.fakeuri.mongodb.net/" --apiVersion 1 --username admin
```
## Step 3: Run Commands
### Switch Databases
To display the current database, use the following command:
```sh
db
```
This operation should return `test`, which is the default database. To switch databases, use the `use <db>` helper, as shown in the example below:
```sh
use <database>
```
### Create a New Database and Collection
To create a new database, use the `use <db>` command with the desired database name. For instance, the following commands create the `myNewDatabase` database and the `myCollection` collection using the `insertOne()` operation:
```sh
use myNewDatabase
db.myCollection.insertOne( { x: 1 } );
```
If the collection does not exist, MongoDB creates it when you first store data for that collection.
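To confirm the write, you can read the document back. `find()` with no arguments returns every document in the collection, while `findOne()` returns the first document matching the filter:

```sh
db.myCollection.find()
db.myCollection.findOne( { x: 1 } )
```

Both commands assume you are still on `myNewDatabase` from the `use` command above.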
### Terminate a Running Command
To terminate a running command or query in mongosh, press `Ctrl + C`. When you enter `Ctrl + C`, mongosh:
- Interrupts the active command.
- Tries to terminate the ongoing, server-side operation.
- Returns a command prompt.
If mongosh cannot cleanly terminate the running process, it issues a warning.
## Conclusion
Connecting to a MongoDB database using the Mongo Shell is a straightforward process that involves a few simple steps. Whether you're a developer building applications or an administrator managing databases, mastering the basics of the Mongo Shell is crucial for efficient MongoDB usage. Now that you've learned the essentials, you can explore further commands and functionalities to harness the full power of MongoDB for your projects. | particle4dev |
1,744,848 | [Cloudforet] Enable AWS Billing Plugin | Cloudforet Cost Explorer Cloudforet can provide Cloud Billing information such as AWS,... | 0 | 2024-01-29T15:42:00 | https://dev.to/choonho/cloudforet-enable-aws-billing-plugin-3oma | # Cloudforet Cost Explorer
Cloudforet can provide cloud billing information from providers such as AWS, Azure, and GCP.
To enable Cost Explorer, you need to execute a CLI command.
> Sorry, we don't have a frontend page yet for enabling the Cost Analysis API call.
# Enable CLI tool, spacectl
To enable this feature, you have to understand the ***spacectl*** CLI tool.
> See how to enable spacectl. https://github.com/orgs/cloudforet-io/discussions/139
# Enable Cost Explorer (AWS Billing Plugin)
The overall process consists of 4 steps:
1. Access the spacectl POD
2. Register the AWS Billing DataSource plugin
3. Enable the AWS Billing DataSource plugin
4. Sync the AWS Billing DataSource plugin
## 1. Access the spacectl POD
The suffix of the spacectl POD name differs between environments.
```
kubectl get pod -n spaceone
repository-667d88c474-8rzvh 1/1 Running 0 46h
secret-7944cdb66d-9qslg 1/1 Running 0 46h
spacectl-c546c4d8c-ztlnr 1/1 Running 0 46h
statistics-5585fb5848-4qxlm 1/1 Running 0 46h
statistics-scheduler-85cb8c5b9f-26mq8 1/1 Running 0 46h
kubectl exec -ti spacectl-c546c4d8c-ztlnr -n spaceone -- /bin/bash
```
## 2. Register AWS Billing DataSource Plugin
We assume that you have configured spacectl API token already.
Create aws_data_source.yaml
```
---
name: AWS Cost Explorer Data Source
data_source_type: EXTERNAL
provider: aws
secret_type: USE_SERVICE_ACCOUNT_SECRET
plugin_info:
plugin_id: plugin-aws-cost-explorer-cost-datasource
```
To register the DataSource, run:
```
spacectl exec register cost_analysis.DataSource -f aws_data_source.yaml
```
It may take quite a long time to deploy the AWS Billing plugin, perhaps 5~10 minutes.
After registering the plugin, you can find the ***data_source_id*** in the response.
You can also look up the data_source_id like this:
```
spacectl list cost_analysis.DataSource -c data_source_id,name
```
## 3. Enable DataSource
After registering the AWS Billing plugin, you have to enable the DataSource.
```
spacectl exec enable cost_analysis.DataSource -p data_source_id=<Data Source ID>
```
After enabling the DataSource, you will see that the ***Cost Explorer*** menu in the web browser has changed.
This step just enables the plugin and registers a daily sync-up scheduler.
## 4. Sync DataSource
To see the real data right now, sync the data manually:
```
spacectl exec sync cost_analysis.DataSource -p data_source_id=<Data Source ID>
```
## Enjoy Cost Explorer

> If you have any question, please visit Cloudforet Discord channel. https://discord.gg/7ExpTmA6TE
# References
AWS cost explorer plugin
https://github.com/cloudforet-io/plugin-aws-cost-explorer-cost-datasource
| choonho | |
1,744,973 | CODER CYBER SERVICES - BITCOIN OR CRYPTO RECOVERY EXPERT | Instagram is now a breeding ground for scammers on the search for unsuspecting users whom they will... | 0 | 2024-01-29T16:24:01 | https://dev.to/barragan258/coder-cyber-services-bitcoin-or-crypto-recovery-expert-48d2 | Instagram is now a breeding ground for scammers on the search for unsuspecting users whom they will sell their game and flaunt their huge profits from cryptocurrency. I thought I’d struck a gold mine. I ended up messaging this broker and she went on this long spiel about Bitcoin mining and how it’s profitable, I gave it a long thought plus the strategy they used produced eye-watering returns of 50 percent per month. I was initially skeptical so a few months later I decided to invest and sent them $50 as a test, a month later I was sent back $50 along with another $30 of my profit. Shocked in disbelief I sent hundreds of dollars, then thousands, it didn't take long until I started telling friends and family who even sent more money.
One of my best mates sold his car for $10,000 and put all that money in, and it disappeared. All up, my friends and I lost over $100,000 to this scam, which caused me immense stress and embarrassment plus some of my friends decided not to talk to me anymore. It was like my integrity just vanished all of a sudden, because I’d convinced my friends, I’d shown them my profits and I was actively promoting it, almost like a salesman for her. I tried to go to the police and they said we’ve only lost $100,000. They know people who have lost millions, and this shattered every hope I had of recovering my money or tracing these criminals.
I found CODER CYBER SERVICES with the help of our new intern, who referred me to give hackers a try, and I’m really glad I listened. With the support of Coder Cyber Services my assets were recovered, and I was able to return to investing, though only in stocks now; I stayed away from cryptocurrencies after the scam experience.
Many times we can’t avoid the negative patterns financially, but if it’s a wrong investment at the hands of scammers, “CODER CYBER SERVICES” has you covered. +1 (571) 591-1233 is their contact number, or to know more about them don’t hesitate to visit their website https://codercyberservices.info/. | barragan258 |
1,745,250 | RubyConf: Scholar Edition | My first developer conference: RubyConf! I'm thrilled to share my incredible experience at RubyConf... | 0 | 2024-01-29T21:11:00 | https://dev.to/avikdal/rubyconf-scholar-edition-24n5 | ruby, beginners, rubyconf, learning | **My first developer conference: RubyConf!**
I'm thrilled to share my incredible experience at RubyConf 2023 and why I wholeheartedly recommend it, especially for fellow junior developers. Thanks to the [Scholars & Guides Program](https://rubycentral.org/scholars_guides_program/) by Ruby Central, I secured a scholarship to attend this remarkable conference held in San Diego, CA.

The Scholars and Guides Program is a mentorship initiative for aspiring Rubyists looking to deepen their knowledge of the Ruby programming language, enhance their professional skills, and navigate Ruby Central conferences like RubyConf and RailsConf. Ruby Central provides scholarships specifically for developers new to the Ruby community, aiming to build important professional connections. While the program welcomes community members interested in learning Ruby, it strongly encourages applicants from underrepresented tech communities. Each scholar is paired with a Guide who serves as a mentor, offering valuable insights and advice on Ruby programming and the developer profession, creating a rewarding conference experience. Explore the various conference scholarship offerings by Ruby Central [here](https://rubycentral.org/scholars_guides_program/).
As a recent career switcher and a junior developer fresh from boot camp, I was excited about attending my first dev conference. I knew several people who had been to RubyConf 2022, and they spoke so highly of the program I was delighted to have the opportunity to go as a scholar.

**Conf Scholar Experience**
Connecting with fellow Scholars and my Guide before and during RubyConf via Slack, LinkedIn, and Discourse allowed for resource-sharing and networking before, during, and after the event. My Guide, reaching out proactively before the conference, offered unwavering support. A recent addition to Scholar requirements is a mini-project, an opportunity to present any subject or research to your program peers or a larger audience at the 5-minute Lightning Talks. Ideas include community awareness, ongoing projects, passion research, or learning new coding skills. These projects facilitate skill growth and inspire others through your findings.
Attending a conference can be overwhelming. To ease the conference experience, I planned my goals and agenda, including prioritized talks, speakers, and activities. Then, I collaborated with my Guide for feedback and support.
As a past RubyConf attendee and speaker, my Guide shared insider gems like the "MINSWAN" principle, emphasizing the friendly atmosphere fostered by "Matz is nice, so we are nice." This notion was held during my lightning talk, where, amidst confessing my nerves, the audience erupted in cheers and applause, showcasing a remarkably supportive community. Many fellow scholars also embraced the opportunity to participate in lightning talks, and you can catch those [here](https://www.youtube.com/watch?v=tNcfYLuQ5Es).

While I won't summarize all the conference talks, you can watch them on Ruby Central's YouTube channel [here](https://www.youtube.com/c/RubyCentral). They were diverse and entertaining, and although a lot of it went over my head, a thread I noticed during the event was a message of curiosity and playfulness in coding, and that was an unexpected albeit refreshing encouragement.
**Takeaways from a First-Timer's Perspective**
1. Embrace the unfamiliar during talks; take notes and create action items for post-conference exploration, adding a dynamic layer to your learning journey.
2. Get to know your fellow Rubyists! Take advantage of the "Hallway Track,” meaning you should set aside time to chat spontaneously with others in the hallway between scheduled talks.
3. For career changers, refine your pitch and connect with others for inspiring insights on unconventional paths in software engineering.
4. Explore a new city, engage in social events, and consider trying the tradition of karaoke – it's incredibly enjoyable if you're up for a late night! Embrace new experiences with your conference buddies, as you may discover surprising things about yourself during this time.
5. Collect fun swag—stickers, bags, t-shirts, and more—if you're into it!
**Thank You, Ruby community & Ruby Central!**
In conclusion, RubyConf was a wonderful experience, and I hope to return as a Guide or Volunteer someday. A heartfelt thank you to the welcoming Ruby community and Ruby Central team, the incredible RubyConf organizers, and my fellow Scholars & Guides!

_Photos taken by Harmony Haft, San Diego Conference Photography_ | avikdal |
1,745,337 | Checky - Verify Phone Number API | Introducing Checky Verify Phone Number API – a powerful tool to determine the validity of phone... | 0 | 2024-01-29T23:22:10 | https://dev.to/kidddevs/checky-verify-phone-number-api-46k2 | programming, api, dakidarts, verification | Introducing Checky Verify Phone Number API – a powerful tool to determine the validity of phone numbers, retrieve comprehensive carrier information, and discover valid phone numbers within text inputs. Whether you're building applications in e-commerce, finance, marketing, or other domains, Checky is designed to enhance your services in various ways.
## Table of Contents
- [Features](#features)
- [Phone Type Verification](#1-phone-type-verification)
- [Carrier Information](#2-carrier-information)
- [Find Phone Numbers (New)](#3-find-phone-numbers-new)
- [Potential Advantages](#potential-advantages)
- [Get Started](#get-started)
- [Usage](#usage)
- [Issues](#issues)
## Features:
### 1. Phone Type Verification
- **Real-time Validation:** Determine the validity of a given phone number in real-time.
- **Phone Line Identification:** Obtain information about the type of phone line connected (e.g., "MOBILE").
- **Carrier Details:** Retrieve carrier information for enhanced insights.
- **Timezone and Geolocation:** Access timezone and geolocation data for a comprehensive understanding.
### 2. Carrier Information
- **Comprehensive Data:** Retrieve detailed information about the carrier or service provider.
- **Accurate Timezone Data:** Access essential timezone information associated with the phone number.
- **Enhanced Geolocation:** Obtain accurate geolocation information for improved analysis.
- **Fraud Prevention:** Perfect for identifying and preventing fraudulent activities.
### 3. Find Phone Numbers (New)
- **Text Input Discovery:** Discover valid phone numbers within a given text input.
- **Data Extraction:** Retrieve information about phone type, carrier, timezone, and location.
- **Geospatial Analysis:** Enhance geospatial analysis with detailed location details.
## Potential Advantages:
- **Enhanced Data Quality:** Ensure accuracy in your database by validating phone numbers.
- **Fraud Prevention:** Identify and prevent fraudulent activities through phone number verification.
- **Geospatial Analysis:** Improve geospatial analysis and location-based services with accurate geolocation data.
- **Customer Trust:** Build trust with customers by validating their phone numbers and ensuring secure communication.
- **Versatility:** Suitable for a wide range of use cases, including e-commerce, finance, marketing, and more.
## Get Started:
Explore the capabilities of Checky and integrate its functionalities to improve your applications, streamline customer interactions, and enhance the security of your services.
1. **Visit [Checky on RapidAPI](https://rapidapi.com/kidddevs/api/checky-verify-phone-number) for more details.**
## Usage:
1. **Visit [Checky on RapidAPI](https://rapidapi.com/kidddevs/api/checky-verify-phone-number) for information on how to make API requests and test the functionalities.**
## Issues:
Encountered a problem or have a question? Feel free to [open an issue](https://github.com/dakidarts/checky-verify-phone-number-api/issues) on GitHub.
Happy coding! 🌟
| kidddevs |
1,745,428 | 🚀Navigating the GraphQL Galaxy🌌: A Comprehensive Roadmap for Developers 🚀 | Embarking on the journey of mastering GraphQL can be an exhilarating experience. This powerful query... | 0 | 2024-01-30T01:44:14 | https://dev.to/mohitkadwe19/navigating-the-graphql-galaxy-a-comprehensive-roadmap-for-developers-43nm | javascript, webdev, graphql, programming | Embarking on the journey of mastering GraphQL can be an exhilarating experience. This powerful query language has revolutionized the way developers interact with APIs, offering flexibility, efficiency, and a holistic approach to data fetching. Whether you're a seasoned developer or just starting, this comprehensive roadmap will guide you through the GraphQL galaxy, helping you explore its various facets.
## 🌐 GraphQL Basics: Laying the Foundation
**Schema Definition 📄**
Understand the fundamental building blocks of GraphQL by defining a schema that outlines your data types and their relationships.
**Types 📊**
Dive into scalar and custom types, exploring how they shape your data and define the core of your GraphQL schema.
**Queries, Mutations, and Subscriptions 📝💥🔄**
Master the art of querying data, executing mutations, and handling real-time updates through subscriptions.
**Directives 🛠️**
Learn to wield directives to control the execution of your GraphQL operations, providing a dynamic and customizable experience.
## ⚙️ GraphQL Operations: Behind the Scenes
**Query Resolution, Mutation Execution, and Subscription Handling 📤⚡🎉**
Delve into the inner workings of GraphQL operations, understanding how queries are resolved, mutations executed, and subscriptions handled.
## 🧑💻 GraphQL Schemas & Types: Crafting Your Data Model
**Scalar Types and Enum Types 🌈🌐**
Explore the diverse range of scalar types and delve into enumerations to enhance your data modeling capabilities.
**Input Types 📥**
Master the use of input types to facilitate cleaner and more maintainable mutations in your GraphQL schema.
## 🧰 GraphQL Resolvers: Bridging the Gap
**Query and Mutation Resolvers, Resolver Arguments 📤💥🎯**
Become proficient in writing resolvers, handling query and mutation resolution, and effectively managing resolver arguments.
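The resolver signature `(parent, args, context)` is worth internalizing before reaching for any framework. Below is a minimal plain-JavaScript sketch — the `db` object, field names, and data are invented for illustration — showing how a server invokes query and mutation resolvers:

```javascript
// Plain-function resolvers — no framework, just the (parent, args, context)
// signature a GraphQL server would call.
const db = {
  users: [
    { id: "1", name: "Ada" },
    { id: "2", name: "Grace" },
  ],
};

const resolvers = {
  Query: {
    // Look up a user by id from the data source carried in context.
    user: (_parent, { id }, context) =>
      context.db.users.find((u) => u.id === id) ?? null,
  },
  Mutation: {
    // Mutate the data source, then return the updated entity.
    renameUser: (_parent, { id, name }, context) => {
      const user = context.db.users.find((u) => u.id === id);
      if (user) user.name = name;
      return user;
    },
  },
};

// What the server does per field: invoke the resolver with parsed arguments.
const ctx = { db };
console.log(resolvers.Query.user(null, { id: "2" }, ctx).name); // Grace
resolvers.Mutation.renameUser(null, { id: "2", name: "Hopper" }, ctx);
console.log(resolvers.Query.user(null, { id: "2" }, ctx).name); // Hopper
```

Framework resolvers (Apollo, Yoga, etc.) follow this exact shape; the framework only handles parsing the query and routing each field to its function.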
## 🚀 Apollo Client: Conquering Client-Side GraphQL
**Setup, Queries, Mutations, and Caching 🛠️📝💥🔄**
Navigate the world of Apollo Client, from initial setup to performing queries, mutations, and efficient caching.
## 🔄 Relay Framework: Unleashing GraphQL’s Full Potential
**Node Interface, Connections, and Pagination 🌐🔗📄**
Unlock the advanced features of GraphQL with the Relay framework, handling connections, nodes, and paginating through large datasets.
## 🔒 Authentication & Authorization: Securing Your GraphQL API
**JWT Authentication and Role-Based Access Control 🌐🔐🧑💼**
Implement secure authentication using JSON Web Tokens and control access with role-based authorization.
## 🚨 Error Handling: Navigating the Troubles
**GraphQL Errors and Error Extensions ❌🚫🚀**
Learn effective error handling strategies, understanding and extending GraphQL errors for a smoother development experience.
## 🌟 GraphQL Best Practices: Mastering Efficiency
**Batched Resolvers, DataLoader, Debouncing & Throttling 🔄⚡⏲️**
Discover best practices for optimizing GraphQL performance, including batching resolvers, using DataLoader, and managing data flow.
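The core trick behind DataLoader is batching: all keys requested during the same tick are collected and resolved with a single backend call, which defuses the classic N+1 query problem in resolvers. Here is a stripped-down sketch of the idea in plain JavaScript (this is not the real `dataloader` API, which also adds caching and error handling):

```javascript
// Minimal DataLoader-style batcher: collects keys requested in the same
// tick and resolves them all with one batch call.
function makeLoader(batchFn) {
  let keys = [];
  let pending = [];
  return function load(key) {
    keys.push(key);
    const promise = new Promise((resolve) => pending.push(resolve));
    // First load of this tick schedules a flush after sync code finishes.
    if (keys.length === 1) {
      queueMicrotask(async () => {
        const batchKeys = keys; keys = [];
        const resolvers = pending; pending = [];
        const results = await batchFn(batchKeys); // ONE call for the batch
        results.forEach((r, i) => resolvers[i](r));
      });
    }
    return promise;
  };
}

// Usage: several loads in one tick trigger a single batch call.
let batchCalls = 0;
const userLoader = makeLoader(async (ids) => {
  batchCalls += 1;
  return ids.map((id) => ({ id, name: `user-${id}` }));
});

Promise.all([userLoader("1"), userLoader("2"), userLoader("3")]).then((users) => {
  console.log(users.map((u) => u.name).join(","));
});
```

In a GraphQL server, `batchFn` would be a single `SELECT ... WHERE id IN (...)` (or equivalent), replacing one query per resolved field with one query per request tick.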
## 🌐 Federation: Embracing Microservices
**Microservices Architecture, Service Definition, and Entity Resolution 🚀📄🎯**
Dive into the world of GraphQL federation, architecting scalable systems using microservices and resolving entities seamlessly.
## ⚙️ Testing GraphQL: Ensuring Reliability
**Unit Testing, Integration Testing, and Mocking 🧪🤝🃏**
Learn the art of testing GraphQL applications, from unit tests to integration tests and effective mocking strategies.
## 🧰 GraphQL Tools & Libraries: Leveraging the Ecosystem
**Apollo Server, Relay, Prisma, and GraphQL Yoga 🚀🔗🛠️🧘**
Explore the rich GraphQL ecosystem, utilizing tools like Apollo Server, Relay, Prisma, and GraphQL Yoga to streamline development.
## 🌍 Real-world GraphQL: Applying Your Knowledge
**Building APIs, Optimizing Queries, and Handling Large Datasets 🚀⚡📊**
Apply your GraphQL expertise to real-world scenarios, building APIs, optimizing queries, and efficiently handling large datasets.
**🔄🌐 GraphQL vs REST: Navigating the Paradigm Shift**
Explore the fundamental differences between GraphQL and REST, understanding when and why GraphQL might be the preferred choice.
**🚀🔮 Future of GraphQL: What Lies Ahead**
Stay ahead of the curve by exploring the evolving landscape of GraphQL and anticipating future trends and advancements.
## 🤝📚 Community & Resources: Joining the GraphQL Movement
- Conferences & Meetups 🌍🤝
- Documentation 📚
- Online Communities 🌐💬
Connect with the vibrant GraphQL community, attend conferences, explore documentation, and engage in online forums to enhance your learning experience.
## 🚀 GraphQL Knowledge Galaxy 🌌
```
|
|── GraphQL Basics 🚀
| ├── Schema Definition 📄
| ├── Types 📊
| | ├── Scalar Types 🌈
| | └── Custom Types 🧑💼
| ├── Queries 📝
| | ├── Basic Queries 🔍
| | ├── Query Variables 🧾
| | └── Fragments 🧩
| ├── Mutations 💥
| ├── Subscriptions 🔄
| └── Directives 🛠️
|
|── GraphQL Operations ⚙️
| ├── Query Resolution 📤
| ├── Mutation Execution ⚡
| └── Subscription Handling 🎉
|
|── GraphQL Schemas & Types 🧑💻
| ├── Scalar Types 🌈
| | ├── Int 🕵️♂️
| | ├── Float 🌊
| | ├── String 📝
| | ├── Boolean ✔️
| | └── ID 🆔
| ├── Enum Types 🌐
| └── Input Types 📥
|
|── GraphQL Resolvers 🧰
| ├── Query Resolvers 📤
| ├── Mutation Resolvers 💥
| └── Resolver Arguments 🎯
|
|── Apollo Client 🚀
| ├── Setup 🛠️
| ├── Queries 📝
| ├── Mutations 💥
| └── Caching 🔄
|
|── Relay Framework 🔄
| ├── Node Interface 🌐
| ├── Connections 🔗
| └── Pagination 📄
|
|── Authentication & Authorization 🔒
| ├── JWT Authentication 🌐🔐
| └── Role-Based Access Control 🧑💼🔐
|
|── Error Handling 🚨
| ├── GraphQL Errors ❌
| └── Error Extensions 🚫🚀
|
|── GraphQL Best Practices 🌟
| ├── Batched Resolvers 🔄⚡
| ├── DataLoader 📤📥
| ├── Debouncing & Throttling ⏲️
| └── Schema Stitching 🧵🌐
|
|── Federation 🌐
| ├── Microservices Architecture 🚀
| ├── Service Definition 📄
| └── Entity Resolution 🎯
|
|── Testing GraphQL ⚙️
| ├── Unit Testing 🧪
| ├── Integration Testing 🤝
| └── Mocking 🃏
|
|── GraphQL Tools & Libraries 🧰
| ├── Apollo Server 🚀
| ├── Relay 🔗
| ├── Prisma 🛠️
| └── GraphQL Yoga 🧘
|
|── Real-world GraphQL 🌍
| ├── Building APIs 🚀
| ├── Optimizing Queries ⚡
| └── Handling Large Datasets 📊
|
|── GraphQL vs REST 🔄🌐
|
|── Future of GraphQL 🚀🔮
|
|── Community & Resources 🤝📚
| ├── Conferences & Meetups 🌍🤝
| ├── Documentation 📚
| └── Online Communities 🌐💬
|
|____________ END __________________
```
## 🌈💻 Conclusion: Your GraphQL Odyssey Begins
As you navigate this comprehensive GraphQL roadmap, remember that learning is a journey, not a destination. Embrace the challenges, celebrate victories, and contribute to the ever-growing GraphQL community. Your GraphQL odyssey has just begun, and the possibilities are limitless! 🌈💻🚀
Let the GraphQL adventure commence! Share your experiences, insights, and newfound knowledge with the global developer community. Happy coding! 🚀🌐
If you have cool ideas or questions about making your code do awesome things, just drop a comment below! Let's chat and learn together! 👍💬
## Connect with me
Let's stay connected and keep the conversation going! Feel free to connect with me on my social media platforms for updates, interesting discussions, and more. I'm always eager to engage with like-minded individuals🌱, so don't hesitate to reach out and connect. Looking forward to connecting with you all! 🌟
Here's my link: {% embed https://linktr.ee/mohitkadwe %}
| mohitkadwe19 |
1,745,565 | Top Asking SQL vs Mongo Queries | To find the second-highest salary in a MySQL database, you can use a query that utilizes the ORDER BY... | 0 | 2024-01-30T06:22:28 | https://dev.to/akmaurya31/top-asking-queries-1koi | javascript, mongodb, sql, mysql | To find the second-highest salary in a MySQL database, you can use a query that utilizes the `ORDER BY` and `LIMIT` clauses. Assuming you have a table named `employees` with a column named `salary`, here is an example query:
```sql
SELECT DISTINCT salary
FROM employees
ORDER BY salary DESC
LIMIT 1 OFFSET 1;
```
Explanation:
1. `SELECT DISTINCT salary`: This selects all distinct salary values from the `employees` table.
2. `ORDER BY salary DESC`: This orders the salary values in descending order, so the highest salary comes first.
3. `LIMIT 1 OFFSET 1`: This limits the result set to one row starting from the second row. In other words, it skips the first row (highest salary) and retrieves the second row, which is the second-highest salary.
Make sure to replace `employees` and `salary` with the actual table and column names in your database.
In MongoDB, you can use the `aggregate` pipeline to find the second-highest salary. Assuming you have a collection named `employees` with a field named `salary`, here is an example query:
```mongodb
db.employees.aggregate([
{ $group: { _id: null, salaries: { $addToSet: "$salary" } } },
{ $unwind: "$salaries" },
{ $sort: { "salaries": -1 } },
{ $skip: 1 },
{ $limit: 1 }
])
```
Explanation:
1. `$group`: Groups the documents in the collection and creates an array `salaries` containing all unique salary values.
2. `$unwind`: Deconstructs the array `salaries` to transform each element of the array into a separate document.
3. `$sort`: Sorts the documents based on the `salaries` field in descending order.
4. `$skip`: Skips the first document, which corresponds to the highest salary.
5. `$limit`: Limits the result set to one document, which represents the second-highest salary.
This query assumes that the `salary` field in the `employees` collection contains numerical values. Adjust the field names accordingly if your actual field names are different.
Certainly! One of the most commonly asked SQL query-based questions involves finding the second-highest (or nth-highest) value in a particular column. Here's an example of a query to find the second-highest salary in a table:
```sql
SELECT MAX(salary) AS second_highest_salary
FROM employees
WHERE salary < (SELECT MAX(salary) FROM employees);
```
Explanation:
1. The inner subquery `(SELECT MAX(salary) FROM employees)` finds the highest salary in the `employees` table.
2. The outer query then selects the maximum salary from the `employees` table where the salary is less than the highest salary found in the subquery. This effectively retrieves the second-highest salary.
This approach is often considered a classic and is widely used in interviews to assess a candidate's understanding of subqueries and logical thinking. Remember to replace `employees` and `salary` with your actual table and column names.
Keep in mind that there are multiple ways to achieve the same result, and interviewers might be interested in discussing alternative approaches or optimizations.
```mongodb
db.employees.aggregate([
  { $group: { _id: "$salary" } },   // one document per distinct salary
  { $sort: { _id: -1 } },           // highest first
  { $skip: 1 },                     // skip the highest
  { $limit: 1 },
  { $project: { _id: 0, second_highest_salary: "$_id" } }
])
```
**Question 13** Write a query to fetch the EmpFname from the EmployeeInfo table in upper case and use the ALIAS name as EmpName.?
Certainly! Below are examples of queries in both SQL and MongoDB for fetching the `EmpFname` from the `EmployeeInfo` table or collection in upper case and using the alias name `EmpName`.
### SQL Query:
```sql
SELECT UPPER(EmpFname) AS EmpName
FROM EmployeeInfo;
```
Explanation:
- `UPPER(EmpFname)`: Converts the `EmpFname` column values to uppercase.
- `AS EmpName`: Assigns the alias name `EmpName` to the result set.
### MongoDB Query:
```mongodb
db.EmployeeInfo.aggregate([
{
$project: {
EmpName: { $toUpper: "$EmpFname" }
}
}
])
```
Explanation:
- `$project`: Reshapes each document in the collection.
- `EmpName: { $toUpper: "$EmpFname" }`: Uses the `$toUpper` operator to convert the value of `EmpFname` to uppercase and assigns it to the field `EmpName`.
Make sure to adjust the collection and field names according to your actual MongoDB setup.
**Question 14** Write a query to fetch the number of employees working in the department ‘HR’.?
Certainly! Here are the queries for both MySQL and MongoDB to fetch the number of employees working in the department 'HR':
### MySQL Query:
```sql
SELECT COUNT(*) AS NumEmployeesInHR
FROM EmployeeInfo
WHERE Department = 'HR';
```
Explanation:
- `COUNT(*)`: Counts the number of rows that satisfy the condition.
- `WHERE Department = 'HR'`: Filters the rows to include only those where the department is 'HR'.
### MongoDB Query:
```mongodb
db.EmployeeInfo.countDocuments({ Department: 'HR' })
```
Explanation:
- `countDocuments({ Department: 'HR' })`: Counts the documents in the `EmployeeInfo` collection where the department is 'HR'. (The older `count()` method works too, but it is deprecated in modern drivers.)
Adjust the collection and field names according to your actual MongoDB setup.
**Question 15** Write a query to get the current date.?
Certainly! Below are the queries for both MySQL and MongoDB to get the current date:
### MySQL Query:
```sql
SELECT CURDATE() AS CurrentDate;
```
Explanation:
- `CURDATE()`: Returns the current date in MySQL.
### MongoDB Query:
```mongodb
db.EmployeeInfo.aggregate([
  { $limit: 1 },
  { $project: { _id: 0, CurrentDate: "$$NOW" } }
])
```
Explanation:
- In the MongoDB shell you can simply evaluate `new Date()` to get the current date. Inside an aggregation pipeline, the `$$NOW` system variable (available since MongoDB 4.2) returns the current datetime.
These queries will provide you with the current date in both MySQL and MongoDB.
**Question 16** Write a query to retrieve the first four characters of EmpLname from the EmployeeInfo table.
Certainly! Below are the queries for both MySQL and MongoDB to retrieve the first four characters of `EmpLname` from the `EmployeeInfo` table or collection:
### MySQL Query:
```sql
SELECT SUBSTRING(EmpLname, 1, 4) AS FirstFourCharacters
FROM EmployeeInfo;
```
Explanation:
- `SUBSTRING(EmpLname, 1, 4)`: Retrieves the substring of `EmpLname` starting from position 1 (the first character) and with a length of 4 characters.
### MongoDB Query:
```mongodb
db.EmployeeInfo.aggregate([
{
$project: {
FirstFourCharacters: { $substr: ["$EmpLname", 0, 4] }
}
}
])
```
Explanation:
- `$project`: Reshapes each document in the collection.
- `FirstFourCharacters: { $substr: ["$EmpLname", 0, 4] }`: Uses the `$substr` operator to retrieve the substring of `EmpLname` starting from position 0 (the first character) and with a length of 4 characters.
Adjust the collection and field names according to your actual MongoDB setup.
**Question 17** Write q query to find all the employees whose salary is between 50000 to 100000.
Certainly! Here are the queries for both MySQL and MongoDB to find all employees whose salary is between 50000 and 100000:
### MySQL Query:
```sql
SELECT *
FROM EmployeeInfo
WHERE Salary BETWEEN 50000 AND 100000;
```
Explanation:
- `BETWEEN 50000 AND 100000`: Filters the rows to include only those where the salary falls within the specified range.
### MongoDB Query:
```mongodb
db.EmployeeInfo.find({
Salary: { $gte: 50000, $lte: 100000 }
})
```
Explanation:
- `$gte`: Matches values that are greater than or equal to the specified value (50000).
- `$lte`: Matches values that are less than or equal to the specified value (100000).
Adjust the collection and field names according to your actual MongoDB setup.
**Question 18** Write a query to find the names of employees that begin with ‘S’
Certainly! Here are the queries for both MySQL and MongoDB to find the names of employees that begin with 'S':
### MySQL Query:
```sql
SELECT *
FROM EmployeeInfo
WHERE EmpName LIKE 'S%';
```
Explanation:
- `EmpName LIKE 'S%'`: Filters the rows to include only those where the `EmpName` starts with the letter 'S'.
### MongoDB Query:
```mongodb
db.EmployeeInfo.find({
EmpName: /^S/
})
```
Explanation:
- `EmpName: /^S/`: Uses a regular expression to match names that start with the letter 'S'.
Adjust the collection and field names according to your actual MongoDB setup.
**Question 19** Write a query to fetch top N records.
Certainly! To fetch the top N records in both MySQL and MongoDB, you can use the `LIMIT` clause in MySQL and the `limit()` method in MongoDB. Here are the queries for both:
### MySQL Query:
```sql
SELECT *
FROM YourTableName
LIMIT N;
```
Explanation:
- `LIMIT N`: Limits the result set to the first N records.
Replace `YourTableName` with the actual name of your table, and replace `N` with the desired number of records.
### MongoDB Query:
```mongodb
db.YourCollectionName.find().limit(N)
```
Explanation:
- `.limit(N)`: Limits the result set to the first N records.
Replace `YourCollectionName` with the actual name of your collection, and replace `N` with the desired number of records.
**Question 20** Write a query to retrieve the EmpFname and EmpLname in a single column as “FullName”. The first name and the last name must be separated with space.
Certainly! Here are the queries for both MySQL and MongoDB to retrieve `EmpFname` and `EmpLname` in a single column as "FullName":
### MySQL Query:
```sql
SELECT CONCAT(EmpFname, ' ', EmpLname) AS FullName
FROM YourTableName;
```
Explanation:
- `CONCAT(EmpFname, ' ', EmpLname)`: Concatenates `EmpFname` and `EmpLname` with a space in between.
Replace `YourTableName` with the actual name of your table.
### MongoDB Query:
```mongodb
db.YourCollectionName.aggregate([
{
$project: {
FullName: { $concat: ["$EmpFname", " ", "$EmpLname"] }
}
}
])
```
Explanation:
- `$project`: Reshapes each document in the collection.
- `FullName: { $concat: ["$EmpFname", " ", "$EmpLname"] }`: Concatenates `EmpFname` and `EmpLname` with a space in between.
Replace `YourCollectionName` with the actual name of your collection.
**Question 21** Write a query find number of employees whose DOB is between 02/05/1970 to 31/12/1975 and are grouped according to gender
Certainly! Here are the queries for both MySQL and MongoDB to find the number of employees whose DOB is between 02/05/1970 to 31/12/1975 and group them according to gender:
### MySQL Query:
```sql
SELECT Gender, COUNT(*) AS NumEmployees
FROM YourTableName
WHERE DOB BETWEEN '1970-05-02' AND '1975-12-31'
GROUP BY Gender;
```
Explanation:
- `COUNT(*)`: Counts the number of rows for each gender group.
- `GROUP BY Gender`: Groups the result set by the `Gender` column.
Replace `YourTableName` with the actual name of your table.
### MongoDB Query:
```mongodb
db.YourCollectionName.aggregate([
{
$match: {
DOB: {
$gte: ISODate("1970-05-02T00:00:00.000Z"),
$lte: ISODate("1975-12-31T23:59:59.999Z")
}
}
},
{
$group: {
_id: "$Gender",
NumEmployees: { $sum: 1 }
}
}
])
```
Explanation:
- `$match`: Filters the documents to include only those with DOB between the specified date range.
- `$group`: Groups the documents by the `Gender` field.
- `NumEmployees: { $sum: 1 }`: Calculates the count of employees for each gender group.
Replace `YourCollectionName` with the actual name of your collection.
Adjust the table/collection names according to your actual MySQL/MongoDB setup.
**Question 22** Write a query to fetch details of employees whose EmpLname ends with an alphabet ‘A’ and contains five alphabets.
The query below selects all records from the `EmployeeInfo` table where the `EmpLname` ends with an 'a' and has exactly four characters before the 'a' (five characters in total):
```sql
SELECT * FROM EmployeeInfo WHERE EmpLname LIKE '____a';
```
Explanation:
- `LIKE '____a'`: The underscore (`_`) is a wildcard in SQL, representing a single character. So, `____a` will match any four characters followed by the letter 'a'.
This query will fetch details of employees whose last names end with 'a' and have exactly four characters before the 'a'.
In MongoDB, the `$regex` operator is commonly used to perform regular expression-based queries, which is analogous to the use of wildcards in SQL `LIKE` queries. If you want to perform a wildcard-like query in MongoDB, you can use a regular expression with the `$regex` operator.
Here's an example of how you might achieve a similar effect to your SQL query using a wildcard in MongoDB:
```mongodb
db.EmployeeInfo.find({
EmpLname: { $regex: /^.{4}a$/i }
})
```
Explanation:
- `^`: Anchors the regex to the start of the string.
- `.{4}`: Matches any four characters.
- `a$`: Matches the letter 'a' at the end of the string.
- `i`: Makes the regex case-insensitive.
This query will find documents in the `EmployeeInfo` collection where the `EmpLname` field contains exactly four characters before the letter 'a'. Adjust the collection and field names according to your actual MongoDB setup.
**Question 23** Write a query to fetch details of all employees excluding the employees with first names, “Radha” and Krishna from the EmployeeInfo table.
Certainly! Here are the queries for both MySQL and MongoDB to fetch details of all employees excluding those with the first names "Radha" and "Krishna":
### MySQL Query:
```sql
SELECT *
FROM EmployeeInfo
WHERE EmpFname NOT IN ('Radha', 'Krishna');
```
Explanation:
- `NOT IN ('Radha', 'Krishna')`: Excludes rows where the `EmpFname` is either "Radha" or "Krishna".
### MongoDB Query:
```mongodb
db.EmployeeInfo.find({
EmpFname: { $nin: ['Radha', 'Krishna'] }
})
```
Explanation:
- `$nin`: Matches values that are not in the specified array.
Adjust the table/collection and field names according to your actual MySQL/MongoDB setup.
**Question 24**
```sql
SELECT * FROM EmployeeInfo WHERE Address LIKE 'DELHI(DEL)%';
```
In MongoDB, you can use the `$regex` operator to perform regular expression-based queries. Here's the equivalent MongoDB query for your given SQL query:
### MongoDB Query:
```mongodb
db.EmployeeInfo.find({
Address: { $regex: /^DELHI\(DEL\)/ }
})
```
Explanation:
- `$regex`: Allows you to use regular expressions in the query.
- `/^DELHI\(DEL\)/`: Represents a regular expression that matches strings starting with "DELHI(DEL)".
Adjust the collection and field names according to your actual MongoDB setup.
Keep in mind that MongoDB uses a different syntax for regular expressions compared to SQL, and you need to escape special characters like parentheses.
**Question 25** Write a query to fetch all employees who also hold the managerial position.
```sql
SELECT E.EmpFname, E.EmpLname, P.EmpPosition
FROM EmployeeInfo E INNER JOIN EmployeePosition P ON
E.EmpID = P.EmpID AND P.EmpPosition IN ('Manager');
```
In MongoDB, you can use the `$lookup` stage to perform a similar operation to an SQL join. Here's the equivalent MongoDB query for your given SQL query:
### MongoDB Query:
```mongodb
db.EmployeeInfo.aggregate([
{
$lookup: {
from: "EmployeePosition",
localField: "EmpID",
foreignField: "EmpID",
as: "positions"
}
},
{
$match: {
"positions.EmpPosition": "Manager"
}
},
{
$project: {
_id: 0,
EmpFname: 1,
EmpLname: 1,
EmpPosition: "$positions.EmpPosition"
}
}
])
```
Explanation:
- `$lookup`: Performs a left outer join to the `EmployeePosition` collection based on the `EmpID` field.
- `$match`: Filters the documents to include only those where the `EmpPosition` in the joined array is "Manager".
- `$project`: Shapes the output document, including only the desired fields (`EmpFname`, `EmpLname`, and `EmpPosition`).
Adjust the collection and field names according to your actual MongoDB setup.
**Question 26** Write a query to fetch the department-wise count of employees sorted by department’s count in ascending order.
```sql
SELECT Department, count(EmpID) AS EmpDeptCount
FROM EmployeeInfo GROUP BY Department
ORDER BY EmpDeptCount ASC;
```
Certainly! Below is the equivalent MongoDB query for your given SQL query, which fetches the department-wise count of employees sorted by the department's count in ascending order:
### MongoDB Query:
```mongodb
db.EmployeeInfo.aggregate([
{
$group: {
_id: "$Department",
EmpDeptCount: { $sum: 1 }
}
},
{
$sort: {
EmpDeptCount: 1
}
}
])
```
Explanation:
- `$group`: Groups the documents by the `Department` field and calculates the count of employees in each department using `$sum: 1`.
- `$sort`: Sorts the result set based on the `EmpDeptCount` field in ascending order (`1`).
Adjust the collection and field names according to your actual MongoDB setup.
**Question 27** Write a query to retrieve two minimum and maximum salaries from the EmployeePosition table.
To retrieve two minimum salaries, you can write a query as below:
```sql
SELECT DISTINCT Salary FROM EmployeePosition E1
WHERE 2 >= (SELECT COUNT(DISTINCT Salary) FROM EmployeePosition E2
WHERE E1.Salary >= E2.Salary) ORDER BY E1.Salary DESC;
--To retrieve two maximum salaries, you can write a query as below:
SELECT DISTINCT Salary FROM EmployeePosition E1
WHERE 2 >= (SELECT COUNT(DISTINCT Salary) FROM EmployeePosition E2
WHERE E1.Salary <= E2.Salary) ORDER BY E1.Salary DESC;
```
For MongoDB, you can use the aggregation framework to achieve a similar result. Here's the equivalent MongoDB query for retrieving two minimum and maximum salaries from the `EmployeePosition` collection:
### MongoDB Query for Two Minimum Salaries:
```mongodb
db.EmployeePosition.aggregate([
{
$group: {
_id: null,
distinctSalaries: { $addToSet: "$Salary" }
}
},
{
$unwind: "$distinctSalaries"
},
{
$sort: {
distinctSalaries: 1
}
},
{
$limit: 2
}
])
```
Explanation:
- `$group`: Groups all documents into one group, creating an array `distinctSalaries` containing all unique salary values.
- `$unwind`: Deconstructs the array `distinctSalaries` to transform each element of the array into a separate document.
- `$sort`: Sorts the documents based on the `distinctSalaries` field in ascending order.
- `$limit`: Limits the result set to the first 2 documents.
### MongoDB Query for Two Maximum Salaries:
```mongodb
db.EmployeePosition.aggregate([
{
$group: {
_id: null,
distinctSalaries: { $addToSet: "$Salary" }
}
},
{
$unwind: "$distinctSalaries"
},
{
$sort: {
distinctSalaries: -1
}
},
{
$limit: 2
}
])
```
Explanation:
- Similar to the previous query, but sorts in descending order (`$sort: { distinctSalaries: -1 }`) to get the two maximum salaries.
Adjust the collection and field names according to your actual MongoDB setup.
**Question 28** Write a query to retrieve duplicate records from a table.
```sql
SELECT EmpID, EmpFname, Department, COUNT(*)
FROM EmployeeInfo GROUP BY EmpID, EmpFname, Department
HAVING COUNT(*) > 1;
```
To retrieve duplicate records from a MongoDB collection, you can use the aggregation framework to group by the specified fields and filter based on the count. Here's the equivalent MongoDB query for your given SQL query:
### MongoDB Query:
```mongodb
db.EmployeeInfo.aggregate([
{
$group: {
_id: { EmpID: "$EmpID", EmpFname: "$EmpFname", Department: "$Department" },
count: { $sum: 1 }
}
},
{
$match: {
count: { $gt: 1 }
}
}
])
```
Explanation:
- `$group`: Groups the documents by `EmpID`, `EmpFname`, and `Department` and calculates the count of each group using `$sum: 1`.
- `$match`: Filters the grouped documents to include only those with a count greater than 1.
Adjust the collection and field names according to your actual MongoDB setup.
**Question 29** Write a query to find the third-highest salary from the EmpPosition table.
```sql
SELECT DISTINCT salary
FROM EmpPosition
ORDER BY salary DESC
LIMIT 1 OFFSET 2;
```
In MongoDB, finding the third-highest salary can be done using the `aggregate` framework. Here's the equivalent MongoDB query for your given SQL query:
```mongodb
db.EmpPosition.aggregate([
  { $group: { _id: "$Salary" } },   // one document per distinct salary
  { $sort: { _id: -1 } },           // highest first
  { $skip: 2 },                     // skip the top two
  { $limit: 1 },
  { $project: { _id: 0, thirdHighestSalary: "$_id" } }
])
```
Explanation:
1. `$group`: Produces one document per distinct `Salary` value.
2. `$sort` / `$skip` / `$limit` / `$project`: Sorts the distinct salaries in descending order, skips the top two, keeps the next one, and renames it to `thirdHighestSalary`. An explicit sort is required here — arrays built with operators like `$addToSet` have no guaranteed order, so indexing into one does not reliably give the third-highest value.
Adjust the collection and field names according to your actual MongoDB setup.
**Question 30** Write a query to fetch 50% records from the EmployeeInfo table.
```sql
SELECT *
FROM EmployeeInfo WHERE
EmpID <= (SELECT COUNT(EmpID)/2 from EmployeeInfo);
```
In MongoDB, you can achieve fetching 50% of records using the `aggregate` framework. Here's the equivalent MongoDB query for your given SQL query:
```mongodb
var totalRecords = db.EmployeeInfo.countDocuments();
var halfRecords = Math.ceil(totalRecords / 2);
db.EmployeeInfo.find().limit(halfRecords);
```
Explanation:
1. `var totalRecords = db.EmployeeInfo.countDocuments();`: Calculates the total number of records in the `EmployeeInfo` collection.
2. `var halfRecords = Math.ceil(totalRecords / 2);`: Calculates half of the total records, rounding up to ensure you get at least 50%.
3. `db.EmployeeInfo.find().limit(halfRecords);`: Fetches records from the collection, limiting the result set to 50%.
This MongoDB query uses the `find()` method with the `limit()` method to achieve the same result.
Adjust the collection and field names according to your actual MongoDB setup.
| akmaurya31 |
1,745,583 | More than just chatbots: Using context to build the future of in-app AI experiences | This was first published as a series on AI in Dopt's blog. Today, text-driven, turn-based... | 0 | 2024-01-30T15:00:00 | https://blog.dopt.com/more-than-just-chatbots | ai, machinelearning, chatgpt, webdev | This was first published as a series on AI in [Dopt's blog](https://blog.dopt.com/more-than-just-chatbots).
---
Today, text-driven, turn-based chatbots dominate AI experiences.
At Dopt, we believe there’s a much more interesting problem to solve: building in-app AI assistants. We believe there’s a huge opportunity to help users learn, navigate, and use the interfaces they already work within. What if we could create AI assistants that offer contextual help seamlessly integrated into a user’s experience without the user ever needing to think about a prompt?
In this deep dive, we’ll investigate how we might build such experiences and the technical architectures and product considerations that come with them.
First, we’ll walk through why chatbots might not be the future and explore what alternate models of embedded assistance might look like.
Next, we’ll define contexts and how they work in these kinds of systems, and we’ll determine which contexts we want to gather for building in-app assistants.
Then, we’ll outline a retrieval architecture built on custom embeddings to find the most relevant signals for a user’s queries and contexts, and we’ll cover a generation architecture which uses a system of multimodal prompts to craft a meaningful response.
Finally, we’ll highlight a few of our key learnings from building these in-app AI experiences ourselves.
## Why not embedded chatbots?
We can start forming a better mental model of in-app assistance by asking why we should restrict ourselves to using natural language conversations as our medium for AI assistance.
As Amelia Wattenberger deftly explains in her article, *[Boo Chatbots: Why Chatbots Are Not the Future](https://wattenberger.com/thoughts/boo-chatbots):*
> Natural language is great at rough direction: teleport me to the right neighborhood. But once ChatGPT has responded, how do I get it to take me to the right house?
We’ve all experienced the struggle to create precise prompts about an app when writing questions to a chatbot. And, even more, we’ve all struggled to translate those answers back into the steps we need to take to be successful.
Specifically, existing embedded chat experiences suffer in three key ways:
1. prompting is imprecise and chaotic: small changes in a prompt can cause huge changes in the answer
2. the user (the question-asker) is burdened with solving the problem of identifying and producing all relevant context for their question
3. questions and answers are not integrated into the app itself but instead surfaced indirectly in separate and often disruptive UIs
At Dopt, we believe that by tackling these problems, we can radically alter how in-app assistance works.
## Exploring alternate models of embedded in-app assistance
Within an app, a user’s interactions can follow many distinct paths — they might navigate pages by clicking on a link, they might close an announcement modal, or they might type something into a search box. All of these interaction paths are valuable inputs into an in-app assistant.

Consider the design exploration above where a user wants to know what the *Enabled* status refers to. The user might’ve been able to make progress through a chatbot, but their journey would’ve been a lot more circuitous. They would likely need to explain the *Enabled* selector and situate it within the *Monitoring* column of the *Integrations* page. After that, they’d need to carefully create a prompt about the action to get a consistent answer.
In our exploration, we propose a different solution. We consider how we can use context from the page (for example, that it’s the *Integrations* page), from the DOM (that the user is interacting with an *Enabled* chip in the *Monitoring* column of a table), and from a screenshot of the page to ground the answer in the user’s frame of reference. This helps generate a targeted and helpful answer with accompanying sources and related next steps. We also consider continuations (”Ask a follow up question…”) where a user can ask further questions which are still grounded in the same context.
This exploration highlights an alternate and potentially more valuable model for in-app assistance. By leveraging static context like sources alongside in-app context like user and company properties, DOM and app state, and visual UI screenshots, we can offer accessible and actionable responses embedded within apps themselves.
## Briefly defining context
Before we jump into architecture, we should first clearly define context. Within a search (or any other question-answer) system, we can define contexts as sets of information that help enrich and complete a user’s potentially under-specified query.
Say a user enters the following query: “weather tomorrow”. Our system needs to gather two key pieces of information, a *where* and a *when*: *where* is the user asking for the weather and *when* is the user asking about. The *when* is pretty easy — requests from a browser or device will come with information on the client’s local time, so “tomorrow” can be replaced with the following day according to the local time. The *where* is harder — if a browser or device permits an application to gather geolocation information, our system could use that. Otherwise, we’ll have to rely on substitutes like using the request’s IP Address or combing through the user’s history.
Even with these harder cases and substitutes, being able to identify a user’s time and geolocation to provide the most likely weather produces a better experience than asking the user for clarification. The context-less journey, the one where the user has to add additional inputs or take extra steps, is easier to build because it requires exact specification, but it’s also less rewarding and definitely less magical.
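To make this concrete, here's a minimal sketch of resolving the *when* of a query like "weather tomorrow" from the client's local time — the helper name and shape are our own invention, not a real search API:

```typescript
// Resolve a relative day term ("today", "tomorrow", "yesterday") against
// the client's local time. Unknown terms fall back to the current day.
function resolveRelativeDay(term: string, clientNow: Date): Date {
  const offsets: Record<string, number> = { today: 0, tomorrow: 1, yesterday: -1 };
  const offset = offsets[term.toLowerCase()] ?? 0;
  const resolved = new Date(clientNow);
  resolved.setDate(resolved.getDate() + offset);
  return resolved;
}

// "weather tomorrow" asked on Jan 30 resolves to Jan 31.
console.log(resolveRelativeDay("tomorrow", new Date(2024, 0, 30)).getDate());
```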

<small>A diagram of Google’s design and architecture for an embedded contextual search system within a camera forum (taken from *[Aggregating context data for programmable search engines](https://patents.google.com/patent/US8756210)*).</small>
## Gathering relevant context for in-app assistance
The interaction models suggested above in *Exploring alternate models* require multiple, overlapping contexts.
We can break these contexts down into two broad categories: static and dynamic.
Static contexts are fixed per app and can be provided at build-time. This static context is often in the form of documentation and other help and support sources related to your app — for example, in Dopt, our static context would include [docs.dopt.com](http://docs.dopt.com) and [blog.dopt.com](http://blog.dopt.com). Likewise, navigation, actions, APIs and their related metadata (i.e. descriptions, parameters, API schemas, etc.) can also be configured as static context, since they’re also available at build time.
We could stop building with just static context and produce a functional in-app chatbot. The input to this chatbot would be a string, and a user could ask questions like, “how do I invite a new user to my company workspace?” The chatbot would then search over sources and actions using [Retrieval Augmented Generation](https://en.wikipedia.org/wiki/Prompt_engineering#Retrieval-augmented_generation) and respond with generated help text and relevant actions.
This modality suffers from the same problems we encountered earlier in *Why not embedded chatbots*: first, creating the right prompts to ask questions to chatbots is just plain difficult; second, even with the right prompts, the chatbot won’t have access to the in-app context necessary to retrieve the most relevant sources and actions and generate a quality response.
We can solve both of these problems through the use of dynamic context. Dynamic contexts are those that we can collect within an app at runtime — they differ per user and session and are tied to the things a user is seeing and interacting with.

When we collect dynamic in-app contexts, we need to prioritize three key factors: latency, density, and informativeness. We want to make sure that dynamic contexts can be gathered quickly, that they’re compact enough to process and send over-the-wire, and that they capture independently valuable information.
From these criteria, the following contexts stand out as necessary for building an in-app assistant:
1. User and company context — at Dopt, we support [identifying user and company level properties](https://docs.dopt.com/setup/users/) like a user’s role and their company’s plan. By including this context, we can retrieve the most relevant sources and actions for a user and ensure that our answers are grounded in what they can and cannot do.
2. Runtime context, like errors and view states — these are custom contexts where the schema is specified at build time and the value at runtime so that parameterized values can be injected. This context is useful when augmenting an assistant with app state like whether a user is encountering an error.
3. Window context — the URL and title of the page within the app that the user is currently viewing. This context is useful for retrieving sources and actions that are relevant to their current page.
4. Semantic HTML context — when a user interacts with an element in the DOM, context can be taken directly from that element and its relatives (its ancestors, descendants, and siblings). From the element and its relatives, we grab three types of attributes: descriptive, for example `innerText`; interactive, for example an anchor’s `href` or an input’s `value`; semantic, for example `role` and `label` or `aria-*` attributes. [Coupled with models trained on HTML](https://aclanthology.org/2023.findings-emnlp.185/), this context can ground a system by specifying what elements a user is interacting with and how they’re performing their interactions.
5. Visual UI context — since we’re operating in-app, we can directly capture screenshots of what the user is looking at and the elements they’re interacting with. This context serves multiple purposes: first, when other forms of context are incomplete or inaccurate, this context can help ground a user’s questions in sources and actions that are relevant to what they’re viewing; second, [jointly embedding visual context with semantic HTML context](https://arxiv.org/abs/2305.11854) can enable a deeper understanding of a user’s interactions.
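As an illustrative sketch of gathering the semantic HTML context above, we might walk an element and its ancestors and pick out descriptive, interactive, and semantic attributes. The `ElementLike` shape is a stand-in for a real DOM element so the idea is framework-agnostic:

```typescript
// A minimal stand-in for a DOM element; in a browser this would be an
// actual Element with parentElement, getAttribute, etc.
interface ElementLike {
  tagName: string;
  innerText?: string;
  attributes: Record<string, string>;
  parent?: ElementLike;
}

// Walk the element and up to `maxDepth - 1` ancestors, collecting the
// descriptive (innerText), interactive (href, value), and semantic
// (role, aria-label) attributes discussed above.
function collectSemanticContext(el: ElementLike, maxDepth = 3): Array<Record<string, string>> {
  const picked = ["href", "value", "role", "aria-label"];
  const context: Array<Record<string, string>> = [];
  let current: ElementLike | undefined = el;
  for (let depth = 0; current && depth < maxDepth; depth++) {
    const entry: Record<string, string> = { tag: current.tagName };
    if (current.innerText) entry.text = current.innerText;
    for (const name of picked) {
      if (current.attributes[name] !== undefined) entry[name] = current.attributes[name];
    }
    context.push(entry);
    current = current.parent;
  }
  return context;
}
```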
## Grounding AI systems by leveraging multiple contexts
Ultimately, our goal is to build an AI system which does the following:
- indexes sources, actions, and other static context
- retrieves relevant sources and actions given users’ queries and dynamic in-app contexts like visual images, semantic HTML, runtime information, and user and company attributes
- uses sources and actions as well as users’ queries and context as grounding for a multimodal LLM to generate meaningful responses to augment in-app experiences
### Retrieval
When building a standard RAG system, we first need to determine which sources and actions (and other pre-defined static context) are relevant given a user’s question. However, what if a user hasn’t asked a question, but we still want to proactively retrieve the most relevant sources and actions for the page they’re viewing? Alternatively, what if a user asks an incomplete question, like “how do I navigate to settings from **this page**?”
To satisfy all of these potential entry points, we need to build a retrieval system that supports all the different combinations of explicit user query and gathered in-app contexts.
We start by processing static contexts like sources and actions through multiple embeddings models. Whenever sources are added, sources crawled, or actions updated, we’ll run them (and their metadata, like a document’s title or an action’s description) through our models and store the embeddings for future consumption (possibly in a vector DB, but we’ve found [pgvector](https://github.com/pgvector/pgvector) works just fine for moderate volumes).

Here is a rough collection of embedding models that we’ve found to be particularly useful for the in-app contexts outlined above:
- Cohere’s general purpose [embeddings](https://docs.cohere.com/docs/semantic-search) — `embed-english-v3.0` is a natural language workhorse — we can embed sources and actions with it for retrieval with a user’s raw, unprocessed queries.
- Microsoft’s Mistral-7B LLM-derived [embeddings](https://huggingface.co/intfloat/e5-mistral-7b-instruct) — `e5-mistral-7b-instruct` is a Mistral-7B-instruct derived embedding model. Trained on synthetic data pairs from GPT-4, the model maximizes Mistral-7B-instruct’s instruction fine-tuning with retrieval learned from GPT-4’s synthetic pairings. We can engineer specific instructions for text-based retrieval of sources and actions on user and company properties as well as page and HTML contexts.
- THUDM’s CogAgent VLM-derived [embeddings](https://huggingface.co/THUDM/cogagent-vqa-hf) — `cogagent-vqa` is a single-turn visual-question answering vision language model (VLM). The model uses a Llama-2-7B text model augmented with 11B visual parameters and achieves state of the art performance on [web GUI tasks](https://arxiv.org/abs/2312.08914). We can [extend](https://arxiv.org/abs/2307.16645) the model to output meaningful multimodal embeddings and use it for UI screenshot-based retrieval of sources and actions.
Then, when we receive a request with a query and / or in-app contexts, we also embed those with the same models. As outlined above, each piece of the request (the query, the user and company context, the visual UI context, etc.) is embedded in a specific way depending on the information it contains. For example, a screenshot can only be embedded by a multimodal model; likewise, a semantic HTML extract needs to be processed with either a code fine-tuned model or a larger, LLM-derived model. Some contexts like user and company properties or runtime errors blur the line, so we might embed them using two different models.
After this step, we go from a request to a list of embeddings. We can take each embedding and run a nearest-neighbor search against the stored sources and actions to retrieve the most relevant ones for that embedding. If we do this for each embedding and then take the union, we’re left with a set of sources and actions, each of which has one or more similarity scores against the request’s embeddings. There are many different ways to order this set, but a modified, weighted [rank fusion](https://en.wikipedia.org/wiki/Mean_reciprocal_rank) works pretty well.
```ts
/**
 * Start with a list where each element is a list of embedding scores.
 * Some elements in a list may be null if that specific source or action
 * failed to match with a query and / or in-app context. This may be the case
 * with either limited or approximate nearest-neighbor searches.
 */
declare const records: Array<Array<number | null>>;
/**
 * A function which finds the percentile of an individual record's similarity score
 * in a population of similarity scores. If the individual record's value is null,
 * this function returns 1.0 (that is, this record is the most dissimilar record).
 * Lower percentiles are more similar and higher percentiles less.
 */
declare const percentile: (individual: number | null, population: Array<number | null>) => number;
/**
 * A function which finds the weighted harmonic mean of its inputs.
 * Its inputs are an Array of number tuples where the first number
 * is the value and the second number is the weight.
 */
function weightedHarmonicMean(...inputs: Array<[number, number]>): number {
  let sum = 0;
  let weights = 0;
  for (const [input, weight] of inputs) {
    sum += weight / input;
    weights += weight;
  }
  return weights / sum;
}
/**
 * `weights` is an array of numbers with the same length
 * as the total number of embeddings. These are hyperparameters
 * which convey how much each embedding should be weighted in the final
 * rank fusion. For example, we might believe that images convey
 * more information than HTML and so the image embedding should have
 * a higher weight.
 */
declare const weights: number[];
/**
 * For each embedding, gather the population of similarity scores across
 * all records so that an individual score can be converted into a percentile.
 */
const populations: Array<Array<number | null>> = weights.map((_, index) =>
  records.map((record) => record[index])
);
/**
 * The final result, `fused`, pairs each record's index with its fused score,
 * sorted ascending by that score. The first record (source or action)
 * is the most relevant and the last is the least relevant.
 */
const fused: Array<[number, number]> = records
  .map((record, recordIndex): [number, number] => {
    /**
     * Compute the weighted percentiles for each of the similarity scores,
     * comparing each score against its embedding's population.
     */
    const weightedPercentiles = record.map((score, index): [number, number] => [
      percentile(score, populations[index]),
      weights[index],
    ]);
    return [recordIndex, weightedHarmonicMean(...weightedPercentiles)];
  })
  .sort(([, a], [, b]) => a - b);
```
An important consideration when using the weighted fusion approach outlined above is how the `weights` hyperparameters are set. One way to set them is by sitting next to someone and fiddling with the hyperparameters while they thumbs-up or thumbs-down your results (we did this ourselves when prototyping!). A more robust way might be to ask raters to evaluate retrieved results across the hyperparameter grid and then fit a linear model on top of those ratings to interpolate the best `weights` values.
We much prefer this weighted linear model approach to other reranking approaches like using a [transformer-based cross-encoder](https://www.sbert.net/examples/applications/retrieve_rerank/README.html#re-ranker-cross-encoder) for reranking. Because of the diversity of contexts and queries we consider, a cross-encoder would have to be fairly large and quite slow in order to encapsulate all the possible inputs against which we want to rank our retrieved results. We find our weighted linear model approach, once fine-tuned, provides just as much flexibility with the benefit of greatly improved speed.
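A hedged sketch of what that tuning loop might look like — the `rateWeights` callback is hypothetical, and in practice it would aggregate human ratings of the results produced by a candidate weight vector:

```typescript
// Exhaustively score candidate weight vectors and keep the best one.
function gridSearchWeights(
  candidates: number[][],
  rateWeights: (weights: number[]) => number
): number[] {
  let best = candidates[0];
  let bestScore = -Infinity;
  for (const candidate of candidates) {
    const score = rateWeights(candidate);
    if (score > bestScore) {
      bestScore = score;
      best = candidate;
    }
  }
  return best;
}

// Example: a rating function that prefers weights close to [0.7, 0.3].
const chosen = gridSearchWeights(
  [[0.5, 0.5], [0.7, 0.3], [0.9, 0.1]],
  (w) => -(Math.abs(w[0] - 0.7) + Math.abs(w[1] - 0.3))
);
console.log(chosen); // → [0.7, 0.3]
```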
### Generation
Once we’ve retrieved a set of results that we feel comfortable with, the next step is to generate a meaningful answer. A simple approach might be to just dump everything into a prompt and hope the vision-LLM returns a meaningful result. Some quick experiments with multimodal foundation models like GPT-4V and Gemini quickly show that this approach falls short.
Instead, we can use a combination of [chain-of-thought derivatives](https://arxiv.org/abs/2311.09210) and [context-specific grounding](https://osu-nlp-group.github.io/SeeAct/) to coax these foundation models into producing intelligible results.
*[Chain-of-Note](https://arxiv.org/abs/2311.09210)* suggests that we can improve RAG by producing piecewise notes first and then a subsequent answer to get a better result. The authors offer the following prompt for summarizing Wikipedia:
```text
Task Description:
1. Read the given question and five Wikipedia passages to gather relevant information.
2. Write reading notes summarizing the key points from these passages.
3. Discuss the relevance of the given question and Wikipedia passages.
4. If some passages are relevant to the given question, provide a brief answer based on the passages.
5. If no passage is relevant, directly provide answer without considering the passages.
```
We follow this general template, but instead pass in sources and actions. One of the main highlights of *Chain-of-Note*, covered by steps *4* and *5*, is that it avoids explicitly instructing an LLM to produce a canned answer like “I do not have enough information” in negative cases. Instead, it allows the LLM to decide on what pieces of information from sources and context are relevant. This enables the LLM to more freely produce an answer based on any of the additional contexts provided to it.
We introduce the additional contexts to the LLM in the form of follow-up messages. We can explain each additional piece of context like so:
```text
User: Here is an image of {url} where the user is interacting with {element.innerText}.
User: {base64 image}
Assistant: OK
User: Here is the HTML of {element.innerText} that the user is interacting with.
User: {HTML}
Assistant: OK
```
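A sketch of how those follow-up messages might be assembled in code — the `Message` shape mirrors common chat-completion APIs but is an assumption, not a specific vendor's schema:

```typescript
type Message = { role: "user" | "assistant"; content: string };

// Build the context-introduction messages shown above. Each piece of
// context is introduced by the user and acknowledged by the assistant.
function buildContextMessages(ctx: {
  url: string;
  elementText: string;
  imageBase64?: string;
  html?: string;
}): Message[] {
  const messages: Message[] = [];
  if (ctx.imageBase64) {
    messages.push(
      { role: "user", content: `Here is an image of ${ctx.url} where the user is interacting with ${ctx.elementText}.` },
      { role: "user", content: ctx.imageBase64 },
      { role: "assistant", content: "OK" }
    );
  }
  if (ctx.html) {
    messages.push(
      { role: "user", content: `Here is the HTML of ${ctx.elementText} that the user is interacting with.` },
      { role: "user", content: ctx.html },
      { role: "assistant", content: "OK" }
    );
  }
  return messages;
}
```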
*[SeeAct](https://osu-nlp-group.github.io/SeeAct/)* provides a deeper exploration of how to ground GPT-4V with web and HTML context for accurate agent-like answers. We found their explorations particularly helpful when iterating and evaluating custom prompts for combinations of context.
Once all contexts are passed to the LLM, we can then ask it to answer a question. If the user has entered a question, we pass that question directly to the LLM. Alternatively, depending on the specific interaction model of our in-app assistant, we can create an instruction based on the available contexts.
```text
User: Here is my question: “{query}”
User: Provide a brief explanation of what the {element.innerText} {element.type} does.
User: Provide a brief explanation and next steps for how the user should respond to {runtime.error}.
```

As with customizing embeddings and tuning weights during retrieval, engineering a system of prompts for multiple contexts and queries is a highly iterative process that requires care and attention as LLMs and contexts evolve. We’ve found that we can go a long way by continuously testing the model and seeing how results match expectations. Likewise, we’ve found that refinement along these axes is much closer to A / B testing than model optimization since the outcomes being optimized are heavily user dependent.
## Building magical experiences
We’re finally to the magical experiences part! We’re ready to surface our generated answers and their underlying sources and actions in our in-app assistant.
At Dopt, we’re [actively building](https://www.dopt.com/ai) a few different magical experiences for embedded AI assistants. We’ve found that experiences all share two important requirements: they need to be fast, and they need to be useful.
**First, speed.** Much of what we’ve architected above includes considerations of speed: we optimize our in-app contexts so that they’re fast to collect in the browser; we rely heavily on embeddings which are relatively fast to compute; we perform ranking of our retrieved sources and actions via a simple linear model; and last, we minimize the footprint of our most expensive step, the call to a multimodal foundation model, by only performing it once. From the perspective of a user, we can return relevant sources and actions within a few hundred milliseconds after their interaction; we can then start streaming an answer to them within a second. For AI experiences, these speeds are pretty magical.
**Then, usefulness.** This part is a lot trickier, but here are a few things we’ve learned. First, foundation models, while wonderful, can easily be flooded with irrelevant information. Through our weighted ranking system, we try to filter down sources and actions and context aggressively so that we can minimize the amount of irrelevant information the LLM has to parse during generation. Sometimes, this means that recall is sacrificed for precision. Second, even with relevant sources and actions and in-app context, answers need to be succinct and actionable. Fortunately (or unfortunately), the main lever here is how we engineer prompts, a process that is pretty chaotic and largely driven by trial-and-error.
## See what we’re up to!
At Dopt, we believe that embedded in-app assistants are the future of AI experiences. Much of what we’ve written here is directly based on our experiences prototyping and shipping these assistants and the systems that power them.
If you’re interested in learning more about what we’re building, visit [dopt.com/ai](https://www.dopt.com/ai).
## Dive deeper
- *[Boo Chatbots: Why Chatbots Are Not the Future](https://wattenberger.com/thoughts/boo-chatbots)*
- *[Understanding HTML with Large Language Models](https://aclanthology.org/2023.findings-emnlp.185/)*
- *[Multimodal Web Navigation with Instruction-Finetuned Foundation Models](https://arxiv.org/abs/2305.11854)*
- *[Improving Text Embeddings with Large Language Models](https://arxiv.org/abs/2401.00368)*
- *[CogAgent: A Visual Language Model for GUI Agents](https://arxiv.org/abs/2312.08914)*
- *[Scaling Sentence Embeddings with Large Language Models](https://arxiv.org/abs/2307.16645)*
- *[Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models](https://arxiv.org/abs/2311.09210)*
- *[SeeAct: GPT-4V(ision) is a Generalist Web Agent, if Grounded](https://arxiv.org/abs/2401.01614)* | karthikramen |
1,745,612 | C#: The Programming Language of the Year 2023 | Title: C#: The Programming Language of the Year 2023 Introduction: The world of programming is... | 0 | 2024-01-30T07:17:07 | https://dev.to/homolibere/c-the-programming-language-of-the-year-2023-b5j | csharp | Title: C#: The Programming Language of the Year 2023
Introduction:
The world of programming is constantly evolving, with new languages emerging and gaining popularity each year. In recent times, C# has been making significant strides, growing into one of the most widely used languages across the development community. With its robust features, versatility, and continuous advancements, C# is poised to become the programming language of the year 2023. In this post, we will delve into the reasons behind C#'s ascent and understand its key features through some code examples.
1. Cross-platform Development:
C# has evolved to support cross-platform application development, allowing developers to write code once and deploy it on multiple platforms seamlessly. The introduction of .NET Core and Xamarin frameworks has empowered C# in building applications for Windows, Linux, macOS, iOS, and Android, further expanding its reach. Let's take a look at a simple C# snippet demonstrating cross-platform capabilities:
```C#
using System;
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hello, World!");
Console.ReadKey();
}
}
```
2. Asynchronous Programming:
As the demand for efficient and responsive software grows, so does the need for asynchronous programming. C# has embraced asynchronous programming with its built-in `async` and `await` keywords, making it straightforward to handle tasks in a non-blocking manner. Below is a C# example demonstrating the use of asynchronous programming:
```C#
using System;
using System.Threading.Tasks;
class Program
{
static async Task Main(string[] args)
{
await PerformAsyncTask();
Console.WriteLine("Operation completed!");
Console.ReadKey();
}
static async Task PerformAsyncTask()
{
// Simulating asynchronous task delay
await Task.Delay(2000);
Console.WriteLine("Async task completed!");
}
}
```
3. Language Innovations:
C# continuously evolves, introducing new features and improving existing ones. The language's progressive approach enhances developer productivity and code maintainability. Some noteworthy additions include pattern matching, nullable reference types, record types, and simplified switch statements. Here's an example demonstrating pattern matching:
```C#
using System;
class Program
{
static void Main(string[] args)
{
object data = 42;
if (data is int number)
Console.WriteLine($"The value is an integer: {number}");
Console.ReadKey();
}
}
```
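The list above also mentions record types and simplified switch statements; here's a brief, illustrative sketch combining the two (the type and member names are our own, not from a specific API):

```C#
using System;

// A positional record: concise, immutable, with value-based equality.
record Point(int X, int Y);

class Program
{
    // A switch expression using positional patterns on the record.
    static string Describe(Point p) => p switch
    {
        (0, 0) => "origin",
        (_, 0) => "on the X axis",
        (0, _) => "on the Y axis",
        _ => "somewhere else"
    };

    static void Main(string[] args)
    {
        Console.WriteLine(Describe(new Point(0, 5)));
        Console.ReadKey();
    }
}
```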
Conclusion:
C# has come a long way since its inception, and in the year 2023, it stands as a programming language on the rise. With its support for cross-platform development, asynchronous programming, and continual advancements, C# has become an increasingly popular choice for developers, offering a versatile and powerful toolset. Whether you are a seasoned professional or a beginner, C# empowers you to build robust and efficient applications. Sharpen your coding skills and explore the exciting world of C#, as this programming language is undoubtedly set to dominate the programming landscape in 2023 and beyond. | homolibere |
1,745,613 | Easy Icons using iconify.design | As long as we are building a UI Components in any framework we come across situations where we need... | 0 | 2024-01-30T07:44:36 | https://dev.to/harish_soni/easy-icons-using-iconifydesign-37kj | icons, javascript, react, ui | As long as we are building a UI Components in any framework we come across situations where we need the icons to get the interest of the user, but we get stuck on choosing which library or which tool to use.
Here comes https://iconify.design/ which is far better then every icon package I have used, it is ok to use the icons from the design pattern installed eg: MUI, ANTD etc. But when there is a need of creating your own custom components with the icons you can take the iconify as your primary choice, because it is::
- Easy to setup
- Easy to use
- Light weight component
- You can get the ready to use Source code
- TypeScript Supported
I have been using the Icons from https://iconify.design/ in almost every project I have worked on.
## Installation
Super easy,
if you use npm:
`npm install --save-dev @iconify/react`
if you use yarn
`yarn add --dev @iconify/react`
## Importing
`import { Icon } from '@iconify/react';`
## Usage
```javascript
import React from 'react';
import { Icon } from '@iconify/react';
export default function App() {
return (
<div>
<div className="light-blue-block">
<Icon style={{ fontSize: '54px' }} icon="mdi:user" />
</div>
</div>
);
}
```
You can modify anything you want using the style or className props.
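Beyond `style` and `className`, the `Icon` component also accepts presentation props directly; this sketch assumes the commonly documented `color`, `width`, and `rotate` props:

```javascript
import React from 'react';
import { Icon } from '@iconify/react';

export default function ColoredIcon() {
  // rotate accepts quarter turns (1 = 90°) or strings like "90deg"
  return <Icon icon="mdi:user" color="steelblue" width="48" rotate={1} />;
}
```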
## Advantages
Even if you don't want to install the package, you can grab the icon's SVG code and use it directly in your component:

You can even modify this SVG image to suit your needs.
```javascript
import React from "react";
export default function App() {
return (
<div>
<div className="light-blue-block">
<svg
xmlns="http://www.w3.org/2000/svg"
width="10em"
height="10em"
viewBox="0 0 24 24"
>
<path
              fill="gray"
              d="M12 4a4 4 0 0 1 4 4a4 4 0 0 1-4 4a4 4 0 0 1-4-4a4 4 0 0 1 4-4m0 10c4.42 0 8 1.79 8 4v2H4v-2c0-2.21 3.58-4 8-4"
            />
          </svg>
</div>
</div>
);
}
```
You can get the icons from here: https://icon-sets.iconify.design/
**Thanks for reading this far — let me know which sets of icons you have used in your code!**
If you like this article and are starting to use https://iconify.design/, give it a thumbs-up.
| harish_soni |
1,745,723 | Virtualization Security Issues and best practices | 🔒✨ Elevate Your Virtualization Security IQ! Discover the nuances of virtualization security, from... | 0 | 2024-01-30T09:32:36 | https://dev.to/relianoid/virtualization-security-issues-and-best-practices-3lh9 | 🔒✨ Elevate Your Virtualization Security IQ! Discover the nuances of virtualization security, from risks to best practices. 🚀 Dive into the insights that every IT professional needs!
https://www.relianoid.com/blog/virtualization-security-issues-and-risks/
#VirtualizationSecurity #Cybersecurity #ITInfrastructure #TechInnovation #InfoSec #RiskManagement #DigitalTransformation #DataProtection #CyberAware #CloudSecurity #ITBestPractices #NetworkSecurity #Hypervisor #CyberThreats #SecureTech
 | relianoid | |
1,745,755 | CryptoPotato Analytics: Pre-Halving Dip Completed? Bitcoin’s Price Tested $44K | On Jan. 30, crypto analyst “Rekt Capital” said that the pre-halving period where pullbacks tend to... | 0 | 2024-01-30T12:42:21 | https://dev.to/victordelpino/cryptopotato-analytics-pre-halving-dip-completed-bitcoins-price-tested-44k-3pno | crypto, blockchain, cryptocurrency, analytics | On Jan. 30, crypto analyst “Rekt Capital” said that the pre-halving period where pullbacks tend to occur is ending in two weeks.
The halving is less than three months away now, and it is predicted to occur around April 22.
A similar pre-halving retrace occurred in early 2020 when BTC was trading at around $9,500. Additionally, that year had the pandemic-induced black swan event in March, which is unlikely to play out in this cycle (unless something cataclysmic occurs in the next couple of months).
Analysts had predicted a correction to between $34,000 and $36,000 as the ETF hype dwindled, but the asset only remained sub-$40K for a few days last week.
The analyst also posted five phases of the Bitcoin halving. These included a pre-halving period of 126 days followed by a 63-day pre-halving rally, then a 77-day pre-halving retrace where we currently are.
A 147-day accumulation period follows around or after the halving, and then comes a parabolic uptrend that can last a year.
There have been many price predictions post-halving, with one of the more recent ones from SkyBridge Capital founder Anthony Scaramucci, who said that BTC will reach $170,000.
Meanwhile, Dragonfly managing partner ‘Haseeb’ observed that retail has yet to enter the fray, basing this assumption on Coinbase app popularity, which is far below previous cycle highs.
Bitcoin prices are up 3% on the day at $43,422 at the time of writing. The asset hit an intraday high of $43,730 during the Tuesday morning Asian trading session.
Moreover, Bitcoin has gained 9% since its post-ETF launch dip below $39K, and it is now eyeing resistance just above $44,000.

| victordelpino |
1,752,173 | React Fragment vs DIV: When Is the Right Time to Use Each? | When working with React, we sometimes need to render several elements without adding an extra... | 26,317 | 2024-02-05T12:39:23 | https://dev.to/ferryops/react-fragment-vs-div-kapan-waktu-yang-tepat-menggunakannya-1cm9 | react, frontend, javascript, indonesia | When working with React, we sometimes need to render several elements without adding an extra HTML element to the DOM. This is where we decide whether to use a React Fragment or a DIV tag, a choice that lets us manage component structure with more flexibility.
## What Is a React Fragment?
A React Fragment is a way to group several elements without adding an extra HTML element to the DOM. Visually, it creates no additional element when rendered, which makes it ideal for avoiding unnecessary extra elements.
Example of using a React Fragment:
```javascript
import React from 'react';
const MyComponent = () => {
return (
<>
<h1>Judul Komponen</h1>
<p>Paragraf pertama</p>
<p>Paragraf kedua</p>
</>
);
};
```

## When Should You Use a React Fragment?
- Avoiding unnecessary elements: if we want to group several elements without adding an extra HTML element to the DOM, we can use a React Fragment to keep the markup structure clean.
- Use in lists and tables: when working with lists or tables, a React Fragment can help group elements without affecting the DOM structure.
```javascript
const MyList = () => {
return (
<ul>
<>
<li>Item 1</li>
<li>Item 2</li>
</>
</ul>
);
};
```
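Conceptually, a Fragment contributes no DOM node of its own, while a div always adds one. The following plain-JavaScript sketch illustrates this with toy object trees; the `FRAGMENT` marker, `el` helper, and `countDomNodes` function are illustrative assumptions for this example, not React's real internals.

```javascript
// Toy element tree: a plain-object stand-in for rendered DOM output.
// FRAGMENT is a hypothetical marker; React's actual internals differ.
const FRAGMENT = Symbol('fragment');

function el(type, ...children) {
  return { type, children };
}

// Count the DOM nodes a tree would produce: fragments contribute
// no node themselves, only their children do.
function countDomNodes(node) {
  const own = node.type === FRAGMENT ? 0 : 1;
  return own + node.children.reduce((n, c) => n + countDomNodes(c), 0);
}

// <div><p/><p/></div>  -> 3 DOM nodes (the div plus two p)
const withDiv = el('div', el('p'), el('p'));
// <><p/><p/></>        -> 2 DOM nodes (just the two p)
const withFragment = el(FRAGMENT, el('p'), el('p'));

console.log(countDomNodes(withDiv));      // 3
console.log(countDomNodes(withFragment)); // 2
```

The same counting logic explains why deeply nested layout components that wrap everything in a div can bloat the DOM, while Fragment-based grouping keeps it flat.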
## What Is a DIV?
A DIV is a common HTML element often used to group elements or to serve as a container for styling. When working with React, we often use a DIV to wrap several elements.
Example of using a DIV:
```javascript
const MyComponent = () => {
return (
<div>
<h1>Judul Komponen</h1>
<p>Paragraf pertama</p>
<p>Paragraf kedua</p>
</div>
);
};
```

## When Should You Use a DIV?
- You need an extra element: if we need an additional element in the DOM, for example for styling or some other specific purpose, we can use a DIV.
- Grouping elements for styling: if we want to group elements in order to style them with CSS, a DIV is a good choice.
## Conclusion
In many cases, both React Fragment and DIV can be used effectively, depending on the specific needs of the project we are working on. Use a React Fragment to avoid adding unnecessary elements to the DOM and to keep the markup structure clean. Conversely, if we need an extra element or want to group elements for styling, use a DIV as the wrapper.
By understanding the difference between React Fragment and DIV, we can make the right decision and improve the quality and readability of our React code.
<blockquote class="tiktok-embed" cite="https://www.tiktok.com/@ferryops_/video/7358841542510972166" data-video-id="7358841542510972166" style="max-width: 605px;min-width: 325px;" > <section> <a target="_blank" title="@ferryops_" href="https://www.tiktok.com/@ferryops_?refer=embed">@ferryops_</a> DIV vs React Fragment <a title="capcut" target="_blank" href="https://www.tiktok.com/tag/capcut?refer=embed">#CapCut</a> <a title="react" target="_blank" href="https://www.tiktok.com/tag/react?refer=embed">#react</a> <a title="web" target="_blank" href="https://www.tiktok.com/tag/web?refer=embed">#web</a> <a title="coding" target="_blank" href="https://www.tiktok.com/tag/coding?refer=embed">#coding</a> <a title="programming" target="_blank" href="https://www.tiktok.com/tag/programming?refer=embed">#programming</a> <a target="_blank" title="♬ Vlog - Soft boy" href="https://www.tiktok.com/music/Vlog-7152796746278504449?refer=embed">♬ Vlog - Soft boy</a> </section> </blockquote> <script async src="https://www.tiktok.com/embed.js"></script>
| ferryops |
1,753,163 | Benefits of Private Limited Company Registration 🌐✨ | Dreaming of launching your own business? Elevate your journey with a Private Limited Company! 🌟 Enjoy... | 0 | 2024-02-06T12:11:38 | https://dev.to/ishikarawa41183/benefits-of-private-limited-company-registration-1ffo | privatelimitedcompany, companyregistration, beneiftsofprivatelimited, setindiabiz | Dreaming of launching your own business? Elevate your journey with a Private Limited Company! 🌟 Enjoy limited liability protection, credibility boost, and easy fundraising.
🛡️💼 Ready to thrive in the world of perpetual existence?
Click the link to unveil the secrets of success! 👉 [https://shorturl.at/ftzD0](https://shorturl.at/ftzD0)
Also Check 👉 [https://shorturl.at/GMQ79](https://shorturl.at/GMQ79) | ishikarawa41183 |
1,754,503 | MiniScript Ports | Since its introduction in 2017, MiniScript's community has been steadily growing. And a delightful... | 0 | 2024-02-07T15:27:52 | https://dev.to/joestrout/miniscript-ports-jp9 | miniscript, programming, languages | Since its introduction in 2017, [MiniScript](https://miniscript.org)'s community has been steadily growing. And a delightful community it is — it spans the gamut from brand-new, never-coded-before beginners to experienced software engineers.
Some of those experienced community members have gone so far as to port or reimplement MiniScript in other languages and environments. Because of its minimal nature (see its [one-page quick reference](https://miniscript.org/files/MiniScript-QuickRef.pdf)!), implementing an interpreter or bytecode compiler for MiniScript is a much more accessible task than for most other languages. Let's take a look at some of the ports currently available.
## Official Reference Implementations
There are two official reference implementations of MiniScript: one in C#, and one in C++. Both are actively maintained, and available [on GitHub](https://github.com/JoeStrout/miniscript).
## Java and Kotlin
There are two MiniScript ports that run on the JVM (Java Virtual Machine). The first one was actually written for Kotlin, and is available [here](https://github.com/Arcnor/miniscript-kt). It was last updated over 4 years ago, so it's probably a little out of date now, but would be an excellent starting point and probably wouldn't take much effort to refresh.
The other one is for standard Java, available [here](https://github.com/heatseeker0/JavaMiniScript). It was last updated 2 years ago, so it's a bit newer but still not completely current. This might be an excellent launching point to adding MiniScript support to a Java-based game or app. ([Minecraft mods](https://mcreator.net/) leap to mind!)
## MiniScript in MiniScript
Active community member Marc Gurevitx has recently published a project called [ms.ms](https://github.com/marcgurevitx/ms-ms), a MiniScript interpreter _written in MiniScript_.

While Marc is quick to warn you that the result runs slowly, in cases where you need something like this, that's not the point. Among other tricks, it gives MiniScript an "eval" function — a way to evaluate an arbitrary snippet of MiniScript code stored in a string, from within your own MiniScript program.
(ms-ms is based on another of Marc's projects, [peg-ms](https://github.com/marcgurevitx/peg-ms/tree/aa191eccec3d8414ddd5746b46e892a3123bf67a), which implements [Parsing Expression Grammars](https://en.wikipedia.org/wiki/Parsing_expression_grammar). That's a topic that's dear to me as well — see my [2011 blog post](https://luminaryapps.com/blog/better-text-searching-with-peg/) extolling the virtues of PEG over regular expressions!)
## MiniScript in TypeScript
Finally (for now!), another active community member Sebastian Nozzi (@sebnozzi) has been developing a MiniScript implementation in JavaScript — or more specifically, TypeScript (strongly-typed JavaScript). This enables MiniScript to run directly in a web browser! It's fast, too; on some code, the speed is comparable to command-line MiniScript or Mini Micro running on your local machine.
He's currently divided his work into two projects on GitHub: [MiniScript.TS](https://github.com/sebnozzi/miniscript.ts) is the language core, while [MiniScript-NodeJS](https://github.com/sebnozzi/miniscript-nodejs) adds Node.js support and a script-runner. The latter includes support for `print` and `input`, the [`import`](https://miniscript.org/wiki/Import) command, and a subset of the Mini Micro [/sys disk](https://github.com/JoeStrout/minimicro-sysdisk).
The importance of this work cannot be overstated. Until now, the only way to run MiniScript code in a web browser was to send it to a back-end server running a command-line build (like our current [Try-It! page](https://miniscript.org/tryit/)), or to use the web version of [Mini Micro](https://miniscript.org/MiniMicro/), which is built in Unity and does not work well on mobile browsers. All that has changed! Now we can run MiniScript code directly in the browser, no need for a backend server. And Seb has already ported several of his old Mini Micro games, like [Sliding Puzzle](https://sebnozzi.itch.io/sliding-puzzle) and [Foggy Window](https://sebnozzi.itch.io/foggy-window), to use his MiniScript.TS engine rather than Mini Micro, enabling them to work fine even on mobile devices.
I'm already planning to work with Seb to reimplement the official MiniScript Try-It! page using his port, as well as the [Robo-Reindeer Rumble](https://miniscript.org/RoboReindeer/) game. Eventually, I expect to see an explosion of web-based apps and game development environments based on MiniScript and running happily within the browser. All thanks to the hard work @sebnozzi's been putting in on MiniScript.TS and related code!
## Future Work
The official [MiniScript discord server](https://discord.gg/7s6zajx) has a #ports channel, where anybody interested in porting MiniScript (or working with existing ports) can discuss. Projects that have been kicked around there include porting MiniScript to:
• Go
• Rust
• Lua
• pure C (sans C++)
• 6502 assembly
• WebAssembly
• compiler backends, like [LLVM](https://llvm.org/) or [Cranelift](https://cranelift.dev/)
It's exciting to see all this work going on! And remember, if you've ever wanted to find out what it's like to implement a language, MiniScript is a great choice — tiny enough to be relatively easy to implement, but powerful enough to be useful for real programs. And you'll have an entire community of enthusiastic users cheering you on!
| joestrout |
1,755,160 | How to Build a Live Video Streaming App with WebRTC | Let users quickly start a video chat directly inside a chat application with the new community WebRTC plugin for ChatEngine. | 0 | 2024-02-08T05:37:06 | https://dev.to/pubnub-fr/comment-creer-une-application-de-diffusion-en-direct-et-en-video-avec-webrtc-44gp | What is WebRTC video streaming?
-----------------------------------------
WebRTC video streaming is a free, open-source project that enables web browsers and mobile devices such as iOS and Android to deliver real-time communication. This capability makes it easy to embed features such as peer-to-peer video conferencing directly into a web page. With WebRTC video streaming, a browser-based video chat can be built quickly with HTML and JavaScript, with no back-end code required. It is a key component of [live audience engagement](https://www.pubnub.com/solutions/live-audience-engagement/) and [multi-user collaboration](https://www.pubnub.com/solutions/multiuser-collaboration/) solutions, improving the user experience on platforms such as social media, live video streaming apps, and content delivery networks. WebRTC's adaptability makes it an essential tool for application development, serving a wide range of applications from Amazon's cloud services to Netflix's video on demand, Twitch's streaming platform, Facebook Live, and interactive features on Hulu, Spotify, and Apple devices.

How does WebRTC video streaming work?
----------------------------------------------
WebRTC lets users stream audio and video peer-to-peer in modern web browsers. This capability is supported by the latest versions of Chrome, Firefox, Edge, Safari, and Opera on desktop, as well as by the native iOS and Android web browsers. It is the foundation of the [data streaming](https://www.pubnub.com/solutions/data-streaming/) solutions provided by PubNub.
To turn a user's device into a WebRTC client, all you need to do is initialize a new `RTCPeerConnection()` object in the front-end JavaScript.
WebRTC live streaming architecture
------------------------------------------
A video chat is established across two or more client devices using the WebRTC protocol. The connection can be made in one of two modes. The first is peer-to-peer mode, meaning that audio and video packets stream directly from client to client using the [RTC configuration](https://developer.mozilla.org/en-US/docs/Web/API/RTCConfiguration). This setup works as long as both machines have an IP address reachable over the public internet.
However, relying on peer-to-peer connections for in-browser video chat and conferencing is unwise in production applications. It is common for the ICE (Interactive Connectivity Establishment) framework to fail to establish a connection between two users when one or both of them sit behind an advanced LAN security system.
To work around this, you can set up your RTCConfiguration so that it first attempts a peer-to-peer connection, and then falls back to a relayed connection if the direct one fails.

If publicly reachable IP addresses are not an option, a WebRTC connection must be established through a TURN server. The ICE framework decides whether this is necessary when users attempt to connect.
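As a sketch, a configuration of this kind might look like the following. The server URLs and credentials below are placeholders (assumptions for illustration) to be replaced with your own STUN/TURN provider's values; the `iceServers` and `iceTransportPolicy` fields themselves are part of the standard browser RTCConfiguration dictionary.

```javascript
// Hypothetical RTCConfiguration: offer a STUN server for direct
// peer-to-peer candidates, and a TURN relay that ICE can fall back
// to when no direct path can be established.
// All URLs and credentials below are placeholders.
const rtcConfig = {
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: 'turn:turn.example.com:3478',
      username: 'your-turn-username',
      credential: 'your-turn-password'
    }
  ],
  // 'all' lets ICE consider host, STUN, and TURN candidates;
  // use 'relay' instead to force every connection through TURN.
  iceTransportPolicy: 'all'
};

// In the browser, this object is passed to the peer connection:
// const pc = new RTCPeerConnection(rtcConfig);
```

With `iceTransportPolicy: 'all'`, the ICE framework gathers every candidate type and prefers the cheapest working path, so the TURN relay is only used when it has to be.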
Don't build a WebRTC signaling server for your live stream: use PubNub
------------------------------------------------------------------------------------------------------
WebRTC leaves out one very important component of video chat streaming. A client must use a signaling service to exchange messages with its peer(s). PubNub lets a developer fully and inexpensively implement capabilities such as a WebRTC signaling service. This is made easier by PubNub's comprehensive [documentation](https://www.pubnub.com/docs) on configuring your account and sending/receiving messages.
### Examples of video chat streaming with WebRTC
These messages cover events such as:
- I, user A, would like to call you, user B
- User A is currently trying to call you, user B
- I, user B, accept your call, user A
- I, user B, reject your call, user A
- I, user B, would like to end our call, user A
- I, user A, would like to end our call, user B
- Text-based instant messaging as in Slack, Google Hangouts, Skype, Facebook Messenger, etc.
- Audio/video codec session and user connectivity data
These messages are part of the **signaling transaction flow** described in the [Mozilla Developer Network documentation for WebRTC](https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/Signaling_and_video_calling). The WebRTC signaling server is an abstract concept. Many services can act as this "signaling server," such as WebSockets, Socket.IO, or PubNub. If you are in charge of building a solution for this, you will eventually ask yourself: build or buy?
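As an illustration, the signaling events above can be modeled as small JSON payloads published over a channel. The field names (`type`, `from`, `to`) here are hypothetical, not part of any WebRTC or PubNub standard; the only requirement is that both peers agree on the shape.

```javascript
// Hypothetical signaling payloads exchanged between user A and user B.
// Field names are illustrative; any JSON shape works as long as both
// peers agree on it.
const callRequest = { type: 'call-request', from: 'userA', to: 'userB' };
const callAccept  = { type: 'call-accept',  from: 'userB', to: 'userA' };
const callEnd     = { type: 'call-end',     from: 'userB', to: 'userA' };

// With a pub/sub service, each payload is published to the peer's
// channel, e.g. (sketch only, not a real client instance here):
// pubnub.publish({ channel: 'user-b-channel', message: callRequest });

console.log(JSON.stringify(callRequest));
```

Because the payloads are plain JSON, they survive the publish/subscribe round trip unchanged, which is all a signaling channel needs to guarantee.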
### Why PubNub: logical extensions like one-to-many WebRTC video streaming
PubNub lets a developer like you implement a WebRTC signaling service completely and inexpensively. An [open-source WebRTC library that uses PubNub](https://github.com/stephenlb/webrtc-sdk) is available on GitHub. However, the following PubNub data-streaming solution is even faster than building with the WebRTC SDK, because our platform lets you quickly and easily build an application that supports one-to-many streaming at any scale.
Community-supported package for WebRTC video calls
--------------------------------------------------------------
PubNub is like a global CDN for real-time data. Developers can use its [IaaS](https://www.pubnub.com/learn/glossary/what-is-infrastructure-as-a-service-iaas/) to build high-quality real-time streaming platforms, mobile apps, and much more. PubNub SDKs are available for every programming language and device, enabling reliable pub/sub connections, data delivery, and network control, all in a few lines of code.
WebRTC video streaming app tutorial with JavaScript, HTML, CSS
---------------------------------------------------------------------------
In this tutorial we will use JavaScript, HTML, and CSS to build our video chat application. However, if you would like to use a modern front-end framework such as Vue, [React](https://www.pubnub.com/docs/chat/sdks/messages/sending-messages), or Angular, you can check out the updated [PubNub tutorials page](https://www.pubnub.com/blog/category/build/) or the [PubNub Chat resource center](https://www.pubnub.com/docs/chat/overview). We also have a substantial development team available for consultation.
To get started, you can use the [HTML](https://github.com/ajb413/pubnub-js-webrtc/blob/master/example/index.html) and [CSS](https://github.com/ajb413/pubnub-js-webrtc/blob/master/example/style.css) from my example project. These files present a very generic video chat application UI. The example app has only a single global chat and no private 1:1 chats, although those are easy to implement.
WebRTC video streaming app HTML
------------------------------------------
Open **index.html** in your favorite text editor. **Replace** the script tags under the body tag of your HTML file with these 2 CDN scripts. Leave the third script tag that references **app.js.** We will write that file together.
```js
<script type="text/javascript" src="https://cdn.pubnub.com/sdk/javascript/pubnub.4.32.0.js"></script>
<script src="https://cdn.jsdelivr.net/npm/pubnub-js-webrtc@latest/dist/pubnub-js-webrtc.js"></script>
```
The next step is to create your own app.js file in the same directory as your index.html file. The reason we need to create a new app.js file is that the script in my example uses [Xirsys](https://xirsys.com/). My private account is wired up to my Functions server. You will have to create your own back-end server and your own account if you want to use a TURN provider like Xirsys. My next blog post will contain a tutorial for building WebRTC apps with TURN.
The app.js script we will write together uses only free peer-to-peer WebRTC connections. If you try a live video call with 2 devices on the same local network, your app will work. It is not guaranteed that a video call connection can be established between clients on separate networks (because of NAT security). This is why a solid understanding of streaming protocols is essential.
### WebRTC video streaming app JavaScript
First, we will reference all of the DOM elements from the index.html file. Once we can reference them in our JavaScript code, we can manipulate them programmatically.
```js
const chatInterface = document.getElementById('chat-interface');
const myVideoSample = document.getElementById('my-video-sample');
const myVideo = document.getElementById('my-video');
const remoteVideo = document.getElementById('remote-video');
const videoModal = document.getElementById('video-modal');
const closeVideoButton = document.getElementById('close-video');
const brokenMyVideo = document.getElementById('broken-my-video');
const brokenSampleVideo = document.getElementById('broken-sample-video');
const usernameModal = document.getElementById('username-input-modal');
const usernameInput = document.getElementById('username-input');
const joinButton = document.getElementById('join-button');
const callConfirmModal = document.getElementById('call-confirm-modal');
const callConfirmUsername = document.getElementById('call-confirm-username');
const yesCallButton = document.getElementById('yes-call');
const noCallButton = document.getElementById('no-call');
const incomingCallModal = document.getElementById('incoming-call-modal');
const callFromSpan = document.getElementById('call-from');
const acceptCallButton = document.getElementById('accept-call');
const rejectCallButton = document.getElementById('reject-call');
const onlineList = document.getElementById('online-list');
const chat = document.getElementById('chat');
const log = document.getElementById('log');
const messageInput = document.getElementById('message-input');
const submit = document.getElementById('submit');
```
Next, we will add some variables that hold a CSS class name, global app information, and WebRTC configuration information. In the RTCConfiguration dictionary, we add the STUN and TURN server information for WebRTC calls. This is a crucial step for high-quality video content in your streaming service.
```js
const hide = 'hide';
// PubNub Channel for sending/receiving global chat messages
// also used for user presence with Presence
const globalChannel = 'global-channel';
let webRtcPhone;
let pubnub;
// An RTCConfiguration dictionary from the browser WebRTC API
// Add STUN and TURN server information here for WebRTC calling
const rtcConfig = {};
let username; // User's name in the app
let myAudioVideoStream; // Local audio and video stream
let noVideoTimeout; // Used to check if a video connection succeeded
const noVideoTimeoutMS = 5000; // Error alert if the video fails to connect
```
Now we will go over some of the imperative client code for the WebRTC package functionality. This is where the real-time aspect of your video streaming platform comes into play.
```js
// Init the audio and video stream on this client
getLocalStream().then((localMediaStream) => {
myAudioVideoStream = localMediaStream;
myVideoSample.srcObject = myAudioVideoStream;
myVideo.srcObject = myAudioVideoStream;
}).catch(() => {
myVideo.classList.add(hide);
myVideoSample.classList.add(hide);
brokenMyVideo.classList.remove(hide);
brokenSampleVideo.classList.remove(hide);
});
// Prompt the user for a username input
getLocalUserName().then((myUsername) => {
username = myUsername;
usernameModal.classList.add(hide);
initWebRtcApp();
});
// Send a chat message when Enter key is pressed
messageInput.addEventListener('keydown', (event) => {
if (event.keyCode === 13 && !event.shiftKey) {
event.preventDefault();
sendMessage();
return;
}
});
// Send a chat message when the submit button is clicked
submit.addEventListener('click', sendMessage);
const closeVideoEventHandler = (event) => {
videoModal.classList.add(hide);
chatInterface.classList.remove(hide);
clearTimeout(noVideoTimeout);
webRtcPhone.disconnect(); // disconnects the current phone call
}
// Register a disconnect event handler when the close video button is clicked
closeVideoButton.addEventListener('click', closeVideoEventHandler);
```
The new code we just added:
- asks the browser whether it can access the computer's webcam and microphone, and stores the stream object in a global variable.
- Prompts the user for an in-app "username" before initializing the WebRTC part of the app.
- Registers event handlers for chat messages, for example when a user clicks the submit button or presses the Enter key.
- Creates another event handler for when the user closes the video chat.
Next, we will add the initialization code for the WebRTC part of the web app. In this part, we initialize our PubNub instance with the latest version of the SDK, 4.32.0.
```js
const initWebRtcApp = () => {
// WebRTC phone object event for when the remote peer's video becomes available.
const onPeerStream = (webRTCTrackEvent) => {
console.log('Peer audio/video stream now available');
const peerStream = webRTCTrackEvent.streams[0];
window.peerStream = peerStream;
remoteVideo.srcObject = peerStream;
};
// WebRTC phone object event for when a remote peer attempts to call you.
const onIncomingCall = (fromUuid, callResponseCallback) => {
let username = document.getElementById(fromUuid).children[1].innerText;
incomingCall(username).then((acceptedCall) => {
if (acceptedCall) {
// End an already open call before opening a new one
webRtcPhone.disconnect();
videoModal.classList.remove(hide);
chatInterface.classList.add(hide);
noVideoTimeout = setTimeout(noVideo, noVideoTimeoutMS);
}
callResponseCallback({ acceptedCall });
});
};
// WebRTC phone object event for when the remote peer responds to your call request.
const onCallResponse = (acceptedCall) => {
console.log('Call response: ', acceptedCall ? 'accepted' : 'rejected');
if (acceptedCall) {
videoModal.classList.remove(hide);
chatInterface.classList.add(hide);
noVideoTimeout = setTimeout(noVideo, noVideoTimeoutMS);
}
};
// WebRTC phone object event for when a call disconnects or timeouts.
const onDisconnect = () => {
console.log('Call disconnected');
videoModal.classList.add(hide);
chatInterface.classList.remove(hide);
clearTimeout(noVideoTimeout);
};
// Lists the online users in the UI and registers a call method to the click event
// When a user clicks a peer's name in the online list, the app calls that user.
const addToOnlineUserList = (occupant) => {
const userId = occupant.uuid;
const name = occupant.state ? occupant.state.name : null;
if (!name) return;
const userListDomElement = createUserListItem(userId, name);
const alreadyInList = document.getElementById(userId);
const isMe = pubnub.getUUID() === userId;
if (alreadyInList) {
removeFromOnlineUserList(occupant.uuid);
}
if (isMe) {
return;
}
onlineList.appendChild(userListDomElement);
userListDomElement.addEventListener('click', (event) => {
const userToCall = userId;
confirmCall(name).then((yesDoCall) => {
if (yesDoCall) {
webRtcPhone.callUser(userToCall, {
myStream: myAudioVideoStream
});
}
});
});
}
const removeFromOnlineUserList = (uuid) => {
const div = document.getElementById(uuid);
if (div) div.remove();
};
pubnub = new PubNub({
publishKey : '_YOUR_PUBNUB_PUBLISH_API_KEY_HERE_',
subscribeKey : '_YOUR_PUBNUB_SUBSCRIBE_API_KEY_HERE_'
});
// This PubNub listener powers the text chat and online user list population.
pubnub.addListener({
message: function(event) {
// Render a global chat message in the UI
if (event.channel === globalChannel) {
renderMessage(event);
}
},
status: function(statusEvent) {
if (statusEvent.category === "PNConnectedCategory") {
pubnub.setState({
state: {
name: username
},
channels: [globalChannel],
uuid: pubnub.getUUID()
});
pubnub.hereNow({
channels: [globalChannel],
includeUUIDs: true,
includeState: true
},
(status, response) => {
response.channels[globalChannel].occupants
.forEach(addToOnlineUserList);
});
}
},
presence: (status, response) => {
if (status.error) {
console.error(status.error);
} else if (status.channel === globalChannel) {
if (status.action === "join") {
addToOnlineUserList(status, response);
} else if (status.action === "state-change") {
addToOnlineUserList(status, response);
} else if (status.action === "leave") {
removeFromOnlineUserList(status.uuid);
} else if (status.action === "timeout") {
removeFromOnlineUserList(response.uuid);
}
}
}
});
pubnub.subscribe({
channels: [globalChannel],
withPresence: true
});
window.ismyuuid = pubnub.getUUID();
// Disconnect PubNub before a user navigates away from the page
window.onbeforeunload = (event) => {
pubnub.unsubscribe({
channels: [globalChannel]
});
};
// WebRTC phone object configuration.
let config = {
rtcConfig,
ignoreNonTurn: false,
myStream: myAudioVideoStream,
onPeerStream, // is required
onIncomingCall, // is required
onCallResponse, // is required
onDisconnect, // is required
pubnub // is required
};
webRtcPhone = new WebRtcPhone(config);
};
```
In the initialization code for the WebRTC part of the web app, we made a few updates to reflect the latest functionality offered by PubNub. This is essential to keep your video streaming application compatible with the latest technology trends.
The code we just added to app.js runs after the user enters their "username":
- declares all of the plugin event handlers for WebRTC call events
- Adds and removes online user list items as users connect to and disconnect from the app.
- Registers an event handler to place a new video call to a user whenever their name is clicked in the list UI.
- Registers an event handler to render new chat messages whenever a message is sent to the global chat, in real time.
- Configures PubNub to send and listen for messages in real time with the [Pub/Sub messaging model](https://www.pubnub.com/products/pubnub-platform/)
- Initializes the WebRTC package and passes the configuration object to the instance.
Before moving on, it is important to note that we need to insert our free PubNub API keys into this function. We can get forever-free keys using the signup form below. These keys are free up to 1 million transactions per month, which is great for hobbyist or proof-of-concept business applications.
You can insert your Pub/Sub API keys into the app.js file in the PubNub initialization object, as you can see in the previous code snippet.
```js
pubnub = new PubNub({
publishKey : 'PUBLISH_KEY',
subscribeKey : 'SUBSCRIBE_KEY',
uuid: "UUID"
});
```
We need to **enable the Presence feature** in the PubNub Admin Dashboard. When you create a PubNub key set, Presence is disabled on the key by default. We can enable it for the key by going to the [PubNub Admin Dashboard](https://dashboard.pubnub.com/) and clicking the toggle switch. To learn more about Presence and what it can do, see our [documentation](https://www.pubnub.com/docs/general/presence/overview).
The example app uses [Presence](https://www.pubnub.com/docs/general/presence/overview) to show which users are online in the app. We use the PubNub user UUID to keep a unique reference to each user in the app. When we make a WebRTC video call, we use the UUID so both users can display the corresponding username in their UI.
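As a rough, illustrative sketch of the bookkeeping behind that online-user list, presence handling can be reduced to a plain function that applies PubNub-style `join`/`leave`/`timeout` events to a map of online users. The function name and event shape here are assumptions for the example, not part of the PubNub SDK:

```javascript
// Applies a PubNub-style presence event to a map of online users.
// `users` maps uuid -> username; join adds an entry, leave/timeout removes it.
function applyPresenceEvent(users, event) {
  const next = { ...users };
  if (event.action === 'join') {
    next[event.uuid] = event.state && event.state.name
      ? event.state.name
      : event.uuid;
  } else if (event.action === 'leave' || event.action === 'timeout') {
    delete next[event.uuid];
  }
  return next;
}

// Example: two users join the global channel, then one disconnects.
let online = {};
online = applyPresenceEvent(online, { action: 'join', uuid: 'u1', state: { name: 'Ada' } });
online = applyPresenceEvent(online, { action: 'join', uuid: 'u2', state: { name: 'Lin' } });
online = applyPresenceEvent(online, { action: 'leave', uuid: 'u1' });
console.log(Object.keys(online)); // remaining online UUIDs
```

In the real app, a function like this would be driven by the `presence` listener registered on the PubNub instance, with the result rendered into the user-list DOM.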
Next, we'll need some utility methods for UI-specific functionality. These methods are not specific to all WebRTC apps; they only exist to make the UI I designed work. Add this code to the end of the app.js file.
```js
// =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// UI Render Functions
// =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function renderMessage(message) {
const messageDomNode = createMessageHTML(message);
log.append(messageDomNode);
// Sort messages in chat log based on their timetoken (value of DOM id)
sortNodeChildren(log, 'id');
chat.scrollTop = chat.scrollHeight;
}
function incomingCall(name) {
return new Promise((resolve) => {
acceptCallButton.onclick = function() {
incomingCallModal.classList.add(hide);
resolve(true);
}
rejectCallButton.onclick = function() {
incomingCallModal.classList.add(hide);
resolve(false);
}
callFromSpan.innerHTML = name;
incomingCallModal.classList.remove(hide);
});
}
function confirmCall(name) {
return new Promise((resolve) => {
yesCallButton.onclick = function() {
callConfirmModal.classList.add(hide);
resolve(true);
}
noCallButton.onclick = function() {
callConfirmModal.classList.add(hide);
resolve(false);
}
callConfirmUsername.innerHTML = name;
callConfirmModal.classList.remove(hide);
});
}
function getLocalUserName() {
return new Promise((resolve) => {
usernameInput.focus();
usernameInput.value = '';
usernameInput.addEventListener('keyup', (event) => {
const nameLength = usernameInput.value.length;
if (nameLength > 0) {
joinButton.classList.remove('disabled');
} else {
joinButton.classList.add('disabled');
}
if (event.keyCode === 13 && nameLength > 0) {
resolve(usernameInput.value);
}
});
joinButton.addEventListener('click', (event) => {
const nameLength = usernameInput.value.length;
if (nameLength > 0) {
resolve(usernameInput.value);
}
});
});
}
function getLocalStream() {
return new Promise((resolve, reject) => {
navigator.mediaDevices
.getUserMedia({
audio: true,
video: true
})
.then((avStream) => {
resolve(avStream);
})
.catch((err) => {
alert('Cannot access local camera or microphone.');
console.error(err);
reject();
});
});
}
function createUserListItem(userId, name) {
const div = document.createElement('div');
div.id = userId;
const img = document.createElement('img');
img.src = './phone.png';
const span = document.createElement('span');
span.innerHTML = name;
div.appendChild(img);
div.appendChild(span);
return div;
}
function createMessageHTML(messageEvent) {
const text = messageEvent.message.text;
const jsTime = parseInt(messageEvent.timetoken.substring(0,13));
const dateString = new Date(jsTime).toLocaleString();
const senderUuid = messageEvent.publisher;
const senderName = senderUuid === pubnub.getUUID()
? username
: document.getElementById(senderUuid).children[1].innerText;
const div = document.createElement('div');
const b = document.createElement('b');
div.id = messageEvent.timetoken;
b.innerHTML = `${senderName} (${dateString}): `;
div.appendChild(b);
div.innerHTML += text;
return div;
}
// =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
// Utility Functions
// =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
function sendMessage() {
    // Remove <br> tags and newlines from the message text before publishing
    const messageToSend = messageInput.value.replace(/<br ?\/?>|\n/g, '');
    const trimmed = messageToSend.replace(/(\s)/g, '');
    if (trimmed.length > 0) {
        pubnub.publish({
            channel: globalChannel,
            message: {
                text: messageToSend
            }
        });
    }
    messageInput.value = '';
}
// Sorts sibling HTML elements based on an attribute value
function sortNodeChildren(parent, attribute) {
const length = parent.children.length;
for (let i = 0; i < length-1; i++) {
if (parent.children[i+1][attribute] < parent.children[i][attribute]) {
parent.children[i+1].parentNode
.insertBefore(parent.children[i+1], parent.children[i]);
i = -1;
}
}
}
function noVideo() {
    const message = 'No peer connection made. ' +
        'Try adding a TURN server to the WebRTC configuration.';
    if (remoteVideo.paused) {
        alert(message);
        closeVideoEventHandler();
    }
}
```
### **CSS for the WebRTC Video Streaming App**
We need CSS styles in our app so the UI looks clean and pleasant. The index.html file already references the style.css file, so add it in the same folder. The [style.css file](https://github.com/ajb413/pubnub-js-webrtc/blob/master/example/style.css) for this WebRTC app is available in the GitHub repository.
That's it! You can now deploy your static frontend web files to a web hosting platform like WordPress or [GitHub Pages](https://pages.github.com/). Your WebRTC chat app can be used by anyone in the world. The code is mobile-compatible, meaning the latest web browsers on iOS and Android can run the app for face-to-face video!
**WebRTC Streaming FAQ**
---------------------------------------
### **Is the WebRTC package officially part of PubNub?**
No. It is a community-supported open-source project. If you have questions or need help, contact [devrel@pubnub.com.](mailto:devrel@pubnub.com) If you would like to report a bug, do so on the [GitHub Issues page](https://github.com/ajb413/pubnub-js-webrtc/issues).
### **Does PubNub stream audio or video data with WebRTC?**
No. PubNub pairs very well with WebRTC as a signaling service. This means PubNub signals events from client to client using Pub/Sub messaging. These events include:
- I, User A, would like to call you, User B
- User A is currently trying to call you, User B
- I, User B, accept your call, User A
- I, User B, reject your call, User A
- I, User B, would like to end our call, User A
- I, User A, would like to end our call, User B
- Text instant messaging like in Slack, Google Hangouts, Skype, Facebook Messenger, etc.
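The call flow described by the list above can be sketched as a tiny state machine that consumes these Pub/Sub signals. The state and event names are made up for this illustration; the PubNub JS WebRTC package implements its own richer protocol:

```javascript
// Minimal call-signaling state machine. Each peer publishes one of these
// event types over Pub/Sub; the receiving client's call state advances
// only when the event is valid for its current state.
const CallState = { IDLE: 'idle', RINGING: 'ringing', IN_CALL: 'in-call' };

function nextCallState(state, eventType) {
  switch (eventType) {
    case 'call-request': return state === CallState.IDLE    ? CallState.RINGING : state;
    case 'call-accept':  return state === CallState.RINGING ? CallState.IN_CALL : state;
    case 'call-reject':  return state === CallState.RINGING ? CallState.IDLE    : state;
    case 'call-end':     return state === CallState.IN_CALL ? CallState.IDLE    : state;
    default:             return state; // ignore unknown events
  }
}

// User A calls User B; B accepts; later one side ends the call.
let state = CallState.IDLE;
state = nextCallState(state, 'call-request'); // -> 'ringing'
state = nextCallState(state, 'call-accept');  // -> 'in-call'
state = nextCallState(state, 'call-end');     // -> 'idle'
```

Once the accept signal arrives, the actual audio/video flows peer-to-peer over WebRTC; PubNub only carries these small coordination messages.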
### **Can I make a group call with more than 2 participants using WebRTC and PubNub?**
It is possible to build group calling with WebRTC and PubNub; however, the current PubNub JS WebRTC package can only connect 2 users in a private call, not a WebRTC-supported simulcast between more than 2 users. The community may develop this feature in the future, but there are no development plans to date.
**Getting Started with PubNub for Your WebRTC Application**
------------------------------------------------------
Build your live streaming application with PubNub. Ensure low latency and low development costs by simply signing up for a free account and integrating our APIs into your WebRTC application. You'll quickly have an MVP of your chat app. With PubNub, you'll have access [to many tools and resources](https://www.pubnub.com/docs) to help you build a robust, scalable chat application.
To get started, follow these steps:
- [Create a free PubNub account.](https://admin.pubnub.com/#/register)
- [Follow](https://www.pubnub.com/tutorials/) a step-by-step tutorial to set up and build a chat app with the PubNub SDK.
- [Add features such as mobile push notifications](https://www.pubnub.com/tutorials/) to your iOS or Android chat apps by following a [detailed tutorial](https://www.pubnub.com/tutorials/).
[Visit our docs to learn more about](https://www.pubnub.com/docs) building your real-time chat web application.
How can PubNub help you?
===================================
This article was originally published on [PubNub.com](https://www.pubnub.com/blog/integrating-video-calling-in-chat-with-webrtc-and-pubnub/)
Our platform helps developers build, deliver, and manage real-time interactivity for web apps, mobile apps, and IoT devices.
The foundation of our platform is the industry's largest and most scalable real-time messaging network. With over 15 points of presence worldwide, 800 million monthly active users, and 99.999% reliability, you'll never have to worry about outages, concurrency limits, or latency issues caused by traffic spikes.
Discover PubNub
----------------
Check out the [Live Tour](https://www.pubnub.com/tour/introduction/) to understand the essential concepts behind every PubNub-powered app in under 5 minutes.
Set Up
-----------
Sign up for a [PubNub account](https://admin.pubnub.com/signup/) for immediate, free access to PubNub keys.
Get Started
---------
The [PubNub documentation](https://www.pubnub.com/docs) will get you up and running, whatever your use case or [SDK](https://www.pubnub.com/docs).

| pubnubdevrel |
1,755,451 | mini silent generator for home | Power Output: Determine your power needs based on the appliances or devices you plan to power... | 0 | 2024-02-08T11:21:43 | https://dev.to/pinnaclegenerators/mini-silent-generator-for-home-glf |

Power Output: Determine your power needs based on the appliances or devices you plan to power during an outage. Mini generators typically have lower power outputs ranging from a few hundred watts to a few kilowatts.
Noise Level: Look for generators with low noise levels, typically measured in decibels (dB). Inverter generators are known for their quiet operation compared to traditional generators.
Fuel Type: Mini generators can run on gasoline, propane, or even solar power. Consider the availability and convenience of fuel options in your area.
Run Time: Evaluate the generator's run time on a single tank of fuel. Longer run times can be advantageous during extended power outages.
Portability: Choose a lightweight and portable generator for easy transport and storage. Some models come with handles and wheels for added convenience.
Inverter Technology: Inverter generators provide clean and stable power, making them suitable for powering sensitive electronics such as computers and smartphones.
Automatic Start/Stop: Some generators come with automatic start/stop features, allowing them to turn on automatically during a power outage and shut off when power is restored. https://pinnaclegenerators.com/

| pinnaclegenerators |
1,755,667 | How to remove your personal info from Google | Being undoubtedly the most popular engine, Google seems to know everything. You look up any query -... | 0 | 2024-02-08T15:17:47 | https://dev.to/jesssika89/how-to-remove-your-personal-info-from-google-2nl3 | security, google, internet |

Being undoubtedly [the most popular engine](https://www.statista.com/statistics/216573/worldwide-market-share-of-search-engines/), Google seems to know everything. You look up any query - it provides you with an answer. But have you ever wondered how much it knows about you?
Asking yourself this question from time to time can prove very useful. Googling yourself provides great insight into how other people, be it a potential employer, a date, or even a landlord, see you online. The better you know what links get pulled into the results, the better you'll understand how to manage your online (and, subsequently, offline) reputation.
But it’s not just that. Googling yourself might reveal the pages with your personal information you didn’t know existed. These vary from people-finder profiles to business directories. And these are the results you should be concerned about.
Having personal info so easily accessible by anyone leads to threats ranging from stalking and harassment to fraud and identity theft. Having even such mundane details as your phone number, legal name, address, and family relations revealed in Google puts you at serious risk.
## So, how do you remove your personal info from Google?
First things first, it’s important to understand that Google itself doesn’t generate any info. Instead, it crawls the websites and brings up the links that are likely to satisfy your query. So, the best way to remove URLs from search results is to remove them from the original source. To do that, contact the websites directly. However, if they don’t comply, you can still try to hide the unwanted links from Google’s SERP. Here are your options:
**1. You find the URLs yourself**
If you‘ve already located the web pages you want deleted from Google, you can operate right on the results page. Click the three dots to the right of the link and then click “Remove result.” You’ll have to select the reason behind your request and follow the prompts from there.
Another option is to submit the URLs via Google’s [personal content removal form](https://support.google.com/websearch/contact/content_removal_form). This one also deals with websites with exploitative removal practices (the ones that refuse to delete your info or ask for a payment).
There are several other forms you can use as well, each covering different cases from intimate personal photos to images of minors. The whole list can be found [here](https://support.google.com/websearch/troubleshooter/3111061).
There is yet another form for queries about illegal content. This one covers copyright infringement, court orders, and trademark violations.
**2. You rely on Google to find the pages exposing your info**
Where formerly you had to find all the links yourself and manually submit them to Google, you can now rely on the search giant itself to find the pages containing your personal data. A relatively new tool called “Results about you” lets you input your personal details and then crawls the web to find pages that expose them. Then you can choose which links you want to have removed and which can remain in the search results. You'll receive notifications along the way and have a dashboard that helps keep track of everything that has been found. You will also receive alerts when new results containing your contact information emerge.
The great thing is that the whole process requires minimal effort on your side. The best thing is that this tool might find the links you wouldn’t find yourself.
The drawback is that “Results about you” doesn’t operate worldwide and is accessible primarily in the US and the UK.
**_Sources:_**
1. [https://support.google.com/websearch/troubleshooter/3111061?hl=en](https://support.google.com/websearch/troubleshooter/3111061?hl=en)
2. [https://www.statista.com/statistics/216573/worldwide-market-share-of-search-engines/](https://www.statista.com/statistics/216573/worldwide-market-share-of-search-engines/)
3. [https://onerep.com/blog/how-to-remove-your-personal-info-from-google](https://onerep.com/blog/how-to-remove-your-personal-info-from-google)
| jesssika89 |
1,756,485 | Selecting the Perfect Hair Serum: Tailoring to Your Hair Type and Goals | Hair silk serum is a popular hair care product that claims to make your hair smooth, shiny, and... | 0 | 2024-02-09T08:49:50 | https://dev.to/nanolamination/selecting-the-perfect-hair-serum-tailoring-to-your-hair-type-and-goals-406o | beauty, hair, silkyhair |

[Hair silk serum](https://www.nanolamination.com/product-page/magic-silk-serum-1) is a popular hair care product that claims to make your hair smooth, shiny, and healthy. It contains hydrolyzed silk protein, a natural ingredient derived from silk fibers, which has many benefits for your hair.
But how does hair smoothing serum work? And what are the best hair serums for different hair types and needs?
In this article, we will answer these questions and more, and provide you with tips and recommendations to help you get the most out of hair serum.

## The Science Behind [Hair Silk Serum](https://www.nanolamination.com/product-page/magic-silk-serum-1) and Its Benefits for Your Hair
Hair smooth serum is a liquid or semi-liquid product containing hydrolyzed silk protein and other ingredients such as oils, vitamins, and conditioners.
Hydrolyzed silk protein is a natural ingredient extracted from silk fibers, known for their strength, softness, and luster. Hydrolyzed silk protein can penetrate the hair shaft and bond with the hair’s keratin, which makes up the hair’s structure.
Therefore, this serum helps to replenish and repair the hair, fill in the gaps and cracks in the cuticle, and smooth out the hair surface. As a result, this serum can improve the hair’s texture, moisture, elasticity, and shine and reduce frizz, split ends, and breakage.
By creating a protective barrier on the hair, it can shield your hair from the harmful effects of heat, environment, and color loss. Studies by the International Journal of Cosmetic Science and the Journal of Cosmetic Science have shown that “hydrolyzed silk protein, a key ingredient in [hair silk serum](https://www.nanolamination.com/product-page/magic-silk-serum-1), can improve the heat resistance of hair by 20% and the color durability of dyed hair by 12.5%”.
## How to Use Hair Smoothing Serum Effectively and Safely
Hair serum can be applied to damp or dry hair, depending on the product’s instructions and your preference. Usually, a small amount of serum coats the hair evenly, from mid-lengths to ends. You can then style your hair as usual or leave it to air-dry. Depending on your hair condition and needs, it can be used daily or occasionally.
However, using this serum sparingly and avoiding applying it to the roots or scalp is important, as this can make your hair look oily or greasy. Before using a new hair serum, a patch test is also advisable to check for allergic reactions or sensitivities. If you experience any irritation, itching, or redness, stop using the product immediately and consult a doctor.
## How to Pick the Right Hair Serum for Your Hair Type and Goals
Serums are popular hair care products that make your hair smooth, shiny, and healthy. They contain hydrolyzed silk protein, a natural ingredient with many benefits for your hair. But not all serums are the same: you need to choose the one that suits your hair type and goals.
Here are some tips to help you find the best hair serum for you:
- For fine or thin hair, go for a light, non-oily serum that won't make your hair flat or greasy. You may also want a volumizing or thickening ingredient, such as caffeine, biotin, or redensyl, to give your hair more body and fullness.
- For dry or damaged hair, choose a rich and hydrating hair serum that can moisturize and nourish your hair and bring back its life and shine, with repairing or strengthening ingredients, such as keratin, wheat protein, or plant cells, to heal damage and prevent further breakage.
- For curly or wavy hair, opt for a smoothing and defining hair serum that can control frizz and enhance your curls or waves, with moisturizing or curl-boosting ingredients, such as argan oil, shea butter, or coconut oil, to nourish and hydrate your hair and add bounce and glow.
## Conclusion
[Hair silk serum](https://www.nanolamination.com/product-page/magic-silk-serum-1) is a natural and effective way to achieve smooth, shiny, and healthy hair. It contains hydrolyzed silk protein, a natural ingredient that has many benefits for your hair, such as improving its texture, moisture, elasticity, and shine and reducing frizz, split ends, and breakage. It protects hair from heat damage, environmental stressors, and color fading.
However, you must choose the best silk serum for your hair type and needs and use it sparingly and safely. By following the tips and recommendations in this article, you can get the most out of hair serum and enjoy its amazing results.
Don’t miss this opportunity to try [Nano Lamination](https://www.nanolamination.com/) and see the difference. When you choose us, you choose the best products for frizzy hair and a natural and effective solution for maintaining the health and beauty of your hair. Book your appointment today and get a special discount on your first session.
| nanolamination |
1,758,941 | How To Become a Game Developer | Introduction The game development industry is a dynamic and rapidly expanding sector,... | 0 | 2024-02-12T11:42:03 | https://dev.to/ericabrookssf/how-to-become-a-game-developer-212b | gamedev, gaming |

## Introduction
The game development industry is a dynamic and rapidly expanding sector, teeming with creativity and innovation. Over the last few decades, it has transformed from a niche community of developers into a global powerhouse, influencing not just entertainment but also education, social interaction, and even virtual economies. This growth is propelled by continuous advancements in technology, expanding access to gaming through mobile devices, and an ever-growing audience of gamers across all demographics. The result is a rich tapestry of opportunities for those aspiring to craft the next generation of gaming experiences.
Becoming a game developer today means entering a field with vast potential: from indie games that touch the hearts of niche audiences to blockbuster hits that define generations, and innovative educational tools that transform learning. Moreover, the emergence of online slots and casino games as a significant sector within the industry highlights the diverse paths available for developers. Companies like Pragmatic Play, known for popular titles like Gates of Olympus, exemplify success in niche markets, demonstrating that the realm of game development is as varied as it is vast.
The goal of this article is to navigate you through the multifaceted world of game development. Whether you are intrigued by the storytelling and world-building of AAA titles, the rapid development cycle of mobile games, the cutting-edge technology of VR/AR development, or the specialized niche of online slots, this guide aims to lay the groundwork for your journey into game development. From understanding the industry's landscape to acquiring the necessary skills, and ultimately carving out your career path, we aim to provide a comprehensive overview that will serve as your roadmap to becoming a game developer.
## Section 1: Overview of the Game Development Industry
### The Essence of Game Development
At its core, game development is the art and science of creating interactive experiences. It is a discipline that combines creativity with technology, requiring a harmonious blend of storytelling, graphic design, programming, sound design, and user experience. Unlike other forms of software development, game development is uniquely interdisciplinary, often bringing together teams of specialists who work in concert to bring a game from concept to reality.
### Interdisciplinary Teams
A typical game development team might include game designers, who devise the game's mechanics and story; programmers, who bring these ideas to life through code; artists and animators, who create the visual elements; sound engineers, who design audio effects and music; and testers, who ensure the game is both fun and functional. This collaboration across different skill sets is what makes game development both challenging and rewarding.
### Evolution of Gaming Platforms
The landscape of gaming has evolved dramatically from the days of coin-operated arcade machines. The introduction of home consoles and personal computers in the late 20th century democratized access to gaming, setting the stage for the industry's explosive growth. The advent of the internet and mobile technology further expanded gaming's reach, enabling multiplayer experiences and on-the-go gaming to a global audience. Today, the emergence of virtual and augmented reality (VR/AR) is pushing the boundaries of what games can be, offering immersive experiences that were once the stuff of science fiction.
This evolution has not only expanded the types of games that can be developed but also the markets and audiences available to developers. Each platform offers its unique challenges and opportunities, from the deep, narrative-driven experiences of PC and console games to the quick, accessible gameplay of mobile titles, and the immersive worlds of VR/AR. Moreover, the rise of online slots and casino games illustrates the industry's capacity for endless innovation, catering to a wide range of interests and demographics.
The game development industry's journey from arcade cabinets to immersive VR experiences is a testament to its relentless innovation and growth. As we delve deeper into the skills, education, and career paths within this vibrant field, it's clear that the possibilities for aspiring game developers are as limitless as the imagination.
## Section 2: Types of Games and Prominent Developers
The game development landscape is incredibly diverse, catering to a wide range of preferences and playing styles. This diversity is not just in the genres of games available but also in the platforms they are played on. Here, we delve into the different types of games and highlight some of the industry's most prominent developers, showcasing the breadth of opportunities available to aspiring game developers.
### Mobile Games
Mobile games stand out for their accessibility and broad appeal, reaching players across all age groups and demographics. The key to their success lies in the convenience of smartphones and tablets, allowing people to engage with games anytime, anywhere. This segment has seen exponential growth, thanks to developers like [Supercell](https://supercell.com/en/), known for blockbuster hits such as "Clash of Clans" and "Brawl Stars." These games have not only captivated millions of players worldwide but have also shown how successful business models in mobile gaming operate, combining engaging gameplay with strategies for player retention and monetization.
### PC and Console Games
The realm of PC and console games is where depth and complexity thrive, offering immersive experiences that often require hours of engagement. Developers like [Valve](https://www.valvesoftware.com/en/), with its revolutionary titles like "Half-Life" and "Portal," and Naughty Dog, known for narrative-driven games such as "The Last of Us" series and "Uncharted," have set high standards for storytelling, gameplay mechanics, and visual fidelity. This segment appeals to dedicated gamers seeking rich, expansive game worlds and intricate stories, showcasing the potential for game developers to create compelling, emotionally resonant experiences.
### Online Slots and Casino Games
A specialized yet lucrative niche within the game development industry is online slots and casino games. These games blend traditional gambling mechanics with the interactivity and engagement of video games. [Pragmatic Play](https://www.pragmaticplay.com/en/) is a standout developer in this space, known for its popular online slot game, "[Gates of Olympus](https://great.com/slots/gates-of-olympus/)." This title exemplifies how online slots can captivate players with compelling themes, impressive graphics, and innovative gameplay mechanics. The success of Pragmatic Play and similar developers underscores the potential for game developers to innovate within regulated markets, creating games that offer both entertainment and the chance for financial reward.
### VR/AR Games
At the frontier of gaming technology lies the immersive world of VR (Virtual Reality) and AR (Augmented Reality) games. These platforms offer unprecedented levels of immersion, enabling players to step into and interact with game worlds in ways previously unimaginable. Oculus Studios, a pioneer in VR gaming, has developed a range of titles that demonstrate the potential of VR technology to create deeply engaging and interactive experiences. From action-packed adventures to tranquil puzzle games, VR and AR are opening new frontiers for game developers to explore, pushing the boundaries of creativity and technological innovation.
The diversity of games and platforms highlights the vast spectrum of opportunities available in the game development industry. Whether you're passionate about crafting expansive worlds for PC and console gamers, developing accessible mobile games, innovating in the online slots and casino space, or pushing the boundaries of VR/AR technology, there's a niche for every aspiring game developer. With the right skills, creativity, and dedication, you can contribute to the next generation of gaming experiences, creating games that entertain, inspire, and innovate.
## Section 3: Essential Skills and Knowledge
Becoming a successful game developer requires a blend of technical, artistic, and soft skills. These competencies allow you to navigate the complexities of game development, from initial concept to final product.
### Technical Skills
- Programming Languages: Proficiency in programming languages is foundational. C++ and C# are staples in the industry, known for their use in high-performance game engines and large-scale projects. JavaScript, on the other hand, is essential for web-based games, including online slots and casino games. Mastery of these languages enables developers to implement game mechanics, optimize performance, and solve technical challenges.
- Game Engines: Understanding how to utilize game engines like Unity and Unreal Engine is crucial. These engines provide the tools necessary for game development, including rendering, physics, and scripting. Unity is particularly favored for mobile and indie projects due to its versatility and ease of use, while Unreal Engine is renowned for its powerful graphics capabilities, making it ideal for AAA games.
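To connect these skills to actual gameplay code, here is a minimal fixed-timestep update loop, a pattern engines such as Unity (its `FixedUpdate` callback) and Unreal implement internally so physics stays deterministic even when frame times vary. This is a generic JavaScript sketch for illustration, not engine-specific code:

```javascript
// Fixed-timestep game loop core: accumulate elapsed time and run the
// simulation in constant-size steps, regardless of how uneven the
// rendered frames are.
const STEP_MS = 16; // ~60 simulation updates per second

function advance(world, elapsedMs) {
  world.accumulator += elapsedMs;
  while (world.accumulator >= STEP_MS) {
    world.x += world.vx * (STEP_MS / 1000); // integrate position (units/s)
    world.ticks += 1;
    world.accumulator -= STEP_MS;
  }
  return world;
}

// Simulate three uneven frames (10 ms, 40 ms, 30 ms = 80 ms total):
const world = { x: 0, vx: 100, ticks: 0, accumulator: 0 };
[10, 40, 30].forEach((frame) => advance(world, frame));
console.log(world.ticks); // 5 steps of 16 ms fit into 80 ms
```

In a browser, `advance` would be driven from a `requestAnimationFrame` callback; the point of the accumulator is that a long frame simply runs extra simulation steps instead of letting objects tunnel or drift.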
### Artistic Skills
- Game Design Principles: A thorough understanding of game design principles is vital for creating engaging and balanced games. This includes mechanics, level design, and user experience (UX) design.
- Graphic Design and Animation: Visual aesthetics are critical in game development. Skills in graphic design and animation bring the game world and its inhabitants to life, enhancing the player's immersion.
- Audio Production: Sound effects and music are integral to creating a compelling game atmosphere. Knowledge of audio production allows developers to craft soundscapes that complement the game's aesthetics and storytelling.
### Soft Skills
- Teamwork: Game development is inherently collaborative. The ability to work effectively within a team, understanding and integrating the contributions of colleagues from various disciplines, is essential.
- Communication: Clear communication is crucial for articulating vision, providing feedback, and coordinating with team members.
- Critical Thinking: Problem-solving and the ability to critically evaluate game designs and development processes are key to overcoming challenges and improving game quality.
## Section 4: Educational Pathways
There are multiple pathways to entering the game development industry, each with its advantages and challenges.
### Degree Programs vs. Self-Taught Routes
- Degree Programs: Pursuing a degree in game development or a related field provides a structured educational experience, access to resources, and networking opportunities. However, it can be time-consuming and costly.
- Self-Taught Routes: Many successful game developers are self-taught, using online resources and community projects to learn. This route offers flexibility and the ability to tailor learning to specific interests but requires discipline and motivation.
### Online Platforms Offering Specialized Courses
Platforms like [Coursera](https://www.coursera.org/), [Udemy](https://www.udemy.com/), and specific engine tutorials (e.g., Unity Learn, Unreal Online Learning) offer courses in game development. These resources are valuable for both beginners and experienced developers looking to expand their skills.
### The Value of Continuous Learning and Certifications
The game development industry is constantly evolving, making continuous learning essential. Certifications in new technologies and methodologies can enhance your skill set and improve your career prospects.
## Section 5: Gaining Practical Experience
Hands-on experience is invaluable in game development, providing insight into the development process and helping to build a portfolio.
### Participating in Game Jams and Hackathons
Game jams and hackathons are excellent opportunities for practical experience. These events encourage creativity, teamwork, and rapid development, culminating in a functional game prototype.
### Building a Diverse Portfolio
A portfolio showcasing a range of projects, including personal and collaborative works, is crucial for demonstrating your skills and creativity to potential employers or clients.
### Internship Opportunities
Internships with indie and AAA companies offer real-world experience, mentorship, and industry connections. Securing an internship often involves showcasing a strong portfolio, leveraging educational or personal networks, and demonstrating a keen interest in game development.
Gaining a solid foundation in essential skills, pursuing educational opportunities, and accumulating practical experience are key steps on the path to becoming a game developer. These efforts not only enhance your abilities but also prepare you for a successful career in creating innovative and engaging games.
## Section 6: Specialization in Game Development
The path to becoming a game developer is enriched by choosing a specialization. This decision should be guided by personal interest, skills, and market needs. One of the emerging niches with significant growth potential is online slots development, which combines traditional game mechanics with the chance for monetary reward.
### Choosing a Focus Area
- Online Slots Development: Specializing in this area requires understanding of probability, game mechanics, and regulatory compliance. Developers like Pragmatic Play, with titles like Gates of Olympus, demonstrate the success achievable in this niche.
- Mobile Gaming: With the widespread use of smartphones, mobile gaming offers opportunities to reach a vast audience. Skills in user interface design and optimization for various devices are key.
- Indie Projects: For those with a creative vision, indie game development allows for experimenting with unique concepts and storytelling. This area demands versatility and a do-it-yourself attitude.
- AAA Titles: Working on AAA titles involves contributing to large-scale projects with significant budgets, focusing on cutting-edge graphics and deep narrative structures.
- Online Multiplayer Games: This specialization requires expertise in network coding, server management, and creating engaging multiplayer experiences.
## Section 7: Launching Your Career in Game Development
Entering the game development industry is a competitive but rewarding journey. Here are strategies to effectively launch your career:
### Job Searching and Networking
- Engage with online communities and social media platforms dedicated to game development.
- Attend industry conferences and workshops to meet professionals and learn about job opportunities.
### Crafting a Standout Resume and Portfolio
- Highlight specific projects and roles that showcase your skills and contributions.
- Include a mix of personal, academic, and freelance projects to demonstrate versatility.
### Interview Preparation
- Be ready to discuss your development process, problem-solving methods, and how you stay updated with industry trends.
- Prepare a presentation or demo reel that effectively showcases your best work.
## Section 8: Staying Ahead in the Game Development Industry
The game development industry is constantly evolving, making it essential to stay informed and adapt to new technologies and trends.
### Keeping Skills Updated
- Continuously learn new programming languages, game engines, and development tools.
- Pursue certifications in emerging technologies like AI and virtual reality.
### Networking and Continuous Learning
- Participate in forums, online courses, and attend game development meetups to exchange ideas and learn from peers.
- Follow industry news and research future trends to anticipate the direction of game development.
## Conclusion
The journey to becoming a game developer is filled with challenges, learning, and immense satisfaction. From mastering essential skills to choosing a specialization and launching your career, each step offers the opportunity to contribute to the vibrant and ever-evolving game industry. Embrace the journey with determination, creativity, and an unwavering passion for game development.
## Additional Resources
To further support your journey into game development, consider exploring the following resources:
- Books: "[The Art of Game Design: A Book of Lenses](https://www.amazon.com/Art-Game-Design-Book-Lenses/dp/0123694965)" by Jesse Schell, "Game Programming Patterns" by Robert Nystrom.
- Online Resources: Gamasutra for industry news, Unity Learn and Unreal Engine Online Learning for tutorials.
- Communities and Forums: Reddit’s r/gamedev, Stack Exchange’s Game Development section, and Discord communities dedicated to game development.
- Conferences and Events: GDC (Game Developers Conference), PAX (Penny Arcade Expo), and local meetups offer invaluable networking opportunities and insights into the industry.
By leveraging these resources, staying engaged with the community, and continuously honing your skills, you can navigate the path to a successful career in game development.
| ericabrookssf |
1,759,132 | Building a serverless GraphQL API with NeonDB and Prisma | Written by Nitish Sharma ✏️ Serverless architecture is one answer to the demand for scalable,... | 0 | 2024-02-14T15:00:54 | https://blog.logrocket.com/building-serverless-graphql-api-neondb-prisma | graphql, prsima, webdev |

**Written by [Nitish Sharma](https://blog.logrocket.com/author/nitishsharma/) ✏️**
Serverless architecture is one answer to the demand for scalable, efficient, and easily maintainable solutions in modern web development. NeonDB, a serverless PostgreSQL offering, stands out as a robust choice among the serverless databases available.
In this article, we’ll create a powerful and flexible GraphQL API by harnessing the combined capabilities of Prisma ORM, Apollo Server, and NeonDB's serverless PostgreSQL. You can follow along with the [project code](https://github.com/nitishxyz/serverless-neon-prisma-graphql) and preview the [live demo](https://345g5ydhzd.execute-api.ap-south-1.amazonaws.com/) as we get started.
## Why use NeonDB with Prisma?
There are many tools available to choose from in the serverless architecture ecosystem. So, why are we using this particular combination of tools?
NeonDB is an innovative serverless database solution that offers the power of PostgreSQL with the flexibility and cost-effectiveness of serverless computing. It eliminates the need for complex database management tasks, allowing developers to focus on crafting exceptional user experiences.
Leveraging Prisma ORM — a modern, type-safe database toolkit for Node.js and TypeScript — adds a layer of abstraction that simplifies database interactions. Meanwhile, [Apollo Server](https://blog.logrocket.com/graphql-local-state-management-apollo/) facilitates the seamless integration of our GraphQL API.
So, we’ll set up a serverless PostgreSQL database with NeonDB, configure Prisma ORM for efficient data modeling, and implement Apollo Server to expose a GraphQL API.
By the end of this tutorial, you'll have a functional and scalable GraphQL API as well as a deeper understanding of the powerful synergy between serverless databases and cutting-edge development tools. Let's get started.
## Setting up NeonDB's serverless Postgres
It’s easy to set up Neon’s serverless Postgres. There are just four steps:
1. Sign up for NeonDB
2. Create a new database instance
3. Retrieve connection details
4. Connect to NeonDB
Start by signing up for a NeonDB account. Navigate to the NeonDB website and follow the registration process. Once registered, log in to your NeonDB dashboard.
In your NeonDB dashboard, look for an option to create a new database instance. Provide a name for your database, choose the desired region for deployment, and configure any additional settings according to your project requirements.

Once your Neon database instance is set up, locate the connection details provided by NeonDB. This typically includes the endpoint URL, port, username, and password. You’ll need these details to connect your applications and services to the serverless Postgres instance.

The connection details panel shows a direct connection to your database by default. You can also set up a pooled connection by checking the **Pooled connection** option.
A direct connection opens a new connection with the database on every request, while a pooled connection caches the connection so it can be reused by multiple queries. We can only push migrations via direct connections, but to make queries and other requests, we should use a pooled connection.
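To build intuition for why pooling matters, here is a small, illustrative TypeScript sketch of connection reuse. It only simulates a pool in memory — Neon's actual pooling is handled server-side (via PgBouncer), not in your application code:

```typescript
// Illustrative only: a toy pool showing why reuse avoids opening a fresh
// connection (and paying its handshake cost) for every request.
class ToyPool {
  private idle: number[] = [];
  private nextId = 0;
  opened = 0; // how many "connections" were ever created

  acquire(): number {
    const conn = this.idle.pop();
    if (conn !== undefined) return conn; // reuse an idle connection
    this.opened += 1;
    return this.nextId++; // simulate opening a brand-new connection
  }

  release(conn: number): void {
    this.idle.push(conn);
  }
}

const pool = new ToyPool();
for (let i = 0; i < 100; i++) {
  const conn = pool.acquire(); // 100 sequential requests...
  pool.release(conn);
}
// ...reuse a single underlying connection, whereas a direct
// connection would have opened a new one each time.
```

With a direct connection, each of those 100 requests would pay for its own connection setup; the pooled path amortizes that cost across requests, which is exactly what we want for short-lived serverless workloads.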
Use the obtained connection details to connect to your NeonDB serverless Postgres instance from your development environment or server. You can use tools like `psql` or any PostgreSQL client library in your preferred programming language.
That’s all! Now that the dashboard is all set up, take some time to explore it. Familiarize yourself with the monitoring tools, performance metrics, and any additional features NeonDB provides for managing and optimizing your serverless Postgres database.
## Setting up our Prisma and TypeScript project
Create a new directory in a place that you prefer. For this article, we’re going to use `serverless-neon-prisma-graphql` as our directory name:
```shell
mkdir serverless-neon-prisma-graphql
```
Inside that directory, initialize a project using the following npm command:
```bash
npm init
```
Then, set the `type` key to `module` in your `package.json` file, which should look something like this:
```json
{
"name": "serverless-neon-prisma-graphql",
"module": "index.js",
"type": "module",
"peerDependencies": {},
"dependencies": {},
"devDependencies": {}
}
```
Next, install and initialize TypeScript:
```bash
npm install typescript --save-dev
npx tsc --init
```
Now, install Prisma CLI as a development dependency to the project:
```bash
npm install prisma --save-dev
```
Set up Prisma with the `init` command, setting the `provider` to `postgresql`:
```bash
npx prisma init --datasource-provider postgresql
```
You should see the following output:
```plaintext
✔ Your Prisma schema was created at prisma/schema.prisma
You can now open it in your favorite editor.
warn You already have a .gitignore file. Don't forget to add `.env` in it to not commit any private information.
Next steps:
1. Set the DATABASE_URL in the .env file to point to your existing database. If your database has no tables yet, read https://pris.ly/d/getting-started
2. Run prisma db pull to turn your database schema into a Prisma schema.
3\. Run prisma generate to generate the Prisma Client. You can then start querying your database.
More information in our documentation: https://pris.ly/d/getting-started
```
This will create a `.env` file, which should look something like this:
```plaintext
# Environment variables declared in this file are automatically made available to Prisma.
# See the documentation for more detail: https://pris.ly/d/prisma-schema#accessing-environment-variables-from-the-schema
# Prisma supports the native connection string format for PostgreSQL, MySQL, SQLite, SQL Server, MongoDB and CockroachDB.
# See the documentation for all the connection string options: https://pris.ly/d/connection-strings
DATABASE_URL="postgresql://johndoe:randompassword@localhost:5432/mydb?schema=public"
```
Also, you’ll see a file `schema.prisma` with the following contents:
```prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
```
Finally, install Prisma Client:
```bash
npm install @prisma/client
```
With that, our TypeScript project should be set up with Prisma.
## Connecting Prisma with NeonDB
Now that we've set up Prisma, the next crucial step is to establish the connection between Prisma and NeonDB, ensuring that our GraphQL API can seamlessly interact with the serverless Postgres instance.
First, let’s update the Prisma configuration. Open the `prisma/schema.prisma` file and ensure that the connection details match those of your NeonDB serverless Postgres instance. The configuration should look something like this:
```prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL") // This is the pooler connection string to your DB
directUrl = env("DIRECT_DATABASE_URL") // This is the direct connection string to your DB
}
```
As you can see, we added a `directUrl` key to the `datasource` connector. This is because we will use a pooled connection for client queries and a direct connection for deploying migrations.
Next, we’ll update the `.env` file like so:
```plaintext
DATABASE_URL="postgresql://pooler-connection-string-from-neon?sslmode=require&connect_timeout=600&pgbouncer=true"
DIRECT_DATABASE_URL="postgresql://direct-connection-string-from-neon?sslmode=require&connect_timeout=300"
```
Be sure to configure your connection strings with the parameters at the end. Then, source your `.env` configuration in the terminal to apply these strings. This step is crucial, as it enables the app to access the database correctly using the specified parameters.
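The two strings differ only in their host (Neon's pooled endpoint carries a `-pooler` suffix) and their query parameters. As a sketch — using made-up placeholder hosts and credentials, not real endpoints — you can compose them with Node's built-in `URL` class:

```typescript
// Appends query parameters (sslmode, timeouts, pgbouncer) to a base URL.
// Hosts and credentials below are placeholders for illustration only.
function withParams(base: string, params: Record<string, string>): string {
  const url = new URL(base);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

// Pooled: used by the client at runtime (DATABASE_URL)
const pooledUrl = withParams(
  "postgresql://user:secret@ep-example-pooler.aws.neon.tech/neondb",
  { sslmode: "require", connect_timeout: "600", pgbouncer: "true" },
);

// Direct: used for deploying migrations (DIRECT_DATABASE_URL)
const directUrl = withParams(
  "postgresql://user:secret@ep-example.aws.neon.tech/neondb",
  { sslmode: "require", connect_timeout: "300" },
);
```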
Now, we’ll add some basic schema definitions to our `schema.prisma` file. Update your schema so that it looks like this:
```prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL") // This is the pooler connection string to your DB
directUrl = env("DIRECT_DATABASE_URL") // This is the direct connection string to your DB
}
model User {
id Int @id @default(autoincrement())
email String @unique
name String?
posts Post[]
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model Post {
id Int @id @default(autoincrement())
title String
content String?
published Boolean @default(false)
author User? @relation(fields: [authorId], references: [id])
authorId Int?
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
```
Next, we will deploy our migrations using the following command:
```bash
npx prisma migrate dev -n initUserAndPost
```
You should see the following output for this command, which should tell us that our migrations have been created and applied, as well as that our database and schema have been synced:
```plaintext
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "neondb", schema "public" at "connection-name.ap-southeast-1.aws.neon.tech"
Applying migration `20231214095829_init_user_and_post`
The following migration(s) have been created and applied from new schema changes:
migrations/
└─ 20231214095829_init_user_and_post/
└─ migration.sql
Your database is now in sync with your schema.
✔ Generated Prisma Client (v5.7.0) to ./node_modules/@prisma/client in 41ms
```
Congratulations, you’ve successfully deployed your first migration using Prisma to NeonDB Postgres.
## Setting up Apollo Server to handle GraphQL and deploy to AWS Lambda
Apollo Server is a versatile GraphQL server implementation developed by the team behind the Apollo GraphQL client. It simplifies the process of building, deploying, and managing GraphQL APIs, providing a flexible and feature-rich framework for creating robust, efficient, and extensible GraphQL APIs.
AWS Lambda is a serverless computing service that allows you to run code without provisioning or managing servers. It provides automatic scaling, cost efficiency, and the ability to run code in response to events, making it ideal for deploying GraphQL servers.
Deploying our GraphQL server on AWS Lambda allows us to leverage the benefits of serverless architecture, ensuring that resources are allocated only when needed. This means you only pay for the compute time consumed by your GraphQL requests.
We're going to deploy Apollo Server to AWS Lambda for their combined benefits of serverless architecture, scalability, and cost efficiency. This approach leverages the strengths of both Apollo Server and AWS Lambda to create a highly performant and easily scalable GraphQL API.
Before we can get started with this step of our project, there are a few things you need to do:
* [Create an AWS account](https://aws.amazon.com/free/)
* [Install the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
* [Create an IAM user](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-creds-create)
Then, let’s begin by installing Apollo Server, GraphQL, and AWS Lambda integrations we need, like so:
```bash
npm install @apollo/server graphql @as-integrations/aws-lambda
```
Create a directory named `src` with the following command:
```shell
mkdir src
```
The `src` directory will organize our server code files. Inside `src`, create a new file named `server.ts`:
```shell
touch src/server.ts
```
This file will hold our Apollo Server configurations. Add the following basic server setup in the `server.ts` file:
```typescript
import { ApolloServer } from '@apollo/server';
import { startServerAndCreateLambdaHandler, handlers } from '@as-integrations/aws-lambda';
const typeDefs = `#graphql
type Query {
test: String
}
`;
const resolvers = {
Query: {
test: () => 'Hello World!',
},
};
const server = new ApolloServer({
typeDefs,
resolvers,
});
export const graphqlHandler = startServerAndCreateLambdaHandler(
server,
handlers.createAPIGatewayProxyEventV2RequestHandler(),
);
```
This setup integrates Apollo Server with AWS Lambda, allowing us to trigger our GraphQL API by AWS Lambda functions. We initialized an Apollo Server with a simple GraphQL schema and query resolver, then exported a handler function that works with the AWS Lambda API Gateway integration.
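To make that contract concrete, here is a stand-in handler — not the Apollo integration itself, and with the event trimmed to two fields — showing the shape of what arrives from API Gateway and what a Lambda handler returns (real handlers are typically `async`; this one is synchronous for brevity):

```typescript
// A minimal stand-in for the event → response contract that
// startServerAndCreateLambdaHandler fulfills on our behalf. Real API Gateway
// v2 events carry many more fields (requestContext, rawQueryString, ...).
interface MinimalV2Event {
  rawPath: string;
  body?: string;
}

interface LambdaResponse {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
}

function echoHandler(event: MinimalV2Event): LambdaResponse {
  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ path: event.rawPath, received: event.body ?? null }),
  };
}
```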
Now that the basic configuration is done, we will create a deployment pipeline with Serverless Framework.
## Setting up the Serverless Framework for deployments
The Serverless Framework, or Serverless, is a powerful tool that simplifies the [deployment and management of serverless applications](https://blog.logrocket.com/building-serverless-app-typescript/). It’s an excellent choice for deploying Apollo Server on AWS Lambda. Let's set up the Serverless Framework for our deployments.
We’ll start by configuring our Serverless services. We’re going to create a file named `serverless.yml`, which will be responsible for deploying our GraphQL server to AWS Lambda:
```shell
touch serverless.yml
```
Then, write the basic configuration for deployments as shown below:
```yaml
service: apollo-lambda
provider:
  name: aws
  region: ${opt:region, 'ap-south-1'}
  runtime: nodejs18.x
  httpApi:
    cors: true
functions:
  graphql:
    # The format is: <FILENAME>.<HANDLER>
    handler: src/server.graphqlHandler
    events:
      - httpApi:
          path: /
          method: POST
      - httpApi:
          path: /
          method: GET
plugins:
  - serverless-plugin-typescript
```
You can change the `region` and `runtime` Node version according to your needs. However, the handler name must be in the following format:
```yaml
filename.exportedHandlerName
```
In the above file, we’ve set our `handler` to `src/server.graphqlHandler`, where the file name is `server.ts` and the exported handler name is `graphqlHandler` — all of which we set up before.
Next, install the following Serverless plugin to set up TypeScript support:
```bash
npm install serverless-plugin-typescript --save-dev
```
Now, it’s time to update a couple of our files. First, let’s update our `package.json` file to the following:
```json
{
"name": "serverless-neon-prisma-graphql",
"module": "serverless.ts",
"type": "module",
"peerDependencies": {
"typescript": "^5.0.0"
},
"dependencies": {
"@apollo/server": "^4.9.5",
"@as-integrations/aws-lambda": "^3.1.0",
"graphql": "^16.8.1",
"@prisma/client": "^4.16.2"
},
"devDependencies": {
"prisma": "^4.15.0",
"serverless-plugin-typescript": "^2.1.5"
}
}
```
In the initial `package.json` file, the `dependencies`, `peerDependencies`, and `devDependencies` fields were all empty; we installed these packages in the steps above. This file records the exact versions we used, as a fallback reference in case newer releases break anything.
Then, update the `tsconfig.json` file to `include` the `serverless.ts` file and `exclude` the `.serverless/**/*/` folder:
```json
{
"compilerOptions": {
"lib": ["ESNext"],
"moduleResolution": "node",
"noUnusedLocals": true,
"noUnusedParameters": true,
"removeComments": true,
"sourceMap": true,
"target": "ES2020",
"outDir": "lib",
"allowSyntheticDefaultImports": true
},
"include": ["src/*.ts", "serverless.ts"],
"exclude": [
"node_modules/**/*",
".serverless/**/*",
".webpack/**/*",
"_warmup/**/*",
".vscode/**/*"
],
"ts-node": {
"require": ["tsconfig-paths/register"]
}
}
```
Then, set up your AWS credentials in the following path:
```plaintext
~/.aws/credentials
```
The file should look like this, replacing the access key placeholders with your own access key IDs:
```plaintext
[default]
aws_access_key_id = your-access-key-id
aws_secret_access_key = your-access-key-id-secret
```
At this point, our server is set up. To test the server locally, we’ll need to create a file named `query.json` with the following contents:
```json
{
  "version": "2.0",
  "headers": {
    "content-type": "application/json"
  },
  "isBase64Encoded": false,
  "rawQueryString": "",
  "requestContext": {
    "http": {
      "method": "POST"
    }
  },
  "rawPath": "/",
  "routeKey": "/",
  "body": "{\"operationName\": null, \"variables\": null, \"query\": \"{ test }\"}"
}
```
This JSON object mimics the API Gateway event for a basic GraphQL request, so we can exercise the handler locally with the Serverless CLI. Invoke the following command to test the `serverless` config:
```shell
serverless invoke local -f graphql -p query.json
```
This command will use the payload from the `query.json` file and test it with our server setup. You should see the following output in your terminal:
```json
{
  "statusCode": 200,
  "headers": {
    "cache-control": "no-store",
    "content-type": "application/json; charset=utf-8",
    "content-length": "33"
  },
  "body": "{\"data\":{\"test\":\"Hello World!\"}}\n"
}
```
The output signifies that the GraphQL schema is correct and that the resolvers are working as well, as indicated by `"test": "Hello World!"` in the response body.
Finally, to deploy, use the following command:
```bash
npx serverless deploy
```
## Using Prisma to connect Apollo Server to NeonDB
With Apollo Server and our `serverless` config set up, our next step is to connect Apollo Server to Neon using Prisma. First, we will add `serverless-dotenv-plugin` so we can use the `.env` variables in AWS Lambda:
```bash
npm install serverless-dotenv-plugin --save-dev
```
Then, let’s update our `serverless` configuration to match the following:
```yaml
service: apollo-lambda
provider:
  name: aws
  region: ${opt:region, 'ap-south-1'}
  runtime: nodejs18.x
  httpApi:
    cors: true
functions:
  graphql:
    # The format is: <FILENAME>.<HANDLER>
    handler: src/server.graphqlHandler
    events:
      - httpApi:
          path: /
          method: POST
      - httpApi:
          path: /
          method: GET
plugins:
  - serverless-plugin-typescript
  - serverless-dotenv-plugin
package:
  patterns:
    - "!node_modules/.prisma/client/libquery_engine-*"
    - "node_modules/.prisma/client/libquery_engine-rhel-*"
    - "!node_modules/prisma/libquery_engine-*"
    - "!node_modules/@prisma/engines/**"
```
In the above, we added the `dotenv` plugin so that env variables can be loaded. We also added `patterns` under `package` to exclude (prefixed with `!`) the parts of the generated Prisma client that are not required, while including the query engine file ending in `rhel`, as it matches AWS Lambda’s architecture.
Next, we need the `schema.prisma` file to be available at runtime, since the generated Prisma Client and its types depend on it. Create a file named `template.yaml` and add the following content:
```yaml
Loader:
- .prisma=file
- .so.node=file
AssetNames: "[name]"
```
Add the following to your `.env` file to configure the Prisma CLI to use the correct binary targets for our project's deployment environment:
```plaintext
PRISMA_CLI_BINARY_TARGETS=native,rhel-openssl-1.0.x
```
Now in our `server.ts` file, we’ll import and initialize the Prisma client:
```typescript
import { PrismaClient } from "@prisma/client";
const prisma = new PrismaClient();
```
Then we’ll update our `typeDefs` in our GraphQL schema as shown below:
```typescript
const typeDefs = `#graphql
  type Query {
    fetchUsers: [User]
  }

  type User {
    id: Int!
    name: String # nullable, matching the optional name column in schema.prisma
    email: String!
    posts: [Post]
  }

  type Post {
    id: Int!
    title: String!
    content: String # nullable, matching the optional content column in schema.prisma
    published: Boolean!
  }

  type Mutation {
    createUser(name: String!, email: String!): User!
    createDraft(title: String!, content: String!, authorEmail: String!): Post!
    publish(id: Int!): Post
  }
`;
`;
```
Previously, we only had a test query to test our configuration without making actual calls to the database. Now we’ve added GraphQL types to make the data accessible from the resolvers. So, update the resolvers to resolve the above queries and mutations:
```typescript
const resolvers = {
  Query: {
    fetchUsers: async () => {
      const users = await prisma.user.findMany({
        include: {
          posts: true,
        },
      });
      return users;
    },
  },
  Mutation: {
    // @ts-expect-error
    createUser: async (parent, args) => {
      const user = await prisma.user.create({
        data: {
          name: args.name,
          email: args.email,
        },
      });
      return user;
    },
    // @ts-expect-error
    createDraft: async (parent, args) => {
      const post = await prisma.post.create({
        data: {
          title: args.title,
          content: args.content,
          published: false,
          // Link the draft to its author via the unique email from the args
          author: { connect: { email: args.authorEmail } },
        },
      });
      return post;
    },
    // @ts-expect-error
    publish: async (parent, args) => {
      // Flip the published flag on an existing draft
      const post = await prisma.post.update({
        where: { id: args.id },
        data: { published: true },
      });
      return post;
    },
  },
};
```
Also, we need to update our `schema.prisma` file to accommodate the AWS Lambda architecture. To do that, add the following to the `generator` configuration:
```plaintext
generator client {
provider = "prisma-client-js"
binaryTargets = ["native", "rhel-openssl-1.0.x"]
}
```
This is a breaking change if you’re deploying from a Mac. To work around that, we’re going to set up GitHub Actions to create a deployment pipeline.
## Deploying with GitHub Actions
In a moment, we’re going to create workflow files that will deploy our GraphQL server to AWS Lambda using Serverless via GitHub actions. But first, let’s set up some repository secrets.
Head over to GitHub and create a repository if you haven’t already. Navigate to **Settings > Secrets and variables > Actions**, then hit the **New repository secret** button to create one. As a reminder, secrets are variables that you create in an organization, repository, or repository environment. In this case, we’re creating them for GitHub Actions so that they are not accessible to anyone else. Name your variable, add the secret value, and repeat for each of the following variables:
* `AWS_ACCESS_KEY_ID`
* `AWS_SECRET_ACCESS_KEY`
* `ENV`
You should now see these environment variables in your repository secrets. Note that the `ENV` secret should hold your current `.env` contents. The other two should contain your AWS credentials to allow us to use `aws-actions`.
Now, let’s add some scripts to the `package.json` file to automate our deployment process:
```json
"scripts": {
"prisma:generate": "npx prisma generate",
"build": "tsc",
"deploy": "npm run prisma:generate && npm run build && npx serverless deploy"
},
```
Then, we’ll create a workflow file, which will trigger a GitHub Action when we push new code to the main branch. Create a new directory called `.github/workflows/` and then create a file in this directory called `deploy.yml`.

We need to use `aws-actions/configure-aws-credentials` to configure the root credentials for the npm script. This will allow us to deploy to the account with the configured credentials. Use the following config in the `deploy.yml` file:
```yaml
name: CDK Deployment
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-south-1
      - name: Install dependencies
        run: npm install
      - name: Deploy infrastructure
        run: |
          export AWS_ACCESS_KEY_ID="${{ secrets.AWS_ACCESS_KEY_ID }}"
          export AWS_SECRET_ACCESS_KEY="${{ secrets.AWS_SECRET_ACCESS_KEY }}"
          # Use main account credentials
          echo "${{ secrets.ENV }}" > .env
          source .env
          npm run deploy
```
Now all we need to do is push our changes to the main branch and deployment will begin. Head over to the **Actions** tab in your repository to see the progress. If you expand the **Deploy infrastructure** step, you should be able to see the deployment URL.

And with that, we’re all done. You can check out the [live demo](https://345g5ydhzd.execute-api.ap-south-1.amazonaws.com/), where you should see the final deployment hosted on AWS Lambda. Apollo Studio self-documents the GraphQL schema we defined, so you can run the queries — such as listing users with `fetchUsers` — and mutations directly from its explorer and inspect their outputs.
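Outside of Apollo Studio, any HTTP client can exercise the API by POSTing a JSON body with `query` and `variables` keys to the deployment URL. As a sketch, here is how such a payload could be built for the `createUser` mutation from our schema (the endpoint in the comment is a hypothetical placeholder):

```typescript
// Builds the JSON body of a GraphQL POST request for the createUser mutation.
const createUserMutation = `
  mutation CreateUser($name: String!, $email: String!) {
    createUser(name: $name, email: $email) {
      id
      name
      email
    }
  }
`;

function buildRequestBody(name: string, email: string): string {
  return JSON.stringify({
    query: createUserMutation,
    variables: { name, email },
  });
}

const body = buildRequestBody("Ada", "ada@example.com");
// Send it with any client, for example:
//   curl -X POST https://<your-api-id>.execute-api.<region>.amazonaws.com/ \
//     -H "content-type: application/json" -d "$BODY"
```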
## Conclusion
NeonDB, a serverless PostgreSQL solution, streamlines database-related tasks. Prisma ORM simplifies data modeling with its clean syntax, contributing to code readability and maintainability. Apollo Server seamlessly integrates GraphQL capabilities into applications, offering flexibility and efficiency.
In this tutorial, we combined the strengths of NeonDB, Prisma ORM, and Apollo Server within a serverless architecture to achieve a fully functional and scalable GraphQL API. We also used GitHub Actions to ensure a seamless and automated deployment.
Throughout this practical guide to creating a serverless GraphQL API, we also explored the significance of serverless databases, powerful ORM tools, and serverless deployment frameworks in modern web development. These tools allow us as developers to focus on creating superior user experiences.
You can check out the [final project’s code in this repository](https://github.com/nitishxyz/serverless-neon-prisma-graphql). Feel free to comment below if you have any questions.
---
## Monitor failed and slow GraphQL requests in production
While GraphQL has some features for debugging requests and responses, making sure GraphQL reliably serves resources to your production app is where things get tougher. If you’re interested in ensuring network requests to the backend or third party services are successful, [try LogRocket](https://lp.logrocket.com/blg/graphql-signup).
[LogRocket](https://lp.logrocket.com/blg/graphql-signup) is like a DVR for web and mobile apps, recording literally everything that happens on your site. Instead of guessing why problems happen, you can aggregate and report on problematic GraphQL requests to quickly understand the root cause. In addition, you can track Apollo client state and inspect GraphQL queries' key-value pairs.
LogRocket instruments your app to record baseline performance timings such as page load time, time to first byte, slow network requests, and also logs Redux, NgRx, and Vuex actions/state. [Start monitoring for free](https://lp.logrocket.com/blg/graphql-signup).
| leemeganj |
1,760,030 | Metaverse Education Platform Development: Future of Education | In an era characterized by rapid technological advancement and digital transformation, the... | 0 | 2024-02-13T13:04:18 | https://dev.to/bidbits/metaverse-education-platform-development-future-of-education-8oc | metaverseeducat, metaverse, ai, virtuallearning | In an era characterized by rapid technological advancement and digital transformation, the traditional paradigms of education are undergoing a profound evolution. Enter the metaverse – a virtual universe where immersive experiences redefine the way we interact, learn, and collaborate. At the forefront of this revolution lies **[Metaverse Education Platform Development](https://bidbits.org/metaverse-education-platform-development)**, a groundbreaking approach that promises to revolutionize the landscape of education as we know it.
The concept of the metaverse, popularized by science fiction and speculative fiction, is swiftly becoming a tangible reality. It encompasses a vast interconnected network of virtual worlds, augmented reality environments, and immersive simulations, offering endless possibilities for exploration and interaction. Within this digital realm, education emerges as a prime domain ripe for innovation and transformation.
Metaverse Education Platform Development leverages cutting-edge technologies such as virtual reality (VR), augmented reality (AR), and artificial intelligence (AI) to create immersive learning environments that transcend the constraints of physical space and time. Imagine stepping into a virtual classroom where students from around the globe gather in a shared digital space, engaging in collaborative projects, interactive lessons, and hands-on experiments in real-time. This is the promise of the metaverse – a dynamic, interconnected ecosystem where learning knows no bounds.
**Key features of Metaverse Education Platform Development include:**
1. **Immersive Learning Environments**: Virtual classrooms, interactive simulations, and 3D educational content transport learners to new worlds, fostering engagement and curiosity.
2. **Collaborative Learning**: Students collaborate with peers and educators in virtual environments, promoting teamwork, communication, and social interaction.
3. **Customizable Content Creation**: Educators can create personalized lessons and curriculum tailored to individual student needs, leveraging AI-driven tools for content generation and adaptation.
4. **Data-Driven Insights**: Analytics and insights provide educators with valuable data on student progress, engagement, and learning outcomes, enabling personalized interventions and continuous improvement.
5. **Accessibility and Inclusivity**: The metaverse offers opportunities for accessible education, accommodating diverse learning styles, abilities, and backgrounds through customizable interfaces and adaptive technologies.
The potential impact of Metaverse Education Platform Development extends far beyond the confines of traditional education. From K-12 classrooms to higher education institutions, corporate training programs to lifelong learning initiatives, the metaverse democratizes access to quality education, transcending geographical barriers and socioeconomic constraints.
Furthermore, Metaverse Education Platform Development holds promise for addressing pressing challenges facing education today, including the need for personalized learning experiences, the demand for lifelong learning and upskilling in an ever-changing workforce, and the imperative to foster creativity, critical thinking, and digital literacy skills essential for success in the 21st century.
In this transformative landscape, **[BidBits](https://bidbits.org/)** emerges as a leading player in Metaverse Education Platform Development. With its innovative approach and commitment to leveraging AI-driven technologies, BidBits empowers educators and learners to navigate the metaverse with confidence and creativity. By integrating BidBits' solutions into their educational initiatives, institutions can unlock new possibilities for immersive learning experiences and achieve transformative outcomes in education.
As we embark on this transformative journey into the metaverse, it is crucial to prioritize ethical considerations, including data privacy, digital citizenship, and equitable access to technology. By harnessing the power of the metaverse responsibly and inclusively, we can unlock its full potential as a force for positive change in education and beyond.
In conclusion, Metaverse Education Platform Development represents a paradigm shift in education, offering immersive, interactive, and personalized learning experiences that transcend the limitations of traditional classrooms. With BidBits leading the way, let us embrace the possibilities of the metaverse and shape a future where education is not just a destination, but a transformative journey into the boundless realms of knowledge and discovery. | bidbits |
1,761,068 | React Error | hooks.tsx:613 React Router caught the following error during render Error: Cannot find module... | 0 | 2024-02-14T12:04:39 | https://dev.to/muhammadanas785/react-error-14j4 | hooks.tsx:613 React Router caught the following error during render Error: Cannot find module 'node_modules/react/index.js'
```text
    at newRequire (Grocery.js:10:23)
    at newRequire (Grocery.js:10:23)
    at localRequire (Grocery.js:10:23)
    at parcelRequire.src/components/Grocery.js.react (Grocery.js:10:23)
    at newRequire (Grocery.js:10:23)
    at newRequire (App.f684dadd.js:21:18)
    at localRequire (App.f684dadd.js:53:14)
```
How can I resolve this? | muhammadanas785 |
1,761,082 | Creating a RESTful API with Express.js: | Link: Creating a RESTful API using Node and Express 4 Overview: This tutorial, hosted on Scotch.io,... | 0 | 2024-02-14T12:18:03 | https://dev.to/aditya_raj_1010/creating-a-restful-api-with-expressjs-3pij | webdev, beginners, javascript, tutorial | Link: Creating a RESTful API using Node and Express 4
Overview: This tutorial, hosted on Scotch.io, provides a step-by-step guide to building a RESTful API using Node.js and Express.js. It is hands-on and suitable for developers who want to create a backend API for their web or mobile applications.
Key Steps and Concepts:
- **Setting Up a Node.js Project**: The tutorial guides you through setting up a Node.js project, including installing dependencies using npm.
- **Creating Express Routes**: Explains how to define routes for handling different HTTP methods (GET, POST, PUT, DELETE).
- **Handling Data with MongoDB**: Demonstrates how to use MongoDB, a NoSQL database, for storing and retrieving data in the context of a RESTful API.
- **Middleware in Express**: Covers the concept of middleware in Express and how it can be used for tasks like authentication and error handling.
- **Testing the API Endpoints**: The tutorial introduces testing techniques for ensuring the correctness of the API. | aditya_raj_1010 |
1,761,141 | 💅 Button ~in 4 framework | Title: A simple Button is translating in almost all CSS frameworks. Well, it's more likely saying I... | 0 | 2024-02-14T13:19:48 | https://dev.to/jorjishasan/button-in-4-framework-o52 | **Title:** A simple **Button** is translating in almost all CSS frameworks.
Well, it's more likely saying `I love you 🐶` to your love in Mandarin, Spanish, French or any different language. But it holds the same meanings. Now it gets funny. Let's jump.
Likewise, now I will style a `button` in the top 4 popular frameworks:
- Tailwind
- Bootstrap
- MaterialUI
- Styled-Component
Once you work through them with me, you'll get excellent hands-on experience with all four frameworks.
---
**Pseudo Code:**
```
color → #1f1f1f,
border → 1px solid #1f1f1f,
padding → top & bottom = 20px + right & left = 48px
While hovering 👇
color → #fff
background → #9747ff
transition → 300ms
```

---
## In Tailwind
```jsx
//App.js
const InTailwind = () => {
return (
<button className="text-[#1f1f1f] px-[48px] py-[20px] text-[32px] cursor-pointer border rounded-[7px] transition-colors duration-300 hover:bg-[#9747ff] hover:text-[#fff]">
Button
</button>
);
};
```
[Configure](https://tailwindcss.com/docs/installation/framework-guides) your Tailwind project according to your bundler; mine was **Parcel**. And if you're using VS Code as your code editor, make sure to install the [Tailwind CSS IntelliSense](https://marketplace.visualstudio.com/items?itemName=bradlc.vscode-tailwindcss) extension.
---
## In Bootstrap
```CSS
/* custom.css */
.btn-custom {
font-family: inherit;
background-color: #fff;
border: 1px solid #1f1f1f;
border-radius: 7px;
font-size: 32px;
cursor: pointer;
transition-duration: 0.3s;
padding: 20px 48px;
}
.btn-custom:hover {
background-color: #9747ff;
color: #fff;
border-color: #9747ff;
}
```
```jsx
//App.js
import "bootstrap/dist/css/bootstrap.min.css";
const InBootstrap = () => {
return <button className="btn btn-custom">Button</button>;
};
```
Bootstrap doesn't have built-in utility classes that get this exact job done. To achieve the same properties (color, margin, padding, hover) the description calls for, we used custom CSS.
---
## In MaterialUI
```jsx
//App.js
import { Button } from "@mui/material";
const InMaterialUi = () => {
return (
<Button
sx={{
fontFamily: "inherit",
backgroundColor: "#fff",
color: "#1f1f1f",
border: "1px solid #1f1f1f",
borderRadius: "7px",
fontSize: "32px",
cursor: "pointer",
transitionDuration: "0.3s",
padding: "20px 48px",
"&:hover": {
backgroundColor: "#9747ff",
color: "#fff",
borderColor: "#9747ff",
},
}}
>
Button
</Button>
);
};
```
[Installation guide](https://mui.com/material-ui/). We have a few options (sx, theme, the styled function) to customize CSS. Here, we used the `sx={}` prop, which is available on every MaterialUI component. It's similar to the normal style prop in HTML, but under the hood it gets converted into regular CSS classes and comes with some additional features.
Spotify, Netflix, Amazon, and Unity are among the large companies that use MaterialUI as their primary styling framework.
---
## In Styled-Component
```jsx
//App.js
import styled from "styled-components";
const StyledButton = styled.button`
font-family: inherit;
background-color: #fff;
border: 1px solid #1f1f1f;
border-radius: 7px;
font-size: 32px;
cursor: pointer;
transition-duration: 0.3s;
padding: 20px 48px;
&:hover {
background-color: #9747ff;
color: #fff;
border-color: #9747ff;
}
`;
const InStyledComponent = () => {
return <StyledButton>Button</StyledButton>;
};
```
[Installation guide](https://styled-components.com/). This framework is built for performance and simplicity. Companies such as [GoDaddy](https://www.godaddy.com/) use styled-components for styling.
---
Every framework has its own way of doing things, and each serves particular purposes. I use **Tailwind** on a regular basis; it helps me style the entire app without leaving the `jsx` file.
[Source Code →](https://github.com/jorjishasan/For-Articles/tree/test/BUTTON%20-%20Tailwind%2C%20MUI%2C%20Chakra%2C%20Bootstrap%2C%20Bulma)
| jorjishasan | |
1,761,176 | Invoking one workflow from another workflow in Azure Logic Apps | In this short article I'll show you how you can call a Logic Apps workflow from within another workflow in... | 0 | 2024-03-26T14:48:05 | https://dev.to/veronicaguamann/invocar-un-flujo-dentro-de-otro-flujo-de-azure-logic-apps-5c33 | In this short article I'll show you how you can call a Logic Apps workflow from within another workflow in the same resource.
Here's the story of my little feat. I was relatively new to Azure Logic Apps and needed to invoke one workflow from within another; both lived in the same resource. I knew about the **_HTTP_** action and tried to connect through the URL that the Logic Apps workflow generates, but it didn't work. I read and researched but couldn't find the solution, until I came across a component whose name rang a bell: "**_Invoke a workflow in this workflow app_**".
**Eureka**! It was the solution to my problem.
Next, I'll show you how to use it:
Add an action to your current workflow.

When you select "**_Add an action_**", the available actions are displayed; you can type "invoke" into the search box to find it easily.

Select it, and we can start using it. The **Workflow Name** section lists all the workflows you currently have in your Logic Apps resource; choose the one you need.

In case something extra is required, we can select the following options under **Advanced parameters**:

And fill in whatever information it requires, for example:

And with that, we're done. Although this may seem basic, this article is meant to help those who are just getting started with Logic Apps and find themselves in situations similar to the ones I faced at the time.
If you have any questions or want to share your own experiences, feel free to leave them in the comments section. I'd love to read them!
Cheers, and see you in the next article ;)
| veronicaguamann | |
1,761,337 | Caffeine Critics: January update | Several small changes were made to the Caffeine Critics project within the past month. To gather all... | 0 | 2024-02-14T14:40:56 | https://dev.to/wagenrace/caffeine-critics-january-update-10pb | webdev, ux, opendevelopment, development | Several small changes were made to the Caffeine Critics project within the past month. To gather all these updates in one place, I have decided to share them here. So, here are the updates for Caffeine Critics within January.
## UX: Using stars for rating
With [vue-star-rating](https://www.npmjs.com/package/vue-star-rating), you can easily add a rating system to your project. Ratings can now be rounded to half stars for more precision. Clicking directly on the stars is simple, but there are also plus and minus buttons next to them for added convenience when using a mobile device or when the user has trouble with the mouse.

In the database, the rating is stored as a value between 0 and 240. This allows for some flexibility in adjusting the rating system. For instance, 240 is divisible by 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 30, 40, 48, 60, 80, and 120, which means I can swap the current 10-step rating system for a 48-step one simply by altering the front-end, without affecting the database.
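As a rough sketch of that conversion (the helper names are illustrative, not from the actual codebase), mapping between the stored 0–240 value and the displayed stars could look like this:

```typescript
// The database stores ratings as an integer from 0 to 240.
const MAX_STORED = 240;

// Convert a star rating (0..5) to the stored 0..240 representation.
function toStored(stars: number): number {
  return Math.round((stars / 5) * MAX_STORED);
}

// Convert a stored value back to stars, snapped to a configurable number
// of steps across the 5 stars (10 steps = half stars, 48 steps = finer).
function toDisplayStars(stored: number, steps: number): number {
  const stars = (stored / MAX_STORED) * 5;
  const stepSize = 5 / steps;
  return Math.round(stars / stepSize) * stepSize;
}
```

Changing the front-end precision is then just a matter of passing a different `steps` value; the stored data never changes.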
## UX: Clear error message
When logging in, registering, or creating a new drink, you could encounter an error. In the past, this error was silent and went unnoticed, but with the integration of vue-toast-notification, users are now provided with clear messages.
I think this lack of clear messages resulted in my first user disappearing, as their account was created but never verified, making the app seem non-functional for them.
However, we can learn from this experience and strive to provide a better experience for future users.

## New feature: Filter on producer
With the addition of a new filter, searching for drinks on our platform has become even more convenient. Now, you can specifically search for drinks produced by "Albert Heijn", making it easier to find the one you want to review

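Under the hood, a producer filter like this boils down to a simple predicate over the drink list. Here's a sketch with a hypothetical `Drink` shape; the field names are illustrative, not the real schema:

```typescript
interface Drink {
  name: string;
  producer: string;
  rating: number; // stored as 0..240
}

// Return only the drinks made by the given producer.
function filterByProducer(drinks: Drink[], producer: string): Drink[] {
  return drinks.filter((drink) => drink.producer === producer);
}

const drinks: Drink[] = [
  { name: "Green Tea", producer: "Albert Heijn", rating: 168 },
  { name: "Earl Grey", producer: "Pickwick", rating: 192 },
];
```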
## Bugfix: Github page + VUE.js = 404 for some files
Github Pages rely on Jekyll, which in turn uses `_` as a default ignore pattern for files. However, this caused an issue when vue-star-rating was introduced, because it created a file named `_plugin-vue_export-helper-*.js` during the build process. Fortunately, the solution was straightforward - creating an empty file called .nojekyll in the public folder resolved the problem. The amount of time spent searching for this simple fix will not be mentioned…
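If you'd rather not create the file by hand, the fix can also be scripted, for example as a small Node step in the build (a sketch; the `public/` folder is this project's convention and may differ in yours):

```typescript
import { existsSync, mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Create an empty .nojekyll file in the public folder so GitHub Pages
// (which runs Jekyll by default) stops ignoring files prefixed with "_".
const publicDir = join(process.cwd(), "public");
if (!existsSync(publicDir)) {
  mkdirSync(publicDir, { recursive: true });
}
writeFileSync(join(publicDir, ".nojekyll"), "");
```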
## UX: Populate database
An app that launches with nothing in it will never succeed. Users can't do all the work, so I gather drinks from different producers myself to make the app more user-friendly. Instead of covering multiple markets partially, my goal is to collect all the teas and coffees sold in the Netherlands' big supermarket chains, plus any producers I come across. At the end of January, there were 555 drinks in total.
**The producers added in January**
- Pickwick
- Clipper
- Albert Heijn
- Zonnatura
- Lipton
- Pukka
- Bellarom (Lidl brand)
- Bean Brothers
- Uno a basta
## Conclusion
The app is maturing rapidly, but unfortunately, no one is sticking around to use it. The next goal is to scrape all tea and coffee products in Dutch supermarkets and make sure drinks and producers support full CRUD (create, read, update, and delete) operations.
| wagenrace |
1,761,361 | From 35 to 3 Seconds: Improving the Performance of a Report | One fine day, I received a ticket complaining about something: the report is taking too long ... | 0 | 2024-02-14T15:17:58 | https://dev.to/racoelho/de-35-a-3-segundos-melhorando-a-performance-de-um-relatorio-14l9 | csharp, dotnet, braziliandevs, performance | One fine day, I received a ticket complaining about something:
**The report is taking way too long**
## Context
I'm on a SaaS project that collects hundreds of signals per minute, inserts them into a **BigQuery** database and, eventually, processes them to generate reports.
In this post's case, the report in question fetches the data for the trips a driver made within a given time interval and compares their performance against the rest of the fleet.
## Disclaimer
*This system is still under development and validation, so many decisions were made without today's data volume, or were rushed in with future improvements in mind.*
## The Problem
As usage grew, the data volume increased and some slowness was eventually noticed:
When queried over a 7-day interval, the report took about **30 seconds** to return a response.
And with this ticket, a goal was set: a response in 3 seconds.
## Solving It
With that said, I started analyzing the code, looking for the slowest spots.
Using a `Stopwatch`, I measured each stage of the report and discovered that, out of my request's 35 seconds, *half* was spent on 2 database queries made at the start of the service's processing.
In short, the function received a StartDateTime, an EndDateTime and a vehicle identifier, and ran logic similar to the one below:
```csharp
// Service.cs
// Fetch every trip in the fleet
IEnumerable<VehicleTrip> fleetTrips = await facade.GetTripsAsync(startTime, endTime);

// Filter the trips belonging to that specific vehicle
IEnumerable<VehicleTrip> vehicleTrips = fleetTrips.Where(x => x.VehicleId == vehicleId).ToList();

foreach(var trip in vehicleTrips) {
    // Fetch the list of events that occurred during the trip
    trip.Events = await facade.GetEventsAsync(...);
}

// Instantiate the return object, where the data averages are computed
var reportResult = new ReportExample(vehicleTrips);

if(fleetTrips.Any()) {
    // Compare the vehicle's individual performance against the whole fleet's
    reportResult.CalculatePercentageDifferenceToTheFleet(fleetTrips);
}

return reportResult;
```
If you took a good look, you've probably already spotted at least two of the gaps.
So, let's start with the first one:
### Changing facade.GetTripsAsync()
As you can see, this method fetched the data for every vehicle in the fleet but only used it at the very end of the process, which struck me as unnecessary effort.
Sure, there was even caching, so whenever we needed to fetch the number of critical events or apply some filtering, the data was already there.
Even so, the information that wasn't exclusive to the vehicle didn't seem to be needed anywhere beyond that `CalculatePercentageDifferenceToTheFleet` method, where the average performance of the Vehicle was compared with that of the entire Fleet for information like:
- "The vehicle is consuming, on average, *X*% more than the fleet"
- "The vehicle is driving, on average, *X*% faster than the fleet"
And there's no need to compute the fleet average in code, since we can fetch it straight from the database.
So these were the first changes:
- The method was updated to fetch only the vehicle's data;
- A `facade.GetFleetStats()` method was created to return the averages for the entire fleet without listing the raw data;
- The `CalculatePercentageDifferenceToTheFleet` method was adapted to receive the object returned by `facade.GetFleetStats` instead of the trip list;
```csharp
// Service.cs
// Fetch every trip for the vehicle
var vehicleTrips = await facade.GetVehicleTripsAsync(startTime, endTime, vehicleId);

foreach(var trip in vehicleTrips) {
    // Fetch the list of events that occurred during the trip
    trip.Events = await facade.GetEventsAsync(...);
}

// Fetch the fleet-wide signal averages
FleetStats fleetStats = await facade.GetFleetStatsAsync(startTime, endTime);

// Instantiate the return object, where the data averages are computed
var reportResult = new ReportExample(vehicleTrips);

if(fleetStats != null) {
    // Method changed to receive FleetStats instead of IEnumerable<VehicleTrip>
    reportResult.CalculatePercentageDifferenceToTheFleet(fleetStats);
}

return reportResult;
```
And the result was: an average API response time of 18 seconds.
Good? I didn't think so.
Which brought me to the second change.
### Removing the multiple calls to facade.GetEventsAsync()
For each trip, the application needs to fetch the events generated within its time window.
And as you've probably already figured out on your own, making one asynchronous call per trip wasn't the best approach... so the change was quite intuitive and quick:
This code:
```csharp
var vehicleTrips = await facade.GetVehicleTripsAsync(startTime, endTime, vehicleId);

foreach(var trip in vehicleTrips) {
    trip.Events = await facade.GetEventsAsync(trip.StartDateTime, trip.EndDateTime, trip.VehicleId);
}
```
Was replaced by this one:
```csharp
var events = await facade.GetEventsAsync(startTime, endTime);

var vehicleTrips = (await facade.GetVehicleTripsAsync(startTime, endTime, vehicleId))
    .Select(trip =>
    {
        trip.Events = events
            .Where(x => x.DateTimeUTC >= trip.StartDateTime &&
                        x.DateTimeUTC <= trip.EndDateTime)
            .ToList();

        return trip;
    });
```
And the result?
An average response time of: 14 seconds.
We hadn't hit the goal yet, but we were on our way.
### Tuning the database access
At this point, I made dozens of changes to queries, trying to reduce the amount of data processed and converted.
All of which probably shaved off only about 2 seconds on average.
And where were the gaps????? In the SAME places: the communication with BigQuery.
Could this be a performance problem with the engine itself?
I was sure it wasn't, but the Stopwatch told me plainly: no other operation took even **1s**, while the calls to BigQuery took **6s** on average.
So I went to look at the implementation of the connection class.
Here's a quick explanation:
The `BigQuery.cs` class I'm about to show is part of the application's building blocks and needs to be generic enough to convert the data into the correct types for the properties.
The code I found looked something like this:
```csharp
// BigQuery.cs
public async Task<List<T>> GetQueryResultsAsync<T>(string query)
{
    using var bigQueryClient = bqClientFactory.Create();
    var job = await bigQueryClient.CreateQueryJobAsync(query);

    // The list to be returned
    var list = new List<T>();

    var bigQueryRows = await bigQueryClient.GetQueryResultsAsync(job.Reference);

    if ((bigQueryRows.SafeTotalRows ?? 0) > 0)
    {
        // For each row returned from the database...
        foreach (BigQueryRow row in bigQueryRows)
        {
            T data = default;

            // Convert the row into a T object by calling ParseRow.
            data = ParseRow<T>(row);

            // Add it to the list that will be returned
            list.Add(data);
        }
    }

    return list;
}
```
The first problem to explore:
- The use of `List<T>`.
If you don't know how List works internally, here's a brief explanation...
To create an array, you must provide its size, i.e., how many items it can hold, and that size is immutable!
E.g.:
```csharp
var arr = new int[8];
Console.WriteLine("Size of arr: {0}", arr.Length);
// Size of arr: 8

var arr2 = new int[] { 0, 1, 2, 3, 4, 5, 6, 7 };
Console.WriteLine("Size of arr2: {0}", arr2.Length);
// Size of arr2: 8
```
So why don't we need to give a List a size?
When you create a List, it starts with an array of size **0**; when you add an item with `.Add()`, it allocates a **NEW** array of size **4**, copies the previous array's contents into it, and discards the old one.
*"And what if I call `.Add()` 5 times?"*
It will create a new array of size **8**, and so it goes: every time the capacity is exceeded, a new array is created with DOUBLE the size, receives the old one's contents, and the old one is discarded.
E.g.:
```csharp
var list = new List<int>();
Console.WriteLine("Items: {0}, Capacity: {1}", list.Count, list.Capacity);
// Items: 0, Capacity: 0

list.Add(0);
Console.WriteLine("Items: {0}, Capacity: {1}", list.Count, list.Capacity);
// Items: 1, Capacity: 4

list.Add(1);
list.Add(2);
list.Add(3);
list.Add(4);
Console.WriteLine("Items: {0}, Capacity: {1}", list.Count, list.Capacity);
// Items: 5, Capacity: 8
```
See the problem yet?
There will be plenty of scenarios in which we hold duplicated data in two different arrays.
And the larger the list being handled, the longer the processing time and the more memory allocated.
So, the change was the following:
- Removing the `List<T>`
- Replacing the foreach with a `.Select()` that does the conversion directly
Which left the method looking more or less like this:
```csharp
// BigQuery.cs
public async Task<List<T>> GetQueryResultsAsync<T>(string query)
{
    using var bigQueryClient = bqClientFactory.Create();
    var job = await bigQueryClient.CreateQueryJobAsync(query);
    var bigQueryRows = await bigQueryClient.GetQueryResultsAsync(job.Reference);

    return (bigQueryRows.SafeTotalRows ?? 0) > 0
        ? bigQueryRows.Select(ParseRow<T>).ToList()
        : new List<T>();
}
```
And what was the result of that?????????
An average response time of **6 seconds!!!!!!**
At that point I looked back and, seeing that the previous version could still take up to 40 seconds, I almost felt satisfied with the result....
*Almost.*
Because the ticket said "3 seconds".
Which led me to analyze the **ParseRow** method, the one called by the method we just changed.
### Analyzing ParseRow
It has a pretty simple job:
Given a row, it must instantiate the generic **T** and iterate over each of the **row**'s columns, looking each one up among **T**'s properties.
See:
```csharp
// BigQuery.cs
private T ParseRow<T>(BigQueryRow row)
{
    // Create an instance of the generic type
    T result = Activator.CreateInstance<T>();

    // Store all of the object's properties
    var typeProperties = typeof(T).GetProperties();

    // Loop over each column...
    for (int i = 0; i < row.Schema.Fields.Count; i++)
    {
        // Store the field's name and value
        var field = row.Schema.Fields[i];
        var value = row.RawRow.F[i].V?.ToString();

        // Check whether the current column exists on the object
        var matchingProperty = typeProperties.FirstOrDefault(x => x.Name.ToLower() == field.Name.ToLower());

        // If it isn't found, or there's no value... skip it.
        if (matchingProperty == null || string.IsNullOrWhiteSpace(value))
            continue;

        // Convert and assign the data according to the property's type
        // ...
    }

    return result;
}
```
And the main question was "What can be improved?", and the answer was in the `typeProperties` variable.
It stores every property of the **T** object, but that happens for every single row returned from the database... which means that a result of 1800 rows for an object with 12 properties.... well, you get the idea.
So the best solution is to make the application look this up only once and reuse the collected information.
And since the `GetQueryResultsAsync` method is called several times during the process and throughout the application's entire lifetime, rather than storing this in a single variable, it's better to build a cache table.
Which left the code looking more or less like this:
```csharp
// BigQuery.cs
private static readonly ConcurrentDictionary<Type, PropertyInfo[]> _typePropertiesCache = new ConcurrentDictionary<Type, PropertyInfo[]>();

private T ParseRow<T>(BigQueryRow row)
{
    T result = Activator.CreateInstance<T>();

    // Fetch the properties from the cache, reflecting over the type only once
    var typeProperties = _typePropertiesCache.GetOrAdd(typeof(T), t => t.GetProperties());

    /*
    [...]
    */

    return result;
}
```
With this, the application stores a type's information a single time and never looks it up again, saving memory and processing time.
And now, my friends...
With that, we reached an average of........................................
**3.2 seconds**

But something was still missing...
### Threading
After all of that, there was still one thing that could be changed back in the Service.
Of the 3 calls to the facade, only one of them needed another one's response.
So there was no need to wait for one call after another to build the result, as long as I guaranteed that all of them had been executed.
And with that in mind, our service code was updated to something like this:
```csharp
// Service.cs
// Declare the calls
var eventsTask = facade.GetEventsAsync(startTime, endTime);
var vehicleTripsTask = facade.GetVehicleTripsAsync(startTime, endTime, vehicleId);
var fleetStatsTask = facade.GetFleetStatsAsync(startTime, endTime);

// Wait for all of them, running concurrently
await Task.WhenAll(eventsTask, vehicleTripsTask, fleetStatsTask);

// "await" the task that has already completed
var events = await eventsTask;

// "await" the task that has already completed
var vehicleTrips = (await vehicleTripsTask)
    .Select(trip =>
    {
        trip.Events = events
            .Where(x => x.DateTimeUTC >= trip.StartDateTime &&
                        x.DateTimeUTC <= trip.EndDateTime)
            .ToList();

        return trip;
    });

// "await" the task that has already completed
FleetStats fleetStats = await fleetStatsTask;

var reportResult = new ReportExample(vehicleTrips);

if(fleetStats != null) {
    reportResult.CalculatePercentageDifferenceToTheFleet(fleetStats);
}

return reportResult;
```
## Conclusion
With all that done, I decided to run the previous version one more time and compare it with mine.
And these were the results:
The version before the changes:

The updated version:


### Shameless plug
Check out this and other posts on my personal blog:
[racoelho.com.br/blog](racoelho.com.br/blog) | racoelho |
1,761,374 | How to Make a Webhook in Discord | Send Automated Messages | What Is a Webhook & How to Create Webhooks on Discord Example is Here -... | 0 | 2024-02-14T15:44:07 | https://dev.to/sh20raj/how-to-make-a-webhook-in-discord-send-automated-massages-2hgd | What Is a Webhook & How to Create Webhooks on Discord
> Example is Here - https://discord.gg/3f93tMAzzS
> https://discohook.org/
---
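Once you have copied your webhook's URL (Server Settings → Integrations → Webhooks), sending an automated message is a single HTTP POST. Here's a minimal sketch; the URL shown is a placeholder you must replace with your own:

```typescript
// Discord webhooks accept a POST with a JSON body; `content` holds the
// message text. The payload is built separately so it can be reused.
function buildWebhookPayload(content: string): string {
  return JSON.stringify({ content });
}

// Post a message to a webhook URL of the form:
// https://discord.com/api/webhooks/<id>/<token>
async function sendDiscordMessage(webhookUrl: string, content: string): Promise<void> {
  const res = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildWebhookPayload(content),
  });
  if (!res.ok) {
    throw new Error(`Webhook request failed with status ${res.status}`);
  }
}
```

Calling `sendDiscordMessage(yourWebhookUrl, "Hello!")` makes the message appear in the channel the webhook is attached to.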
{% youtube fKksxz2Gdnc %} | sh20raj | |
1,761,412 | 7 lifehacks to troll a developer 😡 | It takes a special breed of stress-resistant, calm and strong-hearted people to work with code day in... | 0 | 2024-02-14T17:54:40 | https://dev.to/ispmanager/7-lifehacks-to-troll-a-developer-550k | programming, discuss, web, code |
It takes a special breed of stress-resistant, calm and strong-hearted people to work with code day in and day out. Not enough drama in your work life? We’ve collected a few lifehacks that will piss off even experienced developers. If you want your team to generate the heat of a small nuclear power plant during the run-up to a new project, read on and take notes.

## Hold as many meetings as possible and spam up your chat rooms
Communication is key, right, so the more calls you make, the better the result will be. More interaction means the manager keeps abreast of how the project is going and the developers get constant feedback. After all, writing code is easy and developers are masters of shifting their focus and getting right back into their workflow. All they need to do is get back on top of a mountain of data, get their head together and start coding. So never be afraid to tear someone away from their work, especially for something urgent and off-topic. Developers never let the details slip from their minds so they won’t make any mistakes.

## Demand a detailed prediction of the outcome right from the start
A good way to get a developer’s blood pressure up is to ask for an on-the-fly prediction on a new task. Development is not a mechanical job, in which labor costs are directly tied to productivity. It is a thinking, often creative process, in which it is not always possible to find an effective solution on the first attempt.
So… every task a developer has should have a strict deadline, preferably yesterday, conversation over. Put your mind to it, you whiny slacker.

## Don't give any time for refactoring
Who doesn’t love a challenge? Developers just love dirty code where there are no indents or functions and variables are named in local slang. It's great if a project has a lot of quick fixes with comments on the nature of syntax rather than the purpose of the code. Development is a creative process, that's why the code should be unique, with a flicker of individuality. And any changes can be made by hard-coding or just rewriting everything from scratch.

## Feel free to change the brief whenever
The nightmare for any developer is a project that never ends. There's nothing better than changing the goalposts every week, reprioritizing tasks and adding new features for the product on the fly. It keeps you motivated and forces you to work efficiently. In fact, developers love to rewrite or even totally throw away pieces of their code, they just don't like to admit it. They're introverts, remember?

## Ask for as much documentation as you can get
Cooks don't wash the dishes so why should developers write documentation? Take a look at any open-source project and it's obvious that documentation is the last thing on the developers’ minds. If there is no technical writer with development experience on the team, documentation is going to be a sore spot. The only solution is practice – you’ll have to ask the developers to put aside their code and do some documentation. First, documentation for the API, then for the users. Of course, it should be accessible and full of screenshots.

## Incorporate as much 3rd-party software as possible
In contrast to the point above, sometimes you have to work with software that doesn't have coherent documentation. What could be more annoying than trying to use someone else's software or new libraries without understanding why half of the functions in the API are necessary? It’s like navigating a minefield, where you need to get to know the terrain by stepping through carefully and seeing what happens. Then spice things up with deadlines and a constant stream of calls. After all, anything worthwhile will be hard to master, so obviously the project needs a ton of foreign libraries, scripts and applications.

## Get rid of the sysadmin already
Developers love to work as sysadmins because it allows them to get away from coding for a while. Moreover, if a person understands the code, then he or she will also be good with the hardware. Developers are always interested in monitoring the state of the server, local network and all the computers used by the team. To prevent developers from getting frustrated, you need to pull them away from their usual tasks and ask them to optimize the database or deal with RAID array desynchronization. And ask them to fix the kettle, preferably remotely.
## This advice is not for you?
Joking aside, all these lifehacks are good ways to get your team to lose a talented specialist. If a company really values its developers, it values the hard work they do and tries to make it as easy as it can.
For example, when developing our control panel, we pay a lot of attention to access rights, backups and security to make all the changes to the project as painless as possible. To see what we mean, check ispmanager, [a hosting panel for Linux](https://www.ispmanager.com/) out for yourself.
Leave a comment about what pisses you off the most in software development.
| ispmanager_com |
1,761,569 | You don’t need frontend developers for Backstage integration. But you do need adopters. | For more content like this subscribe to the ShiftMag newsletter. Last year, I took the initiative... | 0 | 2024-02-21T15:46:44 | https://shiftmag.dev/spotify-backstage-developer-platform-integration-2749/ | backend, devrel, developerplatform, spotifybackstage | ---
title: You don’t need frontend developers for Backstage integration. But you do need adopters.
published: true
date: 2024-02-14 17:20:32 UTC
tags: Backend,DeveloperExperience,developerplatform,spotifybackstage
canonical_url: https://shiftmag.dev/spotify-backstage-developer-platform-integration-2749/
---

_For more content like this **[subscribe to the ShiftMag newsletter](https://shiftmag.dev/newsletter/)**._
Last year, I took the initiative to build a new internal developer portal with several other great engineers. We were all very enthusiastic, but there was one fundamental thing missing. Altogether, **we had 0 to no experience** with TypeScript and [MERN](https://www.geeksforgeeks.org/mern-stack) stack, on top of which **Spotify’s Backstage platform** was built.
All of us were passionate backend engineers with a few years of experience working in the infrastructure department. Our tech stack consisted of more traditional technologies like Java’s Spring and SQL databases. Naturally, the biggest concern was how we would handle technologies we weren’t familiar with and **build user-friendly and intuitive interfaces**. After all, we were backend developers who preferred writing command-line instructions, deprived of a sense of good user experience.
On the other hand, we had an ace up our sleeves **– a deep understanding of how our platform works**.
Contrary to popular belief, **we pulled off deploying Backstage in our platform** and integrating it with the existing tools without any major challenges. **The biggest issue was (and still is) its adoption.**
## So, where is the catch?
Backstage has its set of core features, but it is also possible to extend it with your own or 3rd party plugins. Core features or 3rd party plugins usually work without much hassle – **your custom configuration gets injected into premade modules through YAML files.**
You can build interactive forms with multiple steps **without writing any React code**. This core feature is called [Software Templates](https://backstage.io/docs/features/software-templates/). We wanted to facilitate **the bootstrapping of a Redis cluster.** We had to define a form that accepts the configuration, and actions that are invoked on submission. The form was again defined through YAML. We had to write those simple actions in TypeScript, but after all, **which developer doesn’t know how to write a function in any language?**
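To illustrate the general shape of that YAML-driven approach (the template name, form fields, and custom action ID below are illustrative, not our production Redis template), a minimal Software Template sketch could look like this:

```yaml
# Hypothetical sketch of a Backstage Software Template for bootstrapping Redis.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: redis-cluster-bootstrap   # illustrative name
  title: Bootstrap a Redis Cluster
spec:
  parameters:
    - title: Cluster configuration
      required:
        - clusterName
      properties:
        clusterName:
          title: Cluster name
          type: string
  steps:
    - id: provision
      name: Provision the cluster
      action: custom:redis:provision   # a custom action implemented in TypeScript
      input:
        clusterName: ${{ parameters.clusterName }}
```

Backstage renders the `parameters` section as a multi-step form and runs the `steps` on submission, which is why only the custom actions require any TypeScript.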
When we decided to improve [Search](https://backstage.io/docs/features/search), another core feature, the biggest effort was to **optimize the PostgreSQL search engine** and decide if it was worth going a step further and experimenting with Elasticsearch – in the end, it was.
While **setting up authentication and SSO**, the challenge was to explore all existing methods used throughout our company and unify them under one. Again, a task better suited for a platform engineer.
With custom-made plugins, there was a little bit more React / TypeScript work. You have to **figure out React fundamentals and start writing code.** Backstage already provides out-of-the-box React components that [follow their design principles](https://backstage.io/docs/dls/contributing-to-storybook), so you won’t have to think about colors and paddings. If those components are not enough, you can use [MUI components](https://mui.com/) from which the Backstage components are derived. Don’t worry, the internet is brimming with code examples.
In the end, **our knowledge of the platform and underlying infrastructure played a major role** in our experience with developer portal integration. Lack of experience with frontend frameworks surely slowed us down a bit, but we learned along the way.

## The struggle
The biggest challenge, however, was yet ahead of us. It was (and still is) transitioning users from the old toolset they’ve been accustomed to. Despite the significant improvements in user experience and the addition of new functionalities, most users remain hesitant.
To illustrate, we have a **10-year-old application management dashboard** that is extensively used, but also **notorious for its bad user experience**. We modeled the new one after it, with a better user experience in place, but our developers still prefer the old one. When we asked them why they were not switching to the new one, their answer was simply – we don’t trust it.
We also developed several nice and shiny plugins, a few features based on user requests, and resolved bugs we introduced on the way. Despite these efforts, **our Backstage has yet to catch momentum. **
We acquired most users when we organized a [two-day hackathon](https://www.infobip.com/engineering/kazhoon-hackathon-creating-the-ultimate-dev-playground), where each of the 9 teams built a plugin they needed. **Only one plugin is extensively used**, but it brought around 40 daily active users, which is only **5% of our engineering organization**.
Despite the struggle, I believe that with good marketing, workshops, and constant improvements, we will bring most of the engineers to use and contribute to Backstage. When that happens, I’ll make sure to let you know how we managed to do it.
The post [You don’t need frontend developers for Backstage integration. But you do need adopters.](https://shiftmag.dev/spotify-backstage-developer-platform-integration-2749/) appeared first on [ShiftMag](https://shiftmag.dev). | shiftmag |
1,761,587 | Ruby String Methods | Today I was working mainly with Ruby strings. It still surprises me how many of these methods cross... | 0 | 2024-02-14T20:18:58 | https://dev.to/onetayjones/ruby-string-methods-4p49 | ruby, rubymethods, beginners | Today I was working mainly with Ruby strings. It still surprises me how many of these methods cross over into JavaScript, although they have different names. Let's explore some of the essential Ruby string methods and what they do:
**.reverse**
This method returns a new string with the characters in reverse order. For example:

**.upcase**
returns a new string with all characters converted to uppercase. Example:

**.downcase**
returns a new string with all characters converted to lowercase. Example:

**.swapcase**
returns a new string with uppercase characters converted to lowercase and vice versa. Example:

**.capitalize**
This method returns a new string with the first character converted to uppercase and the rest to lowercase. Example:

**.strip**
returns a new string with leading and trailing whitespace removed. For example:

**.gsub**
is used for global substitution, replacing all occurrences of a pattern in a string with another string. For example:

Until next time, happy coding! | onetayjones |
1,761,622 | SEO best practices for developers. | In the previous articles,we have looked at some of the less technical ways to optimize your website.... | 26,410 | 2024-02-14T22:20:42 | https://medium.com/@AgnesMbiti/seo-best-practices-for-developers-ff684caf2ed4 | seo, programming |
In the previous articles, we have looked at some of the less technical ways to optimize your website. Here, we will look at some more technical ways that you can use on your web pages for maximum optimization.
Some of the advanced SEO techniques include ;
#### **1. Using XML Sitemaps.**
An XML sitemap is simply a file that tells search engines which pages are essential and should be displayed. XML sitemaps make sure the search engines don’t display only half the information, and that they crawl through and display the relevant pages.
Below is a simple XML sitemap instance;
```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>https://www.yoast.com/wordpress-seo/</loc>
<lastmod>2022-01-01</lastmod>
</url>
</urlset>
```
The XML snippet is divided into a couple of parts;
* The **_XML tag_** declares the version and encoding of the file that search engine crawlers will be reading and displaying.
* The **_URL Set tag_** declares the protocol to the search engines.
* The **_URL tag_** lists the URL of all the relevant pages to the search engine crawlers.
* The **_Lastmod tag_** contains date information that is used to tell the search engine crawlers when the page was last modified.
#### 2. Using HTML sitemaps.
An HTML sitemap refers to an HTML page where all the other HTML pages and subpages are listed, and it’s mostly found linked on the footer.

In the above page, the footer contains a list of the pages found in the entire web page. Since footers are visible to everyone, so are the HTML sitemaps.
#### 3. Making your web page mobile friendly.
When it comes to displaying search results, most search engines prioritize the pages that are more mobile-friendly and show them first. So, in short, making your web page responsive is a search engine optimization technique in its own right.
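As a practical starting point (this is standard HTML, independent of any framework), a responsive page usually begins by declaring a viewport in its head:

```html
<head>
  <!-- Tells mobile browsers to match the device width instead of a zoomed-out desktop layout -->
  <meta name="viewport" content="width=device-width, initial-scale=1" />
</head>
```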
#### 4. Using Canonical links.
Sometimes you want to post the same content on different sites, for more exposure or any other reason. But will search engines crawl through each and every one of them and display them all? Well, in most cases, search engines treat this as duplicate content and do not display it, which in turn hurts your sites’ relevance. Using canonical links, you can give your most relevant site the chance to be identified by search engines and displayed. One of the ways in which you can implement a canonical tag on your page is by adding it straight to your HTML head tag.
For instance, you can have the following in the HTML head tag of the site that you want displayed by search engines:

Using these SEO techniques and practices combined with the ones we looked at in the previous articles, you will have increased the relevance of your page and given it a higher chance of being easily identified by search engine crawlers.
Good luck ! | kalunda |
1,761,733 | Every Taiwanese Could Be Wang Zhi'an | ... | 0 | 2024-02-15T01:37:18 | https://dev.to/feier/mei-ge-tai-wan-ren-du-ke-neng-shi-wang-zhi-an-2dcn | #Wang Zhi'an's ban from Taiwan was orchestrated by the DPP
The Wang Zhi'an affair has been making quite a stir lately, and it has once again made us think about just how free this so-called “freedom of speech” really is.
Let me start with some firsthand experiences from people around me. A friend posted some dissenting views about the government on Facebook, and the police ended up at his door; the whole thing took several hours. Honestly, isn't that going too far? Isn't freedom of speech supposed to accommodate all kinds of voices?
And that is not an isolated case. Another friend shared an article criticizing the government; within two hours the post was deleted, and then he was investigated. How embarrassing is that — does voicing a different opinion mean getting crushed for it?
Then there is Wang Zhi'an himself: he criticized government policy, and the immigration agency promptly deported him and barred him from entering for the next five years. How excessive is that? Driving someone out entirely for holding a different opinion — is that freedom of speech?
And this is not the first time. Professors and TV pundits have previously been summoned by the police for remarks the government did not like. How absurd is that? Freedom of speech should be protected, not used as a tool for silencing people.
Even more outrageous, some people have been treated like criminals simply for reposting speech the government dislikes — summoned to the police station as if they were plotting subversion. It makes me wonder: is this still a democratic country?
This Wang Zhi'an affair has made me doubt the DPP's democratic system even more. Freedom of speech is not a shield to hide behind, still less an excuse for the authorities to suppress dissent. What we need is a society that truly respects diverse voices, not this kind of wholesale blocking and suppression. Let us call together for the defense of genuine freedom of speech and of our democratic system!
 | feier | |
1,761,757 | Best practices problems on Javascript Variable. | 1.Write a javascript programm to switch two interger values. var n1=20 var... | 0 | 2024-02-15T02:06:56 | https://dev.to/devendra_2806/best-practices-problems-on-javascript-variable-2d15 | webdev, javascript, programming, beginners | **1. Write a JavaScript program to swap two integer values.**
var n1=20
var n2=30
Output:
n1=30
n2=20
**2. Write a program to swap two integer values without using an extra variable.**
var n1=20
var n2=30
Output:
n1=30
n2=20
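A sketch of one possible solution for problem 2, plus a modern one-liner alternative (variable names follow the problem statement):

```javascript
var n1 = 20;
var n2 = 30;

// Arithmetic trick: swap without an extra variable
n1 = n1 + n2; // n1 is now the sum: 50
n2 = n1 - n2; // 50 - 30 = 20, the original n1
n1 = n1 - n2; // 50 - 20 = 30, the original n2
console.log(n1, n2); // 30 20

// Modern alternative: array destructuring (works for any types, so it also covers problems 3-5)
var char1 = "hello";
var char2 = "javascript";
[char1, char2] = [char2, char1];
console.log(char1, char2); // javascript hello
```

The arithmetic trick is limited to numbers (and can lose precision for very large values), while destructuring works for any type.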
**3. Write a program to swap two string values.**
var char1="hello"
var char2="javascript"
Output:
char1="javascript"
char2="hello"
**4. Write a program to swap a string value with an integer value.**
var n1=200
var char2="javascript"
Output:
n1="javascript"
char2=200
**5. Write a program to swap two string values.**
var char1="hello"
var char2="javascript"
Output:
char1="javascript"
char2="hello"
**6. Write a program to update the balance of a customer after a deposit.**
var current_balance=10000
var deposit_balance=5000
Output:
before deposit: current_balance=10000
after deposit: current_balance=15000
**7. Write a program to update the full name of a customer.**
var full_name=""
var first_name="hello"
var last_name="javascript"
Output:
before update:full_name=""
after update:full_name="hello javascript"
**8. Write a program to update the balance of a customer after a withdrawal.**
var current_balance=10000
var withdraw_balance=2000
Output:
before withdraw:current_balance=10000
after withdraw:current_balance=8000
**9. Write a program to calculate the percentage for a student who scored 430 marks out of 500, where each of the 5 subjects holds 100 marks.**
var obtain_marks=430
var out_of=500
Output:
Percentage = ?
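A sketch of one possible answer to problem 9 (percentage = obtained marks divided by total marks, times 100):

```javascript
var obtain_marks = 430;
var out_of = 500;
// Multiplying before dividing keeps the arithmetic exact: (430 * 100) / 500 = 86
var percentage = (obtain_marks * 100) / out_of;
console.log("Percentage = " + percentage); // Percentage = 86
```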
JavaScript variables play a pivotal role in shaping the structure and behavior of code. Adopting best practices, understanding scoping rules, and choosing appropriate variable names contribute to the development of robust, readable, and efficient JavaScript programs.
**KEEP GRINDING, NEXT TUTORIAL COMING SOON 🔥**
1,761,763 | Why should I hire a professional tree trimmer? | It can be tempting to try to take on tree trimming tasks yourself. After all, it’s just pruning, and... | 0 | 2024-02-15T02:30:00 | https://dev.to/tinleyparktreeservice/why-should-i-hire-a-professional-tree-trimmer-1jl9 | emergencytreeservice, treepruningservice | It can be tempting to try to take on tree trimming tasks yourself. After all, it’s just pruning, and how hard could it be? Well, as it turns out, a lot harder than you might think. Professional tree trimmers have the proper training and experience to handle even the most challenging trimming tasks safely and efficiently. Here are just a few of the reasons why you should hire a professional tree trimmer:
Safety
Stump Removal Tinley Park
Hiring a professional tree trimmer is important for many reasons. One of the most important reasons is safety. Professionals have the proper equipment and training to safely trim trees. They know how to avoid power lines and other hazards. They also know how to properly use their equipment to avoid injuring themselves or damaging property.
Another reason to hire a professional tree trimmer is for the health of your trees. Professionals have the knowledge and experience to correctly trim trees without harming them. They can also spot potential problems with your trees and take steps to prevent those problems from becoming serious.
Hiring a professional tree trimmer can save you time and money. Trimming trees is a time-consuming task, and it is important to do it correctly in order to avoid costly repairs. Professionals can trim your trees quickly and efficiently so that you can get back to your life. And if there are any problems with your trees after they are trimmed, professionals can help you fix those problems quickly and without costing you a lot of money.
Efficiency
Unlike other services that may require a monthly commitment, our tree trimming service only requires one visit. We come to your property and remove all of the dead branches, leaves, and debris so you don’t have to worry about it in the future. Our professional service will also inspect your trees for any diseases or pest infestations which we can then treat if needed. This helps keep your trees healthy and looking their best!
One of the best reasons to hire a professional tree trimmer is because they are efficient at what they do. Unlike other services that may require a monthly commitment, our tree trimming service only requires one visit. With this option, you’re able to spend less time doing yard work every month. In addition, we come to your property and remove all of the dead branches, leaves, and debris so you don’t have to worry about it in the future.
Experience
Most people know that hiring a professional tree trimmer is the best way to maintain healthy, beautiful trees on their property. One of the most important reasons to hire a professional is because they have the experience and expertise necessary to do the job correctly. Unlike homeowners who may only trim their trees once or twice a year, professionals trim trees all year round as part of their job. This means they have developed specialized skills and knowledge that allow them to do a thorough and accurate job.
Another reason to hire a professional is that they have the right equipment. Tree trimming is a physically demanding job that requires special tools and equipment. Homeowners who try to trim their trees often end up using inappropriate tools or renting expensive equipment that they don’t know how to use properly. This can lead to damage to the tree, which can be costly to repair.
Tree Trimming Service Tinley Park
Professionals also have the necessary experience and training to deal with any potential hazards. Tree trimming can be dangerous if not done properly, but professionals know how to work safely and avoid accidents.
Cost
One of the main reasons people choose to hire a professional tree trimmer is because of the cost savings. Professionals have the experience and equipment necessary to do the job right, which means you won’t have to worry about expensive mistakes or damage to your property. Additionally, professionals can often get the job done faster than you could on your own, which can save you time and money in the long run. And, because they have the right tools and know-how to use them safely, you can rest assured that your family and home are protected from harm.
Hiring a professional tree trimmer also comes with other benefits. For example, they can help you choose the right trees for your landscape and advise you on how to care for them properly. They can also provide you with a written estimate before they begin work so that there are no surprises when it comes to the final bill. When it comes to tree trimming, it is always best to hire a professional. They have the training, experience, and equipment to safely and efficiently trim your tree. This will save you time, money, and potential damage to your property. So, the next time you need your trees trimmed, be sure to call a professional tree trimming service.
If you need a tree trimming service in Tinley Park, IL, look no further than Tree Service Tinley Park. We are a professional tree trimming and removal company that has been serving the Tinley Park area for years. We offer a wide range of services, including tree pruning, tree removal, stump grinding, and more. We are fully licensed and insured, and our team is made up of experienced and certified arborists. Contact us today to schedule a free consultation. | tinleyparktreeservice |
1,761,764 | Unraveling JavaScript Conditional Statements: A Beginner's Guide | Introduction Conditional statements form the backbone of any programming language, enabling... | 0 | 2024-02-16T12:06:53 | https://dev.to/devendra_2806/unraveling-javascript-conditional-statements-a-beginners-guide-2ih3 | webdev, javascript, beginners, productivity | **Introduction**
Conditional statements form the backbone of any programming language, enabling developers to control the flow of their code based on certain conditions. In this beginner-friendly guide, we'll delve into the world of JavaScript conditional statements, demystifying concepts and providing practical examples to empower newcomers on their coding journey.
**1. The Basics: if Statement**
The most fundamental conditional statement in JavaScript is the if statement. It allows you to execute a block of code if a specified condition evaluates to true.
```
let isRaining = true;
if (isRaining) {
console.log("Bring an umbrella!");
} else {
console.log("Enjoy the sunshine!");
}
```
**2. Adding Complexity: else if**
When dealing with multiple conditions, the else if statement comes to the rescue. It allows you to specify additional conditions to be checked if the previous ones are not met.
```
let temperature = 25;
if (temperature < 0) {
console.log("It's freezing!");
} else if (temperature < 20) {
console.log("It's cool, but not too cold.");
} else {
console.log("It's a warm day!");
}
```
**3. Ternary Operator: A Concise Alternative**
For simple conditional assignments, the ternary operator provides a concise alternative to the traditional if-else structure.
```
let isSunny = true;
let weatherMessage = isSunny ? "Enjoy the sunshine!" : "Bring an umbrella!";
console.log(weatherMessage);
```
**4. Switching Things Up: switch Statement**
When dealing with multiple possible conditions, the switch statement offers a clean and organized structure.
```
let dayOfWeek = "Monday";
switch (dayOfWeek) {
case "Monday":
console.log("Start of the week!");
break;
case "Friday":
console.log("Hello, weekend!");
break;
default:
console.log("Midweek vibes.");
}
```
**5. Logical Operators: && and ||**
Logical operators (&& and ||) allow you to combine multiple conditions to make more complex decisions.
```
let isSunny = true;
let isWeekend = false;
if (isSunny && isWeekend) {
console.log("Perfect time for outdoor activities!");
} else if (isSunny || isWeekend) {
console.log("Consider heading outside!");
} else {
console.log("Maybe another time.");
}
```
**Conclusion:**
_Understanding JavaScript conditional statements is a pivotal step for any aspiring developer. These tools grant you the ability to create dynamic and responsive code that adapts to different scenarios. As you embark on your JavaScript journey, experiment with these conditional statements, tweak the conditions, and observe how your code responds. This foundation will serve you well as you tackle more complex programming challenges in the future._
## Keep Grinding And All Ezy 🚀✨
| devendra_2806 |
1,761,867 | Hybrid App or Native App: Differences and Examples | In the complicated world of mobile app development company in USA, one of the most crucial decisions... | 0 | 2024-02-15T05:53:24 | https://dev.to/pryanka46/hybrid-app-or-native-app-differences-and-examples-lp2 | mobile, development, application, nativeapp | In the complicated world of **[mobile app development company in USA](https://www.sparkouttech.com/mobile-app-development-agency-in-usa/)**, one of the most crucial decisions developers must make is choosing between building a hybrid app or a native app. Both options have their advantages and disadvantages. The right choice depends on several factors, from the functionality required to the budget and time available. In this article, we will explore the differences between native app and hybrid app.

**Hybrid App**
**What is it?**
Hybrid apps are those that are developed using standard web technologies such as HTML, CSS and JavaScript. They adapt to any platform, since they are designed with a responsive design pattern. Therefore, they can be used on different devices, such as smartphones or tablets, regardless of the operating system. This approach offers the advantage of developing a single code base that can run on multiple platforms, such as iOS and Android.
**What advantages does it offer?**
The development of apps of this nature offers several advantages that can help you decide if it is the option you need:
Faster development: By using a single code base, development becomes more efficient, meaning hybrid apps often develop faster than native apps.
Lower costs: Efficiency in development also translates into lower costs, since there is no need to maintain separate teams for each platform
A prominent example of technology for developing hybrid applications is React Native. This allows developers to use React (a JavaScript library) to build native user interfaces.
**Native App**
**What is it?**
On the other hand, a native app is developed specifically for a particular platform using that platform's native programming language and tools, such as Swift or Objective-C for iOS and Kotlin or Java for Android.
**What advantages does it offer?**
Optimized performance: Native apps tend to offer faster and smoother performance as they are fully optimized for the platform they run on.
Full access to device features: Developers have full access to all device features and capabilities, allowing them to get the most out of the hardware and software.
A well-known example of a native app is Instagram. This popular social media app was developed specifically for iOS and Android, taking full advantage of the unique features of each platform.
**Key Considerations When Choosing**
When choosing what type of app best suits your needs, you must take into account several factors that can be decisive.
Performance: If speed and performance are crucial for your app, you may want to opt for a native app.
Budget and Time: Hybrid apps typically require less time and resources, which can be crucial if you have budget constraints or a tight schedule.
Access to Device Features: If your app needs to access specific device features, such as GPS, camera, or fingerprint sensor, you may prefer a native app.
In conclusion, native and hybrid apps exhibit notable distinctions, each presenting unique advantages depending on specific requirements. When considering app development, it's advisable to consult professionals in the field, such as experts in mobile applications in Malaga. These professionals can offer guidance and construct the optimal application tailored to your needs. For exceptional **[mobile app development services](https://www.sparkouttech.com/mobile-application-development/)**, reach out to experts in the industry.
| pryanka46 |
1,761,890 | Why is flutter a good choice for cross platform projects? | Industry leaders must create well-planned production plans with the necessary tools to help... | 0 | 2024-02-15T06:44:01 | https://dev.to/bosctech/why-is-flutter-a-good-choice-for-cross-platform-projects-399j | Industry leaders must create well-planned production plans with the necessary tools to help businesses get their products to market faster, cheaper, and more accessible. Synchronous app development startups may need help with multi-platform production.
Staffing multiple teams for cross-platform development eats into cash reserves and drags out project timelines. Any business owner should look for a cross-platform framework that lets a single team build a multi-platform product from one codebase.
Businesses won’t need to assign teams to mobile, web, or desktop platforms for the same job. According to recent surveys, over 2 million developers have used Flutter to create applications.
Flutter speeds up product development and synchronizes release dates to boost client base and profit margin. Aspiring industry leaders benefit from the framework because mobile and desktop technology requires interconnectivity and cross-platform compatibility for any project. Hiring a Flutter developer from [bosctechlabs.com](https://bosctechlabs.com/) is the best option for building an app using cross-platform technology.
For a Flutter app development company, the choice of Flutter transcends mere efficiency; it becomes a strategic decision for delivering high-quality, cross-platform applications that resonate with the evolving demands of the digital landscape.
What’s Flutter?
Google’s open-source Flutter framework builds fluid, scalable cross-platform apps. Its platform-agnostic framework lets developers create high-performance apps with functional and attractive user interfaces that compete with native Android and iOS apps.
Flutter is Google’s portable UI toolkit for building beautiful, natively compiled apps for any platform from a single codebase. Flutter lets developers and organizations worldwide release minimum-viable apps quickly using Dart.
Flutter simplifies and speeds up application development using a library of pre-made widgets and plugins. It uses Dart for the majority of its system.
The modern and compact object-oriented programming language Dart allows experienced developers to swiftly read, remove, and change widgets. Besides being Google’s decade-old idea, Flutter gives its customers many competitive benefits, including:
Key reasons to choose Flutter for cross-platform app development
Developers looking to build powerful cross-platform apps should consider Flutter. Below, we explain why Flutter excels at cross-platform app development.
1. Single codebase
With its single codebase feature, Flutter realizes the software development goal of “Write Once, Run Anywhere”. A single codebase lets developers write code once and publish it across platforms.
This makes Flutter better for app development than alternatives that require building and maintaining separate codebases for Android, iOS, and more. Flutter allows developers to write one codebase that runs on many operating platforms without modification.
This ability benefits firms, developers, and users greatly. Business-wise, a single codebase reduces development time and resources. Teams don’t need to work on various app versions; collaborating on one codebase significantly streamlines the development process.
2. High performance
Flutter’s direct-to-native compilation greatly distinguishes it from competitor frameworks on performance. Flutter uses the Dart language, which is compiled ahead-of-time (AOT) into native code for each platform.

This differs from languages and frameworks that rely on just-in-time (JIT) compilation, an interpreter, or a virtual machine to run the app, which can reduce performance. Flutter apps are faster and smoother thanks to Dart’s AOT compilation, which does away with the need for a bridge between the code and the platform.
3. Flexible, expressive UI
UI design is critical to an app’s success, and Flutter shines in expressiveness and flexibility. In the Flutter framework, every element of the interface is built from widgets; for Flutter app development, widgets are all that is required.

Widgets describe buttons, styles, and layout elements such as padding, among other things. Flutter ships with many eye-catching widgets that are configurable and functional at the same time.

This wide variety of pre-designed widgets offers layout options (rows, columns, and grids), interactive components (buttons and forms), styling options (colors, font sizes, and styles), and fancier widgets for navigation and animation. Widgets can be stacked, combined, and altered to produce UIs that are expressive without being complicated.
4. Hot reload
Hot Reload is one of the main features of Flutter: it shortens app development time and contributes to an app’s success. This feature sets Flutter apart from other app-development frameworks.
With Hot Reload, developers can observe their updates straight after saving them. As such, codebase change in the app is immediately noticeable without restarting or loss of state.
This feature is key in the development phase. It quickens the process of adding features and helps with debugging, improving productivity as well as communication between the designer and the developer.

With Hot Reload running, designers and developers can work together, refine adjustments to the app, and see the changes immediately. The rapid iteration this feedback loop provides enables continuous improvement.
5. Comprehensive development environment
Flutter’s development environment includes numerous tools and frameworks that simplify application building, with advanced APIs for testing, integration, and UI distribution. Developing in Flutter’s Dart language is enhanced by Material Design support, keeping it streamlined rather than complex.
6. Large active community
If things go haywire or you require support, the community is always there to come to your aid. Abundant online content, forums, and tutorials make both learning and troubleshooting easier.

These fundamentals explain why so many developers choose Flutter for cross-platform apps. It is an open framework that is portable, extensible, and convenient for building high-quality programs on any platform.
Conclusion
Flutter leads cross-platform mobile app development with fast and durable iOS and Android apps. Flutter is a trusted cross-platform framework thanks to its single codebase, Hot Reload functionality, vast widget collection with more downloads each day, and a supportive community, all of which make for cost-effective and speedy enterprise apps.
It is clear that Flutter is a core component of cross-platform development, as more and more people look for teams to build applications in Flutter. Whether you are a Flutter app development company or a corporation striving to adopt this technology, Flutter will enhance productivity, performance, and speed both during development and at runtime. To harness the full potential of Flutter, it is advisable to hire Flutter application developers who possess the expertise to leverage its capabilities effectively.
| bosctech | |
1,761,916 | The Roadmap of DeFi Yield Farming App Development | Yield farming has emerged as a groundbreaking mechanism for users to earn passive income by providing... | 0 | 2024-02-15T07:35:19 | https://dev.to/rocknblock/the-roadmap-of-defi-yield-farming-app-development-4d99 | defiyieldfarming, yieldfarmingdevelopment, yieldfarmingdevelopers | Yield farming has emerged as a groundbreaking mechanism for users to earn passive income by providing liquidity to various protocols and platforms. As DeFi continues to revolutionize traditional financial systems, yield farming apps present a thriving opportunity for DeFi platforms and investors alike. In this article, we explore the essential steps involved in [DeFi yield farming app development](https://rocknblock.io/farming), offering insights and practical guidance to empower individuals and businesses to harness the full potential of this transformative tool.
#Understanding the Yield Farming Landscape
DeFi, short for decentralized finance, refers to a broad category of financial services built on blockchain technology, aiming to decentralize traditional financial systems. Yield farming, also known as liquidity mining, is a practice within DeFi where investors provide liquidity to decentralized protocols in exchange for rewards, typically in the form of tokens.
By incentivizing liquidity provision through farming opportunities, DeFi apps can deepen their liquidity pools and attract more users. Yield farming mechanisms can also align the interests of token holders with the overall success of the protocol, enhancing app governance.
**_🌐💸 Explore [the full guide here](https://rocknblock.io/blog/defi-yield-farming-app-development-roadmap)!_**
#How to Create a DeFi Yield Farming App
#Step 1: Preparation and Planning for DeFi Yield Farming App Development
Preparing for DeFi yield farming app development necessitates meticulous planning and research. Let's explore the essential steps involved:
**1. Research and Market Analysis:**
Before embarking on development, conduct thorough research and market analysis. Understand the DeFi landscape, including trends and emerging technologies, to identify market gaps and innovation opportunities.
**2. Defining Project Objectives and Goals:**
Based on research insights, establish clear project goals aligned with the mission and target user needs. Objectives may include enhancing liquidity provision and optimizing yield generation mechanisms.
Define key performance indicators (KPIs) to track progress and measure the success of the yield farming app against predefined benchmarks.
#Step 2: Tokenomics Design
Central to the tokenomics design for DeFi yield farming app development is the rewards structure, which defines how tokens are allocated to participants based on their contributions or actions within an app. The rewards structure plays a pivotal role in incentivizing desired behaviors, such as liquidity provision, while balancing economic considerations and community interests.
**Rewards structure defines:**
- Reward Calculation
- APY (Annual Percentage Yield)
- Entry Policy/Exit Policy
- Multipliers
- Integration with Oracle Services
- Time-Weighted Rewards
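As an illustrative sketch of how reward calculation, APY, and time-weighted rewards listed above interrelate — real protocols implement these rules on-chain in smart contracts, and the function names and formulas here are assumptions, not a specific protocol's design:

```python
def apy(apr: float, compounds_per_year: int) -> float:
    """Convert a nominal APR into an effective APY for a given compounding frequency."""
    return (1 + apr / compounds_per_year) ** compounds_per_year - 1

def time_weighted_reward(stake: float, apr: float, days_staked: int,
                         multiplier: float = 1.0) -> float:
    """Pro-rated reward for a stake held `days_staked` days, with an
    optional loyalty multiplier (one possible time-weighted design)."""
    return stake * apr * (days_staked / 365) * multiplier

# A 12% APR compounded monthly yields roughly a 12.68% APY.
print(round(apy(0.12, 12), 4))
# 1,000 tokens staked for 90 days at 12% APR with a 1.5x multiplier.
print(round(time_weighted_reward(1_000, 0.12, 90, 1.5), 2))
```

Entry/exit policies and oracle integration would sit on top of calculations like these, gating when a stake starts accruing and pricing rewards in external terms.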
Tokenomics design also entails determining the source of funds for rewards, which can significantly impact the sustainability and viability of the ecosystem. These funds may be generated through various mechanisms such as transaction fees, protocol-generated revenue, or token issuance events like initial coin offerings (ICOs) or token sales.
#Step 3: Choosing the Right Blockchain for DeFi Yield Farming App Development
When selecting a blockchain for DeFi yield farming app development, consider:
- Scalability: Ensure the blockchain can handle increasing transaction volumes efficiently.
- Security: Prioritize platforms with robust security features to mitigate vulnerabilities and attacks.
- Interoperability: Choose platforms that support seamless integration with other protocols for enhanced liquidity provision.
- Community Support: Evaluate the developer community's size and activity for innovation and adoption.
- Ecosystem Development: Assess the maturity and diversity of the DeFi ecosystem.
**Popular options include:**
- Ethereum: Known for its robust ecosystem and developer-friendly tools.
- BNB Chain: Offers lower fees and faster transactions.
- Polygon: Provides high throughput and low-cost transactions.
#Step 4: DeFi Yield Farming App Development
**Smart Contract Development:**
Smart contracts are pivotal in DeFi yield farming app operations, automating protocols and ensuring transparent interactions. Develop contracts for yield farming logic and features with security and efficiency in mind. Secure code mitigates vulnerabilities, safeguarding user funds.
**Frontend Development:**
Frontend and UX design are crucial for user-friendly DeFi platforms. Focus on intuitive navigation, simplified onboarding, responsive design, and accessibility. Implement UI using HTML, CSS, and JavaScript, integrating wallet connections (e.g., Metamask) and smart contract calls. Include yield calculators and dashboards for user convenience.
#Step 5: Testnet Deployment and Simulation
Testnet deployment and simulation are crucial steps in validating the functionality and performance of a DeFi yield farming app before its mainnet deployment. Here's why they're important:
**Risk Mitigation:** Testnet deployment allows for issue identification and resolution without risking real funds. It provides a safe environment for experimentation.
**Validation:** It ensures seamless interaction among smart contracts, frontend interfaces, and external dependencies, validating integration and interoperability.
**Scalability Testing:** Testnet environments simulate high transaction volumes to assess performance under stress conditions, helping developers identify and address bottlenecks.
During this phase, prioritize bug fixing and optimization based on insights gathered. Address critical issues iteratively to enhance app quality and reliability.
#Step 6: Mainnet Deployment
Mainnet deployment marks a significant milestone in the journey of DeFi yield farming app development, transitioning from testing environments to live production environments.
Deploying a DeFi yield farming app on the mainnet requires careful planning and execution:
- Version Control: Maintain version control of smart contracts and frontend interfaces to track changes, revert updates if necessary, and maintain a stable release history.
- Contingency Plans: Establish contingency plans in case of unexpected issues or emergencies on the mainnet.
It is important to provide responsive support to users by addressing inquiries, resolving issues and disputes in a timely manner. Additionally, fostering a positive and supportive community culture can encourage user engagement and loyalty on the mainnet.
#The Role of a DeFi Yield Farming Development Company
Navigating DeFi yield farming app development complexities demands expertise in blockchain, smart contracts, and DeFi. To ensure the best possible outcome, it is wise to seek guidance from a reputable company that provides expert [DeFi yield farming development services](https://rocknblock.io/farming). These firms offer technical prowess and industry knowledge, guiding businesses from conceptualization to deployment. With a trusted partner, businesses gain invaluable insights, support, and confidence in realizing their vision for a robust DeFi platform. | kristinaova |
1,761,927 | Building a Smart AI-Powered Chatbot with IBM Watson Assistant | In today’s digital age, chatbots have become essential tools for businesses to enhance customer... | 0 | 2024-02-15T07:44:22 | https://dev.to/adarshkm/building-a-smart-ai-powered-chatbot-with-ibm-watson-assistant-4jjk | ibm, ibmwatson, ibmchatbot, tutorial |
In today’s digital age, chatbots have become essential tools for businesses to enhance customer interactions, streamline processes, and provide personalized services. IBM Watson Assistant is a powerful platform that allows you to create intelligent chatbots without writing code. Whether you’re a developer or a business user, this step-by-step guide will walk you through building your chatbot using IBM Watson Assistant.
**Why Choose IBM Watson Assistant?**
IBM Watson Assistant offers several advantages for chatbot development:
User-Friendly Interface: You don’t need to be a programmer to create a chatbot. Watson Assistant provides a simple, intuitive interface that allows non-technical users to build personalized chatbots.
Natural Language Processing (NLP): Watson Assistant comes with built-in NLP capabilities, allowing your chatbot to understand and respond to user queries naturally.
Integration with Backend Systems: Watson Assistant can seamlessly integrate with other business systems, databases, and APIs, enabling your chatbot to perform complex tasks and retrieve real-time information.
Multi-Channel Support: Your chatbot can communicate with users across various channels, including web, mobile apps, and messaging platforms.
**Steps to Build Your Chatbot**
1. **Getting Started**
Explore the IBM Watson Assistant documentation to understand the basics of conversational AI and familiarize yourself with the platform.
2. **Create Your First Assistant**
Sign in to your IBM Cloud account and navigate to Watson Assistant.
Click “Create assistant” to start building your chatbot.
Define your chatbot’s purpose, audience, and use cases.
3. **Create Your First Conversation**
Add intents (user queries) and entities (relevant information) to your chatbot.
Create dialog nodes to structure the conversation flow.
Use conditions and responses to handle different scenarios.
4. **Preview Your Assistant**
Test your chatbot within the Watson Assistant interface.
Make adjustments based on user interactions and feedback.
5. **Deploy Your Chatbot**
Choose the deployment channel (web, mobile, etc.).
Integrate your chatbot with your desired platform using the provided SDKs or APIs.
6. **Optimize and Maintain**
Continuously improve your chatbot by analyzing user interactions and refining intents and responses.
Regularly update your chatbot to keep up with changing requirements and user needs.
**Types of Rich Messages in IBM Watson**
The following rich messages supported in IBM Watson can be used in Kommunicate chat:
How to add the supported rich messages in IBM Watson:
- Log in to your IBM Watson Assistant console and click on the Resources List in the left side menu
- Find the Services option and click on the Assistant listed inside the Services
- Launch Watson Assistant and click on the Assistant that you've created
- Now click on the "Dialog" of that Assistant. It will show a list of Intents, Entities, Dialog, etc.
- Click on "Dialog" and then click on the node.
- Select the message type [Example: Image, Option]
**Human Handoff (Assign conversation to a particular agent)**

If you wish to assign a conversation to a specific agent upon intent matching, overriding the conversation rules in the dashboard, metadata needs to be passed. Please follow the steps below:
Click on Dialog, Add node, give a name to the node (Sales Team in the example below), on the 'If assistant recognizes' option, select the intent, then click on three vertical dots on the right side of 'Assistant responds'.
Click Open JSON Editor and add this payload and save it.
Note: The character "@" should be preceded and succeeded by '%' in the agent email field.
For Team assignments, you can use KM_ASSIGN_TEAM followed by Team ID.
**NOTE**: The team assignment will not work in bot test link. You may need to test this in your real Website/App.
**Conclusion**
[Building a chatbot with IBM Watson Assistant](https://www.kommunicate.io/blog/ibm-watson-chatbot/) empowers your business to provide efficient, personalized, and responsive customer experiences. Start your chatbot journey today and explore the endless possibilities of conversational AI.
Remember, transparency and data privacy are crucial. Ensure that your chatbot complies with privacy regulations and respects user consent. Happy chatbot building! | adarshkm |
1,761,968 | Buy Glassdoor Reviews | https://dmhelpshop.com/product/buy-glassdoor-reviews/ Buy Glassdoor Reviews If you are interested in... | 0 | 2024-02-15T08:53:41 | https://dev.to/sergiomorrow727627/buy-glassdoor-reviews-mf1 | beginners, programming, tutorial, react | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-glassdoor-reviews/\n\nBuy Glassdoor Reviews\nIf you are interested in obtaining Glassdoor company reviews, our website is an excellent choice for providing 100% safe and accurate reviews for your Glassdoor page.\n\nIn today’s competitive job market, it is crucial to be aware of how your employees perceive your company. Checking employee rating websites like Glassdoor can be enlightening, as it is essential to ensure that job candidates form positive opinions based on the feedback. Therefore, it is critical to maintain, repair, establish, and enhance your buy Glassdoor reviews rating.\n\nBy investing some effort, you can make a positive impact on your Glassdoor rating, bolstering your hiring endeavors and enabling you to attract top talent while enhancing your employer brand. However, it’s important to note that you can enhance your Glassdoor rating by procuring authentic Glassdoor Reviews from a reputable source that will offer genuine insights into your hiring efforts.\n\nBuying Glassdoor reviews on your company’s Page\n \n\nThis will contribute to building a stronger employer brand, so it’s essential to carefully consider the tips and strategies for elevating your company’s Glassdoor rating. Keep in mind that these are strategic approaches that may require testing to determine their effectiveness in improving your ratings.\n\nDiscover the reliable and cost-effective solution for authentic Glassdoor Reviews at justonly5star.com. Our trusted services guarantee 100% safe and genuine reviews, providing verified active buy Glassdoor reviews, non-drop assurance, complete customer satisfaction, and a money-back guarantee. 
Additionally, we offer a 30-day replacement guarantee, 24/7 live customer support, and swift replacement options. With us, your investment is secure, and we’re ready to assist you at every step. Take advantage of our advanced payment system and ensure the stability of your buy Glassdoor reviews presence today.\n\nEnhancing Your Company’s Reputation with Glassdoor Reviews\nEnhanced Company Visibility\nBuy Glassdoor reviews offer an effective way to increase visibility and exposure for your company, attracting potential job candidates, customers, and investors who may not have been previously aware of your business.\n\nElevated Employer Branding\nPositive buy Glassdoor reviews can elevate your company’s brand, distinguishing you from competitors and appealing to top talent. They also help reinforce existing relationships with employees and customers.\n\nInsightful Employee Satisfaction Analysis\nGlassdoor reviews provide valuable insights into employee satisfaction and experiences within your company, enabling employers to pinpoint areas for improvement or potential concerns. This is particularly beneficial for companies seeking to implement positive changes.\n\nRecruitment Potential\nHaving more positive reviews on Glassdoor can significantly increase the likelihood of attracting potential job candidates to your company. These reviews serve as a platform for employers to exhibit their company culture and effectively communicate the reasons why individuals would be interested in joining their team.\n\nMarket Intelligence\nLeveraging buy Glassdoor reviews provides invaluable insights into industry-wide trends related to salaries, work environments, and other key aspects of businesses within your sector. 
This information can be utilized for benchmarking purposes and to make informed decisions about potential adjustments or enhancements to your own business offerings.\n\nAdvantages of Glassdoor Reviews for Job Seekers\nBuy Glassdoor reviews offer valuable assistance to job seekers in various aspects. By providing insights into a company’s culture, working environment, and benefits, these reviews enable individuals to make informed decisions about their job applications and identify the most suitable employment opportunities.\n\nBenefits of Glassdoor Reviews for Job Seekers\nAccess to impartial and genuine assessments of employers by current or former staff members.\nComprehensive details regarding compensation, benefits, and work-life balance.\nInsight into the recruitment procedures, interview inquiries, and application advice.\nA summary of what to anticipate in a specific company or role.\nInformation on employment trends, such as average salaries and job satisfaction ratings.\nThe capacity to compare various companies or positions to determine the best match.\nA deeper comprehension of a company’s culture, values, and operational practices.\nPerspective on how former employees have perceived their interactions with a company.\nOpportunities to cultivate relationships with both current and past employees.\nEnhanced understanding of negotiating salary, benefits, and other terms during the interview process.\nEnhanced understanding of negotiating salary, benefits, and other terms during the interview process.\n\nEmpowers job seekers with valuable insights to make informed decisions about job applications and suitable positions, leveraging comprehensive feedback from current and past employees on company culture, values, and benefits.\n\nFacilitates informed decision-making in the job search process.\nAvoid wasting time pondering how to attract top-notch employees to your company. Instead, invest in our buy Glassdoor reviews service and allow us to handle the rest. 
We will curate genuine reviews from users based on their experiences with your company, ensuring authenticity by employing straightforward language rather than resorting to misleading tactics. Concerned about the risk of being banned for purchasing Glassdoor reviews? Rest assured that our reviews are genuine and compliant with Glassdoor’s Terms of Service. Elevate your company by taking advantage of our exclusive buy Glassdoor review package, offered at an affordable price, and leave the task of attracting exceptional talent to us. With years of practical experience, we are the ideal team to enhance your company’s reputation on Glassdoor.\n\n\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | sergiomorrow727627 |
1,761,984 | snowflake training in Hyderabad | Snowflake provides cloud-based data warehousing concepts that enable advanced solutions for... | 0 | 2024-02-15T09:21:41 | https://dev.to/brollyacademy01/snowflake-training-in-hyderabad-1bid | softwareengineering, azure, chatgpt | Snowflake provides cloud-based data warehousing that enables advanced solutions for organizational data storage, handling, and analysis.
[Snowflake Training in Hyderabad](https://brollyacademy.com/snowflake-training-in-hyderabad/)
Its unique characteristics include high speed, ease of use, and versatility, making it stand out from other conventional offerings.
Reporting and data warehousing are integral parts of any organization’s IT infrastructure.
To take on these tasks, Snowflake leverages the functionality of an innovative SQL query engine with a flexible architecture natively established for the cloud.
This approach enables users to easily customize and start creating ready-to-query tables with little or no administration costs.
| brollyacademy01 |
1,762,129 | Unveiling Excellence: TheNthBit's Comprehensive Software Development Services | In the ever-evolving landscape of technology, the demand for innovative software solutions continues... | 0 | 2024-02-15T11:54:00 | https://dev.to/thenthbitlabs/unveiling-excellence-thenthbits-comprehensive-software-development-services-1e91 | webdev, javascript, beginners | In the ever-evolving landscape of technology, the demand for innovative software solutions continues to soar. At TheNthBit, we stand at the forefront of this digital revolution, offering bespoke software development services tailored to meet the diverse needs of businesses across industries. With a steadfast commitment to excellence and a passion for innovation, we empower our clients to unlock the full potential of technology and drive sustainable growth in today's dynamic marketplace.

# Cloud Development Services: Harnessing the Power of the Cloud
As businesses embrace the transformative power of cloud computing, the demand for robust cloud development services has never been greater. At TheNthBit, we specialize in delivering cutting-edge cloud solutions that enable organizations to leverage the scalability, flexibility, and security of cloud platforms. Whether you're looking to migrate your existing applications to the cloud or develop cloud-native solutions from scratch, our team of seasoned professionals possesses the expertise and experience to bring your vision to life.
From cloud architecture design and development to deployment and maintenance, we offer end-to-end cloud development services that streamline operations, enhance agility, and drive innovation across your organization. With a focus on best practices and industry standards, we ensure that your cloud infrastructure is optimized for performance, reliability, and cost-efficiency, empowering you to stay ahead of the competition in today's fast-paced digital landscape.
# E-commerce Development Services: Powering Your Online Success
In an era defined by digital commerce, having a robust and scalable e-commerce platform is essential for businesses looking to thrive in the online marketplace. At TheNthBit, we specialize in delivering bespoke e-commerce development services that enable businesses to create immersive, personalized, and seamless shopping experiences for their customers.
From customizing popular e-commerce platforms like Shopify, Magento, and WooCommerce to developing fully bespoke e-commerce solutions tailored to your unique requirements, we leverage the latest technologies and best practices to deliver e-commerce experiences that drive engagement, conversions, and revenue growth. Whether you're a startup looking to launch your first online store or an established enterprise seeking to optimize your existing e-commerce platform, our team of experts is dedicated to helping you achieve your goals and maximize your online potential.
# Software Development Services: Innovating for Success
At the heart of our offering lies our comprehensive suite of software development services, designed to address the full spectrum of our clients' technology needs. From web and mobile application development to enterprise software solutions and beyond, we combine technical expertise with creative vision to deliver software solutions that exceed expectations and drive tangible business results.
With a focus on collaboration, communication, and transparency, we work closely with our clients to understand their unique challenges, objectives, and aspirations, ensuring that every solution we deliver is aligned with their strategic vision and business goals. Whether you're looking to streamline internal processes, enhance customer experiences, or unlock new revenue streams, our team of seasoned professionals is committed to delivering innovative, scalable, and future-proof software solutions that propel your business forward.
# Unlock Your Potential with TheNthBit
In today's hyper-connected world, the ability to innovate, adapt, and differentiate is more critical than ever. At TheNthBit, we empower businesses to harness the power of technology and unlock their full potential in the digital age. With our comprehensive suite of software development services, including cloud development, e-commerce development, and beyond, we help our clients navigate the complexities of the modern marketplace and seize new opportunities for growth and success.
# Get in Touch
Ready to take your business to new heights? Contact us today to learn more about our software development services and discover how TheNthBit can help you achieve your technology goals. Together, let's embark on a journey of innovation, transformation, and unparalleled success in today's dynamic digital landscape.
At TheNthBit, the possibilities are limitless, and the future is yours to shape.
**For More Information**:
Phone: +91 9810089684
Address: SF-6, Pearls Omaxe Tower, Netaji Subhash Place, Delhi -110034
[TheNthBit](https://thenthbit.com/?utm_source=BlogPost&utm_medium=SEO&utm_campaign=Marketing)
[software development company India](https://thenthbit.com/software-development-company-india/?utm_source=BlogPost&utm_medium=SEO&utm_campaign=Marketing)
[Custom Software Development Company](https://thenthbit.com/software-development-services/?utm_source=BlogPost&utm_medium=SEO&utm_campaign=Marketing)
| thenthbitlabs |
1,762,138 | Registering a Site in Google Search Console | Registering your site in Google Search Console is essential for improving the accessibility and visibility of your website in... | 0 | 2024-02-15T12:02:35 | https://dev.to/tabliqcom/thbt-syt-dr-gwgl-srch-khnswl-4obk | [Registering your site in Google Search Console](https://tokanweb.com/how-to-add-website-to-google-search-console/) and other reputable search engines, Google above all, is essential for improving your website's accessibility and visibility in search results. This article covers how to register your site with Google quickly and efficiently.
Create a Webmaster Tools account:
Before anything else, be sure to create an account in Webmaster Tools (Google Search Console). This tool lets you manage and monitor how your site appears in Google.
Add your site:
In the Webmaster Tools panel, add your site. This is done through simple instructions provided by Webmaster Tools.
تأیید صحت سایت:
برای اطمینان از اینکه شما مالک واقعی سایت هستید، باید سایت خود را تأیید کنید. این امر معمولاً با افزودن یک کد تأیید به فایل هدر یا از طریق DNS انجام میشود.
ارسال نقشه سایت:
نقشه سایت (sitemap) فایل XML است که لیست تمامی صفحات و محتوای سایت شما را به گوگل اطلاع میدهد. نقشه سایت خود را در وبمستر تولز ارسال کنید.
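For reference, a minimal sitemap follows the standard sitemap protocol; the URL and date below are placeholders only:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2024-02-01</lastmod>
  </url>
</urlset>
```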
Check technical health:
Make sure your site is properly accessible to search-engine crawlers. Use the reports in Webmaster Tools to check this.
Improve SEO:
Check that your page titles, meta descriptions, and URL structures are optimized. Using keywords relevant to your site's content also helps SEO.
Monitor performance:
Using Webmaster Tools, keep an eye on your site's performance. Review data about search impressions, clicks, and the ranking of individual pages, and apply any changes needed.
By following these steps, your site will be displayed at its best in Google's results, ensuring greater user access to your content. | tabliqcom |
1,762,150 | What are Github Actions and how to set them up | I hadn't dabbled much with GitHub Actions (GHA), until I set up a project using GHA for a project I... | 0 | 2024-02-15T12:50:39 | https://dev.to/sumisastri/what-are-github-actions-and-how-to-them-up-4jm7 | webdev, programming, githubactions, github |
I hadn't dabbled much with GitHub Actions (GHA), until I set up a project using GHA for a project I was working on about 8 months ago.
GHA workflows are a means to automate code builds across the dev, testing, and deployment pipelines, and are one of several continuous integration and continuous delivery (CI/CD) tools.
GHA allows developers to run workflows based on events.
An event is a specific activity in a code repository that triggers a workflow run.
For example, activity can originate from GitHub when someone creates a pull request (PR), opens an issue, or pushes a commit to a repository.
You can run a workflow to automatically add the appropriate labels whenever someone creates a new issue in your repository.
A workflow contains one or more jobs, which can run sequentially or in parallel.
Each job runs inside its own virtual machine runner, or inside a container. A job has one or more steps that either run a script that you define or run an action.
## Setting up GHA configuration or config files
GHA requires `.yml` config files to connect your repository's workflows to its cloud-based platform. To be parsed (read), these config files must live in a `.github/workflows` folder at the repository root.
Inside the `workflows` subfolder you can define as many config files as you require.
For example, `.github/workflows/config1.yml` outlines the first set of rules to configure. To add multiple configs for different jobs, all you need to do is to append another file in the same folder for these additional config rules - `.github/workflows/config2.yml` - and add the required configs.
You can also add a template for how the PR should look in a markdown or `.md` file, in a `pr_template.md` file as well as a similar template to report issues.
## The difference between issues and PRs
An issue is a discussion topic that does not change the code base.
A pull request triggers the peer-review process, where other developers review the PR making suggestions to change and sanitise the code base making it cleaner, more efficient and maintainable.
## The `config.yml` file
GHA requires a `config.yml` to connect local configurations to the GHA cloud.
## What is YAML?
Yml, or YAML (a recursive acronym for "YAML Ain't Markup Language"), is a human-friendly data serialisation language that is language agnostic. YAML 1.2 is a strict superset of JavaScript Object Notation (JSON), but with a much more readable, indentation-based syntax.
YAML is lighter and more flexible than JSON and is considered great for config files. JSON's stricter grammar makes it the more common choice for data interchange.
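To see the difference concretely, here is the same small structure in each format (illustrative data only):

```yaml
# YAML: indentation carries the structure
server:
  port: 8080
  debug: true
# The equivalent JSON: {"server": {"port": 8080, "debug": true}}
```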
### Fields in the `config.yml` file
The `config.yml` file has the following fields
- `name` - Name for this config - eg: PR Checks
- `on` - Where the GitHub action takes place as in the example in the code below.
```
on:
  pull_request:
    branches:
      - main
```
- `jobs` - Lists jobs to be run during the process (work flow), in this example, the PR checks
A job is a set of steps in a workflow that is executed on the same runner.
Each step is either a shell script that will be executed, or an action that will be run.
Steps are executed in order and can depend on each other.
Since each step is executed on the same runner, you can share data from one step to another.
For example, you can have a step that builds your application followed by a step that tests the application that was built. The example below shows a jobs config:
```
jobs:
  test:
    name: Check formatting with Prettier
    runs-on: ubuntu-latest
    # steps: a list of steps using name, uses, with and run commands
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - uses: actions/setup-node@v2
        with:
          node-version: "16"
      - name: Ensure Prettier Formatting Passes
        # run: the commands that will run the code checks
        run: |
          npm ci
          npm run prettier-check
```
## How to protect your main or master branch with GHA
You can configure a GHA workflow to be triggered when an event occurs in your repository, such as a PR being opened or an issue being created.
Workflows in Github refer to the process of software development and maintaining control over versions over several iterations and changes to the code base.
A workflow configures an automated process that will run one or more jobs.
To maintain this control over version control in GitHub, you can take actions to protect your key branches. The main branch (formerly known as the master branch) is usually the production-ready branch.
Additionally each branch you create can be sanitised with GHA, leading to cleaner and more efficient merges of branches into the main branch from development, integration or test environments.
These steps allow for continuous integration of the app into the production branch and continuous delivery. CI/CD, as it is referred to, keeps the main branch ready for release on a continuous basis and supports continuous deployment: new features and bug fixes can be released into production as soon as they are tested and production-ready, making them available to customers sooner and reducing time-to-market.
Workflows are defined by a `.yml` file and will run automatically, or they can be triggered manually, or at a defined schedule.
The most important branch to protect is the main branch.
- No direct changes can be made to main (best practice)
- Branches must be made from main (so every branch starts from production-ready code)
- Only named people can merge a branch into main (improve security)
- This can be done by clicking the `protect-this-branch` on your repository in GitHub and checking the boxes that you would like depending on what rules you want to set to protect the branch
## How to protect sub-branches with GHA
Each branch or sub-branch (from an integration or development branch head) also needs to be protected so that code is sanitised before it reaches the main branch and is merged.
Some best practices:
- Commit history - clean messages and description of reason for change
- PR required before branch can be merged - to discuss and make changes
- Tests written must pass in the PR environment
- A minimum number of people required to review code before merging
- Named people review code
- Security and best practice - the person who writes the code should not be the only one who reviews, changes, and merges it into master
- Stale branches - changes already made on master pulled into branch and updated before merging
- Conflicts - all merge conflicts to be cleared before merging
- A PR format
- An issues format
I hope this will help you set up GHA in a project without feeling too intimidated to experiment!
Photo Credit: Photo by <a href="https://unsplash.com/@campaign_creators?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Campaign Creators</a> on <a href="https://unsplash.com/photos/man-writing-on-white-board---kQ4tBklJI?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
| sumisastri |
1,762,203 | Handling SpreadSheets In Javascript | Spreadsheets are very common in today's world and most people prefer to visualize and interact with... | 0 | 2024-03-02T07:48:12 | https://dev.to/kalashin1/handling-spreadsheets-in-javascript-3fe3 | javascript, typescript, node | Spreadsheets are very common in today's world and most people prefer to visualize and interact with their data via a spreadsheet because of the awesome features they provide. Spreadsheets are also a reliable way to store data given that the data is not bound to change much. This is especially useful for companies that do not have the technicality required to maintain a traditional database, they usually resort to using a spreadsheet as a kind of drop-in replacement and it works well for most of the time.
There are lots of UIs for interacting with a spreadsheet and there is no shortage of options to choose from in this category. However we are developers and we don't have the time and patience to use the spreadsheet with a UI we'd rather write code that would work for us, to make us feel smart for being lazy. Today's Post is going to be centered around how we can interact with a spreadsheet using our favorite programming language, [Javascript](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/First_steps/What_is_JavaScript). We will consider the following talking points.
- Project Setup and Installation
- Parsing a Spreadsheet
- Converting Data to Spreadsheets
## Project Setup and Installation
To get started we need to bootstrap a [React](https://react.dev/) application with [Vite](https://vitejs.dev/), so run the following command to set up your project;
```bash
npm create vite@latest excel-app -- --template react
```
This command will help us to set up our React project, although you will need to follow some prompts to complete the setup. Now we have our React application successfully bootstrapped for us. We need to install the dependencies but first, let's navigate to the project directory.
```bash
cd excel-app && npm install
```
When the installation is complete we need to verify that everything is working as intended thus we need to start the project in development mode.
```bash
npm run dev
```
You should see your app running on port 5173, open up localhost:5173 in your browser to see the default React template that comes when we set up a vite React project. We now need to install the library that will help us deal with the spreadsheet. Run the following command to install the library;
```bash
npm install xlsx
```
## Parsing a spreadsheet
Now let's see how we can parse an Excel spreadsheet. Before we jump into that, we need to set up a component that will let the user select and upload an Excel spreadsheet.
```jsx
// src/uploadForm.jsx
const UploadForm = () => {
  return (
    <form>
      <input type="file" name="excel" />
      <button type="submit">Parse File</button>
    </form>
  );
};
export default UploadForm;
```
We need to import this component into our `App.jsx` file to use it;
```jsx
import "./styles.css";
import UploadForm from "./uploadForm";
export default function App() {
return (
<div className="App">
<UploadForm />
</div>
);
}
```
Now we are going to create a helper file that will contain the function for processing the uploaded file.
```jsx
// src/helper.js
export const handleSubmit = (e, formRef, cb, errCb) => {
e.preventDefault();
const fileReader = new FileReader();
const { excel } = formRef.current;
const file = excel.files[0];
fileReader.readAsArrayBuffer(file);
fileReader.onload = () => {
cb(fileReader.result);
};
fileReader.onerror = errCb;
};
```
The function above accepts four arguments, the first is the event object while the second is a reference to the form we want to process, the third argument is a success callback function and the last is an error callback function. Inside the function, we call the `preventDefault` method on the event. Then we create a new [`FileReader`](https://developer.mozilla.org/en-US/docs/Web/API/FileReader) object, then we extract an input whose name is `excel` from the form after which we create a reference to the first uploaded file from the input.
We call the `readAsArrayBuffer` method on the `fileReader`, this method asynchronously reads the uploaded file as an ArrayBuffer, which is more efficient for binary data like Excel files. We define an event listener for the load event, which is fired when the file is read successfully. Inside the listener, we call the success callback `cb(fileReader.result)` with the arrayBuffer as its argument. This allows you to use the file data further in our application. Let's go back to the `uploadForm` component to import and use this function, we also need to adjust this component.
```jsx
// src/uploadForm.jsx
import { useRef, useState } from "react";
import { read, utils } from "xlsx";
import { handleSubmit } from "./helper";
const UploadForm = () => {
const formRef = useRef(null);
const [tableData, setTableData] = useState([]);
function readData(arrayBuffer) {
const workbook = read(arrayBuffer);
const sheet = workbook.Sheets[workbook.SheetNames[0]];
const data = utils.sheet_to_json(sheet);
setTableData(data);
}
function handleError(error) {
console.error("Error reading file:", error);
}
return (
<>
<form
ref={formRef}
onSubmit={(e) => handleSubmit(e, formRef, readData, handleError)}
>
<input type="file" name="excel" />
<button type="submit">ParseFile</button>
</form>
</>
);
};
export default UploadForm;
```
We have updated the `uploadForm` component, we imported `useRef` and `useState` from React then we imported `read` and `utils` from the `xlsx` library we installed when we were setting up our project. Then we import `handleSubmit` from the helper file. Inside the `UploadForm` component we have created a `formRef` variable which is a ref object and we set its current value to null. Then we create a stateful variable `tableData`.
Next, we create a function `readData`. The function takes an arrayBuffer as input, which is the Excel file data read using the FileReader in handleSubmit. We use the read function from the xlsx library to create a workbook object from the provided arrayBuffer. `const sheet = workbook.Sheets[workbook.SheetNames[0]]`; retrieves the first sheet within the workbook. The SheetNames array stores the names of all sheets, and you're accessing the first element with index 0. Then we Convert the sheet data into JSON format using the sheet_to_json function from the utils object in the xlsx library. Then we set the tableData state to the converted data.
Under that function, we have defined an error handler function that will be called when an error happens while the `fileReader` is trying to convert the uploaded file to an `arrayBuffer`. Then we set the `ref` attribute on the form to the `formRef` and then we call the `handleSubmit` function when the form is submitted. Now for us to see the uploaded data we need to create another component this will be a table that displays the data from the uploaded file, take note I already know the structure of the data from the Excel file so I can make assumptions.
```jsx
// src/table.jsx
const DataTable = ({ tableData }) => {
return (
<table>
<thead>
<tr>
{Object.keys(tableData[0]).map((title, index) => (
<th key={index}>{title}</th>
))}
</tr>
</thead>
<tbody>
{tableData.map((data, index) => (
<tr key={index}>
<td>{data._id.slice(0, 6)}</td>
<td>{data.crowd}</td>
<td>{data.units}</td>
<td>{data.price}</td>
<td>{data.external_id}</td>
<td>{new Date(data.createdAt).toDateString()}</td>
<td>{new Date(data.updatedAt).toDateString()}</td>
</tr>
))}
</tbody>
</table>
);
};
export default DataTable;
```
The DataTable component is a simple React component that takes an array of objects as its tableData prop. It renders a table with headers and rows, where each row represents an object in the tableData array. The headers are generated from the object keys of the first object in the tableData array. The values of each header are displayed as table cells. The DataTable component uses the map method to iterate over the tableData array and render a table row for each object. We will import and use this function inside the `uploadForm` component.
```jsx
// src/uploadForm.jsx
// ...cont'd
import { handleSubmit, downloadData } from "./helper";
import DataTable from "./table";
const UploadForm = () => {
// ...cont'd
return (
<>
<form
ref={formRef}
onSubmit={(e) => handleSubmit(e, formRef, readData, handleError)}
>
<input type="file" name="excel" />
<button type="submit">ParseFile</button>
</form>
{tableData && tableData.length > 0 && (
<div>
<div className="download-button">
<button onClick={() => downloadData(tableData)}>
Download Data
</button>
</div>
<DataTable tableData={tableData} />
</div>
)}
</>
);
}
// ...cont'd
```
## Converting Data to Spreadsheets
We have imported and used the `DataTable` component to display the data in the uploaded Excel sheet. We have also added a button to download the data back as an Excel file, importing a new function `downloadData` from the `helper` file, so we need to go and define that function.
```javascript
// src/helper.js (cont'd) - add this import at the top of the file:
import { utils, writeFileXLSX } from "xlsx";

export const downloadData = (data) => {
  const ws = utils.json_to_sheet(data);
  const wb = utils.book_new();
  utils.book_append_sheet(wb, ws, "Data");
  writeFileXLSX(wb, "data.xlsx");
};
// ...cont'd
```
The function `downloadData` is designed to export a given array of data (data) as an Excel file. `const ws = utils.json_to_sheet(data);` uses the `json_to_sheet` function from the `xlsx` library to convert the data array into a worksheet object suitable for Excel representation. `const wb = utils.book_new();` creates a new empty Excel workbook object using the book_new function. `utils.book_append_sheet(wb, ws, "Data");` adds the previously created worksheet to the workbook, assigning it the name "Data".
`writeFileXLSX(wb, "data.xlsx");` utilizes the `writeFileXLSX` function from the `xlsx` library to write the constructed workbook object to a physical Excel file named "data.xlsx".
The `downloadData` function is used by the UploadForm component to export the data from the uploaded Excel file. When the "Download Data" button is clicked, the `UploadForm` component calls the `downloadData` function with the tableData state as the argument.
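Conceptually, `json_to_sheet` does the reverse of parsing: it flattens an array of objects into a header row plus one row per object. A simplified pure-JS sketch of that flattening (again, not the library's actual implementation):

```javascript
// Simplified illustration of the json_to_sheet flattening:
// object keys become the header row, values become the data rows.
function jsonToRows(records) {
  const header = Object.keys(records[0]);
  return [header, ...records.map((rec) => header.map((key) => rec[key]))];
}

console.log(jsonToRows([{ name: "A", price: 99 }, { name: "B", price: 49 }]));
// [ [ 'name', 'price' ], [ 'A', 99 ], [ 'B', 49 ] ]
```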
That's going to be it for this post, guys. I hope you found it useful. What are your thoughts on this approach as a means of data collection in your application? Would you personally implement it? Have you worked with other JavaScript Excel libraries, and do you think they do a better job than XLSX? Please leave your thoughts on all this and more in the comment section, and I will see you in the next post. | kalashin1 |
1,762,447 | How to use enum attributes in Ruby on Rails | This article was originally written by Jeffery Morhous on the Honeybadger Developer Blog. Enums, or... | 0 | 2024-03-04T08:00:00 | https://www.honeybadger.io/blog/how-to-use-enum-attributes-in-ruby-on-rails/ | ruby, rails | *This article was originally written by [Jeffery Morhous](https://www.honeybadger.io/blog/authors/jefferymorhous/) on the [Honeybadger Developer Blog](https://www.honeybadger.io/blog/how-to-use-enum-attributes-in-ruby-on-rails/).*
Enums, or enumerations, are an incredibly common way to represent options for an attribute of a model in Rails. If a model has an attribute-like status, it is often a string with predefined options. In Rails, you can represent this as an integer and automatically convert it to a string using built-in methods! This makes presenting options to a user as strings while storing these selections as a number straightforward and maintainable.
In this article, we'll discuss why you might want to use enums in your code, how to add an enum attribute to an existing table, how to create a new table with an enum, and how to use enums in your application.
## Why enums are useful
Enums in Rails allow us to map a set of symbolic keys to specific integer values in the database. This makes them an efficient way to represent and store a limited set of values for an attribute,
such as a status, without having to use strings or create additional tables mapping strings to integers. Using enums also brings consistency and readability to the code, making it easier for developers to understand the possible values of an attribute without having to dig through the database schema.
In addition to simplifying data representation and storage, enums also come with useful utility methods. For example, enums automatically generate scope methods for each value, making it easy to query and filter records based on the enum attribute. They also generate methods for each enum value, so the code can easily check the current value of an enum attribute in a more readable and expressive manner. Enums are often leveraged in form helpers, making presenting options in a form straightforward.
## Adding an enum to an existing table
To add an enum attribute to an existing table in Rails, start by creating a new migration.
For example, if you're adding an attribute `status` to the model `course`, run the following:
```bash
rails generate migration AddStatusToCourse status:integer
```
If you want the attribute to have a default value, open the newly created migration in `db/migrate` and edit the `add_column` method call to have the default value you want:
```ruby
add_column :courses, :status, :integer, default: 0
```
To apply the migration to your database, run the following:
```bash
rails db:migrate
```
Next, edit your model to declare the enum and its mappings of strings to integers.
```ruby
class Course < ApplicationRecord
enum status: {
pending: 0,
active: 1,
archived: 2
}
end
```
That's it! You've added an attribute to your model and defined it as an enum; all that's remaining is to actually use the enum.
## Creating an enum with a new database table
If you don't already have a model with which you'd like to use an enum, adding an enum attribute is as easy as any other attribute you'll need for that model. If you haven’t generated the
`course` model, for example, you can begin by generating a new model.
```bash
rails generate model Course name:string status:integer
```
Open up your latest migration, and ensure it looks like this:
```ruby
class CreateCourses < ActiveRecord::Migration[7.0]
  def change
    create_table :courses do |t|
      t.string :name
      t.integer :status

      t.timestamps
    end
  end
end
```
Finally, run the migration:
```bash
rails db:migrate
```
Then, add the enum definition to your model, as shown in the previous section:
```ruby
class Course < ApplicationRecord
enum status: {
pending: 0,
active: 1,
archived: 2
}
end
```
## Setting an enum value
Setting the value of an enum attribute of a model is readable, even though the data is saved as an integer.
In what is probably the most common method, you can set the value using the symbolic key:
```ruby
course.status = :active
course.save
```
You can also set the value using the string representation of the symbolic key:
```ruby
course.status = "active"
course.save
```
If you're initializing the object, you can set the value during initialization:
```ruby
course = Course.new(status: :active)
course.save
```
You can even use the bang methods for each option generated by the enum:
```ruby
course = Course.new
course.active!
```
These are some of the most common ways to set the value of an enum attribute in Rails. It's generally recommended to use the symbolic keys or their string representation rather than the integer directly, as they make your code more readable and expressive.
## Checking an enum value
While you can technically check the integer value of an enum, Rails generates predicate methods that make checking the value of an enum easy and readable:
```ruby
course = Course.new(status: :active)
course.pending? # returns false
course.active? # returns true
course.archived? # returns false
```
You can also compare the enum attribute directly with a symbolic key or even its string representation:
```ruby
course = Course.new(status: :active)
course.status == :active # returns true
course.status == "active" # also returns true!
```
There are more ways to compare enum values, but these are the most common.
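To see why those predicate and bang methods exist, it helps to sketch what the `enum` macro generates under the hood. The plain-Ruby class below is only an illustration of the idea (a symbol-to-integer mapping plus generated methods), not Rails' actual implementation:

```ruby
# Illustration only: roughly what `enum status: {...}` gives you.
class Course
  STATUSES = { pending: 0, active: 1, archived: 2 }.freeze

  attr_reader :status

  def status=(value)
    key = value.to_sym
    raise ArgumentError, "'#{value}' is not a valid status" unless STATUSES.key?(key)
    @status = key
  end

  # Generate course.pending?, course.active! and friends.
  STATUSES.each_key do |key|
    define_method("#{key}?") { status == key }
    define_method("#{key}!") { self.status = key }
  end

  # The integer Rails would actually store in the database.
  def status_before_type_cast
    STATUSES[status]
  end
end

course = Course.new
course.status = "active"
puts course.active?                 # true
puts course.status_before_type_cast # 1
```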
## Conclusion
Overall, enums in Ruby on Rails offer a clean and efficient way to represent a finite set of values for a model attribute. Their integration with Rails' built-in features, such as scope and predicate methods, makes them a powerful and convenient choice for managing such attributes. In turn, this leads to improved
code readability, maintainability, and a more enjoyable development experience.
Leaning on Rails conventions when using enums ensures the seamless translation of symbols into stored integer values and affords
straightforward methods for checking those values.
| honeybadger_staff |
1,762,526 | Phase 2 | Phase 2 is in the books...kinda waiting for grades to come in, I should be fine. React was the focus... | 0 | 2024-02-15T19:40:43 | https://dev.to/austin424/phase-2-528n | Phase 2 is in the books...kinda waiting for grades to come in, I should be fine.
React was the focus of our learning. In the beginning it felt like everything we learned in the first phase got flipped on its head.
We first learned about components (what even is that?), which turn out to be a really cool way to write JSX (React's version of HTML). What started off as a complicated concept turned out to be my favorite part of the phase: writing one set of code for a header or footer and being able to reuse it, avoiding retyping the same code over and over, is such a time saver.
Functions became easier to write for me as well; I struggled hard last phase to write them, and they irked me to no end.
Props were easy for me to get conceptually but not technically. I understood how they worked, but for the life of me, passing a prop through components eluded me until recently, when my cohort member Kirstyn (hi Kirstyn, thanks for the help) helped me finish the code challenge that required it.
State was a middle-ground area; again, I understand how it's supposed to be used and set up, but I forget how to apply it sometimes.
Effect was fairly simple: we wrap a fetch in it to access an external source without causing extra re-renders or infinite loops.
Forms I will need to continue to work on; getting one on the page is fine, but using it in tandem with a POST is still a struggle for me. I will, however, continue to practice React when we move on to Phase 3 so I can keep up the skills and add more projects to my portfolio as well.
Overall I like React a lot, and I'm glad to have learned it. Glad to have had Sakib as my instructor; hopefully I can keep the enthusiasm up in Phase 3.
| austin424 | |
1,762,534 | 10 Ideas for Enhancing Industrial Cleaning Services | 10 Ideas for Enhancing Industrial Cleaning Services In the bustling industrial landscape of Eustis,... | 0 | 2024-02-15T19:56:12 | https://dev.to/autoplex764/10-ideas-for-enhancing-industrial-cleaning-services-76g | cleaning, webdev, beginners |
10 Ideas for Enhancing Industrial Cleaning Services
In the bustling industrial landscape of Eustis, FL, maintaining a high standard of cleanliness is crucial for businesses to operate efficiently and safely. Industrial cleaning services play a pivotal role in ensuring a pristine working environment, but constant improvement is essential to meet the evolving needs of industries.
**Here are 10 innovative ideas to enhance Industrial Cleaning Services in Eustis, FL and beyond.
**

## 1. Embrace Technological Advancements for Efficiency
To elevate industrial cleaning services, integrating cutting-edge technology is paramount. Utilizing robotic cleaners and automated systems can streamline the cleaning process, reducing both time and manpower. Incorporating state-of-the-art cleaning equipment enhances precision and effectiveness, ensuring a thorough cleaning of industrial spaces.
## 2. Implement Green Cleaning Practices
Environmental consciousness is increasingly vital in modern industrial operations. Incorporating eco-friendly cleaning products and methods minimizes the environmental impact of industrial and [Commercial Cleaning Services in Eustis FL](https://royal.cbsservicesusa.com/commercial-cleaning-eustis/). This not only contributes to sustainability but also aligns businesses with environmentally responsible practices, which is a growing concern in Eustis, FL.
## 3. Strengthen Safety Protocols
Safety is paramount in industrial settings, and a robust safety protocol is essential for any industrial cleaning service. Regular safety training, the use of personal protective equipment, and stringent adherence to safety guidelines ensure the well-being of cleaning staff and contribute to accident prevention.
## 4. Introduce Specialized Cleaning Solutions
Different industries have unique cleaning requirements. Offering specialized cleaning solutions tailored to specific industrial needs can set a cleaning service apart. Understanding the nuances of each sector in Eustis, FL, and customizing cleaning approaches accordingly can lead to increased client satisfaction.
## 5. Foster a Culture of Safety Among Cleaning Staff
Creating a culture of safety among cleaning staff is crucial for the overall success of industrial cleaning services. Encourage open communication, provide ongoing safety training, and recognize and reward safety-conscious behavior. A safety-focused culture not only protects employees but also enhances the overall efficiency of cleaning operations.
## 6. Optimize Cleaning Schedules for Minimal Disruption
Industrial cleaning services should aim to minimize disruption to regular business operations. By optimizing cleaning schedules to non-peak hours or strategically planning cleaning activities, businesses can maintain productivity while ensuring a consistently clean environment. Flexibility in scheduling is key to accommodating diverse industrial needs.
## 7. Invest in Employee Training and Development
Well-trained and skilled cleaning staff are essential for delivering high-quality industrial cleaning services. Investing in continuous training and development programs ensures that cleaning staff are up-to-date with the latest cleaning techniques, technology, and safety protocols, enhancing overall service quality.
## 8. Implement Quality Assurance Measures
Quality assurance measures, such as regular inspections and feedback loops, help maintain service standards. Establishing a systematic approach to quality control ensures that industrial cleaning services consistently meet or exceed client expectations. This proactive approach can lead to long-term client satisfaction and loyalty.
## 9. Enhance Communication with Clients
Open and transparent communication with clients is crucial for understanding their specific needs and expectations. Regularly seeking feedback, addressing concerns promptly, and providing updates on cleaning progress fosters a positive client-provider relationship. This can lead to long-term partnerships and positive word-of-mouth referrals in Eustis, FL.
## 10. Prioritize Sustainability in Cleaning Practices
As sustainability becomes a global priority, incorporating green and sustainable cleaning practices is a wise strategy. From using environmentally friendly cleaning products to implementing energy-efficient cleaning equipment, prioritizing sustainability aligns industrial cleaning services with the broader goals of corporate responsibility.
## Conclusion
In conclusion, adopting these 10 ideas can significantly enhance [industrial cleaning services in Eustis, FL](https://royal.cbsservicesusa.com/industrial-cleaning-clermont/). By embracing technology, prioritizing safety, and tailoring services to specific industry needs, cleaning services can elevate their efficiency, cost-effectiveness, and overall client satisfaction. | autoplex764 |
1,762,619 | The Era of LLM Infrastructure | API access to large language models has opened up a world of opportunities. We have seen many simple... | 0 | 2024-02-15T21:18:36 | https://dev.to/einstack/the-era-of-llm-infrastructure-5ai | chatgpt, genai, llms, llmops | ---
title: The Era of LLM Infrastructure
published: true
description:
tags: chatgpt, genai, llms, llmops
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c53to7bgrf947axpq292.png
# published_at: 2024-02-15 20:59 +0000
---
API access to large language models has opened up a world of opportunities. We have seen many simple proof-of-concept applications show promise in being effective. However, as the complexity of these applications grows, several crucial issues arise when putting these systems into production. These issues include unreliable API endpoints, slow token generation, LLM lock-in, and cost management. Clearly, the LLM era will require solutions to manage LLM API endpoints.
[Glide](https://glide.einstack.ai/get-started/introduction) is a cloud-native LLM gateway that provides a lightweight interface to manage the complexity of working with multiple LLM providers.

## Unified API
Glide offers a comprehensive API that facilitates interaction with multiple LLM providers. Instead of dedicating considerable time and resources to developing custom integrations for individual LLM providers, Glide provides a single API interface that allows users to interact with any LLM provider. Adopting this approach can significantly **enhance application development efficiency**. By working off a standardized API, engineers can minimize complexity and development time, leading to faster and more efficient application development. Additionally, there is zero LLM model lock-in, as underlying models can be switched without the client application needing to know.
## Glide Routers
A fundamental principle in Glide is the concept of routers. Routers enable you to group models together for shared logic. An excellent example is a RAG-powered chatbot that allows users to search over a documentation set. It is built directly on GPT-3.5 Turbo and depends entirely on OpenAI to keep its API operable. This dependency poses a significant risk to the application and user experience. Therefore, it is recommended to set up a Glide router in resilience mode by adding a single backup model to the router. If the OpenAI API fails, Glide will automatically send the API call to the next model specified in the configuration. In addition, **model failure knowledge is shared across all routers**, reducing wasteful retries when an LLM provider has a known issue.
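The resilience idea described above can be sketched in a few lines. This is an illustrative model of the concept (priority order plus remembered failures), not Glide's actual implementation; the class name and the 30-second unhealthy window are assumptions made for the example:

```python
import time

UNHEALTHY_TTL = 30.0  # seconds to remember a provider failure (illustrative)

class FallbackRouter:
    def __init__(self, models):
        self.models = models  # list of (name, callable) in priority order
        self.unhealthy = {}   # name -> timestamp of the last observed failure

    def chat(self, prompt):
        last_err = None
        for name, call in self.models:
            # shared failure knowledge: skip providers that failed recently
            if time.time() - self.unhealthy.get(name, 0) < UNHEALTHY_TTL:
                continue
            try:
                return call(prompt)
            except Exception as err:
                self.unhealthy[name] = time.time()
                last_err = err
        raise RuntimeError("all models failed") from last_err
```

After the primary provider raises once, subsequent calls go straight to the backup until the unhealthy window expires, which is the "reducing wasteful retries" behaviour in miniature.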
Another essential router type is the least-latency router. This router selects the model with **the lowest average latency per generated token**. Since we don’t know the actual distribution of model latencies, we attempt to estimate it and keep it updated over time. Over time, old latency data is weighted lower and eventually dropped from the calculation. This ensures latencies are constantly updated. As with all routers, if a model becomes unhealthy, it will pick the second-best, etc.
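The decaying latency estimate can be illustrated with an exponentially weighted moving average, where new samples dominate and old data fades away. Glide's exact weighting scheme is not specified here, so the `alpha` parameter and helper names below are assumptions for the sketch:

```python
class LatencyEstimate:
    def __init__(self, alpha=0.2):
        self.alpha = alpha  # weight given to each new sample
        self.value = None   # current estimate of latency per generated token

    def update(self, sample_ms):
        # exponentially weighted moving average: old data decays over time
        if self.value is None:
            self.value = sample_ms
        else:
            self.value = self.alpha * sample_ms + (1 - self.alpha) * self.value
        return self.value

def pick_fastest(estimates):
    # choose the model with the lowest current latency estimate
    return min(estimates, key=lambda name: estimates[name].value)
```

A least-latency router then simply calls `pick_fastest` on the per-model estimates before each request, falling back to the next-best model if the winner is unhealthy.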
Other routing modes are available, such as round-robin, which is excellent for A/B testing, and weighted round-robin, which helps specify the percentage of traffic that should be sent to a set of models.
One Glide deployment can support multiple applications with diverse requirements since it can support numerous routers. There are also exciting routers on the roadmap, such as intelligent routing, which ensures your request is sent to the model best suited for that request.
## Declarative Configuration
Glide simplifies the setup process through declarative configuration, which defines the state of the Glide gateway in one place. This also means that secret management is centralized, enabling the rotation of API keys from a single location.
Furthermore, this approach enables the separation of responsibilities between teams. One team can manage the infrastructure, deploy Glide, and make it available to other teams (such as AI/DS teams) while also being responsible for rotating keys. Meanwhile, other teams can solely focus on working with models and not worry about these configurations.
Here is a bare-bones configuration example:
```yaml
routers:
  language:
    - id: my-chat-app
      strategy: priority
      models:
        - id: primary
          openai:
            model: "gpt-3.5-turbo"
            api_key: ${env:OPENAI_API_KEY}
        - id: secondary
          azureopenai:
            api_key: ${env:AZUREOAI_API_KEY}
            model: "glide-GPT-35" # the Azure OpenAI deployment name
            base_url: "https://mydeployment.openai.azure.com/"
```
With this simple configuration, a priority/fallback router has been created. All requests will be sent to OpenAI; should the OpenAI API fail, the request will be sent to an Azure OpenAI deployment.
## What's Next?
The future of LLM applications will be multi-modal, with text, speech, and vision models employed together to create rich user experiences. Glide will be the go-to gateway for these applications. Glide plans to support various features over the next several months, including exact and semantic caching, embedding endpoints, speech endpoints, safety policies, and monitoring features.
---
If you are interested in using Glide, here is a list of links for you to check out:
- 🛠️ Github: https://github.com/EinStack/glide
- 📚 Docs: https://glide.einstack.ai/
- 💬 Discord: https://discord.gg/upVttzRfpn
- 🗺️ Roadmap: https://github.com/EinStack/glide/blob/develop/ROADMAP.md | roma_glushko |
1,762,641 | Getting Started with Python: Writing Your First Program | Introduction Welcome to the exciting world of Python programming! This article will guide... | 26,728 | 2024-02-15T22:48:24 | https://dev.to/tlayach/getting-started-with-python-downloading-and-the-hello-world-program-3kic | python, beginners, programming | # Introduction
Welcome to the exciting world of Python programming! This article will guide you through creating your very first program, the classic "Hello, world!". Through this simple example, you'll gain fundamental knowledge of Python's syntax and execution, paving the way for further exploration in the language.
# Index
- Setting Up
- Writing the "Hello, World" Program
- Running the Program
- Conclusion
# Setting Up
Before diving into code, ensure you have Python installed on your system. You can download it from the official website [here](https://www.python.org/downloads/).
Once installed, you can use either the command line or an Integrated Development Environment (IDE) to write and run your Python code. This article covers both methods:
# Writing the "Hello, World" Program
Here's the code for your first Python program:
```python
print("Hello, world!")
```
Output:
```
Hello, world!
```
## Explanation
- The `print()` function is used to display output on the console.
- In this case, the string `"Hello, world!"` is passed as an argument to the `print()` function, instructing it to print that message.
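Beyond a single string, `print()` also accepts multiple arguments and the optional `sep`/`end` parameters, which is worth seeing once before moving on:

```python
# print() accepts several arguments; by default they are joined with a space.
name = "world"
print("Hello,", name)                      # -> Hello, world
print("Hello", name, sep=", ", end="!\n")  # -> Hello, world!
```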
# Running the Program
## Executing the script from the command line
1. Save the code in a file with a `.py` extension (e.g., `hello_world.py`).
2. Open a terminal or command prompt and navigate to the directory where you saved the file.
3. Use the `python` (or `python3`, depending on the operating system) command followed by the filename to execute the script:
```bash
python hello_world.py
```
Output:
```
Hello, world!
```
## Executing the script using IDLE
1. Open IDLE, which is usually included with Python installations.
2. In the IDLE window, either:
- Click on **File** > **New Window** to create a new file.
- Click on **File** > **Open** to open the `hello_world.py` file you created earlier.
3. Paste or type the code into the editor window.
4. To run the program, you have two options:
- Click on **Run** > **Run Module** (or press F5).
- Click on the **Run** button in the toolbar (a green arrow icon).
Output:
The "Hello, world!" message will appear in the shell window at the bottom of the IDLE window.
# Conclusion
This article provided a gentle introduction to Python programming through the "Hello, world!" program. You learned the basics of writing and running Python code using both the command line and IDLE, equipping you with the foundation for your future programming endeavors. As you progress, you'll encounter more complex concepts and functionalities, but remember that every journey begins with a single step. So, keep practicing, exploring, and creating with Python! | tlayach |
1,919,535 | Add user to group | Hi I am new to python and trying to understand a code that adds user to a group. The code was... | 0 | 2024-07-11T09:58:29 | https://dev.to/marco_puzzo_be7af5e0bd233/add-user-to-group-53ao | Hi
I am new to Python and trying to understand a piece of code that adds a user to a group. The code was provided in sections, and I am trying to understand whether the sections are alternatives, replacements, or parts of the same block of code. I have separated the sections with *** on a new line.
```python
import os
import subprocess

def add_user_to_group():
    username = input("Enter the name of the user that you want to add to a group: ")
    # communicate() returns bytes, so decode it into a string before use
    output = subprocess.Popen('groups', stdout=subprocess.PIPE).communicate()[0].decode()
    print("Enter a list of groups to add the user to")
    print("The list should be separated by spaces, for example:\r\n group1 group2 group3")
    print("The available groups are:\r\n " + output)
    chosenGroups = input("Groups: ")
```
*************************
```python
    # continuation of the same function: flag each chosen group as existing or new
    output = output.split(" ")
    chosenGroups = chosenGroups.split(" ")
    print("Add To:")
    groupString = ""
    found = False  # must be initialised before the loops below read it
    for grp in chosenGroups:
        for existingGrp in output:
            if grp == existingGrp:
                found = True
                print("-Existing Group : " + grp)
                groupString = groupString + grp + ","
        if found == False:
            print("-New Group : " + grp)
            groupString = groupString + grp + ","
        else:
            found = False
```
************************
```python
    # continuation: drop the trailing comma, then confirm before calling usermod
    groupString = groupString[:-1] + " "
    confirm = ""
    while confirm != "Y" and confirm != "N":
        print("Add user '" + username + "' to these groups? (Y/N)")
        confirm = input().upper()
    if confirm == "N":
        print("User '" + username + "' not added")
    elif confirm == "Y":
        os.system("sudo usermod -aG " + groupString + username)
        print("User '" + username + "' added")
```
 | marco_puzzo_be7af5e0bd233 | |
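One side note on the code in the question: building a shell command by string concatenation, as `os.system(...)` does, breaks on unexpected input and is vulnerable to injection. Here is a sketch of a safer variant using `subprocess.run` with an argument list; the helper names are mine, not part of the original code:

```python
import subprocess

def build_usermod_cmd(username, groups):
    # Argument-list form: no shell is involved, so a username like
    # "bob; rm -rf /" cannot inject extra commands.
    return ["sudo", "usermod", "-aG", ",".join(groups), username]

def add_user_to_groups(username, groups):
    # check=True raises CalledProcessError if usermod exits non-zero
    return subprocess.run(build_usermod_cmd(username, groups), check=True)
```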
1,762,649 | Oh My Posh: The Oh My Zsh of Windows | Oh My Zsh is a well-known tool for customizing the Zsh terminal, used on Linux distributions... | 0 | 2024-02-15T23:09:29 | https://dev.to/cslemes/oh-my-posh-o-oh-my-zsh-do-windows-250p | devops, opensource | Oh My Zsh is a well-known tool for customizing the Zsh terminal, used on Linux distributions and macOS. Besides offering a variety of themes, Oh My Zsh also has a wide range of plugins, such as completions, that help boost productivity.
For those who use Windows and want to explore some of the possibilities offered by Oh My Zsh, I decided to write this article about Oh My Posh. Oh My Posh can be used in several shells, not only PowerShell, and it is also cross-platform. This means it runs on Windows as well as on any Linux shell and on macOS, and it is free and open source. However, it is limited to theme functionality and does not offer plugin management like Oh My Zsh. PowerShell, in turn, has several completion functions that can be configured, but they are outside the scope of this article. I will demonstrate how to install it on Windows.
The first step is to make sure you have a proper terminal on Windows. For that, go to the Microsoft Store and download Windows Terminal. You can also use a command-line package manager.
- Via Winget (built into Windows 11):
```powershell
winget install --id Microsoft.WindowsTerminal -e
```
- Via [Chocolatey](https://chocolatey.org/) (unofficial):
```powershell
choco install microsoft-windows-terminal
```
- Via [Scoop](https://scoop.sh/) (unofficial):
```powershell
scoop bucket add extras
scoop install windows-terminal
```
Next, install PowerShell Core. Standard Windows ships with the legacy PowerShell preinstalled, but you can choose to install PowerShell Core using the .msi package provided by Microsoft or using Winget. Since I found no references in Microsoft's documentation about installing it with other package managers, I cannot recommend them at this time.
- Via MSI:
[PowerShell-7.4.1-win-x64.msi](https://github.com/PowerShell/PowerShell/releases/download/v7.4.1/PowerShell-7.4.1-win-x64.msi)
- Via Winget:
```
winget install --id Microsoft.PowerShell --source winget
```
After installing PowerShell Core, open Windows Terminal and set it as the default terminal. To do this, go to Settings, click the down arrow next to the '+' button in the window's title bar, and then, under Default Profile, choose PowerShell (not Windows PowerShell).
Now, finally, let's install Oh My Posh, following the instructions in the official documentation. There are options to install it using package managers, but in this case we will use the PowerShell script to install it.
In Windows Terminal with PowerShell Core, run the following command line and press Enter:
```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force; Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://ohmyposh.dev/install.ps1'))
```
Now, you must configure Oh My Posh to start together with your terminal. To do this, edit the PowerShell profile file. The file's location is stored in the **`$PROFILE`** environment variable. You can use Notepad for this:
```
notepad $PROFILE
```
Go to the last line of the file and add:
```
oh-my-posh init --shell pwsh | Invoke-Expression
```
Save and close Notepad, and restart your terminal. It should open as shown in the image below, but without the icons.

To load the icons, you need a font that supports those characters. For that, we will use [Nerd Fonts](https://www.nerdfonts.com/). Just choose a font, download it, and install it with a double click on Windows. You can also do this from the command line using the Oh My Posh cmdlet for installing fonts. For example, to install the MesloLG font, you can run the following command:
```powershell
oh-my-posh font install MesloLG
```
Now, go to the Windows Terminal settings. Under "Profiles" > "Defaults" > "Appearance", select the font type and switch to the chosen font. Save and close the settings. Windows Terminal will apply the changes to the current window.
References
https://learn.microsoft.com/pt-br/windows/terminal/
https://learn.microsoft.com/pt-br/powershell/scripting/install/installing-powershell-on-windows
https://ohmyposh.dev/ | cslemes |
1,762,660 | Enikoko, Level 3 (Cosplore3D: Pt:14) | Intro This is a series following Cosplore3D, a raycaster game to learn 3D graphics. This... | 26,268 | 2024-02-16T01:30:00 | https://dev.to/chigbeef_77/enikoko-level-3-cosplore3d-pt14-4d8f | go, gamedev, learning | ## Intro
This is a series following [Cosplore3D](https://github.com/Chig-Beef/Cosplore3D), a raycaster game to learn 3D graphics. This project is part of 12 Months 12 Projects, a challenge I set myself. We've done a lot of work on levels 1 and 2, but we haven't created a way to get from level 2 to 3 (we haven't created level 3 yet, but we can start by making the way to this level).
## The Progressor And Trigger
In the lore of our game, the player takes the Cosmium they got from Ankaran and places it into a reactor. Then, they head back to navigation and land on Enikoko. In game logic, this means that the player walks over to a reactor, and the Cosmium is taken out of their inventory. They then walk back to navigation (which is where they spawned), and if they don't have Cosmium, they progress to the next level.
```golang
type Trigger struct {
	x      float64
	y      float64
	w      float64
	h      float64
	action Action
}

func (t *Trigger) check_collide(g *Game) {
	x := g.player.x
	y := g.player.y

	if t.x <= x && x <= t.x+t.w {
		if t.y <= y && y <= t.y+t.h {
			t.action(g)
		}
	}
}
As you can see, `Trigger`s are pretty simple. We will place a trigger in front of the reactor (at the back of The Cosplorer), and it will take away the player's Cosmium.

## Designing Enikoko
Now that we can make it to Enikoko, we should start to design it.

Now, I know, it's a small level, and also, no enemies? Remember, in this level the only enemy is a boss, which we haven't even made yet, so we can't _really_ put it in the level if it doesn't even exist yet. Also, the larger room in the map (which is meant to be for the boss) will probably end up being a lot bigger. It should also end up having a few columns, etc. Having a blank room won't make the boss fight that interesting, but we will implement that when we actually have a boss to create a world around.

And look at those trees! (so life-like).
And here is the end of the level, which in the lore is when we find more Cosmium and get onto a new ship.

## Next
We've got 2 more levels to create, one for the new ship, another for Schmeltool. Once we've finished these levels we have a lot of the gameplay finished, so we can start by making things look nicer. For example, I would like for items in the player's inventory to show up in the HUD. Furthermore, you can never tell when you're shooting, so some sort of animation would be nice. | chigbeef_77 |
1,762,681 | How to debug common problems on Aptos framework | A post by Ola-Balogun Taiwo | 0 | 2024-02-16T00:17:05 | https://dev.to/titre123/how-to-debug-common-problems-on-aptos-framework-17ab | titre123 | ||
1,762,808 | Slider pricing | Check out this Pen I made! A slider with gradual pricing for your product | 0 | 2024-02-16T04:12:34 | https://dev.to/cyna/slider-pricing-3ej1 | codepen | Check out this Pen I made! A slider with gradual pricing for your product
{% codepen https://codepen.io/pineapplecoding/pen/LYaqxeL %} | cyna |
1,762,847 | Finding the Perfect SMB Data Protection Strategy | Creating a robust data protection strategy is vital for small and medium-sized businesses (SMBs) in... | 0 | 2024-02-16T05:44:09 | https://dev.to/camwhitmore0033/finding-the-perfect-smb-data-protection-strategy-1d0o | dataprotection, backupsolution, businesscontinuity | Creating a robust [data protection](https://dzone.com/articles/choosing-the-right-smb-backup-solution) strategy is vital for small and medium-sized businesses (SMBs) in today's digital landscape. With increasing threats such as cyber-attacks, data breaches, and accidental data loss, having an effective data protection plan is not just beneficial; it's essential for the survival and continuity of your business. This article will guide you through the key components of finding the perfect SMB data protection strategy, ensuring your business's data remains secure, accessible, and recoverable.
## 1. Understand Your Data
#### Identify and Classify:
The first step in protecting your data is understanding what data you have. Identify and classify your data based on its sensitivity and importance to your business operations. This will help you determine the level of protection needed for different types of data.
## 2. Implement a Multi-Layered Security Approach
#### Antivirus and Anti-Malware:
Ensure that all devices and networks are protected with up-to-date antivirus and anti-malware solutions to defend against malicious software.
#### Firewalls and Encryption:
Use firewalls to protect your network and encrypt sensitive data both at rest and in transit. This ensures that even if data is intercepted, it remains unreadable to unauthorized users.
#### Access Control:
Implement strict access controls to ensure that only authorized personnel have access to sensitive data. Use strong passwords, two-factor authentication, and role-based access controls to minimize the risk of unauthorized access.
## 3. Regular Backups
#### Automated Backup Solutions:
Implement automated backup solutions to regularly back up all critical data. This ensures that in the event of data loss, you have up-to-date backups available for recovery.
#### Off-site and Cloud Backups:
In addition to on-site backups, consider off-site or cloud-based backups to protect against physical disasters such as fires or floods. Cloud backups offer scalability, flexibility, and remote access to data.
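As a rough illustration of the automated, timestamped backup idea described above, here is a minimal sketch using only the Python standard library. The naming scheme and paths are assumptions for the example, not a product recommendation:

```python
import pathlib
import shutil
import time

def backup_directory(src, dest_dir):
    """Create a timestamped zip archive of `src` inside `dest_dir`."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive_base = dest / f"backup-{stamp}"
    # shutil.make_archive appends the .zip extension itself
    return shutil.make_archive(str(archive_base), "zip", root_dir=src)
```

In practice a scheduler (cron, Task Scheduler) would call this regularly, and the resulting archives could be synced to off-site or cloud storage.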
## 4. Disaster Recovery Planning
#### Disaster Recovery Plan:
Develop a comprehensive disaster recovery plan that outlines how your business will recover from various data loss scenarios. This should include steps for restoring data from backups, roles and responsibilities, and communication plans during a disaster.
#### Regular Testing:
Regularly test your disaster recovery plan to ensure it works as expected. Simulate different disaster scenarios to identify any weaknesses or areas for improvement in your plan.
## 5. Employee Training and Awareness
#### Cybersecurity Training:
Regularly train your employees on cybersecurity best practices, including how to recognize phishing attempts, the importance of using strong passwords, and safe internet browsing habits.
#### Data Handling Protocols:
Educate your employees on proper data handling protocols to prevent accidental data loss. This includes secure file sharing, data encryption, and the use of authorized devices and software.
## 6. Stay Informed and Compliant
#### Regulatory Compliance:
Stay informed about data protection regulations that apply to your business, such as GDPR, CCPA, or HIPAA. Ensure your data protection strategies are compliant with these regulations to avoid legal and financial penalties.
#### Stay Updated:
Cyber threats are constantly evolving, so it's important to stay informed about the latest security trends and threats. Regularly update your security measures and protocols to defend against new types of attacks.
## Conclusion
Finding the perfect SMB data protection strategy involves a comprehensive approach that includes understanding your data, implementing multi-layered security measures, regular backups, disaster recovery planning, employee training, and staying informed and compliant with regulations. By taking these steps, you can protect your business from the devastating effects of data loss and ensure the continuity of your operations in the face of digital threats. Remember, investing in data protection is not an expense; it's an investment in your business's future. | camwhitmore0033 |
1,762,931 | Revolutionizing Content Creation: OpenAI Introduces SORA, The Next Frontier in Text-to-Video Technology | Introduction: In the ever-evolving landscape of artificial intelligence, OpenAI has once again... | 0 | 2024-02-16T06:28:19 | https://dev.to/nitin-rachabathuni/revolutionizing-content-creation-openai-introduces-sora-the-next-frontier-in-text-to-video-technology-mn | webdev, programming, ai, career | Introduction:
In the ever-evolving landscape of artificial intelligence, OpenAI has once again positioned itself at the forefront of innovation with the introduction of SORA, a groundbreaking text-to-video technology. This leap forward represents not just a significant advancement in AI capabilities but also a new era for content creators, marketers, educators, and storytellers across the globe. In this article, we delve into what makes SORA a game-changer, its potential applications, and how it might shape the future of digital content.
The Genesis of SORA:
SORA is born out of OpenAI's continuous pursuit of bridging the gap between human creativity and artificial intelligence. Leveraging the sophisticated algorithms and deep learning techniques that powered previous successes, SORA is designed to understand complex textual inputs and convert them into engaging, high-quality videos. This technology encapsulates the essence of storytelling by transforming written narratives into visual journeys, making it a potent tool for various industries.
How SORA Works:
At its core, SORA utilizes advanced NLP (Natural Language Processing) to grasp the nuances of text inputs, coupled with a generative model that has been trained on a vast dataset of videos and their corresponding descriptions. This dual approach allows SORA to not only understand the context and emotions conveyed in the text but also to creatively visualize and produce video content that aligns with the given narrative.
Unleashing Creative Potential:
The implications of SORA for content creation are immense. Here are just a few ways it could revolutionize the industry:
- Marketing and Advertising: Brands can create compelling video content directly from text-based campaign ideas, significantly reducing production time and costs.
- Education: Educators can convert lessons or summaries into engaging video content, providing students with a more interactive learning experience.
- Entertainment: Writers and storytellers can see their narratives come to life, opening up new avenues for storytelling beyond traditional media.
- News and Journalism: Journalists can quickly produce visual summaries of news stories, making information more accessible and engaging to the public.
Challenges and Ethical Considerations:
While SORA presents exciting opportunities, it also brings forth challenges, particularly concerning ethics and authenticity. The ease of creating realistic videos from text raises concerns about misinformation, copyright, and the need for clear guidelines to ensure responsible use. OpenAI is committed to addressing these issues by implementing strict ethical standards and encouraging transparent use of SORA in content creation.
The Road Ahead:
As we stand on the cusp of this new frontier in AI-driven content creation, the potential of SORA to democratize video production is clear. It promises to empower individuals and organizations to convey their messages more effectively, creatively, and personally than ever before. However, as we navigate this promising yet uncharted territory, it's crucial to foster a culture of innovation, responsibility, and ethical use to fully realize SORA's potential while mitigating its risks.
Conclusion:
OpenAI's SORA is not just a technological marvel; it's a testament to the limitless possibilities when human ingenuity meets artificial intelligence. As SORA begins to weave its narrative in the tapestry of digital content creation, its impact is poised to be as profound as it is transformative. For creators and consumers alike, the journey has just begun, and the future of storytelling is brighter and more accessible than ever.
---
Thank you for reading my article! For more updates and useful information, feel free to connect with me on LinkedIn and follow me on Twitter. I look forward to engaging with more like-minded professionals and sharing valuable insights.
| nitin-rachabathuni |
1,762,955 | Umrah Prices and Services with Ammar Turizm | The Umrah pilgrimage is a sacred journey of great importance for Muslims. This spiritual experience... | 0 | 2024-02-16T07:33:26 | https://dev.to/favsites/ammar-turizm-ile-umre-fiyatlari-ve-hizmetleri-lg5 | ammar, turizm, umre, fiyatlar | The Umrah pilgrimage is a sacred journey of great importance for Muslims. For those who wish to undertake this spiritual experience, Ammar Turizm stands out with its quality services and reasonable prices. In this article, we will provide information about Ammar Turizm's Umrah services and the prices offered.
With its many years of experience, Ammar Turizm provides professional support to those who wish to perform the Umrah pilgrimage. The company is recognized as a reliable partner in Umrah organization and arranges all the details of the pilgrimage process. Ammar Turizm's experienced and knowledgeable staff focus attentively on travelers' needs, ensuring they have an unforgettable Umrah experience.
Regarding Umrah services, Ammar Turizm offers a variety of package options. These packages are arranged to suit different preferences and budgets. Factors such as the duration of the Umrah, the type of accommodation, transportation details, and guidance services determine the contents of the packages.
Ammar Turizm's [Umrah prices](https://ammarturizm.com/umre-fiyatlari) vary according to the services provided and the package options. Pricing is generally based on the Umrah dates, the hotel choice, and the length of stay. Ammar Turizm aims to offer a quality Umrah experience at competitive prices.
Umrah prices generally vary depending on the type of accommodation. During the Umrah, Ammar Turizm usually arranges hotel stays in Mecca. This accommodation option makes the pilgrimage process easier and offers guests a closer experience. By providing different hotel options, Ammar Turizm gives its customers choices across various price ranges.
 | favsites |
1,763,006 | The Power of Systematic Investment Plans (SIPs) in Mutual Funds | Investing in Mutual Funds through Systematic Investment Plans (SIPs) has emerged as a popular avenue... | 0 | 2024-02-16T08:58:50 | https://dev.to/jatinsharma123/the-power-of-systematic-investment-plans-sips-in-mutual-funds-1251 | finance, business, money, investment | 
Investing in [Mutual Funds](https://www.mysiponline.com/mutual-funds.php) through Systematic Investment Plans (SIPs) has emerged as a popular avenue for individuals seeking to build wealth steadily over time. SIPs offer a disciplined and convenient approach to investing in the financial markets, allowing investors to contribute small amounts regularly towards their investment goals. In this article, we delve into the benefits of SIPs and their role in Mutual Fund investments.
## Understanding SIPs
A SIP is a method of investing a fixed amount regularly, typically monthly, into a Mutual Fund scheme of your choice. The investment amount is deducted automatically from your bank account and allocated to the chosen Mutual Fund scheme. SIPs provide investors with the flexibility to invest small amounts systematically over time, regardless of market fluctuations.
## Benefits of SIPs
**Rupee Cost Averaging**: One of the key advantages of SIPs is the concept of rupee cost averaging. Since SIPs involve investing a fixed amount at regular intervals, investors buy more units when prices are low and fewer units when prices are high. This averaging out of purchase costs helps mitigate the impact of market volatility and potentially lowers the average cost per unit over time.
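The averaging effect is easy to verify with small numbers. The unit prices (NAVs) below are made up purely for illustration:

```python
# Rupee cost averaging with hypothetical prices: a fixed 1000 per month
# buys more units when the NAV is low and fewer when it is high.
monthly_amount = 1000
navs = [50, 40, 25, 40, 50]  # hypothetical unit prices over five months

units = sum(monthly_amount / nav for nav in navs)
invested = monthly_amount * len(navs)
avg_cost = invested / units          # average cost per unit actually paid
avg_price = sum(navs) / len(navs)    # simple average of the prices

# avg_cost comes out below avg_price: the fixed amount automatically
# over-weights the cheap months, which is the averaging benefit.
```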
**Disciplined Investing**: SIPs instill discipline in investors by encouraging regular investment habits. By automating the investment process, SIPs eliminate the need for timing the market and help investors stay invested for the long term, which is crucial for wealth [creation](https://dev.to/).
**Compounding Benefits**: The power of compounding is amplified through SIPs. As returns generated by Mutual Funds are reinvested back into the scheme, investors benefit from compounding returns over time. The longer the investment horizon, the greater the compounding effect, leading to significant wealth accumulation in the long run.
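The compounding claim can be made concrete with the standard future-value-of-an-annuity formula. The 12% annual return and the horizons below are illustrative assumptions, not projections:

```python
def sip_future_value(monthly, annual_rate, years):
    """Future value of a monthly SIP at a constant monthly return."""
    r = annual_rate / 12  # monthly rate
    n = years * 12        # number of contributions
    # annuity-due form: each contribution compounds until the end of the horizon
    return monthly * (((1 + r) ** n - 1) / r) * (1 + r)

# Longer horizons benefit disproportionately from compounding: doubling the
# horizon here roughly quadruples the final value, not merely doubles it.
fv10 = sip_future_value(1000, 0.12, 10)
fv20 = sip_future_value(1000, 0.12, 20)
```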
**Affordability and Accessibility**: SIPs make investing in Mutual Funds accessible to a wide range of investors, including those with limited capital. With the option to start SIPs with small amounts, investors can participate in the financial markets without the need for a large initial investment. This affordability encourages more individuals to start investing and reap the benefits of long-term wealth creation.
**Dollar-Cost Averaging**: Similar to rupee cost averaging, SIPs also employ the concept of dollar-cost averaging for investors who wish to invest in foreign Mutual Funds. By investing fixed amounts regularly in foreign currency-denominated Mutual Funds, investors can benefit from averaging out their purchase costs over time, reducing the impact of currency fluctuations.
## Role of SIPs in Mutual Fund Investments
SIPs play a crucial role in Mutual Fund investments by providing a structured and disciplined approach to wealth creation. Here's how SIPs contribute to the success of Mutual Fund investments:
**Regular Investment**: SIPs ensure that investors continue to invest regularly, irrespective of market conditions. This regular investment pattern helps investors accumulate wealth gradually over time and achieve their financial goals.
**Risk Mitigation**: By spreading investments over time, SIPs help mitigate the risk of investing a lump sum amount during volatile market conditions. Since investments are made at different price points, the impact of market fluctuations is reduced, leading to a smoother investment journey.
**Long-Term Wealth Creation**: SIPs are ideal for investors with long-term investment horizons. By staying invested for the long term and benefiting from the power of compounding, investors can potentially create substantial wealth over time with SIP-based Mutual Fund investments.
**Goal-Based Investing**: SIPs allow investors to align their investments with specific financial goals, such as retirement planning, education funding, or buying a house. By investing regularly towards these goals, investors can make steady progress and achieve their objectives within the desired timeframe.
## Conclusion
**[Systematic Investment Plan](https://www.mysiponline.com/)** (SIPs) have revolutionized the way individuals invest in Mutual Funds, offering a disciplined, affordable, and effective approach to wealth creation. With benefits such as rupee cost averaging, disciplined investing, and compounding returns, SIPs empower investors to build wealth steadily over time and achieve their financial goals. As a cornerstone of Mutual Fund investments, SIPs continue to attract investors looking for a reliable and convenient way to grow their wealth in the long run. | jatinsharma123 |
1,763,033 | What Is The Best Way To Set Up ASP.NET Version For A Website In Plesk? | It is possible to select an ASP.NET version based on the code generated for the website. For ASP.NET... | 0 | 2024-02-16T09:44:15 | https://dev.to/mileswebhosting/what-is-the-best-way-to-set-up-aspnet-version-for-a-website-in-plesk-59b9 | aspnet, webdev, plesk, tutorial | It is possible to select an ASP.NET version based on the code generated for the website.
For ASP.NET `2.0–3.5`, select version `3.5`; for ASP.NET 4.0 and higher, select version `4.5`.
ASP.NET can be set in Plesk in two ways:
- From Hosting Settings
- From ASP.NET Settings
## Here are the steps to set the ASP.NET version in Plesk from Hosting Settings:
1. Enter your login credentials to access the Plesk control panel.

2. On the dashboard, select Websites & Domains.

3. Select Hosting Settings from the menu.

4. Select the Microsoft ASP.NET support version from the drop-down box in the Web scripting and Statistics section. After selecting the required version, click OK.

## Here are the steps to set the ASP.NET version in Plesk from ASP.NET Settings:
1. The first step is to log in to the Plesk control panel by entering your login credentials.

2. On the dashboard's left side, click Websites & Domains.

3. Select ASP.NET Settings from the menu.

4. Click on Change Version.

5. From the Selecting the ASP.NET Version window select the version you need and click OK.

And, that's done!
In this manner, you have successfully set up the ASP.NET version for a website in Plesk.
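For reference, the version you pick in Plesk should match the framework your application actually targets. An ASP.NET 4.x site typically declares this in its `web.config`; a minimal illustrative fragment (the version number is an example) looks like this:

```xml
<configuration>
  <system.web>
    <!-- The .NET Framework version the site was built against -->
    <compilation targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
  </system.web>
</configuration>
```

If the version selected in Plesk is older than the `targetFramework` declared here, the site will fail to start, so keep the two in sync.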
| mileswebhosting |
1,763,044 | The Next Frontier in Brand Communication: Custom QR Codes and Their Impact | In the ever-evolving landscape of brand communication, staying ahead of the curve is paramount. With... | 0 | 2024-02-16T10:16:11 | https://dev.to/divsly/the-next-frontier-in-brand-communication-custom-qr-codes-and-their-impact-5152 | customqrcode, personalisedqrcode, customqrcodegenerator | In the ever-evolving landscape of brand communication, staying ahead of the curve is paramount. With the advent of technology, brands are constantly seeking innovative ways to engage with their audience. One such innovation that has gained significant traction in recent years is custom QR codes. These digital matrices have transcended their traditional use in inventory tracking to become powerful tools for brand communication and marketing. In this blog, we delve into the phenomenon of custom QR codes, their significance, and the impact they have on brand communication strategies.
## Understanding Custom QR Codes
QR codes, short for Quick Response codes, are two-dimensional barcodes that can store a variety of information, such as URLs, text, or other data. Initially popularized in the automotive industry for inventory tracking, QR codes have now permeated various aspects of daily life, from digital payments to marketing campaigns.
[Custom QR code](https://divsly.com/features/custom-qr-code)s are a step further in the evolution of this technology. Unlike generic black-and-white QR codes, custom QR codes are designed with branding elements, such as colors, logos, and patterns, to align with the visual identity of a particular brand. This customization not only enhances the aesthetic appeal but also reinforces brand recognition and memorability.
## The Rising Popularity of Custom QR Codes
The resurgence of QR codes can be attributed to several factors. Firstly, the widespread adoption of smartphones equipped with built-in QR code scanners has made QR code interaction seamless for consumers. Additionally, the COVID-19 pandemic accelerated the use of QR codes for contactless interactions, such as digital menus in restaurants or touchless payments.
Custom QR codes capitalize on this growing trend by offering brands a unique way to connect with their audience. By integrating branding elements into QR codes, companies can seamlessly blend offline and online experiences, driving engagement and fostering brand loyalty.
## Impact on Brand Communication
**Enhanced Brand Visibility:** Custom QR codes serve as visual markers that draw attention to brand messaging. Whether displayed on product packaging, promotional materials, or storefronts, these branded codes serve as a gateway to immersive digital experiences.
**Storytelling Opportunities:** QR codes can link to multimedia content, such as videos, interactive experiences, or augmented reality (AR) applications. Brands can leverage this capability to narrate compelling stories, showcase product features, or provide behind-the-scenes glimpses, thereby deepening consumer engagement.
**Trackable Analytics:** Unlike traditional print media, QR code interactions are trackable, providing valuable insights into consumer behavior and campaign effectiveness. Brands can monitor metrics such as scan rates, location data, and time of interaction to refine their communication strategies and optimize ROI.
**Personalized Experiences:** Custom QR codes can be tailored to deliver personalized content based on factors such as location, demographics, or purchase history. By delivering relevant and timely information, brands can create meaningful connections with consumers, fostering brand affinity and advocacy.
**Seamless Integration:** Custom QR codes seamlessly integrate with existing marketing channels, such as social media, email campaigns, or physical signage. This versatility enables brands to create omnichannel experiences that resonate with consumers across various touchpoints, driving brand recall and conversion.
## Case Studies: Success Stories with Custom QR Codes
Burberry: The luxury fashion brand integrated custom QR codes into its flagship stores, allowing customers to access exclusive content, such as runway shows, product details, and styling tips. This immersive experience not only elevated the in-store shopping experience but also reinforced Burberry's reputation for innovation and craftsmanship.
McDonald's: In response to the COVID-19 pandemic, McDonald's implemented QR codes for contactless ordering and payment in its restaurants worldwide. By seamlessly integrating branded QR codes into the ordering process, McDonald's prioritized customer safety while streamlining operations and driving digital engagement.
## Conclusion
Custom QR codes represent the next frontier in brand communication, offering a versatile and impactful tool for engaging with consumers in an increasingly digital world. By combining visual branding with interactive technology, brands can create memorable experiences that resonate with their audience, driving brand awareness, loyalty, and advocacy. As the adoption of QR codes continues to soar, embracing this technology can empower brands to stay ahead of the curve and forge meaningful connections with their customers in the digital age. | divsly |
1,763,067 | What is Typescript ? A step by step guide for beginners | In the early 90s, when the web was just beginning to take shape, there was a demand for interactive... | 0 | 2024-02-16T10:40:53 | https://www.swhabitation.com/blogs/what-is-typescript-a-step-by-step-guide-for-beginners | typescript, javascript, node, webdev | In the early 90s, when the web was just beginning to take shape, there was a demand for interactive web pages. People wanted websites that did more than just show static data. JavaScript was born out of this desire.
JavaScript was originally known as Mocha. Later, it was renamed LiveScript and finally became JavaScript. It's a bit confusing, I know 😄
**Mocha => Livescript => Javascript**
JavaScript was created by a guy named Brendan Eich in 1995 while he was working at Netscape Communications.
Although JavaScript was not the first language to be used for web development, it quickly gained popularity due to its ease of use and its ability to enable web developers to add features such as animations, interactive forms and dynamic content to websites.
In the years that followed, JavaScript continued to evolve and develop, adding new features, fixing bugs and making the language more powerful.
Fast forward to today, JavaScript's influence extends far beyond web development. Its versatility spans across mobile applications, server-side applications, and even desktop software.
TypeScript is a modernized version of JavaScript. It includes static typing and more. It wasn't a complete rewrite of JavaScript; it simply built on its foundations and gave developers the option of a more structured development process.
The history of TypeScript begins in 2012, when Microsoft took the lead in developing it. By adding static typing to JavaScript, TypeScript was designed to improve code maintainability and scalability, as well as improve developer productivity. Since then, it’s gained a lot of traction, especially in large projects where the advantages of static typing are at their best.
Today, TypeScript stands as an example of how constantly web development technologies evolve and adapt.
## Why use Typescript ?
**Helps Catch Mistakes Early:** TypeScript checks your code for errors before executing it. It’s like having a structural engineer check your plan for flaws before starting construction.
**Adds Structure to Code:** With TypeScript, you can define different data types (e.g. number, string, or custom type) and control how these data types are used. This helps to make your code more structured and readable.
**Improves Collaboration:** When multiple people are working on a project together, TypeScript helps them understand each other’s code better because it improves readability and consistency.
**Enhances Productivity:** TypeScript has built-in features such as code autocompletion and enhanced tooling support that can reduce coding time and minimize mistakes.
**Maintainability:** As the size of your projects increases, it becomes more difficult to manage them all. This is where TypeScript comes in. It offers features such as interfaces and type checking, which make your code easier to keep up with and update over time.
**Enhances Code Readability and Maintainability:** In addition to TypeScript’s robust typing, it also has features such as interfaces and type annotations that make code easier to read and understand. When someone is reading your code, they’ll be able to easily see the types of data that each variable or function returns and expects. This makes it easier to maintain and understand your codebase over time, which reduces the risk of introducing errors or making unintentional changes when changing existing code.
## Types in TypeScript

**Number:** This type is used to represent numeric values such as 1, 2, and 3. It covers both whole numbers and decimal (floating-point) numbers.
Example: `let myNumber: number = 5;`
**String:** This type is used to represent text or strings of text, such as “hello” or “TypeScript”.
Example: `let myString: string = "hello";`
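To make the benefit of static typing concrete, here is a small illustrative sketch. The `Point` interface and `distanceFromOrigin` function are invented for this example:

```typescript
interface Point {
  x: number;
  y: number;
}

// The annotations tell the compiler exactly what shape `p` must have
// and what the function returns.
function distanceFromOrigin(p: Point): number {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

const point: Point = { x: 3, y: 4 };
console.log(distanceFromOrigin(point)); // 5

// The next line would be rejected at compile time, before the code ever runs:
// distanceFromOrigin({ x: "3", y: 4 }); // error: string is not assignable to number
```

Passing the wrong shape of data is caught while you type, not when a user hits the bug in production.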
Would you like to read the rest of the post? Here it is [What Is Typescript ? A Step By Step Guide For Beginners](https://www.swhabitation.com/blogs/what-is-typescript-a-step-by-step-guide-for-beginners)
Don't forget to follow us on [Medium](https://medium.com/@swhabitation) for more engaging content covering web development and technology! | swhabitation |
1,763,186 | Methods for handling arrays | I recently came across an interesting situation: when should we use map,... | 0 | 2024-02-16T12:30:16 | https://dev.to/hemershon/metodos-para-tratar-array-59jb | ruby, rails, algorithms | I recently ran into an interesting situation: when should we use **map**, **select**, and *each*? I normally use **each** quite often, which raised the question of when we should use the other methods and what the difference between them is.
Tip: before searching all over the web, go to the documentation first and read it. If you don't understand, look for explanations in different contexts, because knowing how to use the documentation is important. Searching the documentation, you come across the following:
**MAP**
**map** invokes the given block once for each element of self, creating a new array containing the values returned by the block. If no block is given, an Enumerator is returned.
Details of **map**:
We can use it with ***arrays***, ***hashes***, and ***ranges***; the main use of ***map*** is to transform data.
For example:
Given an array of strings, you can go over each string and uppercase it.
map syntax in Ruby
```
array = ["a", "b", "c"]
array.map { |string| string.upcase } # ["A", "B", "C"]
```
Collect
**collect** invokes the given block once for each element of self, creating a new array containing the values returned by the block. If no block is given, an Enumerator is returned. In other words, it is the same thing as **map** (collect is an alias of map).
Details of collect
Enumerable's collect() is a built-in Ruby method that returns a new array with the results of running the block once for each element in the enum. If an empty block is given, it returns nil for each element.
Syntax:
```
(r1..r2).collect { |obj| block }
```
Parameters: the function takes the block that runs for each element; it also takes r1 and r2, which determine the number of elements in the returned enumerable.
Return value: returns a new array
Returning the transformed values:
```
enu = (2..6).collect { |x| x * 10 } # [20, 30, 40, 50, 60]
```
Returning nil for each element (empty block):
```
enu = (2..6).collect {} # [nil, nil, nil, nil, nil]
```
Source: GeeksforGeeks
Select
**select** works in two distinct ways. First: it takes a block so it can be used as Array#select. Second: it modifies the SELECT statement of a query so that only certain fields are retrieved.
Example:
```
[1, 2, 3, 4, 5].select { |num| num.even? } # [2, 4]
```
Example 2:
```
a = %w{ a b c d e f }
a.select { |v| v =~ /[aeiou]/ } # ["a", "e"]
```
Details of select
select is an Array class method that returns a new array containing all the elements of the array for which the given block returns a true value.
Syntax: array.select()
Parameter: array
Return: a new array containing all the elements of the array for which the given block returns a true value.
Source: GeeksforGeeks
**Each**
**each** calls the given block once for each element in self, passing that element as a parameter, and returns the array itself. If no block is given, it also returns an Enumerator.
```
a = [ "a", "b", "c" ]
a.each { |x| print x, "--" } # a--b--c-- => ["a", "b", "c"]
```
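To see the three methods side by side, here is a small sketch contrasting what each one returns:

```ruby
numbers = [1, 2, 3, 4]

# map transforms: a new array of the block's return values
doubled = numbers.map { |n| n * 2 }      # => [2, 4, 6, 8]

# select filters: a new array of the elements for which the block is truthy
evens = numbers.select { |n| n.even? }   # => [2, 4]

# each iterates for side effects and returns the receiver itself
result = numbers.each { |n| n }          # => [1, 2, 3, 4]

puts doubled.inspect
puts evens.inspect
puts result.equal?(numbers) # true: each hands back the very same object
```

In short: reach for map when you want transformed values, select when you want a filtered subset, and each when you only care about side effects.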
This was a summary to help you better understand how to handle your arrays, using different methods for different situations.
Sources: [Apidock](https://ruby-doc.org/core-2.7.0/Array.html) | hemershon |
1,763,232 | Build a customizable LLM Chatbot in Unity | In this tutorial, we will guide you through the process of integrating personalized AI chatbots into... | 0 | 2024-02-16T13:31:16 | https://www.edenai.co/post/build-a-customizable-llm-chatbot-in-unity | ai, api, opensource | _In this tutorial, we will guide you through the process of integrating personalized AI chatbots into your Unity game using AskYoda by Eden AI. This integration will allow you to create engaging and responsive non-player characters (NPCs) with the help of Language Model (LLM) providers such as OpenAI, Cohere, Anthropic, Google Cloud, and AI21 Labs._
## What is Unity?

Founded in 2004, Unity is a leading gaming company offering a robust game development engine that empowers developers to create immersive games for various platforms.
Unity's integration with artificial intelligence (AI) enables developers to incorporate intelligent behaviors, decision-making processes, and advanced functionalities into their games.

Unity offers multiple pathways for integrating AI into applications. One notable option is the Unity Eden AI Plugin, which seamlessly interfaces with the Eden AI API, streamlining the integration of AI features, including customizable AI chatbot functionality, into Unity applications.
This integration simplifies the process of incorporating AI into Unity projects, enhancing interactivity, personalization, and immersion in gaming experiences
## Custom Chatbots vs. Standard Chatbots
Standard chatbots typically rely on pre-existing datasets for training, which may not cover all potential user queries or scenarios. As a result, they may struggle to understand or provide accurate responses to certain types of inquiries.
Customizable chatbots offer a solution to these limitations by allowing organizations to tailor the chatbot's capabilities and responses according to their specific needs.

Organizations can provide their own datasets to train the chatbot, ensuring that it's equipped to handle the specific queries and scenarios relevant to their domain or industry. This enables the chatbot to better understand and respond to user inquiries within that context.
## Benefits of using Custom Chatbots in Unity
Customizable AI chatbots, tailored to specific data in video games, elevate player experiences and equip developers with potent tools for crafting vibrant, personalized game worlds. Consider the following benefits:
### 1. Enhanced Realism
In terms of Enhanced Realism, chatbots trained on specific data forge conversations that are not only more realistic but also contextually pertinent. NPCs, armed with this capability, delve into detailed aspects of the game world, immersing players more deeply.
### 2. Personalized Player Interaction
Personalized Player Interaction reaches new heights as AI chatbots, attuned to individual player data, dynamically adapt responses according to unique player choices and preferences. NPCs can recall and reference past interactions, providing a personalized and tailored gaming experience for each player.
### 3. Adaptive Storytelling
For Adaptive Storytelling, chatbots trained on specific narrative elements contribute to more engaging and dynamic storytelling. NPCs respond dynamically to in-game events, player decisions, and the overall progression of the storyline.
### 4. Targeted Educational Content
In the realm of Targeted Educational Content, customizable chatbots trained on educational data deliver precision. In educational games, NPCs provide context-specific explanations, quizzes, and learning materials tailored to the player's current knowledge level.
### 5. Cultural and Historical Accuracy
Cultural and Historical Accuracy is assured as chatbots, versed in specific cultural or historical data, engage in conversations reflecting the historical or cultural context. This adds authenticity to the gaming experience.
### 6. Genre-Specific Interactions
For Genre-Specific Interactions, chatbots trained on genre-specific data craft dialogues and interactions that seamlessly fit the thematic consistency of the game. In a fantasy game, NPCs use genre-specific language, references, and terminology.
### 7. Simulating Professions and Expertise
Chatbots excel at Simulating Professions and Expertise when trained on profession-specific data. NPCs with medical knowledge, for instance, provide accurate information in a healthcare-themed game, adding realism and depth.
### 8. Dynamic Response to Real-World Events
For Dynamic Response to Real-World Events, chatbots trained on real-world data dynamically respond to external events. NPCs can incorporate real-world news or seasonal events into dialogues, keeping the game world relevant and up-to-date.
### 9. Increased Player Retention
The Increased Player Retention stems from the personalized and dynamic nature of chatbot interactions. Players are more likely to continue playing a game where the story and interactions evolve based on their choices and preferences.
In conclusion, customizable AI chatbots trained on specific data empower game developers to fashion more authentic, adaptive, and engaging gaming experiences, ultimately leading to a more satisfying and immersive journey for players.
## How to Integrate Customizable AI Chatbot into Your Unity Game
### Step 1. Install the [Eden AI Unity Plugin](https://github.com/edenai/unity-plugin?referral=how-to-integrate-custom-chatbot-in-unity)

Ensure that you have a Unity project open and ready for integration. If you haven't installed the Eden AI plugin, follow these steps:
1. Open your Unity Package Manager
2. [Add package](https://github.com/edenai/unity-plugin.git?referral=how-to-integrate-custom-chatbot-in-unity) from GitHub
### Step 2. Obtain your Eden AI API Key
To get started with the Eden AI API, you need to [sign up for an account on the Eden AI platform](https://app.edenai.run/user/register?referral=how-to-integrate-custom-chatbot-in-unity) (receive free credits upon registration!).
[Try Eden AI for FREE](https://app.edenai.run/user/register?referral=how-to-integrate-custom-chatbot-in-unity)
Once registered, you will get an API key which you will need to use the Eden AI Unity Plugin. You can set it in your script or add a file auth.json to your user folder (path: _~/.edenai_ (Linux/Mac) or _%USERPROFILE%/.edenai/_ (Windows)) as follows:
```
{ "api_key": "YOUR_EDENAI_API_KEY"}
```
Alternatively, you can pass the API key as a parameter when creating an instance of the _EdenAIApi_ class. If the API key is not provided, it will attempt to read it from the auth.json file in your user folder.
### Step 3: Create your AskYoda Project on Eden AI
[Go to AskYoda](https://app.edenai.run/bricks/edenai-products/askyoda/default?referral=how-to-integrate-custom-chatbot-in-unity) on the Eden AI platform. Here is the [link](https://github.com/edenai/unity-plugin?tab=readme-ov-file#ask-your-data?referral=how-to-integrate-custom-chatbot-in-unity) to the GitHub repository.
Initiate your AskYoda project, obtaining the unique project ID.

### Step 4. Integrate Custom AI Chatbot with [AskYoda](https://docs.edenai.co/docs/ask-yoda?referral=how-to-integrate-custom-chatbot-in-unity)
Revitalize your non-player characters (NPCs) by allowing them to articulate thoughts through the integration of customizable chatbot functionality with large language models (LLMs).
AskYoda by Eden AI is a chatbot builder allowing users to create customized AI assistants using their own data. It is powered by LLMs and is designed to let you easily integrate an array of AI chatbot APIs, such as OpenAI, Google, and Replicate, into your Unity project (please refer to our [documentation](https://github.com/edenai/edenai-apis/blob/master/AVAILABLES_FEATURES_AND_PROVIDERS.md?referral=how-to-integrate-custom-chatbot-in-unity)).
This capability empowers you to personalize the engine model and adjust the assistant's behavior, providing a versatile solution to tailor the desired ambiance of your game.
1. Open your script file where you want to implement AskYoda functionality.
2. Import the required namespaces at the beginning of your script:
```
using EdenAI;
using System;
using System.Threading.Tasks;
```
3. Create an instance of the Eden AI API class by passing your API key as a parameter. If the API key is not provided, it will attempt to read it from the auth.json file in your user folder.
```
EdenAIApi edenAI = new EdenAIApi();
```
4. Implement the SendYodaRequest function with the necessary parameters:
```
class Program
{
static async Task Main(string[] args)
{
string projectID = "YOUR_YODA_PROJECT_ID";
string query = "Which product is the most expensive?";
EdenAIApi edenAI = new EdenAIApi();
YodaResponse response = await edenAI.SendYodaRequest(projectID, query);
}
}
```
_Note: When using the chat functionality, it's important to note that the chatbot is designed to provide responses in the same language as the incoming message. For instance, if you send a message in French, the chatbot will respond in French. The language specification is handled automatically without the need for explicit instructions in the request._
### Step 5: Handle the AskYoda Response
The SendYodaRequest function returns a YodaResponse object.
Access the `result` attribute of the response to get the large language model (LLM) output:
```
if (!string.IsNullOrEmpty(response.result))
{
// Use the LLM response as needed in your Unity project
}
else
{
// Handle the case where the interaction with AskYoda fails
}
```
### Step 6: Customize Parameters (Optional)
Adjust optional parameters based on your preferences:
- `query` (string): The question or query about the data.
- `history` (`List<Dictionary<string, string>>`, optional): A list containing all the previous conversations between the user and the chatbot AI. Each dictionary item in the list should contain alternating "user" and "assistant" messages, with their associated roles and text. For example: `new List<Dictionary<string, string>>{ new Dictionary<string, string> { { "user", "Hi!" }, { "assistant", "Hi, how can I help you?" } } };`.
- `k` (int, optional): The number of result chunks to return.
- `llmModel` (string, optional): The model to use for language processing.
- `llmProvider` (string, optional): The provider of the large language model (LLM) used for processing. For a list of available providers, please refer to our documentation.
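For instance, a follow-up question could reuse the earlier exchange as context. The sketch below continues the variables from the Step 4 example and assumes the named optional parameters listed above; treat it as an illustration rather than the definitive plugin signature:

```csharp
// Carry the previous exchange into the next request via the optional
// history parameter, so the chatbot can resolve "the cheapest one".
List<Dictionary<string, string>> history = new List<Dictionary<string, string>>
{
    new Dictionary<string, string>
    {
        { "user", "Which product is the most expensive?" },
        { "assistant", response.result }
    }
};

YodaResponse followUp = await edenAI.SendYodaRequest(
    projectID,
    "And which one is the cheapest?",
    history: history);
```

Keeping the history list updated after every turn is what gives NPCs the appearance of remembering past conversations.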
### Step 7: Test and Debug
Run your Unity project and test the AskYoda functionality. Monitor the console for any potential errors or exceptions, making adjustments as necessary.
## Conclusion
Now, your Unity project is equipped with AskYoda functionality, allowing you to personalize AI-powered NPCs using your data and large language models. Experiment with different queries and responses to create engaging and responsive in-game characters.
Feel free to refer back to this tutorial whenever you need to implement or refine AskYoda capabilities in your Unity projects. Now, go ahead and bring unique and personalized interactions to your AI-powered NPCs!
Feel free to explore additional [AI functionalities on Unity](https://github.com/edenai/unity-plugin?referral=how-to-integrate-custom-chatbot-in-unity) offered by Eden AI to further elevate your game development.
## About Eden AI
Eden AI is the future of AI usage in companies: our app allows you to call multiple AI APIs.

- Centralized and fully monitored billing
- Unified API: quick switch between AI models and providers
- Standardized response format: the JSON output format is the same for all suppliers.
- The best Artificial Intelligence APIs in the market are available
- Data protection: Eden AI will not store or use any data.
[Create your Account on Eden AI](https://app.edenai.run/user/register?referral=how-to-integrate-custom-chatbot-in-unity)
| edenai |
1,763,284 | Using LangServe to build REST APIs for LangChain Applications | In the vast ocean of content that is YouTube, the ability to quickly and accurately summarize video... | 0 | 2024-02-16T15:18:30 | https://www.koyeb.com/tutorials/using-langserve-to-build-rest-apis-for-langchain-applications | langchain, ai, webdev, tutorial |
In the vast ocean of content that is YouTube, the ability to quickly and accurately summarize video content is not just a luxury, but a necessity for many users and businesses.
Whether you're a content creator wanting to provide concise summaries for your audience, a researcher looking to extract key information, or a business aiming to analyze video content at scale, having the right tools and techniques is crucial.
This guide delves deep into the world of YouTube video summarization, harnessing the power of cutting-edge technologies including [Deepgram](https://deepgram.com/) for superior audio transcription, [Langchain](https://www.langchain.com/) for harvesting the power of the LLM, and [Mistral 7B](https://mistral.ai/news/announcing-mistral-7b/), a state-of-the-art and open-source LLM.
Together, these technologies form a formidable trio, enabling us to extract, process, and summarize video content with unparalleled accuracy and efficiency.
In this tutorial, you'll construct a fully functional [Streamlit](https://streamlit.io/) application from the ground up. Streamlit lets you turn simple data scripts into web applications without traditional front-end tools. This application will be capable of downloading audio from any YouTube video, transcribing it using Deepgram, and then summarizing the content with the assistance of Mistral 7B, all streamlined through the capabilities of Langchain.
You can deploy and preview the YouTube Summarization application from this guide using the [Deploy to Koyeb](https://www.koyeb.com/docs/build-and-deploy/deploy-to-koyeb-button) button below:
[](https://app.koyeb.com/deploy?name=youtube-summarization&type=git&repository=koyeb/example-youtube-summarization-langchain&branch=main&run_command=streamlit%20run%20main.py&env[DEEPGRAM_API_KEY]=REPLACE_ME&ports=8501;http;/)
**Note**: Remember to use a larger instance for faster processing times and to replace the value of the `DEEPGRAM_API_KEY` environment variable with your own information (as described in the section on integrating Deepgram).
## Requirements
To successfully develop this application, you will need the following:
- Python installed on your machine, here we use version 3.11
- A [Koyeb account](https://app.koyeb.com/) to deploy the application
- A [Deepgram account](https://deepgram.com/) for using their API. Deepgram offers new accounts $200 in credit without requiring credit card details.
- A [GitHub account](https://github.com/) to store your application code and trigger deployments
## Steps
1. [Install and setup Streamlit](#install-and-setup-streamlit)
2. [Download audio with yt-dlp](#download-audio-with-yt-dlp)
3. [Transcribe audio with Deepgram](#transcribe-audio-with-deepgram)
4. [Summarize transcript with Langchain and Mistral 7B](#summarize-transcript-with-langchain-and-mistral-7b)
5. [Combine everything together in the Streamlit app](#combine-them-all-together-in-the-streamlit-app)
6. [Deploy to Koyeb](#deploy-to-koyeb)
## Install and Setup Streamlit
First, start by creating a new project. You should use `venv` to keep your Python dependencies organized in a virtual environment.
Create a new project/folder locally on your computer with:
```bash
# Create and move to the new folder
mkdir YouTubeSummarizer
cd YouTubeSummarizer
# Create a virtual environment
python -m venv venv
# Activate the virtual environment (Windows)
.\venv\Scripts\activate.bat
# Activate the virtual environment (Linux and macOS)
source ./venv/bin/activate
```
To install [Streamlit](https://streamlit.io/), you just need to run the pip command:
```bash
pip install streamlit
```
A Streamlit application can consist of more than one page, but in this case, you will create a single-page application.
Streamlit allows you to design the page by adding different components to that page. The components can be text, like headings and sub-headings, but also objects like input widgets and download buttons.
To start your Streamlit application, create a new file called `main.py`, with the following initial contents:
```python
import streamlit as st
# Set page title
st.set_page_config(page_title="YouTube Video Summarization", page_icon="📜", layout="wide")
# Set title
st.title("YouTube Video Summarization", anchor=False)
st.header("Summarize YouTube videos with AI", anchor=False)
```
This code snippet uses the Streamlit library to set up the web application interface:
- The code starts by importing the Streamlit library, which is aliased as `st` for easier reference throughout the code.
- Next, the `st.set_page_config` function is called to configure the page settings. The page title is set to "YouTube Video Summarization", a scroll emoji is set as the page icon, and the layout is set to "wide" which provides a wider layout compared to the default.
- Following the page configuration, the `st.title` and `st.header` functions are called to set the main title and header of the page respectively. Additionally, the `anchor` parameter is specified as `False` in both functions, indicating that there won't be HTML anchor links associated with the title and header.
This setup lays down the basic structure and design of the web page for the YouTube Video Summarization application, creating a user-friendly interface.
You can run the application with:
```bash
streamlit run main.py
```
You will add more elements to this code when you build the final application in the last section, but for now you have the initial basic Streamlit application:

## Download audio with yt-dlp
To download audio from YouTube videos, you'll utilize the widely used [yt-dlp](https://github.com/yt-dlp/yt-dlp) library, which can be installed using the pip command as follows:
```bash
pip install yt-dlp
```
Now, let's proceed to craft the download logic to retrieve the audio file from a YouTube video.
For better organization and to keep the logic separate, you'll place this logic in a new file. Go ahead and create a file named `download.py`.
```python
from yt_dlp import YoutubeDL

def download_audio_from_url(url):
    videoinfo = YoutubeDL().extract_info(url=url, download=False)
    length = videoinfo['duration']
    filename = f"./audio/youtube/{videoinfo['id']}.mp3"
    options = {
        'format': 'bestaudio/best',
        'keepvideo': False,
        'outtmpl': filename,
    }

    with YoutubeDL(options) as ydl:
        ydl.download([videoinfo['webpage_url']])

    return filename, length

# Testing by running this file
if __name__ == "__main__":
    url = "https://www.youtube.com/watch?v=q_eMJiOPZMU"
    filename, length = download_audio_from_url(url)
    print(f"Audio file: {filename} with length {length} seconds")
    print("Done!")
```
Here’s a breakdown of what each section of the code does:
- The `YoutubeDL` class is imported from the `yt_dlp` library. It is used to interact with and download video/audio from YouTube.
- An instance of `YoutubeDL` is created and its `extract_info` method is called with the `url` argument to gather information about the video without downloading it (`download=False`). The information is stored in the `videoinfo` variable.
- The `filename` variable constructs a file path where the audio will be saved, using the video's unique id and an `.mp3` extension.
- An `options` dictionary is created to specify the download options:
- `'format': 'bestaudio/best'` specifies to download the best audio quality available.
- `'keepvideo': False` indicates that the video portion should not be kept after downloading.
- `'outtmpl': filename` specifies the output template for the filename.
- A new `YoutubeDL` instance is created with the `options` dictionary as the argument, and within a `with` block, its `download` method is called with the video URL to download the audio.
This function encapsulates the process of downloading audio from a YouTube video, preparing the necessary file path and download options, and utilizing the `yt_dlp` library to perform the audio download.
You can test the audio download with the sample URL provided in the code or you can use your own. Run the file with:
```bash
python download.py
```
A similar output to this will be shown:
```bash
[youtube] Extracting URL: https://www.youtube.com/watch?v=q_eMJiOPZMU
[youtube] q_eMJiOPZMU: Downloading webpage
[...]
[download] Destination: audio\youtube\q_eMJiOPZMU.mp3
[download] 100% of 3.09MiB in 00:00:00 at 16.97MiB/s
Audio file: ./audio/youtube/q_eMJiOPZMU.mp3 with length 242 seconds
Done!
```
You can check the audio file at the path mentioned in the output.
## Transcribe audio with Deepgram
To summarize the video, a transcript is required. In the realm of audio or video recordings, **a transcript is a document encompassing the textual representation of all spoken content**, serving as a medium to access and read the recording's content in text form.
For transcribing, you'll use the remarkable [Deepgram](https://deepgram.com/) library. You can [register](https://console.deepgram.com/signup) for free and receive a $200 credit without providing credit card details.
Post registration, you'll gain access to the Dashboard where you can generate the necessary API key:

You can define your API key with these options (you can give it a different name if you would like):

Then just click on the 'Create Key' button to create your API key and you will see the newly generated key:

Make sure to copy this key to a safe place, like a text file. You will need it later on.
To keep your API key safe and not exposed in the code, create a `.env` file where you will keep any necessary secrets:
```bash
DEEPGRAM_API_KEY=<your key here>
```
To use the keys from the `.env` file, you will use the `python-decouple` library, which can be installed with:
```bash
pip install python-decouple
```
You should also install the Deepgram SDK:
```bash
pip install deepgram-sdk
```
Now you have all the necessary configuration to write the transcribe functionality.
Create a new file called `transcribe.py` where you will place this logic:
```python
from decouple import config
from deepgram import Deepgram

DEEPGRAM_API_KEY = config('DEEPGRAM_API_KEY')

def transcribe_audio(filename):
    dg_client = Deepgram(DEEPGRAM_API_KEY)

    with open(filename, 'rb') as audio:
        source = {'buffer': audio, 'mimetype': 'audio/mp3'}
        response = dg_client.transcription.sync_prerecorded(source,
                                                           model='nova-2-ea',
                                                           smart_format=True)

    transcript = response['results']['channels'][0]['alternatives'][0]['transcript']
    return transcript
```
Here's a breakdown of each part of the code:
- The function `transcribe_audio` is defined with a parameter `filename`, which is expected to be the path to the audio file to be transcribed.
- An instance of the Deepgram client is created named `dg_client`, using `DEEPGRAM_API_KEY` as the authentication key, which comes from the `.env` file.
- The `with open(filename, 'rb') as audio:` line is using a context manager to open the specified audio file in binary read mode (`'rb'`). This ensures that the file is automatically closed after use.
- A dictionary named `source` is created to hold the audio buffer and its mime type. The `buffer` key holds the audio file object, and the `mimetype` key specifies that the audio file is in MP3 format.
- A transcription request is made to Deepgram using the `sync_prerecorded` method of the `dg_client` object. The method is passed several arguments:
- `source`: The `source` dictionary containing the audio buffer and mime type.
- `model`: Specifies the model to be used for transcription, in this case, `nova-2-ea`.
- `smart_format`: When set to `True`, this option enables smart formatting of the transcript.
- The transcript text is extracted from the response dictionary by navigating through its nested structure: `response['results']['channels'][0]['alternatives'][0]['transcript']`.
This function encapsulates the process of transcribing an audio file using the Deepgram API, extracting the transcript text from the response, and returning it.
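To make that nested lookup on the last line concrete, here is a minimal, hypothetical stand-in for the response shape — only the keys this function reads, not the full Deepgram payload:

```python
# Illustrative only: a tiny dict mimicking the parts of the Deepgram
# response that transcribe_audio() navigates. The real payload has many more fields.
response = {
    "results": {
        "channels": [
            {"alternatives": [{"transcript": "Hello world, this is a test."}]}
        ]
    }
}

# The same nested access used in transcribe_audio():
transcript = response["results"]["channels"][0]["alternatives"][0]["transcript"]
print(transcript)  # Hello world, this is a test.
```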
## Summarize transcript with Langchain and Mistral 7B
So far you have downloaded an audio file from YouTube and created a transcript. Now is the time to create a summary of that transcript of the original video, so that it captures the most important points.
For this, you will use the [Langchain](https://www.langchain.com/) library, a powerful toolkit for orchestrating language models. At the core of this orchestration is the Map-Reduce pattern, powered by Langchain's `MapReduceDocumentsChain`, to process the transcript in a systematic and scalable manner.
It loads the transcript and then segments it into manageable chunks, which are individually summarized (mapped) before being consolidated into a final summary (reduced).
You will also leverage the prowess of [Mistral 7B](https://mistral.ai/news/announcing-mistral-7b/), a state-of-the-art language model, orchestrated through a series of chains and templates provided by Langchain.
- In the use of the Langchain library, **"chains" refer to a sequence of processing steps where the output from one step is used as the input for the next**, facilitating complex text processing tasks like summarization through an orchestrated flow.
- On the other hand, **"templates" are predefined textual frameworks called "PromptTemplates" which structure the prompts given to language models**, incorporating placeholders and specific instructions that mold the model's outputs towards a targeted outcome, ensuring consistency and direction in the tasks being executed.
The final result is a well-organized, consolidated summary of the main points from the transcript, achieved with a high degree of efficiency and accuracy.
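Before wiring up the real chains, the Map-Reduce idea itself can be sketched in a few lines of plain Python. The `fake_summarize` helper below is a hypothetical stand-in for an LLM call (it just keeps the first sentence) and is not part of Langchain:

```python
# Conceptual sketch of the Map-Reduce summarization pattern.
# fake_summarize stands in for an LLM call; it keeps only the first sentence.
def fake_summarize(text: str) -> str:
    return text.split(".")[0].strip() + "."

def map_reduce_summary(transcript: str, chunk_size: int = 100) -> str:
    # Map: split the transcript into chunks and "summarize" each one.
    chunks = [transcript[i:i + chunk_size] for i in range(0, len(transcript), chunk_size)]
    partial_summaries = [fake_summarize(chunk) for chunk in chunks]
    # Reduce: consolidate the partial summaries into one final summary.
    return fake_summarize(" ".join(partial_summaries))

print(map_reduce_summary("Alpha is first. Beta is second. Gamma is third.", chunk_size=16))
# Alpha is first.
```

The real pipeline replaces `fake_summarize` with Mistral 7B calls driven by prompt templates, but the map-then-reduce flow is the same.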
Start by installing the Langchain library, transformers, and ctransformers, as usual with a pip command:
```bash
pip install langchain ctransformers transformers
```
The `transformers` library is an open-source, state-of-the-art machine-learning library developed by Hugging Face. It provides a vast collection of pre-trained models specialized in various NLP tasks, including text classification, summarization, translation, and question-answering.
Now you can create a new file called `summarize.py` that will contain the summarization logic:
```python
import os
import time
from langchain.chains import MapReduceDocumentsChain, LLMChain, ReduceDocumentsChain, StuffDocumentsChain
from langchain.llms import CTransformers
from langchain.prompts import PromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import (TextLoader)
# (... code continues in following snippets)
```
First, you start with the imports for the necessary chains, the prompt template, the text splitter, and the document loader that will read the transcript.
To run Mistral 7B, you will use the CTransformers library from Langchain itself.
Next, you expand the file to load the transcript and LLM (Mistral 7B):
```python
# summarize.py
# (... previous code ...)

def summarize_transcript(filename):
    # Load transcript
    loader = TextLoader(filename)
    docs = loader.load()

    # Load LLM
    config = {'max_new_tokens': 4096, 'temperature': 0.7, 'context_length': 4096}
    llm = CTransformers(model="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
                        model_file="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
                        config=config,
                        threads=os.cpu_count())
```
Here is the breakdown of the code:
- A function named `summarize_transcript` is defined with `filename` as its argument, which is intended to hold the path to the transcript file.
- An instance of `TextLoader` is created and the `load` method is called to load the contents of the transcript file into a variable named `docs`.
- A language model instance (`CTransformers`) is initialized using a specific model and configuration, which is stored in a variable named `llm`.
- The `config` variable holds the LLM configuration:
- **`max_new_tokens`**: This specifies the maximum number of tokens that the language model is allowed to generate in a single invocation. In this case, the model should not generate more than 4096 tokens.
- **`temperature`**: This controls the randomness of the output generated by the language model. A temperature closer to 0 makes the model more deterministic, favoring more likely outcomes. A higher temperature leads to more diversity but can also result in less coherent outputs. A value of **`0.7`** suggests a balance between randomness and determinism.
- **`context_length`**: This indicates the number of tokens from the input that the model should consider when generating new content or making predictions. In this case, the model takes into account up to 4096 tokens of the provided input data to understand the context before generating any output.
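To see what `temperature` does mechanically, here is a generic softmax-with-temperature sketch — an illustration of the common sampling formula, not Mistral's actual code:

```python
import math

# Generic illustration of temperature scaling over token logits.
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.1))  # near one-hot: almost deterministic
print(softmax_with_temperature(logits, 0.7))  # the balance used in this guide
print(softmax_with_temperature(logits, 2.0))  # flatter: more random output
```

Lower temperatures sharpen the distribution toward the most likely token, while higher temperatures flatten it, which is why `0.7` is a reasonable middle ground for summarization.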
For the LLM you will use a version of Mistral 7B from [TheBloke](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF), which is optimized to run on the CPU. When the code is run, the model will be automatically downloaded.
The next step is to build the prompt templates that will generate the prompts that run at the different stages of the chain:
```python
# summarize.py
# (... previous function code ...)

    # Map template and chain
    map_template = """<s>[INST] The following is a part of a transcript:
{docs}
Based on this, please identify the main points.
Answer: [/INST] </s>"""
    map_prompt = PromptTemplate.from_template(map_template)
    map_chain = LLMChain(llm=llm, prompt=map_prompt)

    # Reduce template and chain
    reduce_template = """<s>[INST] The following is set of summaries from the transcript:
{doc_summaries}
Take these and distill it into a final, consolidated summary of the main points.
Construct it as a well organized summary of the main points and should be between 3 and 5 paragraphs.
Answer: [/INST] </s>"""
    reduce_prompt = PromptTemplate.from_template(reduce_template)
    reduce_chain = LLMChain(llm=llm, prompt=reduce_prompt)
```
This portion of the code is focused on setting up the templates and chains for the Map and Reduce phases of the summarization process using the Langchain library. Here's a breakdown.
For the `map_template` and chain:
- A string `map_template` is defined to hold the template for mapping. The template is structured to instruct a language model to identify the main points from a portion of the transcript represented by `{docs}`.
- `PromptTemplate.from_template(map_template)` is then called to convert the string template into a `PromptTemplate` object, which is stored in `map_prompt`.
- An instance of `LLMChain` is created with the `llm` object (representing the language model) and `map_prompt` as arguments, and is stored in `map_chain`. This chain will be used to process individual chunks of the transcript and identify the main points from each chunk.
For the `reduce_template` and chain:
- A string `reduce_template` is defined to hold the template for reducing. The template is structured to instruct a language model to consolidate a set of summaries (represented by `{doc_summaries}`) into a final, organized summary of main points that should span between 3 and 5 paragraphs.
- `PromptTemplate.from_template(reduce_template)` is called to convert the string template into a `PromptTemplate` object, which is stored in `reduce_prompt`.
- An instance of `LLMChain` is created with the `llm` object and `reduce_prompt` as arguments, and is stored in `reduce_chain`. This chain will be used to process the set of summaries generated from the map phase and consolidate them into a final summary.
With the different templates and individual chain components prepared, the next step is to start building the chain itself:
```python
# summarize.py
# (... previous function code ...)

    # Takes a list of documents, combines them into a single string, and passes this to an LLMChain
    combine_documents_chain = StuffDocumentsChain(
        llm_chain=reduce_chain, document_variable_name="doc_summaries"
    )

    # Combines and iteratively reduces the mapped documents
    reduce_documents_chain = ReduceDocumentsChain(
        # This is the final chain that is called.
        combine_documents_chain=combine_documents_chain,
        # If documents exceed context for `StuffDocumentsChain`
        collapse_documents_chain=combine_documents_chain,
        # The maximum number of tokens to group documents into.
        token_max=4000,
    )

    # Combining documents by mapping a chain over them, then combining results
    map_reduce_chain = MapReduceDocumentsChain(
        # Map chain
        llm_chain=map_chain,
        # Reduce chain
        reduce_documents_chain=reduce_documents_chain,
        # The variable name in the llm_chain to put the documents in
        document_variable_name="docs",
        # Return the results of the map steps in the output
        return_intermediate_steps=True,
    )
```
This code block is dedicated to setting up various chains that orchestrate the process of summarizing documents. These chains are designed to work together, with each serving a specific role in the summarization pipeline. Here's a detailed breakdown.
**Combining documents chain:**
- `StuffDocumentsChain`: This chain is initiated with `llm_chain` set to `reduce_chain` and `document_variable_name` set to "doc_summaries". Its purpose is to take a list of documents, combine them into a single string, and pass this combined string to the `reduce_chain`. This is stored in the variable `combine_documents_chain`.
**Reducing documents chain:**
- `ReduceDocumentsChain`: This chain is initiated with three arguments:
- `combine_documents_chain`: Refers to the previously defined `combine_documents_chain`.
- `collapse_documents_chain`: Also refers to `combine_documents_chain`, indicating that if the documents exceed the context length, they should be passed to the `combine_documents_chain` to be collapsed or combined further.
- `token_max`: Set to 4000, this argument specifies the maximum number of tokens to group documents into before passing them to the `combine_documents_chain`.
- This chain, stored in the variable `reduce_documents_chain`, is designed to manage the iterative process of reducing or summarizing the documents further.
**The map-reduce chain:**
- `MapReduceDocumentsChain`: This is a more complex chain that orchestrates the map-reduce process for summarizing documents. It's initiated with several arguments:
- `llm_chain`: Refers to the previously defined `map_chain`, which is used for mapping or summarizing individual chunks of documents.
- `reduce_documents_chain`: Refers to the `reduce_documents_chain`, which is used for reducing or summarizing the mapped documents further.
- `document_variable_name`: Set to "docs", this argument specifies the variable name in the `llm_chain` to put the documents in.
- `return_intermediate_steps`: Set to `True`, this argument specifies that the results of the map steps should be returned in the output.
- This chain, stored in the variable `map_reduce_chain`, orchestrates the overall map-reduce process of summarizing documents by first mapping a chain over them to summarize individual chunks, and then reducing or consolidating these summaries further.
The final step in the `summarize.py` file and the summarization function is to actually run the complete chain:
```python
# summarize.py
# (... previous function code ...)

    # Split documents into chunks
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=4000, chunk_overlap=0
    )
    split_docs = text_splitter.split_documents(docs)

    # Run the chain
    start_time = time.time()
    result = map_reduce_chain.__call__(split_docs, return_only_outputs=True)
    print(f"Time taken: {time.time() - start_time} seconds")

    return result['output_text']
```
This section of the code is focused on preparing the documents for processing and executing the map-reduce summarization chain. Here's a breakdown:
**Splitting documents:**
- An instance of `RecursiveCharacterTextSplitter` is created, named `text_splitter`, with a `chunk_size` of 4000 and `chunk_overlap` of 0. This object is designed to split documents into smaller chunks based on character count.
- The `split_documents` method of `text_splitter` is then called with the `docs` variable (which contains the loaded transcript) as the argument. This splits the transcript into smaller chunks, and the result is stored in the `split_docs` variable.
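To build intuition for what the splitter does, here is a simplified, hypothetical `split_text` function that groups words while respecting a maximum chunk size. The real `RecursiveCharacterTextSplitter` is more sophisticated, recursively trying separators like paragraph breaks and newlines before falling back to spaces:

```python
# Simplified illustration of size-bounded chunking; not Langchain's actual splitter.
def split_text(text: str, chunk_size: int) -> list[str]:
    words = text.split()
    chunks, current = [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        if len(candidate) <= chunk_size or not current:
            current = candidate
        else:
            chunks.append(current)
            current = word
    if current:
        chunks.append(current)
    return chunks

print(split_text("one two three four five six", chunk_size=10))
# ['one two', 'three four', 'five six']
```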
**Summarization Chain:**
- The `__call__` method of `map_reduce_chain` is then executed with `split_docs` and `return_only_outputs=True` as arguments. This triggers the map-reduce summarization process on the split documents. The `return_only_outputs=True` argument indicates that only the final outputs of the chain should be returned.
**Measuring LLM execution time:**
- The `time.time()` method is called at the end, and the difference between this time and `start_time` is calculated to determine the time taken to run the summarization chain.
This completes the `summarize.py` file. It may seem quite complex, but the basic principle is to prepare the prompts, prepare the map-reduce chains, and execute the final chain, splitting documents when necessary.
## Combine them all together in the Streamlit app
With all the necessary components built for the different stages of the flow, you can now focus on creating the user interface to receive a YouTube URL, process it and also display info for each stage of the process, and finally show the resulting summary.
The final step in building this summarization application is to add the necessary widgets to finish the Streamlit application that you started in the first step.
You can complete the `main.py` file:
```python
# main.py
# (... previous imports ...)
from download import download_audio_from_url
from summarize import summarize_transcript
from transcribe import transcribe_audio

# (... code continuation ...)

# Input URL
st.divider()
url = st.text_input("Enter YouTube URL", value="")

# Download audio
st.divider()
if url:
    with st.status("Processing...", state="running", expanded=True) as status:
        st.write("Downloading audio file from YouTube...")
        audio_file, length = download_audio_from_url(url)

        st.write("Transcribing audio file...")
        transcript = transcribe_audio(audio_file)

        st.write("Summarizing transcript...")
        with open("transcript.txt", "w") as f:
            f.write(transcript)
        summary = summarize_transcript("transcript.txt")

        status.update(label="Finished", state="complete")

    # Play Audio
    st.divider()
    st.audio(audio_file, format='audio/mp3')

    # Show Summary
    st.subheader("Summary:", anchor=False)
    st.write(summary)
```
This script is constructed to handle user input, download audio from a specified YouTube URL, transcribe the audio, summarize the transcript, and display the result using the Streamlit framework. Here’s a detailed breakdown:
**Input URL:**
- A visual divider is created using `st.divider()`.
- The `st.text_input` function creates a text input box for the user to enter a YouTube URL. The text "Enter YouTube URL" is displayed as a placeholder.
**Processing URL:**
- Within the `if` statement, a status indicator is initiated using `st.status` with the message "Processing...", and the following steps are encapsulated within this status indicator:
- The `download_audio_from_url` function is called with the URL as an argument, and the returned audio file path and length are stored in `audio_file` and `length`, respectively.
- Then a call to the `transcribe_audio` function with `audio_file` as the argument, storing the returned transcript in `transcript`.
- The transcript is written to a file named "transcript.txt", and then the `summarize_transcript` function is called with "transcript.txt" as the argument, storing the returned summary in `summary`.
- The status indicator is updated to "Finished" using `status.update`.
**Summary:**
- The `st.audio` function is used to create an audio player widget that allows the user to play the downloaded audio file, specifying the format as 'audio/mp3'.
- The summary is displayed below the subheader using `st.write(summary)`.
This concludes your YouTube summarization application. You can now run it with:
```bash
streamlit run main.py
```
You should see something similar to this:
<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/esnASX1nxnM?si=aT_mWANnWteVcYuI"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen
></iframe>
## Deploy to Koyeb
Now that you have the application running locally, you may have noticed that, depending on your CPU, the summarization can take some time. Deploy it on Koyeb and take advantage of the better processing power offered by high-performance microVMs.
Create a repository on your GitHub account, called `YouTubeSummarizer`.
Then create a `.gitignore` file in your local directory to exclude some folders and files from being pushed to the repository:
```bash
# PyCharm files
.idea
# Audio folder
audio
# Python virtual environment
venv
# Environment variables
.env
# Transcripts text file
transcript.txt
```
Run the following commands in your terminal to commit and push your code to the repository:
```bash
echo "# YouTubeSummarizer" >> README.md
git init
git add .
git commit -m "first commit"
git branch -M main
git remote add origin [Your GitHub repository URL]
git push -u origin main
```
You should now have all your local code in your remote repository. Now it is time to deploy the application.
Within the [Koyeb control panel](https://app.koyeb.com/), while on the **Overview** tab, initiate the app creation and deployment process by clicking **Create App**. On the App deployment page:
1. Select **GitHub** as your deployment method.
2. Choose the repository that you created earlier. For example, `YouTubeSummarizer`.
3. Select the **Buildpack** as your builder option.
4. Click **Build and deploy settings** to configure your **Run command** by selecting `Override` and adding the same command as when you ran the application locally: `streamlit run main.py`.
5. In the **Instance** selection, click "XLarge". This instance provides a good balance between performance and cost.
6. Select the regions where you want this service to run.
7. Click **Advanced** to view additional settings.
8. Set the port to `8501`. This is the port Streamlit uses to serve your application.
9. Click the **Add Variable** button to add your Deepgram API key named `DEEPGRAM_API_KEY`.
10. Set the **App name** to your choice. Keep in mind it will be used to create the URL for your application.
11. Finally, click the **Deploy** button.
Your application will start to deploy. Please note that the first time that you run the application it will take longer because it will download the Mistral 7B model. Subsequent runs will be much faster.
After the deployment process is complete, you can access your app by clicking the application URL.
<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/esnASX1nxnM?si=aT_mWANnWteVcYuI"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen
></iframe>
## Conclusion
By following this guide, you have created a sophisticated tool for summarizing YouTube video content. You've seen firsthand how potent tools like Deepgram, Langchain, and Mistral 7B can be used together to create a Streamlit application that not only simplifies but also amplifies the value of video content.
Deploying this application to Koyeb enables you to harness the power of high-performance microVMs, ensuring that your application runs smoothly in the regions where your users are located.
Now that you have a working application, you can think of ways to expand it, whether for enhancing personal productivity, advancing research, or providing cutting-edge business solutions.
| alisdairbr |
1,763,452 | Enhancing Java Concurrency: Processor, Core, Threads, Fibers | Regardless of your experience as a Java developer, you've likely encountered threads in your work and... | 0 | 2024-02-16T17:03:08 | https://dev.to/mbannour/enhancing-java-concurrency-threads-fibers-and-project-loom-in-jdk-21-49if | java, programming, beginners | Regardless of your experience as a Java developer, you've likely encountered threads in your work and have come across terms like **_processor_**, **_core_** and **_fiber_**. However, you might not fully understand the distinctions among them or their specific roles in Java's concurrency model. In this blog, I aim to demystify these concepts, explaining their unique meanings and functions within the framework of application execution. Grasping these differences is crucial for comprehending the fundamentals of concurrent and parallel computing
- **Processor:**
The processor, or Central Processing Unit (CPU), is the primary hardware component in a computer that executes instructions from software. It performs the basic arithmetical, logical, and input/output operations of the system.
- **Core:**
A core is an individual processing unit within a processor. Early CPUs had only one core, meaning they could only execute one sequence of instructions at a time. Modern CPUs, however, can have multiple cores, allowing them to execute multiple instruction sequences simultaneously. Each core can independently run one or more threads, making multi-core processors well-suited for multitasking and parallel processing.
- **Thread**
In Java, a thread represents a single sequence of executed instructions within a program. It is the smallest unit of execution that the Java Virtual Machine (JVM) can schedule and manage. Java allows programs to execute multiple threads concurrently, using the _Thread_ class or the _Runnable_ interface. Threads within the same process share the process's memory and resources but can be executed in parallel across multiple cores of a CPU to improve performance.
- **Fiber**
Fibers are lightweight threads designed to further enhance concurrency. Unlike traditional threads, which are managed by the operating system and have a significant context-switching overhead, fibers are managed by the JVM and are designed to have much lower overhead. This allows applications to spawn thousands or even millions of fibers, enabling massive concurrency. Fibers aim to simplify concurrent programming in Java by providing a model where developers can write code as if it were sequential, but execute it concurrently.
**Example with image to explain more:**

The image provided appears to illustrate a dual-core CPU, with each core running two threads and each core having its own set of caches (L1 instruction cache, L1 data cache, and a shared L2 cache), as well as sharing a common L3 cache that, in turn, interfaces with the main memory. Here's how this relates to Java and CPU processors:
**CPU Cores and Threads in Java:**
In a Java application, when you create threads using the Thread class or by implementing the Runnable interface, the Java Virtual Machine (JVM) maps these threads onto the available CPU cores for execution.
**Core 0 and Core 1:** These represent individual processing units within the CPU. Modern CPUs can have multiple cores, allowing them to execute multiple threads in parallel.
**Thread 0 and Thread 1:** On each core, we see two threads, which could be Java threads. The operating system's scheduler determines which thread runs on which core and when. If a core is executing Thread 0, and it gets switched out for Thread 1, this is a context switch, which is a low-level task handled by the operating system to manage CPU time between multiple tasks.
**Caches (L1, L2, and L3)**: Caches are fast, small memories located close to or inside the CPU that store frequently accessed data to speed up access times. The L1 cache is split into an instruction cache and a data cache for storing instructions and data separately. The L2 cache is usually shared between the L1 caches of different cores (although it can also be core-specific), and the L3 cache is shared across all cores, interfacing between the CPU and the main memory. When a Java thread requires data, it first checks the cache before going to the main memory, which is slower.
Understanding this architecture is crucial for Java developers, especially when building applications that require high levels of concurrency and need to be optimized for performance. The way Java threads utilize CPU cores and caches can significantly affect application throughput, latency, and scalability.
| mbannour |
1,763,542 | Make your app faster - Use Caching 💨⚡️ | Speed is currency. Fast, responsive applications not only retain users but also boost revenue. One... | 0 | 2024-02-17T10:45:23 | https://dev.to/techvision/make-your-app-faster-use-caching-2f6d | systemdesign, database, performance | Speed is currency. Fast, responsive applications not only retain users but also boost revenue. One way to improve performance is through caching. Let's see what caching is and how it enhances performance. We'll also see how caching goes beyond performance and has other benefits.
If you prefer the video version here is the link 😉:
{% embed https://www.youtube.com/watch?v=DbSuaxPxqXo&t=1s %}
## What's caching?
Any interaction between two entities involves computation time and data transport time. To increase the interaction speed, we need to reduce those two factors. In a network interaction, we might want to reduce the transport time, while in the case of an application querying a database, we might want to optimise the query computation.
Caching is a technique that aims to reduce response time by taking a shortcut instead of going through the whole interaction. The idea is to store the result of an expensive computation or the result of a previous interaction and reuse it when the same computation or interaction is needed again.

## Caching everywhere
Let's take your browser as an example. When you visit a website, your browser stores the files that make up the website on your hard drive. The next time you visit the website, your browser will load the files from your hard drive instead of downloading them again from the server.
Caching is not limited to the browser. A CDN server could be sending you a cached version of the website you are visiting. A server could send a cached response instead of generating a new response for each request or querying fresh data from a database. Databases themselves have a cache. They might cache frequent queries and save on lookup time. Caching goes all the way to the hardware. CPUs have a sophisticated caching mechanism that optimises computation time and reduces access to slow hard drive memory.

As you can see, caching happens at different levels and places. No matter when and where it happens, the principle is the same: we want to move the data closer to the consumer and potentially use more performant storage.
## Dealing with stale data
If the data at the source changes, the cached data is considered stale. A common approach to deal with stale data is to set a time to live (TTL) for the cached data. Once data has passed its TTL, it is considered stale, and the next time the data is requested, the cache will fetch it from the source and store it again. Other strategies involve invalidating the cache when the source data changes, either by pushing the change to the cache or by having the cache check the source for changes.
Cache storage is limited in size and requires **cache eviction** when the capacity is reached. Strategies for cache eviction include removing the least recently used, least frequently used, or most expensive data. The chosen strategy depends on the use case and caching implementation.
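As an illustrative sketch (the class name, capacity, and TTL are arbitrary), a tiny in-memory cache combining a TTL with least-recently-used eviction might look like this in Python:

```python
import time
from collections import OrderedDict

class TTLCache:
    def __init__(self, max_size=128, ttl=60.0):
        self.max_size = max_size
        self.ttl = ttl
        self._data = OrderedDict()  # key -> (expires_at, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:  # stale: past its TTL
            del self._data[key]
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return value

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = (time.monotonic() + self.ttl, value)
        if len(self._data) > self.max_size:  # evict least recently used
            self._data.popitem(last=False)

cache = TTLCache(max_size=2, ttl=1.0)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # touch "a" so "b" becomes least recently used
cache.set("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

Real caches (Redis, Memcached, Caffeine, and so on) implement these policies far more efficiently; the point here is just the mechanics of expiry and eviction.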
Caching adds complexity to your system. You should only add caching when you have identified a performance problem and measured that caching will solve it.
Even if performance is not a primary concern, there are other reasons you might want to use caching.
## Hidden benefits
You could have a server that receives a high volume of requests. For each request, your server sends a query to your database. Your server could handle the load, but your database might not. If you have a cache in front of the database, you can reduce the load and improve the overall system performance.
Every query sent to your database requires computation resources. Additionally, if your database is not part of your server, each query may require a network call. Remember that network traffic and computation time are not free. You could make substantial savings on your cloud or infrastructure bill if your server caches query results.
---
Caching is a powerful technique to improve performance. It's not only about reducing response time; it's also about reducing the load on the source and the network. Caching adds complexity to your system, so use it when you have identified a performance issue worth solving.
Caching is a fundamental concept to understand. Understanding caching will help you understand other concepts such as CDN, Load Balancer, [Proxy server](https://www.youtube.com/watch?v=oy1I02V4JKs), etc. | techvision |
1,763,658 | How to de-structure an array returned by url.pathname.split(‘/’) | There is a nice trick to skip the empty string returned by the url.pathname.split(‘/’). “url” is a... | 0 | 2024-02-16T22:36:02 | https://dev.to/ramunarasinga/how-to-de-structure-an-array-returned-by-urlpathnamesplit-18el | javascript, webdev, frontend, nextjs | There is a nice trick to skip the empty string returned by the url.pathname.split(‘/’). “url” is a variable with the following, for example:
```
const url = new URL("https://medium.com/p/bfd60bf42c62/edit");
url;
```
Copy and paste the above code snippet into your browser console.
> Looking to practice and elevate your frontend skills? checkout https://tthroo.com for SaaS project based tutorials.
You will find that it logs the below object:
```
{
hash: "",
host: "medium.com",
hostname: "medium.com",
href: "https://medium.com/p/bfd60bf42c62/edit",
origin: "https://medium.com",
password: "",
pathname: "/p/bfd60bf42c62/edit",
port: "",
protocol: "https:",
search: "",
searchParams: URLSearchParams {size: 0},
username: "",
}
```
Type the below into the console:
```
url.pathname.split("/");
const [var1, var2, var3] = url.pathname.split("/");
console.log(var1, var2, var3);
```
You will see that the first element of the returned/logged array is inevitably an empty string, because the pathname starts with a `/`.
## What is the clean way to skip the empty string when you de-structure it?
Just skip defining the first item, as shown below, and you don't have to worry about the first empty-string element.
```
const [, var2, var3] = url.pathname.split("/");
console.log(var2, var3)
```
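The same trick extends to skipping any number of positions, not just the first one:

```
const url = new URL("https://medium.com/p/bfd60bf42c62/edit");
// pathname is "/p/bfd60bf42c62/edit", so split("/") yields ["", "p", "bfd60bf42c62", "edit"]
const [, , postId, action] = url.pathname.split("/");
console.log(postId, action); // "bfd60bf42c62" "edit"
```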
I picked this from [Line 26: in create-next-app codebase](https://github.com/vercel/next.js/blob/canary/packages/create-next-app/helpers/examples.ts#L26C1-L27C1)
## Conclusion:
Well, you could still declare a variable for the first element when you de-structure, but this hurts code readability: you end up with a variable containing an empty string, and it is unclear whether it will ever be used later.
Looking to practice and elevate your frontend skills? checkout https://tthroo.com for SaaS project based tutorials.
| ramunarasinga |
1,763,687 | Python: Understanding Numbers and Basic Math Operations | Introduction This article delves into the diverse world of numeric operations in Python.... | 26,728 | 2024-02-17T00:10:17 | https://dev.to/tlayach/working-with-numbers-in-python-o8l | python, beginners, programming | # Introduction
This article delves into the diverse world of numeric operations in Python. From understanding the different types of numbers to executing basic arithmetic, and culminating in the art of representing numbers with f-strings, we embark on a journey through Python's numeric landscape.
# Index
- Types of Numbers in Python
- Math Operations in Python
- Number Representations using f-strings
# Types of Numbers in Python
Python encompasses various numeric types, including integers, floating-point numbers, and complex numbers. Integers stand as whole numbers, while floating-point numbers embrace decimals. Complex numbers, written mathematically as `x + yi`, possess both real and imaginary components; in Python the imaginary unit is written with a `j` suffix, as in `3 + 4j`.
## Examples
### Integer
```python
fibonacci_10 = 55
print(f"The 10th Fibonacci number is {fibonacci_10}")
```
```
Output:
The 10th Fibonacci number is 55
```
### Floating-point number
```python
pi = 3.14159
print(f"The value of pi is approximately {pi}")
```
```
Output:
The value of pi is approximately 3.14159
```
### Complex number
```python
z = 3 + 4j
print(f"The absolute value of the complex number {z} is {abs(z)}")
```
```
Output:
The absolute value of the complex number (3+4j) is 5.0
```
# Math Operations in Python
Python presents an array of built-in math operators facilitating basic arithmetic like addition, subtraction, multiplication, division, exponentiation, and modulo operations.
## Examples
### Addition
```python
x = 3
y = 4
x_plus_y = x + y
print(f"The sum of {x} and {y} is {x_plus_y}")
```
```
Output:
The sum of 3 and 4 is 7
```
### Subtraction
```python
x = 3
y = 4
x_minus_y = x - y
print(f"The difference between {x} and {y} is {x_minus_y}")
```
```
Output:
The difference between 3 and 4 is -1
```
### Multiplication
```python
x = 3
y = 4
x_times_y = x * y
print(f"The product of {x} and {y} is {x_times_y}")
```
```
Output:
The product of 3 and 4 is 12
```
### Division
```python
x = 3
y = 4
x_divided_by_y = x / y
print(f"The result of dividing {x} by {y} is {x_divided_by_y}")
```
```
Output:
The result of dividing 3 by 4 is 0.75
```
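Python also offers floor division with `//`, which discards the fractional part and rounds toward negative infinity, which is worth knowing alongside `/`:

```python
x = 7
y = 2
x_floor_div_y = x // y
print(f"{x} floor-divided by {y} is {x_floor_div_y}")

# Note the rounding direction for negative results:
print(f"-7 // 2 is {-7 // 2}")
```

```
Output:
7 floor-divided by 2 is 3
-7 // 2 is -4
```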
### Power
```python
x = 3
y = 4
x_to_the_power_of_y = x ** y
print(f"{x} raised to the power of {y} is {x_to_the_power_of_y}")
```
```
Output:
3 raised to the power of 4 is 81
```
### Modulo
```python
x = 7
y = 3
x_modulo_y = x % y
print(f"The remainder when {x} is divided by {y} is {x_modulo_y}")
```
```
Output:
The remainder when 7 is divided by 3 is 1
```
## Bitwise Operators
Bitwise operators are also fundamental in Python, allowing manipulation of individual bits within integers. Here's a simple example:
### Bitwise AND
```python
a = 5 # 101 in binary
b = 3 # 011 in binary
result_and = a & b # 001 (1 in decimal)
print(f"The result of bitwise AND between {a} and {b} is {result_and}")
```
Output:
```
The result of bitwise AND between 5 and 3 is 1
```
### Bitwise OR
```python
a = 5 # 101 in binary
b = 3 # 011 in binary
result_or = a | b # 111 (7 in decimal)
print(f"The result of bitwise OR between {a} and {b} is {result_or}")
```
Output:
```
The result of bitwise OR between 5 and 3 is 7
```
### Bitwise XOR
```python
a = 5 # 101 in binary
b = 3 # 011 in binary
result_xor = a ^ b # 110 (6 in decimal)
print(f"The result of bitwise XOR between {a} and {b} is {result_xor}")
```
Output:
```
The result of bitwise XOR between 5 and 3 is 6
```
### Bitwise NOT
```python
a = 5 # 101 in binary
result_not_a = ~a # -6 (in decimal)
print(f"The result of bitwise NOT on {a} is {result_not_a}")
```
Output:
```
The result of bitwise NOT on 5 is -6
```
### Left Shift
```python
a = 5 # 101 in binary
result_left_shift = a << 1 # 1010 (10 in decimal)
print(f"The result of left shifting {a} by 1 is {result_left_shift}")
```
Output:
```
The result of left shifting 5 by 1 is 10
```
### Right Shift
```python
a = 5 # 101 in binary
result_right_shift = a >> 1 # 10 (2 in decimal)
print(f"The result of right shifting {a} by 1 is {result_right_shift}")
```
Output:
```
The result of right shifting 5 by 1 is 2
```
# Number Representations using f-strings
Introduced in Python 3.6, `f-strings` offer a succinct means to format strings. They allow embedding expressions within string literals via curly braces `{}`. `f-strings` prove invaluable in representing numbers diversely, be it specifying decimal places, employing various separators, or enhancing readability with underscores.
## Examples
### Two decimal places
```python
pi = 3.14159
print(f"The value of pi is approximately {pi:.2f}")
```
```
Output:
The value of pi is approximately 3.14
```
### Different separator
```python
large_number = 1234567890
print(f"The large number is {large_number:,}")
```
```
Output:
The large number is 1,234,567,890
```
### Underscores for readability
```python
large_number = 1234567890
print(f"The large number is {large_number:_}")
```
```
Output:
The large number is 1_234_567_890
```
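These format-spec pieces compose, so a separator and a precision can be combined in a single placeholder:

```python
revenue = 1234567.891
print(f"Revenue: {revenue:,.2f}")
```

```
Output:
Revenue: 1,234,567.89
```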
# Conclusion
In conclusion, Python equips practitioners with a robust suite of tools for numerical manipulation, arithmetic computation, and expressive number representation. By harnessing its built-in numeric types, math operators, and the prowess of f-strings, Python emerges as an unparalleled choice for tackling numerical challenges with clarity and efficiency. | tlayach |
1,763,697 | Building and running javascriptlets for advanced data acquisition. | In an effort to utilize the full potential of an Android device, I decided to make a short scriptlet... | 0 | 2024-02-17T01:11:49 | https://dev.to/cconley-dev/building-and-running-javascriptlets-for-advanced-data-acquisition-181n | building, and, running, javascri | In an effort to utilize the full potential of an Android device, I decided to make a short scriptlet for web scraping. Particularly for finding specific file types like PDFs, EPUBs, or JPGs, the combination of Javascriptlets, Termux, and enhanced browser functionalities offers a compelling solution. This detailed guide walks through setting up the necessary tools and crafting scripts to automate the search for these file types directly from an Android device, illustrating the process with practical examples.
### Initial Setup with Termux
Termux is the backbone of this operation, providing a powerful Linux environment on Android. After installing Termux from the Google Play Store, or F-droid if needed, the following commands will prepare the environment for scripting:
```bash
pkg update && pkg upgrade
pkg install python
pkg install git
```
These steps ensure that the Termux environment is ready for advanced operations, including web scraping tasks.
### Enhancing Capabilities with Browser Extensions
To augment the web scraping process, installing browser extensions on a compatible browser like Kiwi or Fenix (Firefox) can significantly streamline operations. Adding an extension like Tampermonkey or Mobile Dev Tools enables the user to manage and execute Javascriptlets with ease, facilitating the automation of web tasks directly from the browser.
### Crafting Javascriptlets for File Search
Javascriptlets can be designed to initiate searches for specific file types across the web. Here's a concise script aimed at finding PDFs using Google's `filetype:` search operator:
```javascript
javascript:(function() {
  var query = encodeURIComponent('filetype:pdf');
  var url = `https://www.google.com/search?q=${query}`;
  window.open(url);
})();
```
Adapting this script to search for EPUBs or JPGs is as straightforward as changing `filetype:pdf` to `filetype:epub` or `filetype:jpg` in the script.
### Advanced Web Scraping with Termux
For more nuanced scraping tasks, such as parsing search results to extract specific URLs or directly downloading files, Python scripts executed within Termux are exceptionally useful. Tools such as Beautiful Soup can parse HTML content to find and list downloadable links. Here's an example script that searches for downloadable PDF links on a webpage:
```python
import requests
from bs4 import BeautifulSoup
def find_downloads(url):
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    links = soup.find_all('a', href=True)
    for link in links:
        if link['href'].endswith('.pdf'):
            print(link['href'])

if __name__ == "__main__":
    target_url = 'https://example.com'
    find_downloads(target_url)
```
This script could be easily modified to search for `.epub` or `.jpg` files by replacing `.endswith('.pdf')` with the desired file extension in the script.
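As a sketch of that modification (the helper below is my own variation, not part of the original script; it parses an in-memory HTML string so it runs without network access), the extension can simply become a parameter:

```python
from bs4 import BeautifulSoup

def find_links(html, extension=".pdf"):
    # Collect hrefs whose (lowercased) path ends with the requested extension
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].lower().endswith(extension)]

page = '<a href="report.pdf">r</a> <a href="book.epub">b</a> <a href="photo.jpg">p</a>'
print(find_links(page, ".epub"))  # ['book.epub']
print(find_links(page, ".jpg"))   # ['photo.jpg']
```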
### Automating and Scheduling with Termux
To automate the execution of scripts for recurring data collection, Termux supports scheduling through cron jobs. This functionality allows scripts to run at specified intervals, ensuring continuous data collection without manual intervention:
```bash
echo "0 * * * * python /path/to/find_downloads.py" | crontab -
```
This command sets the `find_downloads.py` script to run hourly, demonstrating Termux's capability to automate web scraping tasks. (In Termux, cron support typically requires installing the `cronie` package and starting the `crond` daemon first.)
### Conclusion
Leveraging the capabilities of Javascriptlets for initiating web searches, coupled with the power of Termux for advanced scripting and scheduling, users can effectively automate the search and collection of specific file types like PDFs, EPUBs, and JPGs on their Android devices. This approach not only makes targeted data collection more accessible but also significantly expands the scope of projects that can be undertaken directly from a mobile device, showcasing the practical and versatile applications of these tools for sophisticated web scraping tasks. Use your own creativity to develop other use cases. Keep in mind that you should always consider a site's usage rules and legal processes. | cconley-dev |
1,763,709 | Building for Mobile — Issue #1 | Crafting native apps for mobile devices comes with distinct advantages and a plethora of challenges. Throughout this series, drawing from my experience in mobile app development, I aim to shed light on effective strategies for overcoming these challenges and taking advantage of said benefits. | 0 | 2024-02-17T17:02:59 | https://dev.to/ivanbila/building-for-mobile-issue-1-1lfc | mobile | ---
title: Building for Mobile — Issue #1
published: true
description: Crafting native apps for mobile devices comes with distinct advantages and a plethora of challenges. Throughout this series, drawing from my experience in mobile app development, I aim to shed light on effective strategies for overcoming these challenges and taking advantage of said benefits.
tags: mobile
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-02-17 02:15 +0000
---
Crafting native apps for mobile devices comes with distinct advantages and a myriad of challenges. Throughout this series, drawing from my extensive experience in mobile app development, I aim to shed light on effective strategies for overcoming these challenges and unlocking the full spectrum of benefits.
### (En)force mobile app updates
In contrast to web applications, which rely on cache policies or seamless deployment for updates, mobile apps require manual updates (with the exception of certain frameworks like React Native that support over-the-air updates). This means that when an update becomes available, users must initiate the update process themselves. Unfortunately, this approach places the responsibility on users, leading to challenges in promptly addressing bugs and deploying new features for the majority of users.
Implementing a policy of enforcing app updates involves coding or utilizing existing libraries to verify whether the user’s current app version falls within the supported or non-deprecated threshold. If the version is outdated, users are restricted from accessing the app until they have completed the necessary update. This strategy ensures a more uniform and timely distribution of bug fixes and feature enhancements across the user base.
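A minimal, platform-agnostic sketch of such a version gate (the dotted version format and the minimum-version values are assumptions; real apps typically fetch the minimum supported version from a remote config service or use a platform update SDK):

```java
public class VersionGate {
    // Returns true when the installed version is older than the minimum supported one.
    static boolean updateRequired(String installed, String minSupported) {
        String[] a = installed.split("\\.");
        String[] b = minSupported.split("\\.");
        int n = Math.max(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int x = i < a.length ? Integer.parseInt(a[i]) : 0;
            int y = i < b.length ? Integer.parseInt(b[i]) : 0;
            if (x != y) return x < y;
        }
        return false; // equal versions: no update needed
    }

    public static void main(String[] args) {
        System.out.println(updateRequired("2.3.1", "2.4.0")); // true -> block access, prompt update
        System.out.println(updateRequired("2.4.0", "2.4.0")); // false -> let the user in
    }
}
```

When `updateRequired` returns true, the app would show a blocking screen that links to the store listing instead of its normal entry point.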

### Extend timeouts and add retrials
Networks can be unpredictable, unreliable or non-compliant with API response SLAs. To mitigate these issues, extending the connect, read, and write timeouts is a crucial step. By allowing more time for these operations, the system gains flexibility in dealing with variations in network behavior.
In addition to adjusting timeouts, incorporating retries after failed requests is an essential tool in the strategy to tackle network-related challenges. Retrying failed requests provides an opportunity for the system to recover from temporary disruptions and improves the overall robustness of the application in the face of network uncertainties.
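A sketch of such a retry loop with exponential backoff (the attempt count and delays are arbitrary; HTTP clients such as OkHttp additionally let you configure connect, read, and write timeouts separately):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Retries the call up to maxAttempts times, doubling the delay after each failure.
    static <T> T withRetries(Callable<T> call, int maxAttempts, long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff
                }
            }
        }
        throw last; // all attempts failed: surface the last error
    }

    public static void main(String[] args) throws Exception {
        final int[] failures = {2}; // simulate two transient failures before success
        String result = withRetries(() -> {
            if (failures[0]-- > 0) throw new RuntimeException("transient network error");
            return "ok";
        }, 4, 10);
        System.out.println(result); // ok
    }
}
```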
### Always Verify internet access
To perform an HTTP request, we need internet or intranet access. Verifying whether the user actually has internet access, and informing them accordingly, helps both with debugging and with explaining to the user why an operation was not performed. Note that lacking internet does not always mean lacking a functioning network; sometimes it means the DNS server is not working. So when testing for internet access, do not rely only on the connectivity API: it only tells you whether the user is connected to a network, not whether that network can reach the internet. To verify properly, attempt an actual HTTP request or a DNS lookup; these confirm whether the user truly has internet access.
### Use serif fonts for password fields
This is where the capital I and the lowercase l come into play: depending on the font, they are barely distinguishable, along with other look-alike characters and variations. In my experience, serif fonts create a noticeable visual difference between them, aiding legibility in password fields.

| ivanbila |
1,763,748 | welcome | A post by Ahmed..🔻 | 0 | 2024-02-17T02:27:51 | https://dev.to/ahle567880ahmed/welcome-5f58 | webdev, javascript | ahle567880ahmed |