1,393,792
Building a minimal web dev server with Deno
Writing pure client-side apps is good in theory but you still need a local server as file-based urls...
25,350
2023-03-14T05:32:16
https://dev.to/ndesmic/building-a-minimal-web-dev-server-with-deno-4gab
deno, vanillajs, webdev
Writing pure client-side apps is good in theory, but you still need a local server as file-based URLs are heavily restricted. The easiest way to deal with this is to use a file server. There are a number of perfectly fine options. I like the `http-server` module in Node, but Deno also has a perfectly acceptable option (https://deno.land/manual@v1.31.1/examples/file_server#using-the-stdhttp-file-server), and there are plenty of other options in Python etc. These all have a simple task: take files on your hard drive and serve them on a web server so you can access them with URLs. Since there are so many good options, if all you need is files we don't even have to bother doing this ourselves, but sometimes web development requires a bit more, so that's what I want to build. Something simple to start, but maybe something to expand upon later to demonstrate various capabilities. The server that we'll be making can serve files, but you can also write file-based handlers. I'm choosing Deno here because it has a fast server that adheres to web APIs and some helpful things in the standard library, so we don't need to go find rando packages with a zillion dependencies.

## Just getting a response

So to start, let's just make a hello world app server.

```js
Deno.serve(req => {
	return new Response("Hello World!", {
		headers: { "content-type": "text/plain" }
	});
});
```

As of this writing you'll need the `--unstable` flag as the native HTTP server isn't 100% stable, but I also don't expect it to change much. This should be straightforward: we have a single handler that takes a request and returns a response.

## Reading a file

Next let's read in a file (note the handler is now `async` since we `await` the file open):

```js
Deno.serve(async req => {
	const file = await Deno.open("./index.html");
	return new Response(file.readable, {
		headers: { "content-type": "text/plain" }
	});
});
```

You can put whatever you want in `index.html`. Note that we don't use `Deno.readTextFile`; instead we open the file and stream it to the response, which is better for performance.

## Some basic routing

We can add some basic URL routing:

```js
Deno.serve(async req => {
	const file = await Deno.open("." + new URL(req.url).pathname);
	return new Response(file.readable, {
		headers: { "content-type": "text/plain" }
	});
});
```

This maps the URL path to a file path relative to the current directory. This isn't great though; aside from possible security issues, web URLs aren't well formed with extensions. For instance, what does the root `localhost:9000/` point to? The convention since the dawn of time has been to use `index.html` in this place. Otherwise the lookup just fails and we return a 404. But how do we know when to use `index`? It's when the path ends with `/`.

## Index files

We can add support for that too.

```js
import { typeByExtension } from "https://deno.land/std/media_types/mod.ts";
import { extname } from "https://deno.land/std/path/mod.ts";

Deno.serve(async req => {
	let path = "." + new URL(req.url).pathname;
	if (path.endsWith("/")) {
		path += "index.html";
	}
	let file;
	try {
		file = await Deno.open(path);
	} catch (ex) {
		if (ex.code === "ENOENT") {
			return new Response("Not Found", { status: 404 });
		}
		return new Response("Internal Server Error", { status: 500 });
	}
	return new Response(file.readable, {
		headers: { "content-type": typeByExtension(extname(path)) }
	});
});
```

If the path ends in `/` then we add `index.html` to it. This works for nested paths too. If there's no file that matches the path we serve a 404 error (if there's some other problem reading it, it's a 500). Note that `file` is declared outside the `try` block so it's still in scope for the response. We also update the content mime-type based on the extension so the browser knows what to do with it.

## Handlers

Now what about things that aren't flat files? What if we want handlers? Here it gets more complicated. What I want to do is have certain JS files act as handlers themselves. This poses a problem since we **also** want to pass back static JS to the client. So I propose an extra extension to differentiate the two: `.js` for client JS, and `.server.js` for JS files that execute server-side (most frameworks get around this by segmenting static assets and handlers by folder, which I feel is odd because from the URL perspective these should all be in the same path and there's no reason they can't live together). I also want `index.server.js` as a possibility in case `index.html` is not found. I also want extensionless file paths to resolve to either `{path}.html` or `{path}.server.js`. To do this I will use a function that checks a list of paths and finds the first one that exists.

```js
//utils/fs-utils.js
export async function probeStat(filepaths) {
	for (const filepath of filepaths) {
		try {
			const fileInfo = await Deno.stat(filepath);
			return [fileInfo, filepath];
		} catch (ex) {
			if (ex.code === "ENOENT") continue;
		}
	}
	return null;
}
```

This will stat (get file metadata) the path; if it exists then we use it, otherwise we try the next one. If all fail then this returns null. The return type is a little strange: it's a tuple of `fileInfo` and `filepath`. This is because the `fileInfo` does not contain the path, but we want to know which path ultimately matched. One last thing I want to do is constrain the path to a folder so the user can't do things like read our source code. We'll set a base path `./routes` where all the files are.

```js
import { typeByExtension } from "https://deno.land/std/media_types/mod.ts";
import { extname } from "https://deno.land/std/path/mod.ts";
import { probeStat } from "./utils/fs-utils.js";

const baseDir = "./routes";

Deno.serve(async req => {
	const url = new URL(req.url);
	let inputPath = url.pathname;
	const filePaths = [];

	//normalize path
	if (inputPath.endsWith("/")) {
		inputPath += "index";
	}
	if (!inputPath.includes(".")) {
		filePaths.push(...[".html", ".server.js"].map(ext => baseDir + inputPath + ext));
	} else {
		const path = baseDir + inputPath;
		filePaths.push(path);
	}

	//find
	const fileMatch = await probeStat(filePaths);
	if (!fileMatch) {
		return new Response("Not Found", { status: 404 });
	}

	//read or execute
	const ext = extname(fileMatch[1]);
	switch (ext) {
		case ".js": {
			if (fileMatch[1].endsWith(".server.js")) {
				const mod = await import(fileMatch[1]);
				return await mod.default(req);
			}
		}
		// falls through
		default: {
			const file = await Deno.open(fileMatch[1]);
			return new Response(file.readable, {
				headers: { "Content-Type": typeByExtension(ext) }
			});
		}
	}
});
```

All we have to do is use a dynamic import to get the matching module. The handler will use the `default` export from the module. This will work for nested routes too! One thing to note is that the user can access a handler both with the extensionless route (e.g. `/api`) and the explicit route (e.g. `/api.server.js`). So far I haven't found any reason to care if they use the explicit route, so I didn't disable it.

## HTTP verbs

Another thing we can add is the ability to handle HTTP verbs other than `GET`. We can do this by exporting functions named after the verb.

```js
case ".js": {
	if (fileMatch[1].endsWith(".server.js")) {
		const mod = await import(fileMatch[1]);
		if (req.method === "GET") {
			return mod.get?.(req)
				?? mod.default?.(req)
				?? new Response("Method not allowed", { status: 405 });
		} else if (req.method === "DELETE") {
			return mod.del?.(req)
				?? new Response("Method not allowed", { status: 405 });
		} else {
			return mod[req.method.toLowerCase()]?.(req)
				?? new Response("Method not allowed", { status: 405 });
		}
	}
}
// falls through
```

This will match the verb with the appropriate method, e.g.:

```js
//handler.server.js
export function get(req) { /* ... */ }
export function post(req) { /* ... */ }
export function put(req) { /* ... */ }
export function del(req) { /* ... */ }
```

Some things to note. If the module exports `get` we try that first for GET before moving to `default`. In the case of `DELETE` we can't use `delete` as a function name because it conflicts with the JavaScript keyword; instead we call it `del`, so we need a special case. Otherwise we try the lower-cased method name. If it doesn't exist we send back a 405 error saying we don't support that method. I'm not worrying about which ones take request bodies or anything like that.

## TS/JSX/TSX

Since we're using Deno this becomes really easy: it's all built in. At this point JSX/TSX probably isn't super useful to us, but lots of people enjoy TypeScript, and if we get it for free then why not? We need to expand our search-path options when we have a bare file path:

```js
if (!inputPath.includes(".")) {
	filePaths.push(...[".html", ".server.js", ".server.ts", ".server.jsx", ".server.tsx"]
		.map(ext => baseDir + inputPath + ext));
}
```

Then if we find one of the server files we import it like usual.

```js
switch (ext) {
	case ".js":
	case ".ts":
	case ".tsx":
	case ".jsx": {
		if (/\.server\.(js|ts|jsx|tsx)$/.test(fileMatch[1])) {
			const mod = await import(fileMatch[1]);
			if (req.method === "GET") {
				return mod.get?.(req)
					?? mod.default?.(req)
					?? new Response("Method not allowed", { status: 405 });
			} else if (req.method === "DELETE") {
				return mod.del?.(req)
					?? new Response("Method not allowed", { status: 405 });
			} else {
				return mod[req.method.toLowerCase()]?.(req)
					?? new Response("Method not allowed", { status: 405 });
			}
		}
	}
	// falls through
	//...etc
}
```

I'm not especially happy with how this looks because the extensions are maintained in three places, but this list is unlikely to ever change unless we create another popular flavor of JS (please no), so it's fine for now. With that I think we have a pretty decent start to a dev server that can handle files and APIs. There are definitely some more fun features we could add, and maybe we'll take a look at those next time. You can find the code for this post here: https://github.com/ndesmic/dev-server/tree/v1
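The path-normalization rules from the post (a trailing `/` maps to `index`, bare extensionless paths fan out to a list of candidate extensions) can also be pulled out as a pure function. A sketch, where the helper name `candidatePaths` is mine and not from the post:

```javascript
// Hypothetical helper: turn a URL pathname into the ordered list of
// candidate file paths the server would probe, per the rules above.
function candidatePaths(pathname, baseDir = "./routes") {
	let inputPath = pathname;
	if (inputPath.endsWith("/")) inputPath += "index"; // "/" -> "/index"
	if (!inputPath.includes(".")) {
		// bare path: try .html first, then the server handler extensions
		return [".html", ".server.js", ".server.ts", ".server.jsx", ".server.tsx"]
			.map(ext => baseDir + inputPath + ext);
	}
	return [baseDir + inputPath]; // explicit extension: exactly one candidate
}

console.log(candidatePaths("/"));        // first entry is "./routes/index.html"
console.log(candidatePaths("/app.css")); // ["./routes/app.css"]
```

Keeping this logic in one pure function makes it trivial to unit test without spinning up the server.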
ndesmic
1,393,802
Recursion to simplify test assertions
At Woovi we use MongoDB as our primary database. For our integration tests, we use an in memory...
0
2023-03-10T11:04:54
https://dev.to/woovi/recursion-to-simplify-test-assertions-3800
recursion, testing
At Woovi we use MongoDB as our primary database. For our integration tests, we use an in-memory database to ensure our queries and aggregates are working correctly. It is madness to mock a database: it generates a lot of false positives, and it won't give you confidence to upgrade.

## How to assert `ObjectId`?

ObjectIds are small, likely unique, fast to generate, and ordered. `ObjectId` is the primary key of each MongoDB document. A common test scenario is to validate whether a given `ObjectId` is equal to another `ObjectId`. In a naive approach, you use an assert like this:

```jsx
expect(objectIdA).toBe(objectIdB);
```

This does not work because an `ObjectId` is an object in JavaScript, and even if two have the same value, they will be in different places in memory. The way to solve this is to convert each `ObjectId` to a string using the toString() method, and then make the assertion:

```jsx
expect(objectIdA.toString()).toBe(objectIdB.toString());
```

What if you want to assert 2 complex objects, like this one?

```jsx
const obj = {
  _id: new ObjectId('5c9b1b9b9b9b9b9b9b9b9b9b'),
  name: 'test',
  myarr: [
    new ObjectId('5c9b1b9b9b9b9b9b9b9b9b9b'),
    new ObjectId('5c9b1b9b9b9b9b9b9b9b9b9b'),
    new ObjectId('5c9b1b9b9b9b9b9b9b9b9b9b'),
  ],
  my: {
    nested: {
      field: new ObjectId('5c9b1b9b9b9b9b9b9b9b9b9b'),
    },
  },
};
```

## Recursion to the rescue

We want to convert all ObjectId values to strings; the most elegant way of doing this is using recursion:

```jsx
export const mongooseObjectIdToString = (data: any) => {
  // no value, return value
  if (!data) {
    return data;
  }

  // traverse the array
  if (Array.isArray(data)) {
    return data.map(d => mongooseObjectIdToString(d));
  }

  // transform ObjectId to string
  if (
    ObjectId.isValid(data) &&
    data.toString().indexOf(data) !== -1
  ) {
    return data.toString();
  }

  // traverse nested object
  if (typeof data === 'object' && !Array.isArray(data)) {
    return Object.keys(data).reduce((prev, curr) => ({
      ...prev,
      [curr]: mongooseObjectIdToString(data[curr]),
    }), {});
  }

  return data;
};
```

`mongooseObjectIdToString` will call itself when it finds an array or a nested object; otherwise it converts the `ObjectId` to a string, or just returns any other data type. The assert would then be:

```jsx
expect(mongooseObjectIdToString(obj)).toEqual(mongooseObjectIdToString(anotherObj));
```

## In Short

Recursion is not only useful for solving Fibonacci; it can also be applied to real problems, like traversing a complex object and applying some transformation. This approach can be generalized to apply any transformation to any complex object. Comment with your generalized algorithm, and where you are going to use it.

---

[Woovi](https://www.woovi.com) is a startup that enables shoppers to pay as they like. To make this possible, Woovi provides instant payment solutions for merchants to accept orders. If you want to work with us, we are [hiring](https://woovi.com/jobs/)!
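As one take on the generalization the "In Short" section invites, here is a deep-map helper that applies an arbitrary transform to every leaf value. The names are mine, not from the article:

```javascript
// Hypothetical generalization: apply `fn` to every non-object leaf of `data`,
// recursing through arrays and plain objects.
function deepTransform(data, fn) {
	if (Array.isArray(data)) {
		return data.map(d => deepTransform(d, fn));
	}
	if (data !== null && typeof data === "object") {
		return Object.fromEntries(
			Object.entries(data).map(([k, v]) => [k, deepTransform(v, fn)])
		);
	}
	return fn(data); // leaf: apply the transform
}

// Example: uppercase every string leaf, leave everything else alone
const out = deepTransform(
	{ name: "test", tags: ["a", "b"], nested: { x: 1 } },
	v => (typeof v === "string" ? v.toUpperCase() : v)
);
// out = { name: "TEST", tags: ["A", "B"], nested: { x: 1 } }
```

The ObjectId converter above is then just `deepTransform(obj, v => ObjectId.isValid(v) ? v.toString() : v)`.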
sibelius
1,394,054
Props in React JS: A Comprehensive Guide to Passing Data Between Components
Props in React JS Props (short for "properties") are a mechanism for passing data from a...
0
2023-03-09T06:53:39
https://dev.to/sidramaqbool/props-in-react-js-a-comprehensive-guide-to-passing-data-between-components-34do
react, javascript, webdev, beginners
## Props in React JS

Props (short for "properties") are a mechanism for passing data from a parent component to a child component in React JS. Props are an essential part of building reusable and modular components, and they allow you to create complex UIs with ease. In this post, we'll discuss what props are, how to use them, and some best practices for working with props in React JS.

## What are Props?

Props are a way of passing data from a parent component to a child component in React JS. When a parent component renders a child component, it can pass data to that child component via props. Props are read-only in the child component, which means that the child component cannot modify the props it receives from its parent. This makes it easy to reason about the behavior of a component, as you can be sure that the component will always behave the same way when given the same set of props.

## How to Use Props

Using props in React JS is simple. To pass data from a parent component to a child component, you simply add an attribute to the child component's JSX tag with the name of the prop you want to pass and the value you want to pass. Here's an example:

```
function ParentComponent() {
  const greeting = "Hello, World!";
  return (
    <ChildComponent message={greeting} />
  );
}

function ChildComponent(props) {
  return (
    <p>{props.message}</p>
  );
}
```

In this example, we have a parent component that passes a message prop to a child component. The child component receives the prop via its props argument and renders the value of the message prop to the DOM.

## Best Practices for Working with Props

Here are some best practices for working with props in React JS:

**Always define propTypes for your components.** Defining propTypes for your components is a good practice, as it helps catch errors and ensures that your components are used correctly. PropTypes allow you to specify the type of data that a prop should be, and React will log a warning in development if the prop is of the wrong type. Here's an example:

```
import PropTypes from 'prop-types';

function ChildComponent(props) {
  return (
    <p>{props.message}</p>
  );
}

ChildComponent.propTypes = {
  message: PropTypes.string.isRequired
};
```

In this example, we are using the PropTypes library to specify that the message prop should be a string and that it is required.

**Use default props to provide fallback values.** Using default props is a good practice, as it provides a fallback value for props that are not provided by the parent component. This helps prevent errors and ensures that your component behaves predictably. Here's an example:

```
function ChildComponent(props) {
  return (
    <p>{props.message}</p>
  );
}

ChildComponent.defaultProps = {
  message: "Default Message"
};
```

In this example, we are using defaultProps to specify a default value for the message prop. If the parent component does not provide a value for the message prop, the default value will be used instead.

**Avoid modifying props directly in the child component.** Modifying props directly in the child component can lead to unpredictable behavior and should be avoided. Instead, you should treat props as read-only and use them to render the component's UI.

**Use destructuring to access props.** Using destructuring to access props is a good practice, as it makes your code easier to read and understand. Destructuring allows you to extract values from an object and assign them to variables in a single statement.

**Use spread syntax to pass props.** Using spread syntax to pass props is a good practice, as it allows you to pass all the props of an object to a component with a single line of code. This makes your code more concise and easier to read. Here's an example:

```
function ParentComponent() {
  const props = {
    message: "Hello, World!",
    color: "blue"
  };
  return (
    <ChildComponent {...props} />
  );
}

function ChildComponent(props) {
  return (
    <p style={{ color: props.color }}>{props.message}</p>
  );
}
```

In this example, we are using spread syntax to pass all the props of the props object to the ChildComponent. The ChildComponent receives the props via its props argument and uses them to render the component's UI.

**Conclusion**

Props are an essential part of building reusable and modular components in React JS. By passing data from a parent component to a child component via props, you can create complex UIs with ease. When working with props, it's important to follow best practices like defining propTypes, using default props, avoiding direct modifications, and using destructuring and spread syntax. With these best practices in mind, you can build robust and maintainable components that are easy to reason about and use.

Thanks for reading! I hope you found this post informative and helpful. If you have any questions or feedback, please feel free to leave a comment below!
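As a footnote, the destructuring tip above is the one practice described without a snippet. A minimal sketch of my own, written as a plain function that returns markup as a string so it runs without a React runtime:

```javascript
// Destructure `message` and `color` in the parameter list instead of
// repeating `props.` everywhere; `color` also gets a default value.
function childComponent({ message, color = "blue" }) {
	return `<p style="color:${color}">${message}</p>`;
}

console.log(childComponent({ message: "Hello, World!" }));
// <p style="color:blue">Hello, World!</p>
```

In JSX the shape is identical: `function ChildComponent({ message, color = "blue" }) { ... }`.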
sidramaqbool
1,394,056
React Query - The what, how & when
React Query is a data-fetching library that helps with fetching, caching, synchronising, and updating...
0
2023-03-09T07:23:04
https://dev.to/wednesdaysol/react-query-the-what-how-when-35i7
[React Query](https://react-query-v3.tanstack.com/) is a data-fetching library that helps with **fetching, caching, synchronising**, and updating the server state in your React application. Before we go into the details of React Query, it's essential to understand the server state.

Note: This article requires an intermediate understanding of building client-side applications with React.

Most developers with experience in React understand what state is. To understand server state, let's list the differences between client and server state.

**Client State**
- Data is stored in the browser.
- The user can change the data locally.
- Data lives in memory and is lost on reloads.

**Server State**
- Data is stored on a remote server.
- Changing data requires remote server access or APIs.
- Data persists in a remote database.

React Query only helps manage the server state.

## What can React Query help with?

**1. Caching**

React Query sits between your server and the client, intercepting every query. Each query is tied to a unique key, called the query key, in the React Query store. When you refetch data for the same query key, it immediately returns the cached data.

**2. Identifying and updating stale data in the background**

When data is fetched from the server, it is compared to the cached data. If both are the same, a re-render isn't forced. React Query also keeps the cache up to date by making fetch requests in the background to keep data in sync.

**3. Memory management and garbage collection**

React Query has a garbage collector for managing the cache in the browser. If the data in the cache is not consumed, it gets deleted after a timeout. This timeout can be configured locally for each query or globally. This helps retain only the most relevant information while clearing the rest from the cache.

**4. Deduping multiple requests for the same data**

If multiple requests for the same query key are made close to each other, only the first one is sent over the wire. All the other requests are resolved with the data from the first one. This saves bandwidth and improves the user's experience.

**5. Performance optimisations**

React Query has out-of-the-box support for render optimisations, pagination, lazy loading, etc., to improve your application's performance.

## When should you use React Query?

React Query is a tool; to get its maximum benefit, it's essential to know when to use it. Let's understand this with two examples:

1. A social media application: users can view timelines/posts and chat with friends. The usual.
2. A music production application: an offline-first application where users can create music with different instruments.

React Query would be better suited to the social media application. That application would naturally integrate with several APIs, thus requiring server-state management. It would also have its own client state, so a library like React Query allows you to manage the server state well.

React Query has become popular because most applications have some form of server component. As the number of APIs increases, it makes more sense to use this library. Here are a few pointers on why React Query is a worthy choice:

- **Simplifies data fetching** - Erases a lot of boilerplate and makes your code easy to read.
- **Provides tools to improve the reliability of server data** - Invalidates and marks data as stale.
- **Provides tools to improve user experience** - Prefetching, re-fetching, caching, and more.
- **Performance optimisations** - Pagination and lazy loading.
- **Request retries** - Ability to retry in case of errors.
- **Window focus refetching** - Refetching based on application tab activity.

## Does React Query replace Redux, MobX or other global state management libraries?

TL;DR: no.

Redux, MobX, and the other popular libraries are great for client-state management. React Query complements these libraries and helps create a separation of concerns. Most client-side applications deal with server state, and they typically perform these steps:

- Fetch data from the server.
- Store the server data in the client store.
- Provide a medium for components to communicate with and access the data in the store.

React Query abstracts the above functions away, leaving only the actual client state to be stored using client-state management libraries. This is why you're left with very little state to maintain when you migrate to React Query.

## Pitfalls when working with React Query

**1. Large Bundle Size**

React Query has an impact on application size because of all the features that come with it. A large bundle size could impact your performance, causing delays during load, render, and user interaction. For context, according to [BundlePhobia](https://bundlephobia.com/), React Query is about 3 times larger than one of its competitors, [SWR](https://swr.vercel.app/). The idea here is not to scare you away because of its bundle size, but to help you check whether it's a perfect fit for your application.

![React Query bundle size as reported by BundlePhobia.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7umtgeohvcnia0cd3kc5.png)
_React Query bundle size as reported by BundlePhobia._

![SWR bundle size as reported by BundlePhobia.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yatohih8109hfi9f8uss.png)
_SWR bundle size as reported by BundlePhobia._

**2. Identifying Query Keys to Invalidate**

React Query provides a **useMutation** hook to update data on your server. If the update to the server goes through, it provides callback functions called **onSuccess** and **onMutate** to update the cache in the React Query store.
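As a sketch of that flow (React Query v3 API; the names `updateUser` and `userKey` are my own illustrations, not from the article), the `onSuccess` callback is typically where the cache is invalidated:

```javascript
// Hypothetical sketch: invalidate the cached query after a successful
// mutation. Inside a component it would look like this:
//
// const queryClient = useQueryClient();
// const mutation = useMutation(updateUser, {
//   onSuccess: () => {
//     // Must match the key used by useQuery exactly, order included:
//     queryClient.invalidateQueries(userKey(userId));
//   },
// });
//
// Keeping key construction in one plain helper prevents the fetch key and
// the invalidation key from drifting apart:
function userKey(id) {
	return ["user", { id }];
}

console.log(JSON.stringify(userKey(42))); // ["user",{"id":42}]
```

Centralizing key construction like this is one way to reduce the manual bookkeeping described below.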
The pitfall here is that for every update made to the server, a manual step is required to identify and invalidate the query keys. Let's understand why this gets tricky:

- Identifying all the query keys in the React Query store related to the update that went through is difficult. There is a high chance you could miss some of these keys in large applications. It is error-prone, and a good understanding of the platform is required to avoid this pitfall.
- The query keys are usually invalidated with the help of an invalidateQueries call. The query keys that you pass into the invalidateQueries call should match the query key that was initially set. If your query key is an array, the order in which the parameters are passed in the query key should also be the same.

**3. Identifying Query Keys to Cache**

React Query requires appropriate query keys for caching to work as expected. Setting them can get tricky at times. To understand better, let's consider an example. Say you need to fetch the count of all the users that came to your platform from the start of the year. The query to fetch the user count depends on the start date and the end date, so these parameters should be part of the query key, as shown below:

```
[
  'usersCount',
  { startDate: '2022-01-01+00:00:00', endDate: '2022-12-08+00:53:56' }
];
```

Say I come back after a minute and request the user count from the start of the year again: it doesn't hit the cache. It instead requests directly from the server. This shouldn't be the expected behaviour; ideally, with all that we have learnt, it should have returned from the cache while fetching the latest updates in the background. To understand this, let's structure the query key for the request that was made after a minute:

```
[
  'usersCount',
  { startDate: '2022-01-01+00:00:00', endDate: '2022-12-08+00:54:56' }
];
```

You would notice that the end date has changed. This key does not match the old one that is present in the cache, so it fetches directly from the server. To fix this issue, the parameters set in the query key need to be modified: instead of using the entire timestamp, we use just the date in the query key. It is important to note that the parameters passed to the API call are unchanged; we still send the entire timestamp. The modification is made only in the query key, as shown below:

```
[
  'usersCount',
  { startDate: '2022-01-01', endDate: '2022-12-08' }
];
```

If you come back after a minute and make a request, it will instantly fetch from the cache, because the query key is unchanged and matches the one in the cache. What's interesting here is that the background fetch still requests the exact timestamps from the server. Once the latest updates are available, it paints them onto the user-count section.

## Data Fetching in React Query

Data fetching is a very common side effect that is usually managed with useEffect. It has become way simpler with React Query. To understand better, let's compare how data fetching is implemented with useEffect and with React Query. We'll use axios and the JSONPlaceholder API to fetch posts.

```
// Fetch all posts
const fetchPosts = async () => {
  const { data } = await axios.get(
    "https://jsonplaceholder.typicode.com/posts"
  );
  return data;
};
```

Using **useEffect** (note that the try/catch/finally must live *inside* the async function; wrapping the async call from outside would never catch its errors, and would clear the loading flag before the fetch resolved):

```
const [isLoading, setIsLoading] = useState(false);
const [data, setData] = useState([]);
const [error, setError] = useState(null);

useEffect(() => {
  (async () => {
    setIsLoading(true);
    try {
      const data = await fetchPosts();
      setData(data);
    } catch (error) {
      setError(error);
    } finally {
      setIsLoading(false);
    }
  })();
}, []);
```

Using React Query:

```
const { isLoading, data, error } = useQuery("posts", fetchPosts);
```

React Query provides a **useQuery** hook to fetch data from the server. The entire useEffect implementation was replaced with a single line of React Query. It erases a lot of boilerplate and makes your code easy to read. The loading, error, and data states are handled out of the box. In addition, it also helps with caching, background sync, and a bunch of other things.

**Conclusion**

React Query is a fantastic library. In most cases it removes the need for global state managers, helps you erase a lot of boilerplate, and makes large applications easier to maintain. In addition, it handles caching, background updates, request retries, performance optimisations, and other things. Before you start integrating this library in full swing, carefully examine whether it is something your application needs; if you only want its essential functionality, consider other lightweight alternatives. Overall, it's an absolute winner, and there is nothing close to the features it offers. No doubt it is the missing data-fetching library for React.

Thanks for reading. This article was originally posted on the [Wednesday Solutions](https://www.wednesday.is/) blog. You can check out the original article [here](https://wednesday-sol.webflow.io/writing-articles/react-query-the-what-how-when).
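The timestamp pitfall described above can be avoided with a tiny key builder that truncates the timestamps to dates before they enter the query key. The helper name is mine, not from the article:

```javascript
// Hypothetical helper: build a stable query key by keeping only the date
// part of each timestamp, so requests made minutes apart share a key.
// The full timestamps still go to the API call itself, unchanged.
function usersCountKey(startTimestamp, endTimestamp) {
	const day = ts => ts.slice(0, 10); // "2022-12-08+00:53:56" -> "2022-12-08"
	return ["usersCount", { startDate: day(startTimestamp), endDate: day(endTimestamp) }];
}

const a = usersCountKey("2022-01-01+00:00:00", "2022-12-08+00:53:56");
const b = usersCountKey("2022-01-01+00:00:00", "2022-12-08+00:54:56");
console.log(JSON.stringify(a) === JSON.stringify(b)); // true: same cache entry
```

Two requests made a minute apart now serialize to the same key, so the second one hits the cache while the background refetch still uses the exact timestamps.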
wednesdaysol
1,394,062
Column Store
openGauss supports hybrid row-column store. Row store stores tables to disk partitions by row, and...
0
2023-03-09T07:13:22
https://dev.to/llxq2023/column-store-165d
opengauss
openGauss supports hybrid row-column store. Row store writes tables to disk partitions by row, and column store writes tables to disk partitions by column. Each storage model applies to specific scenarios, so select an appropriate model when creating a table.

Generally, openGauss is used for databases in online transaction processing (OLTP) scenarios, so row store is used by default. Column store is used only in online analytical processing (OLAP) scenarios where complex queries are performed and the data volume is large. By default, a row-store table is created. The following figure shows the differences between row store and column store.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9n4ohdlq1lw95jtn77r3.png)

In the preceding figure, the upper left part is a row-store table, and the upper right part shows how the row-store table is stored on a disk; the lower left part is a column-store table, and the lower right part shows how the column-store table is stored on a disk. Both row-store and column-store models have benefits and drawbacks.

Generally, if a table contains many columns (called a wide table) and its queries involve only a few columns, column store is recommended. Row store is recommended if a table contains only a few columns and a query involves most of them.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a6d1v0kctrlk6ng0bge8.png)

**Syntax**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/touiwwb7cq2462lx1n1v.png)

**Parameter Description**

**· table_name**: Specifies the name of the table to be created.

**· column_name**: Specifies the name of a column to be created in the new table.

**· data_type**: Specifies the data type of the column.

**· ORIENTATION**: Specifies the storage mode (row store, column store, or ORC) of table data. This parameter cannot be modified once it is set. Value range:

**- ROW** indicates that table data is stored in rows. **ROW** applies to OLTP services and scenarios with a large number of point queries or addition/deletion operations.

**- COLUMN** indicates that the data is stored in columns. **COLUMN** applies to data warehouse services, which involve a large amount of aggregation computing on a few columns.

**Example**

If **ORIENTATION** is not specified, the table is a row-store table by default. For example:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ed0f3vo301nzvx74hpog.png)

When creating a column-store table, you need to specify the **ORIENTATION** parameter. For example:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/873ocfvgyk2cpulwyenc.png)
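The examples above are screenshots; as a text sketch of equivalent statements (the table and column names here are my own, and the `WITH (ORIENTATION = ...)` clause follows the openGauss `CREATE TABLE` syntax described above):

```
-- Row-store table (the default when ORIENTATION is not specified)
CREATE TABLE customer_row (
    c_id   INT,
    c_name VARCHAR(64)
);

-- Column-store table: ORIENTATION must be set at creation time
-- and cannot be modified afterwards.
CREATE TABLE customer_col (
    c_id   INT,
    c_name VARCHAR(64)
) WITH (ORIENTATION = column);
```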
llxq2023
1,394,080
LLVM
Based on the query execution plan tree, with the library functions provided by the Low Level Virtual...
0
2023-03-09T07:57:25
https://dev.to/llxq2023/llvm-4g2i
opengauss
Based on the query execution plan tree, with the library functions provided by the Low Level Virtual Machine (LLVM), openGauss moves the process of determining the actual execution path from the executor phase to the execution initialization phase. In this way, problems related to the original query execution, such as function calling, logic condition branch determination, and large amounts of data reads, are avoided, improving query performance.

LLVM dynamic compilation can be used to generate customized machine code for each query to replace the original common functions. Query performance is improved by reducing redundant judgment conditions and virtual function calls, and by making local data more accurate during actual queries.

LLVM needs extra time to pre-generate the intermediate representation (IR) and compile it into code. Therefore, if the data volume is small or a query itself consumes little time, performance deteriorates.

**Application Scenarios**

**·** Expressions supporting LLVM

Query statements that contain the following expressions support LLVM optimization:

1. Case…when…
2. IN
3. Bool
   3.1 And
   3.2 Or
   3.3 Not
4. BooleanTest
   4.1 **IS_NOT_UNKNOWN**: corresponds to the SQL clause IS NOT UNKNOWN.
   4.2 **IS_UNKNOWN**: corresponds to the SQL clause IS UNKNOWN.
   4.3 **IS_TRUE**: corresponds to the SQL clause IS TRUE.
   4.4 **IS_NOT_TRUE**: corresponds to the SQL clause IS NOT TRUE.
   4.5 **IS_FALSE**: corresponds to the SQL clause IS FALSE.
   4.6 **IS_NOT_FALSE**: corresponds to the SQL clause IS NOT FALSE.
5. NullTest
   5.1 IS_NOT_NULL
   5.2 IS_NULL
6. Operator
7. Function
   7.1 lpad
   7.2 substring
   7.3 btrim
   7.4 rtrim
   7.5 length
8. Nullif

Supported data types for expression computing are bool, tinyint, smallint, int, bigint, float4, float8, numeric, date, time, timetz, timestamp, timestamptz, interval, bpchar, varchar, text, and oid. 
Consider using LLVM only if expressions are used in the following places in a vectorized executor: **filter** in the **Scan** node; **complicate hash condition**, **hash join filter**, and **hash join target** in the **Hash Join** node; **filter** and **join filter** in the **Nested Loop** node; **merge join filter** and **merge join target** in the **Merge Join** node; and **filter** in the **Group** node.

**·** Operators supporting LLVM

1. Join: HashJoin
2. Agg: HashAgg
3. Sort

HashJoin supports only Hash Inner Join, and the corresponding hash condition supports comparisons between int4, bigint, and bpchar. HashAgg supports sum and avg operations on the bigint and numeric data types. Group By statements support int4, bigint, bpchar, text, varchar, timestamp, and the count(*) aggregation operation. Sort supports only comparisons between the int4, bigint, numeric, bpchar, text, and varchar data types. For operations other than the preceding ones, LLVM cannot be used. You can use the explain performance tool to check whether LLVM can be used.

**Non-applicable Scenarios**

**·** LLVM does not apply to tables that hold a small amount of data.

**·** LLVM does not apply to queries for which a vectorized execution path cannot be generated.

**Other Factors Affecting LLVM Performance**

The LLVM optimization effect depends not only on the operations and computing in the database, but also on the selected hardware environment.

**·** Number of C functions called by expressions

CodeGen does not implement full-expression calculation; that is, some expressions use CodeGen while others invoke the original C code. If the latter calculation method plays a dominant role in the overall process, using LLVM may degrade performance. By setting log_min_messages to DEBUG1, you can view expressions that directly invoke C code.

**·** Memory resources

One of the key LLVM features is to ensure the locality of data, that is, data should be stored in registers as much as possible. 
Data loading should be reduced at the same time. Therefore, when using LLVM, the value of work_mem must be set large enough that the computation is performed in memory; otherwise, performance deteriorates.

**·** Cost estimation

LLVM implements a simple cost estimation model that decides whether to use LLVM for the current node based on the tables involved in the node's computing. If the optimizer underestimates the actual number of rows involved, the expected gains cannot be achieved, and vice versa.

**Suggestions for Using LLVM**

Currently, LLVM is enabled by default in the database kernel, and users can configure it as required. The overall suggestions are as follows:

1. Set work_mem to an appropriately large value. If much data is still flushed to disks, you are advised to disable LLVM by setting enable_codegen to off.
2. Set codegen_cost_threshold to an appropriate value (the default value is 10000) to ensure that LLVM is not used when the data volume is small. If database performance deteriorates due to the use of LLVM after codegen_cost_threshold is set, you are advised to increase the parameter value.
3. If a large number of C functions are called, you are advised not to use LLVM.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vfteax8p780zxwiw8c7.png)
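The suggestions above can be sketched as session-level settings; the parameter names come from the text, while the threshold value 50000 is only an illustrative choice:

```sql
-- Raise the cost threshold so LLVM is skipped for small queries
-- (example value; the default is 10000)
SET codegen_cost_threshold = 50000;

-- If much data is flushed to disks, disable LLVM entirely
SET enable_codegen = off;

-- Log which expressions fall back to the original C code
SET log_min_messages = DEBUG1;
```

These take effect for the current session; set them in postgresql.conf to apply them instance-wide.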
llxq2023
1,394,088
Ustore
The Ustore storage engine, also called the in-place update storage engine, is a new storage mode...
0
2023-03-09T08:19:50
https://dev.to/llxq2023/ustore-1lpc
opengauss
The Ustore storage engine, also called the in-place update storage engine, is a new storage mode added to the openGauss kernel. The row storage engine used by earlier openGauss versions works in append update mode. Append update performs well for insertion, deletion, and heap only tuple (HOT) update (that is, update on the same page), but space recycling is not efficient in cross-data-page non-HOT update scenarios. Ustore was introduced to address this.

**Design Principle**

Ustore stores valid data of the latest version and junk data of earlier versions separately. The valid data of the latest version is stored on the data page, and an independent UNDO space is created for managing the junk data of earlier versions in a unified manner. Therefore, the data space does not expand due to frequent updates, and junk data is recycled more efficiently.

Ustore adopts a NUMA-aware UNDO subsystem design, which lets the UNDO subsystem scale effectively on multi-core platforms. In addition, Ustore adopts multi-version index technology to clean indexes and improve the efficiency of reclaiming and reusing storage space.

Ustore works with the UNDO space to implement more efficient and comprehensive flashback query and recycle bin mechanisms, quickly roll back misoperations, and provide rich enterprise-level functions for openGauss.

**Core Advantages**

**· High performance**: For services with different loads, such as insertion, update, and deletion, performance and resource usage are relatively balanced. The in-place update mode is recommended in frequent update scenarios, where it delivers higher and more stable performance. It is suitable for typical OLTP service scenarios that require **short** transactions, **frequent** updates, and **high** performance.

**· Efficient storage**: Maximizes in-place update, greatly saving space. Rollback segments and data pages are stored separately, providing more efficient and stable I/O usage. 
The UNDO subsystem uses the NUMA-aware design and has better multi-core scalability. The UNDO space is allocated and reclaimed in a unified manner, improving the reuse efficiency and storage space usage. **· Fine-grained resource control**: The Ustore engine provides multi-dimensional transaction monitoring. It monitors transaction running based on the transaction running duration, size of the UNDO space used by a single transaction, and overall UNDO space limit to prevent abnormal and unexpected behaviors. This feature enables database administrators to regulate and restrict the use of database system resources. Ustore provides stable performance in scenarios where data is frequently updated, enabling service systems to run more stably and adapt to more service scenarios and workloads, especially core financial service scenarios that have higher requirements on performance and stability. **Usage Guide** Ustore coexists with the original append update storage engine (Astore). Ustore shields the implementation details of the storage layer. The SQL syntax is basically the same as that of the original Astore storage engine. The only difference lies in table creation and index creation. **·** Table creation Ustore contains undo logs. Before creating a table, you need to set undo_zone_count in the postgresql.conf file. This parameter indicates the number of undo logs. The recommended value is 16384, that is, undo_zone_count=16384. After the configuration is complete, restart the database. [postgresql.conf] ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gha6loy9dbu8gs8a9tqy.png) **· Method 1: Specify the storage engine type when creating a table.** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t3yecw8e8iuuvgwvzii7.png) **· Method 2: Specify Ustore by configuring a GUC parameter.** 1. 
Before starting a database, set enable_default_ustore_table to on in postgresql.conf to specify that Ustore is used when a user creates a table by default. [postgresql.conf] ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0752r1mv9w4z01rehrf.png) 2. Create a table. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/am2eb3707ng35bnqrjzf.png) **· Index creation** The index used by Ustore is UBtree. UBtree is developed for the Ustore storage engine and is the only index type supported by Ustore. Taking the following table **test** as an example, add an index **UBtree** to the **age** column of the **test** table. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i5b0y596tee8htpwlemr.png) **· Method 1: If the index type is not specified, a UBtree index is created by default.** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lf7x10fhgjabtwokfawf.png) **· Method 2: When creating an index, use the using keyword to set the index type to ubtree.** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/295fehz6bpz3shphv15x.png)
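The steps above (which the screenshots illustrate) can be sketched in SQL. Note the `WITH (storage_type = ustore)` clause is my reading of openGauss's table-creation option for Method 1, and `idx_test_age` is a hypothetical index name; check the screenshots or the openGauss documentation for the exact form:

```sql
-- Prerequisite (postgresql.conf, then restart the database):
--   undo_zone_count = 16384

-- Method 1: specify the Ustore storage engine when creating the table
CREATE TABLE test (id int, age int) WITH (storage_type = ustore);

-- Index creation: UBtree is the only index type supported by Ustore.
-- It is the default for Ustore tables, so USING ubtree is optional.
CREATE INDEX idx_test_age ON test USING ubtree (age);
```

For Method 2, set `enable_default_ustore_table = on` in postgresql.conf instead, and a plain `CREATE TABLE` then produces a Ustore table.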
llxq2023
1,394,161
MOT(1)
openGauss introduces the memory-optimized table (MOT) storage engine, which is a transactional row...
0
2023-03-09T08:52:17
https://dev.to/llxq2023/mot1-38n1
opengauss
openGauss introduces the memory-optimized table (MOT) storage engine, which is a transactional row store optimized for multi-core and large-memory servers. MOT is the most advanced production-level feature (Beta version) of openGauss databases. It provides higher performance for transactional workloads. MOT fully supports ACID features, especially strict persistence and high availability. Enterprises can use MOT in mission-critical and performance-sensitive online transaction processing (OLTP) to achieve high performance, high throughput, predictable low latency, and high utilization of multi-core servers. MOT is especially suitable for running on modern servers with multi-channel and multi-core processors, such as Huawei TaiShan servers based on ARM/Kunpeng processors and Dell or similar x86 servers.

**MOT Features and Benefits**

MOT has significant advantages in terms of performance (query and transaction latency), scalability (throughput and concurrency), and in some cases, even costs (high resource utilization).

**·** Low latency: provides fast query and transaction response times.

**·** High throughput: supports peak and sustained high user concurrency.

**·** High resource utilization: fully utilizes the hardware.

Applications that use MOT can reach 2.5 to 4 times the throughput of applications that do not. For example, in the TPC-C benchmark test (interactive transactions and synchronous logs) on Huawei TaiShan servers based on ARM/Kunpeng processors and on Dell x86 servers based on Intel Xeon processors, the throughput gain provided by MOT reaches 2.5 times on a 2-socket server and 3.7 times on a 4-socket server, and 4.8 million tpmC is achieved on a 4-socket 256-core ARM server. The TPC-C benchmark also shows that MOT provides lower latency, reducing transaction response time by 3 to 5.5 times. 
The high-load, high-contention situation is a recognized problem for all industry-leading databases, and MOT makes extremely efficient use of server resources in this situation. With MOT, the resource utilization of 4-socket servers reaches 99%, far higher than that of other databases in the industry. This capability is especially visible and important on modern multi-core servers.

**Key MOT Technologies**

The key technologies of MOT are as follows:

**·** Memory-optimized data structures: To achieve high concurrent throughput and predictable low latency, all data and indexes are stored in memory, no intermediate page buffer is used, and locks are held for the shortest possible duration. The data structures and all algorithms are optimized for in-memory operation.

**·** Lock-free transaction management: While ensuring strict consistency and data integrity, MOT uses optimistic policies to achieve high concurrency and high throughput. During a transaction, MOT does not lock any version of the data rows being updated, greatly reducing contention in some large memory systems. Optimistic concurrency control (OCC) in transactions is implemented without locks: all data modification is performed in the part of memory dedicated to private transactions (also called private transactional memory). This means that during a transaction, related data is updated in the private transactional memory, implementing lock-free read and write; a lock is held only briefly, during the commit phase.

**·** Lock-free index: Since the data and indexes of memory tables are stored entirely in memory, an efficient index data structure and algorithm are essential. The MOT index mechanism is based on the state-of-the-art Masstree, a fast and scalable key-value (KV) store index for multi-core systems, implemented as a trie of B+ trees. 
In this way, excellent performance on multi-core servers can be achieved under high-concurrency workloads. In addition, MOT uses advanced techniques to optimize performance, such as lock optimization, cache awareness, and memory prefetching.

**·** NUMA-aware memory management: MOT is NUMA-aware. The NUMA-aware algorithm enhances the performance of the in-memory data layout by enabling threads to access memory that is physically attached to the core on which the threads run. This is handled by the memory controller and does not require additional hops across an interconnect such as Intel QPI. The intelligent memory control module of MOT pre-allocates memory pools for various memory objects to improve performance, reduce locking, and ensure stability. Transactional memory is always allocated on NUMA-local nodes. After a transaction ends, the memory is released back to the pool. In addition, system memory allocation (OS malloc) is used as little as possible within transactions to avoid unnecessary locking.

**·** Efficient persistence: Logging and checkpointing are key capabilities for disk persistence and one of the key requirements of ACID (D stands for durability). Currently, all disks (including SSDs and NVMe disks) are significantly slower than memory, so persistence is the bottleneck of any in-memory database engine. As a memory-based storage engine, the persistence design of MOT must apply various algorithmic optimizations to ensure that the designed speed and throughput targets are achieved while persistence is maintained. These optimizations include:

**·** Parallel logging, supported by all openGauss disk-based tables.

**·** A log buffer and lock-free transaction preparation for each transaction.

**·** Incremental update, that is, only changes are recorded.

**·** NUMA-aware group commit, in addition to synchronous and asynchronous logging. 
**·** State-of-the-art checkpointing asynchronously using logical consistency (CALC), minimizing memory and computing overhead. **·** High SQL coverage and function set: MOT uses extended PostgreSQL Foreign Data Wrappers (FDWs) and indexes to support almost the entire SQL scope, including stored procedures, user-defined functions, and system function calls. **·** Native PREPARE statements for query: With the PREPARE client commands, users can execute query and transaction statements interactively. These commands have been pre-compiled into native execution formats, also known as Code-Gen or Just-in-Time (JIT) compilation. In this way, the performance can be improved by 30% on average. If possible, apply compilation and lightweight execution; otherwise, use the standard execution path to process the applicable query. The Cache Plan module has been optimized for OLTP. Different binding settings are used in the entire session and compilation results are reused in different sessions. **·** Seamless integration between MOT and openGauss: MOT is a high-performance memory-optimized storage engine integrated in the openGauss package. The MOT storage engine and disk-based storage engine coexist to support multiple application scenarios. In addition, the MOT reuses auxiliary database services, such as WAL, replication, checkpoint, and HA. Users can benefit from unified deployment, configuration, and access of disk-based tables and MOTs. Users can flexibly and cost-effectively select a storage engine based on their specific needs. For example, performance-sensitive data that causes bottlenecks is stored in memory. **MOT Application Scenarios** MOT can significantly improve the overall performance of applications based on load characteristics. MOT improves transaction processing performance by improving the efficiency of data access and transaction execution, and minimizing redirections by eliminating locks and lock memory contention between concurrently executed transactions. 
MOT is fast not only because it is in memory, but also because it is optimized around concurrent memory usage. Data storage, access, and processing algorithms are designed from scratch to take advantage of the state-of-the-art technologies for in-memory and highly concurrent computing. openGauss allows applications to freely combine MOTs and standard disk-based tables. MOT is especially useful for enabling the most active, contention-intensive, and performance-sensitive application tables that have proven to be bottlenecks, and tables that require predictable low-latency access and high throughput. MOT can be used in a variety of applications, such as: **·** High-throughput transaction processing: This is the main scenario where MOT is used because it supports massive transactions and requires low latency of each single transaction. The representative applications include the real-time decision-making system, payment system, financial instrument transactions, sports lottery, mobile games, advertisement placement, and the like. **·** Performance acceleration: Tables with high contention can benefit from MOT, even if the table is a disk-based table. The transformation of such tables (other than related tables and tables referenced together in queries and transactions) results in significant performance improvement due to lower latency, fewer contentions and locks, and increased server throughput capabilities. **·** Elimination of mid-tier caching: Cloud computing and mobile applications tend to have periodic or peak high workloads. In addition, more than 80% of the loads of many applications are read loads with frequent repeated queries. Typically, a mid-tier caching layer is deployed for applications to meet the individual requirements of peak loads and to reduce response latency and provide the best user experience. Such an additional layer increases the complexity and time of development as well as operational costs. 
MOT provides a good alternative solution, which simplifies the application architecture, shortens the development cycle, and reduces the CAPEX and OPEX through consistent high-performance data storage. **·** Large-scale stream data extraction: The MOT can meet requirements of extracting large-scale stream data in the cloud (for mobility, M2M, and the IoT), transactional processing (TP), analytical processing (AP), and machine learning (ML). MOT is particularly good at extracting large amounts of data from many different sources at once, continuously and quickly. This data can be processed, transformed, and moved later in slower disk-based tables. In addition, MOT can query consistent and latest data to obtain real-time results. In IoT and cloud computing applications with many real-time data streams, there is usually dedicated data ingestion and processing. For example, an Apache Kafka cluster can be used to extract data of 100,000 events per second with a latency of 10 ms. A periodic batch processing task collects data, converts the data format, and stores the data in a relational database for further analysis. MOT can support such a scenario (and eliminate a separate data processing layer) by storing data streams directly in relational MOTs to prepare for analysis and decision making. This enables faster data collection and processing, avoids costly tiering and slow batch processing, improves consistency, increases the timeliness of data analysis, and reduces the total cost of ownership (TCO). **·** Reduced TCO: 30% to 90% TCO can be saved by improving resource utilization and eliminating the intermediate layer. 
**Unsupported Data Types**

**·** UUID
**·** User-Defined Type (UDF)
**·** Array data type
**·** NVARCHAR2(n)
**·** Clob
**·** Name
**·** Blob
**·** Raw
**·** Path
**·** Circle
**·** Reltime
**·** Bit varying(10)
**·** Tsvector
**·** Tsquery
**·** JSON
**·** Box
**·** Text
**·** Line
**·** Point
**·** LSEG
**·** POLYGON
**·** INET
**·** CIDR
**·** MACADDR
**·** Smalldatetime
**·** BYTEA
**·** Bit
**·** Varbit
**·** OID
**·** Money
**·** Any unlimited varchar/character varying
**·** HSTORE

**MOT Usage**

1. Grant permissions to a user.

The following describes how to grant a database user the permission to access the MOT storage engine. This operation is performed only once for each database user and is usually performed during initial configuration.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/36cd04kltuvbqr15lqth.png)

To enable a specific user to create and access MOTs (through DDL, DML, and SELECT operations), execute the following statement only once:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bn0flj1qf4gj0ubfb2ea.png)

All keywords are case insensitive.

2. Create or delete an MOT.

The statements for creating and deleting MOTs differ from those for disk-based tables in openGauss. Apart from that, the syntax of all other SELECT, DML, and DDL commands is the same for MOTs and openGauss disk-based tables.

2.1 Create an MOT.

**create FOREIGN table test(x int) [server mot_server];**

2.2 In the preceding statement:

2.2.1 Always use the FOREIGN keyword to reference the MOT.

2.2.2 When creating an MOT, [server mot_server] is optional because MOT is an integrated engine, not a standalone server.

2.2.3 In the preceding example, an MOT named test (containing an integer column named x) is created. Another example is provided in the next step, “Create an index for an MOT.”

2.2.4 If incremental checkpoints are enabled in postgresql.conf, MOTs cannot be created. 
Therefore, set enable_incremental_checkpoint to off before creating an MOT. 2.3. Delete the test MOT. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iuxgtatfko6bqucpad5g.png) 3. Create an index for an MOT. Standard openGauss statements for creating and deleting indexes are supported. For example: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98fy6ta3nc2e0qggjywl.png) Create an ORDER table for TPC-C and create an index. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/unnreoehyowlju9j0s29.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0xys7fed35ylpnwclyfe.png)
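Pulling the text-based statements above into one place (the table follows the article's own `test` example; `my_user` is a hypothetical role name, and the exact grant statement is my reading of the foreign-server grant shown in the screenshots, so verify it against the openGauss documentation):

```sql
-- One-time grant so a user can create and access MOTs (DDL, DML, SELECT)
GRANT USAGE ON FOREIGN SERVER mot_server TO my_user;

-- Requires enable_incremental_checkpoint = off in postgresql.conf
CREATE FOREIGN TABLE test (x int) SERVER mot_server;

-- Standard openGauss index syntax works on MOTs
CREATE INDEX idx_test_x ON test (x);

-- Delete the MOT
DROP FOREIGN TABLE test;
```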
llxq2023
1,394,266
500 open-source components for TailwindCSS
I'd like to share my latest discovery with you. Tailwind Elements is...
0
2023-03-09T10:47:31
https://dev.to/aymanebenhima/500-open-source-components-for-tailwindcss-1717
tailwindcss
[![Tailwind components](https://tailwind-elements.com/img/components-big.jpg)](https://tailwind-elements.com/)

I'd like to share my latest discovery with you. [Tailwind Elements](https://tailwind-elements.com/) is currently the most popular third-party UI kit for TailwindCSS, with over 10k GitHub stars.

[![GitHub Repo stars](https://img.shields.io/github/stars/mdbootstrap/tailwind-elements?style=social)](https://github.com/mdbootstrap/Tailwind-Elements/)

It's a **huge collection of stunning components** made with attention to the smallest detail. Forms, cards, buttons, and hundreds of others. All components have **dark mode** and very intuitive **theming options**.

The project is supported by an [engaged community on GitHub](https://github.com/mdbootstrap/Tailwind-Elements/discussions); I recommend you check it out and join one of the many discussions.

You will find installation instructions [here](https://tailwind-elements.com/docs/getting-started/installation), and you can track the progress of the project live [here](https://tailwind-elements.com/docs/standard/getting-started/changelog/).

The project was kickstarted by @MDBootstrap, a group of open-source developers behind [MDB UI Kit](https://github.com/mdbootstrap/mdb-ui-kit) - a high-quality UI kit for Bootstrap - and also behind [MDB GO](https://mdbgo.com/) - a hosting and deployment platform.

I highly recommend you check it out!

{% link mdbootstrap/tailwind-elements-breakthrough-version-is-here-59hh %}
aymanebenhima
1,394,321
5 Cyber security Best Practices for Small and Medium-sized Businesses
In today's digital landscape, cybersecurity is more important than ever, especially for small and...
0
2023-03-09T11:22:30
https://dev.to/inibambam/5-cyber-security-best-practices-for-small-and-medium-sized-businesses-3a9e
In today's digital landscape, cybersecurity is more important than ever, especially for small and medium-sized businesses (SMBs). While large corporations have dedicated teams and resources to handle cybersecurity threats, SMBs often lack the resources and expertise to adequately protect themselves. However, with the right strategies and tools, SMBs can greatly reduce their risk of cyberattacks. Here are five best practices for SMBs to improve their cybersecurity:

**Use Strong Passwords:** This may seem like a no-brainer, but many SMBs still use weak or easily guessable passwords. Encourage employees to use complex passwords that include a mix of upper and lowercase letters, numbers, and special characters. Consider using a password manager to generate and store strong passwords.

**Implement Multi-Factor Authentication:** Multi-factor authentication adds an extra layer of security by requiring users to provide more than just a password to access their accounts. This can include a code sent via text message, a biometric scan, or a physical security key. Many cloud-based services offer multi-factor authentication options.

**Train Employees on Cybersecurity:** Employees are often the weakest link in cybersecurity. Make sure all employees are aware of the risks and consequences of cyberattacks, and provide regular training on best practices such as avoiding phishing scams, identifying suspicious emails, and reporting security incidents.

**Keep Software Up-to-Date:** Outdated software can contain vulnerabilities that hackers can exploit. Ensure all software, including operating systems and applications, is regularly updated with the latest security patches.

**Conduct Regular Backups:** In the event of a cyberattack or data loss, having recent backups can be a lifesaver. Regularly back up important data to a secure location, such as an offsite server or cloud-based storage.

Implementing these best practices can greatly reduce the risk of cyberattacks for SMBs. 
However, it's important to remember that cybersecurity is an ongoing process, and SMBs should regularly assess and update their security strategies to stay ahead of evolving threats.
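As a minimal illustration of the "strong passwords" advice — this snippet is not from the article; it is a sketch using Python's standard `secrets` module, which is designed for cryptographically secure random choices:

```python
import secrets
import string


def generate_password(length: int = 16) -> str:
    """Generate a random password mixing upper/lowercase letters,
    digits, and special characters, using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that contain every character class
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password


print(generate_password())  # a 16-character random password
```

A password manager does this for you, but the same idea — long, random, drawn from all character classes — is what makes a password resistant to guessing.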
inibambam
1,394,369
API security is now more important than web application security
We have been reviewing the OWASP Top Ten in some detail, which is the premier index of the most...
0
2023-03-23T16:31:31
https://bycontxt.com/blog/blog/api-security-is-now-more-important-than-web-application-security?utm_source=DevTo
We have been reviewing the OWASP Top Ten in some detail, which is the premier index of the most critical vulnerabilities in web applications. But in 2019, the OWASP Foundation found that its traditional web application Top Ten simply did not bring enough visibility to the fastest-growing threat category: API vulnerabilities. So, in response, the foundation published its first-ever [API Top Ten](https://owasp.org/www-project-api-security/).

It's quite staggering. Between 2017 and 2021, the vulnerability posture of APIs grew to overtake that of the traditional web application, to the extent that web application vulnerabilities now often overlap with API vulnerabilities. In fact, if APIs were fully secured, it's likely that web application breaches would fall by at least 66%.

So, as an extension of our security vulnerabilities overview, we will review many of the OWASP API Top Ten. Since the majority of all internet traffic flows over APIs, it makes sense to prioritize improving and hardening APIs above even many traditional web application vulnerabilities. APIs are more often exploited, and their appeal as an attack vector continues to grow for bad actors. Stay tuned for a more detailed review of these risks and how to manage them without paralyzing your customer-facing initiatives.
beckland
1,394,688
MSSN CTRL: Call for papers is now open
The inaugural MSSN CTRL security engineering and automation conference will take place on October...
0
2023-03-30T20:16:56
https://www.limacharlie.io/blog/mssn-ctrl-call-for-papers-is-now-open-2023
security
---
title: "MSSN CTRL: Call for papers is now open"
published: true
date: 2023-03-09 00:00:00 UTC
tags: security
canonical_url: https://www.limacharlie.io/blog/mssn-ctrl-call-for-papers-is-now-open-2023
---

The inaugural [MSSN CTRL](https://www.mssnctrl.org/) security engineering and automation conference will take place on October 5-6, 2023 in Arlington, VA. Get involved by submitting to our [<u>call for proposals</u>](https://www.papercall.io/mssnctrl23) or [<u>get notified when registration is open.</u>](https://form.typeform.com/to/h64tdKlt)

Over the last couple of years we’ve seen our user base grow from a core group of early adopters to thousands of users worldwide. And with that growth comes our dedication to our community and to learning from each other. Cybersecurity is continuously evolving, and security practitioners need to look to their peers for inspiration, knowledge, and ideas.

### Community at work

We’re creating the conference with you, the practitioner, in mind. We plan to shape the event around the cybersecurity community with workshops and discussions to give you a voice and the opportunity to share. In order to accomplish this, we need your help.

Do you have a topic you want to discuss with your peers? Did you build something on top of LimaCharlie that you want to share? Topics are not limited to the use of LimaCharlie. We’re all ears. [<u>Submit your talk idea in our call for proposals.</u>](https://www.papercall.io/mssnctrl23)

<u><b>All proposals are due by April 16, 2023, at 11:59 pm PST.</b></u>

All speakers will receive a professionally edited recording of their session, professional photographs, a comped ticket to the event, and the opportunity to speak at the first of many MSSN CTRL conferences. We cannot wait to create this conference with you.
charltonlc
1,394,718
How To: Controlled Forms with React
Unlike JavaScript, React makes use of state in order to keep track of component changes over time....
0
2023-03-09T19:48:08
https://dev.to/codybarker/how-to-controlled-forms-with-react-55h7
webdev, react, javascript, beginners
Unlike plain JavaScript, React makes use of state in order to keep track of component changes over time. While using state does incur some added complexity, it has its benefits, particularly when it comes to creating controlled forms. We use state to keep track of the dynamic input values in our forms. So what is state exactly?

<h2>State</h2>

While props remain static, state is dynamic data. It changes over time. Setting state allows us to rerender our components without having to refresh the page or change/add props. That's why state plays an important role in how we handle forms in React. When the input, text area, or select values of a form are set with state, we have a controlled form. In order to use state in our app, we need to import the {useState} hook from React like so.

```jsx
import React, { useState } from 'react'
```

Next, we need to invoke our useState hook. The useState hook returns an array of 2 variables, the first being a reference to the value of our state variable, and the second a function that allows us to update that state, which begins with "set". Whatever is within the parentheses of the useState hook becomes the default value of our state. In many cases, it might simply be an empty string. An example of the useState hook might look a little something like this:

```jsx
const [name, setName] = useState("")
```

Above, "name" is our state, and "setName" is our setter function. Because we have an empty string inside the parentheses of useState, the default value of "name" is an empty string. An important note about state: whenever we want to update state, we must pass a new object or value to our setter function. By default, we don't have access to the updated state value until the component has rerendered. If we want to update state using the current value of state, we must pass it within a callback function like so.
```jsx
setName((name) => `${name} Barker`)
```

Finally, whenever we call set to update our state, our component and all of its child components will rerender.

<h2>Controlled Forms</h2>

Setting up a controlled form is relatively easy. First things first, set up your form. Here's a simple example.

```jsx
import React from 'react'

function Form() {
  return(
    <form>
      <input type="text" name="name" placeholder="name" />
      <input type="number" name="age" placeholder="age" />
      <button type="submit">Submit</button>
    </form>
  )
}

export default Form
```

Next, import the useState hook from React.

```jsx
import React, { useState } from 'react'

function Form() {
  return(
    <form>
      <input type="text" name="name" placeholder="name" />
      <input type="number" name="age" placeholder="age" />
      <button type="submit">Submit</button>
    </form>
  )
}

export default Form
```

Then, call your useState hook within your component to set up state for both of your form inputs.

```jsx
import React, { useState } from 'react'

function Form() {
  const [name, setName] = useState("")
  const [age, setAge] = useState("")

  return(
    <form>
      <input type="text" name="name" placeholder="name" />
      <input type="number" name="age" placeholder="age" />
      <button type="submit">Submit</button>
    </form>
  )
}

export default Form
```

Next, set the input values equal to the corresponding state values. This will allow us to clear our input fields after submitting our form.

```jsx
import React, { useState } from 'react'

function Form() {
  const [name, setName] = useState("")
  const [age, setAge] = useState("")

  return(
    <form>
      <input type="text" name="name" placeholder="name" value={name} />
      <input type="number" name="age" placeholder="age" value={age} />
      <button type="submit">Submit</button>
    </form>
  )
}

export default Form
```

Now we need to set up event listeners to update state and rerender our component whenever the user enters something into our input fields.
```jsx
import React, { useState } from 'react'

function Form() {
  const [name, setName] = useState("")
  const [age, setAge] = useState("")

  function handleName(e) {
    setName(e.target.value)
  }

  function handleAge(e) {
    setAge(e.target.value)
  }

  return(
    <form>
      <input onChange={handleName} type="text" name="name" placeholder="name" value={name} />
      <input onChange={handleAge} type="number" name="age" placeholder="age" value={age} />
      <button type="submit">Submit</button>
    </form>
  )
}

export default Form
```

At this point, we've controlled our inputs, so from here it all depends on what we want to do when we submit. If we just wanted to add these values to an API, we could add a submit event listener to the form and make a fetch POST request to the database. That might look something like this.

```jsx
import React, { useState } from 'react'

function Form() {
  const [name, setName] = useState("")
  const [age, setAge] = useState("")

  function handleName(e) {
    setName(e.target.value)
  }

  function handleAge(e) {
    setAge(e.target.value)
  }

  function handleSubmit(e) {
    e.preventDefault()
    const newUser = {
      username: name,
      userAge: age
    }
    fetch('http://localhost:3001/users', {
      method: "POST",
      headers: {
        "Content-Type": "application/json"
      },
      body: JSON.stringify(newUser)
    })
      .then(r => r.json())
      .then(user => console.log(user))
      .then(() => {
        setName("")
        setAge("")
      })
  }

  return(
    <form onSubmit={handleSubmit}>
      <input onChange={handleName} type="text" name="name" placeholder="name" value={name} />
      <input onChange={handleAge} type="number" name="age" placeholder="age" value={age} />
      <button type="submit">Submit</button>
    </form>
  )
}

export default Form
```

To recap, whenever a user changes the text within our form input fields, the setter function is called in the event listener, updating our states by setting them equal to the input values. Every state change rerenders the component without a page refresh, updating everything immediately. How nice!
When we submit our form, we create a new object called `newUser`, setting the key-value pairs using state. We then make a fetch request to our example API, console.log the response (which is our `newUser`), and then clear the input fields by setting state back to empty strings. If done correctly, our `newUser` will be added to the database. Remember, we aren't just limited to controlling forms with inputs. We can also control form elements like `<textarea>` and `<select>` with state in very much the same way!
codybarker
1,395,707
Secure Your PHP Code With Taint Analysis by Qodana
It only takes one user to exploit a vulnerability in your project and breach your system. To defend...
0
2023-03-10T14:16:23
https://blog.jetbrains.com/qodana/2023/03/secure-your-php-code-with-taint-analysis-by-qodana/
codereview, php, security, codequality
It only takes one user to exploit a vulnerability in your project and breach your system. To defend programs against malicious inputs from external users (known as “taints”), development teams add taint checking to their static analysis routines.

In this year’s first release, the [Qodana](https://www.jetbrains.com/qodana/) team has delivered taint analysis for PHP in the EAP. The feature is available only in Qodana for PHP 2023.1 (jetbrains/qodana-php:2023.1-eap). Qodana for PHP was the first linter we released, so we decided to let PHP developers be the first to test our new security functionality, too. We plan on adding more languages in the future, after we’ve collected enough feedback.

Read on to learn more about what taint analysis is and how it works in Qodana.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/clwaxqlnq3rcml4ydr0r.png)

[GET STARTED WITH QODANA](https://www.jetbrains.com/qodana)

## What is taint analysis?

A taint is any value that can pose a security risk when modified by an external user. If you have a taint in your code and unverified external data can be distributed across your program, hackers can execute these code fragments to cause SQL injection, arithmetic overflow, cross-site scripting, path traversal, and more. Usually they exploit these vulnerabilities to destroy the system, hijack credentials and other data, and change the system’s behavior.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gg88k8j31t85sz7igi3w.png)

Example of a taint. Arbitrary data from the GET parameter is displayed on the screen. For example, malicious users can exploit this vulnerability to tamper with your program’s layout.

As an extra layer of defense against malicious inputs, development teams execute taint analysis when they run a security audit on the program’s attack surface. Taint analysis is the process of assessing the flow of untrusted user input throughout the body of a function or method.
Its core goal is to determine if unanticipated input can affect program execution in malicious ways.

Taint sources are locations where a program gets access to potentially tainted data. Key points in a program that are susceptible to allowing tainted input are called taint sinks. This data can be propagated to the sinks via function calls or assignments.

If you run taint analysis manually, you should spot all of the places where you accept data from external users and follow each piece of data through the system – the tainted data can be used in dozens of nodes. Then, to prevent taint propagation, you should take one of the two approaches described below:

**Sanitize the data**, i.e. transform data to a safe state. In the example below, we removed tags to resolve the taint.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r9qmighs2wsuae25wwtm.png)

**Validate the data**, i.e. check that the added data conforms to a required pattern. In the example below, we enable validation for the `$email` variable.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2gdmzu7kqtfctnhqbad3.png)

In other words, the taint analysis inspection traces user-tainted data from its source to your sinks, and raises the alarm when you work with that data without sanitizing or validating it.

## How taint analysis works in Qodana

Taint analysis is performed by Qodana for PHP starting from version 2023.1 EAP. This functionality includes an inspection that scans the code and highlights the taint and potential vulnerability, the ability to open the problem in PhpStorm to address it on the spot, and a dataflow graph visualizing the taint flow.

## Example #1. SQL injection

Let’s take a look at an example of SQL injection and how Qodana detects it:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbcggyhe1jyvgeulkgpd.png)

Here, Qodana shows us the following taints in the `system_admin()` function:

Markers 1-2: Data from user form input is retrieved from the `$_POST` global array with no sanitization or validation and is assigned to the variable `$edit`. This is a taint.

Marker 3: The tainted variable `$edit` is passed to the `system_save_settings` function as an argument without any proper sanitization.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/17dnyolv25xtwdbsv80t.png)

Marker 4: Data from the `$edit` variable is now located in the `$edit` parameter.

Marker 5: The `$edit` variable is passed to foreach with the `$filename` key and `$status` value.

Marker 6: The `$filename` key contains the tainted data from the `$edit` variable concatenated with the string.

Marker 7: The `$filename` key is concatenated with a tainted SQL string.

Marker 8: The tainted SQL string will propagate tainted data into an argument passed to `db_query`.

Let’s now look at `db_query`:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4qlrofkpext33wvlwr2n.png)

Marker 9: The tainted string will be located in the `$query` parameter.

Marker 10: This parameter is going to be an argument of the `_db_query` function.

Let’s move on to the `_db_query` function:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f84czipidydows32uiko.png)

Marker 11: Tainted data is located in the first parameter `$query` of the `_db_query` function.

Marker 12: Data from the parameter is passed to the `mysql_query` function, which is a sink.
The whole data flow above illustrates how data moves from `$_POST["edit"]` to `mysql_query($query)` without any sanitization or validation. This allows the attacker to manipulate the SQL query which was concatenated with a key of `$_POST["edit"]` and trigger SQL injection. Qodana will spot these risks in your codebase along with all nodes where tainted data is used, so you can sanitize all tainted data in a timely manner.

## Example #2. XSS problem

In the Qodana UI, you can see a graph that visualizes the entire taint flow. Here’s how Qodana will visualize the XSS vulnerability, which contains 2 sources that would be merged on marker 5.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x2f95eh48iol52hymx1v.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1t5bg2a5zci2yefzixwr.png)

**Source 1**

Markers 1-2: Data from the searchUpdate.pos file will be read and tainted data will be assigned to the `$start` variable.

**Source 2**

Markers 3-4: Data from files whose path is located in `$posFile` will be read and tainted data will be assigned to the `$start` variable.

Marker 5: A merged tainted state from all conditional branches in the `$start` variable will be passed as an argument to the `doUpdateSearchIndex` method.

Let’s look inside the `doUpdateSearchIndex()` method:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/592t4vvyrz3gijzk4xu9.png)

Markers 6-8: The `$start` parameter will contain tainted data on this dataflow slice and then it will be passed within a concatenated string as an argument to the `output` method.

Let’s look inside the `output` method:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ojno451lsmsjaw66f51.png)

Marker 9: Tainted data contained inside the transmitted string will be located in the `$out` parameter.

Marker 10: Data from the `$out` parameter will be transferred to the `print` function without any sanitization.
This function is a sink and causes an XSS vulnerability, which can be exploited. To exploit the vulnerability, an attacker can, for example, upload a shell script instead of the expected files in markers 1 and 2, and will be able to put any information onto the web page as a result of an unsanitized `print` function. Qodana will alert you to this vulnerability and give it a high priority so that you can resolve it as soon as possible and prevent the hack.

## Conclusion

Taint analysis helps eliminate exploitable attack surfaces, so it’s an effective method to reduce risk to your software. To learn about taint analysis and Qodana in detail, explore the Qodana documentation. Happy developing and keep your code healthy!

[GET STARTED WITH QODANA](https://www.jetbrains.com/qodana)
valeriekukuss
1,396,858
Is HTML a programming language?
HTML (Hypertext Markup Language) is not a programming language, but rather a markup language used for...
0
2023-03-11T13:34:58
https://dev.to/nite_dev/is-html-a-programming-language--534i
webdev, codenewbie, html, computerscience
HTML (Hypertext Markup Language) is not a programming language, but rather a markup language used for creating and structuring content on the web. Unlike programming languages, HTML does not have the ability to perform complex operations, make decisions or carry out calculations. Instead, HTML is used to create the structure and content of web pages, defining the text, images, and other elements that make up a web page. However, HTML is often used in conjunction with programming languages such as JavaScript and PHP to create dynamic and interactive web pages. These programming languages are used to add interactivity, perform calculations and operations, and connect to databases and other web services.
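To make the distinction concrete, here is a small illustrative sketch (the element ids and script are my own example, not from the original post): the HTML alone only declares structure and content, while the embedded JavaScript performs the decision-making and calculation that HTML itself cannot express.

```html
<!-- HTML: structure and content only; no logic, no calculations -->
<p id="price">Price: 100</p>
<button id="discount-btn">Apply 10% discount</button>

<!-- JavaScript: a programming language doing what HTML cannot -->
<script>
  const priceEl = document.getElementById("price");
  document.getElementById("discount-btn").addEventListener("click", () => {
    // read the current price, compute a discount, and update the page
    const current = Number(priceEl.textContent.replace("Price: ", ""));
    priceEl.textContent = "Price: " + (current * 0.9).toFixed(2);
  });
</script>
```

Remove the `<script>` block and the page still renders, but it can no longer respond to the click or compute anything.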
nite_dev
1,405,361
Contact
According to LinkedIn I am: an international speaker passionate about making the world a better place,...
0
2023-03-18T04:11:17
https://dev.to/wildchamo/contacto-8ph
According to LinkedIn, I am: an international speaker passionate about making the world a better place, founder and leader of successful tech communities 🦏✨. Experienced web developer with a demonstrated history of working in the Internet industry. Skilled in web design, front-end development, engineering, project management, and web development. Strong multimedia engineering professional focused on web development and Blockchain development, from the Universidad Autónoma de Occidente. Info/questions/job proposals that will make me a millionaire are welcome at wildchamo@gmail.com. I exist on Instagram as [@wildchamo](https://www.instagram.com/wildchamo/).
wildchamo
1,405,517
Just bought an Asus Zephyrus G14. What you need to know is
So, I am a new Software Developer and my journey began with an Asus Zenbook Duo. I had it for almost...
0
2023-03-18T07:21:20
https://dev.to/codelikeagirl29/just-bought-an-asus-zephyrus-g14-what-you-need-to-know-is-1o8l
reviews, laptop, gaming
So, I am a new Software Developer and my journey began with an Asus Zenbook Duo. I had it for almost a year and it just couldn't handle my workload. :( It only had an __Intel® Core™ i5-1155G7 Processor__ 2.5 GHz (8M Cache, up to 4.5 GHz, 4 cores)

So next up, I got the Lenovo Legion 5 (17), which was great but a little too big for me and my needs with its 17" screen.

### Specs

- Processor Model - AMD Ryzen 7 5000 Series
- Processor Model Number - 5800HS
- Storage Type - SSD
- Total Storage Capacity - 512 gigabytes
- Solid State Drive Capacity - 512 gigabytes
- System Memory (RAM) - 16 gigabytes
- Graphics - NVIDIA GeForce GTX 1650
- Operating System - Windows 11 Home
- Battery Type - Lithium-ion polymer
- Backlit Keyboard - Yes

### General

- Product Name - ROG Zephyrus G14 14" Laptop - AMD Ryzen 7 - 16GB Memory - NVIDIA GeForce GTX 1650 - 512GB SSD
- Brand - ASUS
- Model Number - GA401QH-211.ZG14BL

### Display

- Screen Size - 14 inches
- Screen Resolution - 1920 x 1080 (Full HD)
- Touch Screen - No

![pic-of-my-desk](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kf2jmfla1lxv5qlo1agk.jpg)

My current desk setup

Back to the basics, the Zephyrus G14 gets an Ergolift hinge design which pushes up the main chassis on rubber feet at the bottom of the screen, in order to improve airflow underneath and help cool the components. Ergolift designs have one major flaw, though, and that’s the fact that hot air is blown into the screen. It’s an issue here as well, but not as bad as on ZenBooks. For starters, there’s a bigger gap between the exhausts and the screen than on most ZenBooks. Then, the lid design also allows some of the hot air to go out to the back, through that carving in the bottom side. Finally, the ROG engineers smartly designed the radiators and plastic fins that split the hot air coming out from the exhausts, sending some of it downwards and some of it upwards and to the sides, so not straight into the screen.
I haven’t got to test this implementation in games and measure the panel temperatures around the exhausts, so I can’t vouch for the practical worth of this solution, but it shows they were aware of the problem and did something to at least partially address it. We’ll further touch on the thermal design in the next section.

In the meantime, I’ll also mention a couple of other aspects you should be aware of. First off, this notebook gets quad speakers, with a set of main speakers (woofers, kind of) still firing on the bottom, but also a set of tweeters firing through those grills in the palm rest. Not sure if that’s the best placement, as I fear they might be fairly easily covered by your palms in daily use, but again, they tried to address an issue of most modern gaming notebooks within the space constraints of a 14-inch frame.

Then, you’ll notice there’s no webcam on the Zephyrus G14, just like on the 15-inch model and many of the other recent ROG gaming laptops. I don’t mind it, but some of you might feel otherwise. I was told an external camera will not be included with the standard bundle, unlike on some of the Zephyrus S models; makes sense to help keep prices down.

There’s also a fingerprint sensor integrated into the power button of the G14, a first for an ROG laptop. It’s been touted for months and I’m glad to finally see it implemented. The redesigned button also ditches that pesky always-on light implemented on other ROG notebooks, but the status LEDs are still placed beneath the screen, annoying when trying to watch a movie at night. Please send them to the side or the front somewhere.

Finally, as far as the IO goes, the Zephyrus G14 offers a fair selection of ports, with USB-A and C ports, HDMI and an audio jack. There’s no LAN, but you get a USB to LAN adapter included for the rare occasions you’ll need wired Internet.
There’s also no card-reader and no Thunderbolt 3, no surprise given this is an AMD-based notebook, but the left USB-C port allows charging. More on that further down.
codelikeagirl29
1,405,658
Localization in Laravel
Why Localization is an important feature in Laravel? A popular PHP web framework, which allows...
0
2023-03-18T11:45:15
https://dev.to/codeofaccuracy/localization-in-laravel-3g0l
laravel, beginners, learning, localization
**Why is localization an important feature in Laravel?**

Laravel, a popular PHP web framework, allows developers to create multilingual applications easily. It provides built-in support for localization through the Illuminate/Translation package, which offers tools for managing language files, translating strings, and displaying localized content.

To start using localization in Laravel, developers need to create language files for each language that their application will support. These files typically live in the `resources/lang` directory of the project, inside a subdirectory named after the language code, e.g. `en` for English or `fr` for French.

Once the language files are in place, developers can use the `trans()` function to translate strings into the appropriate language. For example:

```php
echo trans('messages.welcome');
```

In this example, the `trans()` function is used to translate the string `'messages.welcome'` into the language specified by the user's browser settings or application preferences. Developers can also use the `Lang::get()` method to retrieve language strings directly from the language files, like this:

```php
echo Lang::get('messages.welcome');
```

Laravel also supports localization of dates and times using the Carbon library, which provides tools for formatting and manipulating date and time values.

In summary, Laravel provides robust support for localization, making it easy for developers to create multilingual applications that can be used by audiences in different regions and languages.
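As an illustration of what such a language file could contain (the keys and strings below are a hypothetical sketch, not taken from the Laravel docs), the `messages.welcome` key used above would be defined in a per-language file like:

```php
<?php
// resources/lang/en/messages.php -- the French version at
// resources/lang/fr/messages.php would return the same keys
// with translated values.
return [
    'welcome' => 'Welcome to our application!',
];
```

Calling `trans('messages.welcome')` then resolves the `welcome` key from the file matching the application's current locale, which can be switched at runtime with `App::setLocale('fr')`.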
codeofaccuracy
1,405,880
Next.js Authentication using Higher-Order Components
Introduction Managing authentication in Next.js is quite tricky, with problems such as...
0
2023-03-18T15:55:38
https://theodorusclarence.com/blog/nextjs-auth-hoc
nextjs, react, typescript, javascript
## Introduction

Managing authentication in Next.js is quite tricky, with problems such as content flashing. In this blog, I won't address the problems and explain how to solve them in detail, because I've written a blog about that in [Next.js Redirect Without Flashing Content](https://theodorusclarence.com/blog/nextjs-redirect-no-flashing). In this blog, I'll cover how to handle them cleanly using Higher Order Components.

## The Usual Way & The Problem

Usually, for authentication in Next.js, we define routes that need to be blocked like so:

```tsx
const protectedRoutes = ['/block-component', '/profile'];
```

Then we have a component that checks the route like this:

```jsx
export default function PrivateRoute({ protectedRoutes, children }) {
  const router = useRouter();
  const { isAuthenticated, isLoading } = useAuth();
  const pathIsProtected = protectedRoutes.indexOf(router.pathname) !== -1;

  useEffect(() => {
    if (!isLoading && !isAuthenticated && pathIsProtected) {
      // Redirect route, you can point this to /login
      router.push('/');
    }
  }, [isLoading, isAuthenticated, pathIsProtected]);

  if ((isLoading || !isAuthenticated) && pathIsProtected) {
    return <FullPageLoader />;
  }

  return children;
}
```

This works, but there are several problems:

1. It's **not colocated**: the authentication logic doesn't live in the page itself, but in a separate component such as `PrivateRoute`.
2. It's **error-prone**: when you're doing route changes, for example moving the `pages/blocked-component.tsx` file to `pages/blocked/component.tsx`, you will have to change the `protectedRoutes` variable to the new route. This is quite dangerous because there is no type checking on the `protectedRoutes` variable; there is no way for TypeScript to know if that's the right path.
([maybe soon](https://nextjs.org/blog/next-13-2#statically-typed-links))

## Higher-Order Component

My friend and I built a higher-order component that we can put inside the page like so:

```tsx
export default withAuth(ProtectedPage);
function ProtectedPage() {
  /* react component here */
}
```

With this implementation, it's now colocated within the page and it won't be a problem if you change the file name.

## Adding Several Types of Pages

In my experience of building simple authenticated apps, there are 3 types of authenticated pages that we need to support.

> For the demo, you can try it yourself on the [demo page](https://auth-hoc.thcl.dev/)

### 1. Simple Protected Pages

It's for pages that need protection, such as a dashboard, an edit profile page, etc.

**Behavior**

- **Unauthenticated users** will be redirected to `LOGIN_ROUTE` (default: `/login`), without any content flashing
- **Authenticated users** will see this page in the following scenarios:
  - **Direct visit using link** → the user will see a loading page while the `withAuth` component checks the token, then this page will be shown
  - **Visit from other pages** (`router.push`) → the user will see this page immediately

### 2. Authentication Pages (Login)

It's for pages such as Login and Register, or any other page that suits the behavior.

**Behavior:**

- **Unauthenticated users** can access this page without any loading indicator
- **Authenticated users** will be redirected to `HOME_ROUTE` (default: `/`).
  - We're assuming that authenticated users won't need to see the login page anymore. Instead, they should be redirected to the `HOME_ROUTE`.
  - It's also best to hide all links back to the login page when the user is already authenticated.

### 3. Optional Page

This is a more specific use case, but sometimes there are pages that you don't need to be authenticated to visit, but you still need to show the user's details if they are authenticated.
**Behavior:**

- This page is accessible to all users
- You can get the user from `useAuthStore.useUser()`

## Page Focus Synchronization

{% youtube RyUgDondT6A %}

We also added a page focus listener. When you open several tabs, the authentication will be synced across tabs.

```tsx
React.useEffect(() => {
  // run checkAuth every page visit
  checkAuth();

  // run checkAuth every focus changes
  window.addEventListener('focus', checkAuth);
  return () => {
    window.removeEventListener('focus', checkAuth);
  };
}, [checkAuth]);
```

## Source Codes

We use Zustand to store authentication data globally.

### Zustand Store

```tsx
import { createSelectorHooks } from 'auto-zustand-selectors-hook';
import produce from 'immer';
import create from 'zustand';

import { User } from '@/types/auth';

type AuthStoreType = {
  user: User | null;
  isAuthenticated: boolean;
  isLoading: boolean;
  login: (user: User) => void;
  logout: () => void;
  stopLoading: () => void;
};

const useAuthStoreBase = create<AuthStoreType>((set) => ({
  user: null,
  isAuthenticated: false,
  isLoading: true,
  login: (user) => {
    localStorage.setItem('token', user.token);
    set(
      produce<AuthStoreType>((state) => {
        state.isAuthenticated = true;
        state.user = user;
      })
    );
  },
  logout: () => {
    localStorage.removeItem('token');
    set(
      produce<AuthStoreType>((state) => {
        state.isAuthenticated = false;
        state.user = null;
      })
    );
  },
  stopLoading: () => {
    set(
      produce<AuthStoreType>((state) => {
        state.isLoading = false;
      })
    );
  },
}));

const useAuthStore = createSelectorHooks(useAuthStoreBase);

export default useAuthStore;
```

### withAuth HOC Component

```tsx
import { useRouter } from 'next/router';
import * as React from 'react';
import { ImSpinner8 } from 'react-icons/im';

import apiMock from '@/lib/axios-mock';
import { getFromLocalStorage } from '@/lib/helper';
import useAuthStore from '@/store/useAuthStore';

import { ApiReturn } from '@/types/api';
import { User } from '@/types/auth';

export interface WithAuthProps {
  user: User;
}

const HOME_ROUTE = '/';
const LOGIN_ROUTE = '/login';

const ROUTE_ROLES = [
  /**
   * For authentication pages
   * @example /login /register
   */
  'auth',
  /**
   * Optional authentication
   * It doesn't push to login page if user is not authenticated
   */
  'optional',
  /**
   * For all authenticated user
   * will push to login if user is not authenticated
   */
  'all',
] as const;
type RouteRole = (typeof ROUTE_ROLES)[number];

/**
 * Add role-based access control to a component
 *
 * @see https://react-typescript-cheatsheet.netlify.app/docs/hoc/full_example/
 * @see https://github.com/mxthevs/nextjs-auth/blob/main/src/components/withAuth.tsx
 */
export default function withAuth<T extends WithAuthProps = WithAuthProps>(
  Component: React.ComponentType<T>,
  routeRole: RouteRole
) {
  const ComponentWithAuth = (props: Omit<T, keyof WithAuthProps>) => {
    const router = useRouter();
    const { query } = router;

    //#region  //*=========== STORE ===========
    const isAuthenticated = useAuthStore.useIsAuthenticated();
    const isLoading = useAuthStore.useIsLoading();
    const login = useAuthStore.useLogin();
    const logout = useAuthStore.useLogout();
    const stopLoading = useAuthStore.useStopLoading();
    const user = useAuthStore.useUser();
    //#endregion  //*======== STORE ===========

    const checkAuth = React.useCallback(() => {
      const token = getFromLocalStorage('token');
      if (!token) {
        isAuthenticated && logout();
        stopLoading();
        return;
      }
      const loadUser = async () => {
        try {
          const res = await apiMock.get<ApiReturn<User>>('/me');

          login({
            ...res.data.data,
            token: token + '',
          });
        } catch (err) {
          localStorage.removeItem('token');
        } finally {
          stopLoading();
        }
      };

      if (!isAuthenticated) {
        loadUser();
      }
    }, [isAuthenticated, login, logout, stopLoading]);

    React.useEffect(() => {
      // run checkAuth every page visit
      checkAuth();

      // run checkAuth every focus changes
      window.addEventListener('focus', checkAuth);
      return () => {
        window.removeEventListener('focus', checkAuth);
      };
    }, [checkAuth]);

    React.useEffect(() => {
      if (!isLoading) {
        if (isAuthenticated) {
          // Prevent authenticated user from accessing auth or other role pages
          if (routeRole === 'auth') {
            if (query?.redirect) {
              router.replace(query.redirect as string);
            } else {
              router.replace(HOME_ROUTE);
            }
          }
        } else {
          // Prevent unauthenticated user from accessing protected pages
          if (routeRole !== 'auth' && routeRole !== 'optional') {
            router.replace(
              `${LOGIN_ROUTE}?redirect=${router.asPath}`,
              `${LOGIN_ROUTE}`
            );
          }
        }
      }
    }, [isAuthenticated, isLoading, query, router, user]);

    if (
      // If unauthenticated user want to access protected pages
      (isLoading || !isAuthenticated) &&
      // auth pages and optional pages are allowed to access without login
      routeRole !== 'auth' &&
      routeRole !== 'optional'
    ) {
      return (
        <div className='flex min-h-screen flex-col items-center justify-center text-gray-800'>
          <ImSpinner8 className='mb-4 animate-spin text-4xl' />
          <p>Loading...</p>
        </div>
      );
    }

    return <Component {...(props as T)} user={user} />;
  };

  return ComponentWithAuth;
}
```

For more code and implementation examples, check out the code on [GitHub](https://github.com/theodorusclarence/nextjs-with-auth-hoc)

## Attribution

- [Rizqi Tsani](https://rizqitsani.com), co-creator of this code.
- [Next Auth](https://next-auth.js.org/), for the inspiration and the idea of using HOC to handle authentication.

## Conclusion

This will be a great addition to your code, making it cleaner and more efficient. You should colocate your code as much as possible, and this will be a step to do that.

---

> Originally posted on [my personal site](https://theodorusclarence.com/?ref=devto), find more [blog posts](https://theodorusclarence.com/blog?ref=devto) and [code snippets library](https://theodorusclarence.com/library?ref=devto) I put up for easy access on my site 🚀 Like this post? [Subscribe to my newsletter](https://theodorusclarence.com/subscribe?ref=devto) to get notified every time a new post is out!
theodorusclarence
1,405,997
Iteration or Recursion?
One of the most common misconception or jeopardy arises when encountering Data Structure &amp;...
0
2023-04-16T09:57:10
https://dev.to/noorejannatnafia/iteration-or-recursion-13mk
One of the most common sources of confusion when studying Data Structures & Algorithms is the difference between iteration and recursion. **Iteration:** Iteration is the repetition of a process. **Recursion:** Recursion is the process of a function calling itself. The definitions might make them appear similar. However, there are several differences that set them apart. | Iteration | Recursion | |------------|-----------| | A set of instructions is repeated. | The function calls itself. | | Iteration consists of initialization, a condition and an update step. | Recursion terminates through a base case (termination condition). | | Infinite iteration can occur when the condition never becomes false. | Infinite recursion can occur when the base case(s) can never be reached. | | Iteration does not use the call stack. | Recursion uses the call stack to store each function call. | | Generally fast in execution, as there is no function-call overhead. | Generally slower in execution, due to the overhead of repeated function calls. | | The code for iteration is usually longer. | The code for recursion is usually shorter. |
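To make the contrast concrete, here is a small illustrative sketch (my own, not from the original article) computing a factorial both ways in JavaScript:

```javascript
// Iterative factorial: a loop with initialization, condition and update step.
function factorialIterative(n) {
  let result = 1;
  for (let i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}

// Recursive factorial: the function calls itself until the base case (n <= 1).
function factorialRecursive(n) {
  if (n <= 1) return 1; // base case — without it, recursion never terminates
  return n * factorialRecursive(n - 1); // each call is pushed onto the call stack
}

console.log(factorialIterative(5)); // 120
console.log(factorialRecursive(5)); // 120
```

Note how the recursive version keeps every pending call on the call stack until the base case is reached, which is why very large inputs can overflow the stack, while the loop runs in constant stack space.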
noorejannatnafia
1,406,008
Big O Notation
A brief overview of Big O notation 📌Big O notation is a mathematical notation used to...
0
2023-03-18T19:16:09
https://dev.to/alisamirali/big-o-notation-j34
algorithms, programming, computerscience
## A brief overview of Big O notation --- 📌Big O notation is a mathematical notation used to describe the complexity or running time of an algorithm. It is used to compare the efficiency of algorithms by analyzing how the time and space requirements grow as the input size increases. In other words, it tells us how quickly the resource requirements of an algorithm increase as the input size grows larger. --- 📌In Big O notation, the notation "O" stands for "order of", and is followed by a mathematical expression that describes the algorithm's performance. --- 📌For example, an algorithm with a runtime of O(n) has a linear runtime, meaning that the time it takes to execute the algorithm grows linearly with the input size n. An algorithm with a runtime of O(n^2) has a quadratic runtime, meaning that the time it takes to execute the algorithm grows quadratically with the input size n. --- 📌Some commonly used Big O notations include: - O(1) - Constant time - O(log n) - Logarithmic time - O(n) - Linear time - O(n log n) - Linearithmic time - O(n^2) - Quadratic time - O(2^n) - Exponential time --- ⚡Big O notation is important because it helps us analyze the efficiency of algorithms and determine which algorithm is the best for a given problem. It allows us to make informed decisions about the trade-offs between time and space complexity, and to choose the algorithm that is most appropriate for a given situation. --- 💡 You can follow me on: - [LinkedIn](https://www.linkedin.com/in/dev-alisamir) - [Telegram](https://t.me/dev_ali_samir) - [Facebook](https://www.facebook.com/alisamir.dev) - [Instagram](https://www.instagram.com/alisamir.dev)
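A small sketch (my own illustration, with invented function names, not from the article) makes two of these growth rates tangible in JavaScript:

```javascript
// O(n) — linear time: the loop body runs once per element.
function sumArray(arr) {
  let sum = 0;
  for (const x of arr) sum += x;
  return sum;
}

// O(n^2) — quadratic time: the inner loop runs for every element of the
// outer loop, so doubling the input roughly quadruples the work.
function countPairs(arr) {
  let pairs = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      pairs++; // one unit of work per pair of elements
    }
  }
  return pairs;
}

console.log(countPairs([1, 2, 3, 4])); // 6 pairs for n = 4
console.log(countPairs([1, 2, 3, 4, 5, 6, 7, 8])); // 28 pairs for n = 8
```

Going from n = 4 to n = 8 doubles the work for `sumArray` but takes `countPairs` from 6 to 28 units of work — roughly the n²/2 growth that Big O summarizes as O(n²).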
alisamirali
1,406,138
How to dynamically stream video
Build it yourself or use Cloudinary Dynamic video streaming is a video delivery technique...
22,300
2023-03-18T21:56:45
https://timbenniks.dev/writing/how-to-dynamically-stream-video
video, cloudinary, streaming
## Build it yourself or use Cloudinary Dynamic video streaming is a video delivery technique that adjusts the quality of a video stream in real time. It does this according to the detected bandwidth and CPU capacity of a user. In this article we will explore the two techniques which allow you to dynamically stream video. HLS and MPEG-DASH are the two most popular formats out there. Dynamic or adaptive video delivery requires outputting a video in different quality settings along with some additional files. Both HLS and MPEG-DASH have different approaches to the problem. The process of making adaptive streaming work is complex. Most services out there do not provide an end-to-end solution for this and the ones that do are quite costly. The adaptive video streaming paradigm is not one that many companies have conquered as it requires specific knowledge and access to hardware. There is a reason we don't have many competitors for Netflix and YouTube. Adaptive streaming of video is hard. First we’ll go into how adaptive streaming works and then I’ll explain exactly how to do this yourself. It’s much easier than you think once you have the knowledge and the right third-party tool to do the heavy lifting. ### How adaptive video delivery works The video stream adapts itself based on a set of rules: the user’s bandwidth, CPU load and video player resolution on the page. To be able to stream adaptively you need to be able to stream different versions of a video. Each variant is of different quality, has a different bitrate and potentially has a different codec or resolution. Think of it as progressive enhancement in web development. The simplest stream always works and based on the features you have (in this case, CPU power, bandwidth, resolution), you get a nicer looking video stream. Each adaptive video is also joined by an index file that specifies predefined segments of the video. 
In the HLS standard these segments are usually 10 seconds long, whereas in MPEG-DASH 1-second segments are common. There is also a master playlist that points to the available video variations with additional information about each one. #### An audio playlist adaptation It’s pretty cool that dynamic video streaming is based on the spec from the M3U8 audio playlist. M3U8 was originally designed for audio files, such as MP3, but nowadays it is commonly used to point media players to audio and video sources. An adaptive streaming video player uses the playlist information to decide which of the available video variations fits the user’s network conditions, CPU load or resolution best. It can switch to another source at each 10 second segment (these segments can also be shorter, see examples below) if the network conditions change. This approach works well to minimise bandwidth use and optimise for smooth playback for everybody who watches the video stream. It can also be used the other way around: if the streaming service is completely overloaded it can send a video stream with a smaller bitrate or resolution to the viewer. ### About HLS and MPEG-DASH #### HLS HLS was originally created by Apple to provide video for the iPhone, but now it’s a common format used across HTML5 web applications. You’ll need to encode your video with H.264 or HEVC/H.265 codecs, which can be decoded by all major browsers. With HLS, the video is chopped up into 10 second intervals and sent to the user. #### MPEG-DASH MPEG-DASH is the latest HLS competitor. It was originally created to be an alternative to HLS. It has a few advantages over HLS, mainly because it is open-source. This means the media content publisher community as a whole can contribute to its changes and updates. MPEG-DASH is globally supported and codec agnostic, which means that you can encode video without worrying about codec support. It has lower latency than HLS. Its playlist file is an `.MPD`, which is an XML format. 
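The core decision a player makes from the master playlist can be sketched in a few lines of JavaScript. This is an illustrative simplification with made-up variant data and function names — not the API of any real player: pick the richest variant whose bandwidth fits the measured connection, and fall back to the lowest one otherwise.

```javascript
// Each variant mirrors one entry from a master playlist (e.g. #EXT-X-STREAM-INF in HLS).
const variants = [
  { bandwidth: 10712000, resolution: '3840x2160' },
  { bandwidth: 3248000,  resolution: '1920x1080' },
  { bandwidth: 1400000,  resolution: '1280x720' },
  { bandwidth: 279000,   resolution: '320x180' },
];

// Pick the highest-bandwidth variant that fits the measured connection speed,
// falling back to the lowest-quality variant if nothing fits.
function pickVariant(variants, measuredBps) {
  const sorted = [...variants].sort((a, b) => b.bandwidth - a.bandwidth);
  return sorted.find(v => v.bandwidth <= measuredBps) ?? sorted[sorted.length - 1];
}

console.log(pickVariant(variants, 2000000).resolution); // "1280x720"
```

A real player re-runs a decision like this at every segment boundary, which is what lets the stream step up or down as network conditions change.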
### Doing it yourself To deliver videos using adaptive streaming you must generate multiple video versions, add an index file per variant and add a master playlist. The formats and encoding for HLS and MPEG-DASH are different for each of these files. If you want to stream using both HLS and MPEG-DASH formats you need to double the effort for every video you want to deliver. Additionally, for MPEG-DASH, the best practice is to deliver the audio and video separately. This stuff is complex and time-consuming. If you are a developer who likes to get into the nitty gritty of `ffmpeg` you can deep dive and create all sources for HLS and MPEG-DASH yourself. #### DIY steps for MPEG-DASH MPEG-DASH is simplest to do yourself. Let's give it a go! Imagine we have a video file called `video.mp4`. To make sure we can adaptively stream the video we need to create video files with different bitrates and an audio file. _Beware that this is a simplified version for illustration purposes. In real life `ffmpeg` has many quirks based on what video you give it._ **Step 1: extract the audio** Extract the audio track: ``` $ ffmpeg -i video.mp4 -c:a copy -vn video-audio.mp4 ``` **Step 2: extract and re-encode the video track** ``` $ ffmpeg -i video.mp4 -an -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 5300k -maxrate 5300k -bufsize 2650k -vf 'scale=-1:1080' video-1080.mp4 $ ffmpeg -i video.mp4 -an -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 2400k -maxrate 2400k -bufsize 1200k -vf 'scale=-1:720' video-720.mp4 $ ffmpeg -i video.mp4 -an -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 1060k -maxrate 1060k -bufsize 530k -vf 'scale=-1:478' video-480.mp4 $ ffmpeg -i video.mp4 -an -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 600k -maxrate 600k -bufsize 300k -vf 'scale=-1:360' video-360.mp4 $ ffmpeg -i video.mp4 -an -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 260k -maxrate 260k -bufsize 130k -vf 
'scale=-1:242' video-240.mp4 ``` The video is encoded using the H.264 codec. This forces a key frame every 24 frames, in this case, every second. This allows the video to be segmented in chunks of 1 second. The bitrate is evaluated according to the buffer size, so in order to be sure the encoding is close to the requested rate, the buffer size should be lower than the bitrate. **Step 3: generate the MPD file** We now have one audio file and five video files. A Media Presentation Description (MPD) file has to be created. An MPD file functions as an index referencing the different video and audio tracks with their bitrate, size and how the segments are ordered. ``` $ MP4Box -dash 1000 -rap -frag-rap -profile onDemand -out video.mpd video-1080.mp4 video-720.mp4 video-480.mp4 video-360.mp4 video-240.mp4 video-audio.mp4 ``` The -dash option sets the duration of each segment to one second. Next to preparing adaptive streaming content MP4Box can do a lot more. So much more in fact that it's best to just read more [here](https://github.com/gpac/gpac/wiki/MP4Box). **Step 4: configure your webserver** Make sure your webserver understands `.mpd` files by adding the mime type `application/dash+xml` to its config. **Step 5: make sure your video player understands adaptive streaming** Implement [dash.js](https://github.com/Dash-Industry-Forum/dash.js) into your video player or build a custom video player around dash.js. **Concluding** Obviously, doing this at scale, or as a slightly less technical user, is not realistic. You'll want to automate this completely. #### Enter: Cloudinary Next to being the market leader in image delivery Cloudinary also provides features for video: from dynamic streaming profiles to cropping the subject perfectly on different video ratios. They even use AI to generate captions for muted videos or meaningful previews. Today we are discussing the dynamic streaming service they offer. 
Cloudinary has created [smart pre-defined](https://cloudinary.com/documentation/video_manipulation_and_delivery#adaptive_bitrate_streaming_hls_and_mpeg_dash) streaming profiles to help you out. A streaming profile holds a set of video variation definitions with different qualities, bitrates, and codecs. For example, one profile specifies 10 different variations ranging from extremely high quality to audio-only. You can also create [custom profiles](https://cloudinary.com/documentation/admin_api#adaptive_streaming_profiles) through their admin API. Once you have selected a profile, you upload your video file with an eager transformation that instructs the system to generate all the required files for the requested profile in either HLS or MPEG-DASH format. If you want to deliver both formats, add two [eager transformations](https://cloudinary.com/documentation/transformations_on_upload#eager_transformations) within your upload command. This upload code is for the Node.js SDK. ``` // This file is to be used in node.js and is for uploading your video file to Cloudinary. // This will not work in codesandbox and is here only for example purposes. 
// Run locally like: `node upload.js` const cloudinary = require('cloudinary').v2; // Create a Cloudinary account and fill out your credentials cloudinary.config({ cloud_name: '', api_key: '', api_secret: '', }); // Upload your file with the Cloudinary Uploader API cloudinary.uploader .upload('<your-video.mp4>', { resource_type: 'video', eager: [ // Specify what streaming profile you want to use { format: 'm3u8', streaming_profile: '4k' }, { format: 'mpd', streaming_profile: '4k' }, ], eager_async: true, eager_notification_url: '<your-notify-url>', public_id: '<your-public-id>', // This will be the public ID of the video }) .then((video) => { console.log('File Uploaded'); console.log(video.public_id); }) .catch((error) => { console.log('File Upload Error'); console.log(error); }); ``` Now that the file has been uploaded, it generates a bunch of different video and audio streams. These streams are represented in the playlist files below. For the HLS version of the video this is what comes out as the m3u8 playlist file: ``` #EXTM3U #EXT-X-STREAM-INF:BANDWIDTH=10712000,CODECS="avc1.640028,mp4a.40.2",RESOLUTION=3840x2160 /dwfcofnrd/video/upload/c_limit,w_3840,h_2160,vc_h264:high:4.0,br_35m/v1602940452/cloudinary-dynamic-video-streaming.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=5420000,CODECS="avc1.640028,mp4a.40.2",RESOLUTION=2560x1440 /dwfcofnrd/video/upload/c_limit,w_2560,h_1440,vc_h264:high:4.0,br_16m/v1602940452/cloudinary-dynamic-video-streaming.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=3248000,CODECS="avc1.640028,mp4a.40.2",RESOLUTION=1920x1080 /dwfcofnrd/video/upload/c_limit,w_1920,h_1080,vc_h264:high:4.0,br_8500k/v1602940452/cloudinary-dynamic-video-streaming.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=1400000,CODECS="avc1.4D401F,mp4a.40.2",RESOLUTION=1280x720 /dwfcofnrd/video/upload/c_limit,w_1280,h_720,vc_h264:main:3.1,br_5500k/v1602940452/cloudinary-dynamic-video-streaming.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=876000,CODECS="avc1.4D401F,mp4a.40.2",RESOLUTION=960x540 
/dwfcofnrd/video/upload/c_limit,w_960,h_540,vc_h264:main:3.1,br_3500k/v1602940452/cloudinary-dynamic-video-streaming.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=615000,CODECS="avc1.42C01E,mp4a.40.2",RESOLUTION=640x360 /dwfcofnrd/video/upload/c_limit,w_640,h_360,vc_h264:baseline:3.0,br_2m/v1602940452/cloudinary-dynamic-video-streaming.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=411000,CODECS="avc1.42C01E,mp4a.40.2",RESOLUTION=480x270 /dwfcofnrd/video/upload/c_limit,w_480,h_270,vc_h264:baseline:3.0,br_800k/v1602940452/cloudinary-dynamic-video-streaming.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=279000,CODECS="avc1.42C01E,mp4a.40.2",RESOLUTION=320x180 /dwfcofnrd/video/upload/c_limit,w_320,h_240,vc_h264:baseline:3.0,br_192k/v1602940452/cloudinary-dynamic-video-streaming.m3u8 ``` For the MPEG-DASH version of the video this is what comes out as the MPD playlist file (I have shortened the file for readability): ``` <MPD xmlns="urn:mpeg:dash:schema:mpd:2011" minBufferTime="PT1.500S" type="static" mediaPresentationDuration="PT0H0M28.800S" maxSegmentDuration="PT0H0M2.800S" profiles="urn:mpeg:dash:profile:full:2011"> <Period duration="PT0H0M28.800S"> <AdaptationSet segmentAlignment="true" maxWidth="1280" maxHeight="720" maxFrameRate="25" par="16:9" lang="und"> <Representation id="1" mimeType="video/mp4" codecs="avc1.42C01E" width="320" height="180" frameRate="25" sar="1:1" startWithSAP="1" bandwidth="188841"> <BaseURL>/dwfcofnrd/video/upload/c_limit,w_320,h_240,vc_h264:baseline:3.0,br_192k/v1602940452/cloudinary-dynamic-video-streaming.mp4dv</BaseURL> <SegmentList timescale="12800" duration="25600"> <Initialization range="0-909" /> <SegmentURL mediaRange="910-48949" indexRange="910-953" /> <SegmentURL mediaRange="48950-90844" indexRange="48950-48993" /> <SegmentURL mediaRange="90845-134433" indexRange="90845-90888" /> <SegmentURL mediaRange="134434-177434" indexRange="134434-134477" /> <SegmentURL mediaRange="177435-229116" indexRange="177435-177478" /> <SegmentURL mediaRange="229117-280431" 
indexRange="229117-229160" /> <SegmentURL mediaRange="280432-328048" indexRange="280432-280475" /> <SegmentURL mediaRange="328049-376769" indexRange="328049-328092" /> <SegmentURL mediaRange="376770-426815" indexRange="376770-376813" /> <SegmentURL mediaRange="426816-478009" indexRange="426816-426859" /> <SegmentURL mediaRange="478010-528551" indexRange="478010-478053" /> <SegmentURL mediaRange="528552-572601" indexRange="528552-528595" /> <SegmentURL mediaRange="572602-620003" indexRange="572602-572645" /> <SegmentURL mediaRange="620004-679828" indexRange="620004-620047" /> </SegmentList> </Representation> <Representation id="2" mimeType="video/mp4" codecs="avc1.42C01E" width="480" height="270" frameRate="25" sar="1:1" startWithSAP="1" bandwidth="346668"> <BaseURL>/dwfcofnrd/video/upload/c_limit,w_480,h_270,vc_h264:baseline:3.0,br_800k/v1602940452/cloudinary-dynamic-video-streaming.mp4dv</BaseURL> <SegmentList timescale="12800" duration="25600"> <Initialization range="0-909" /> <SegmentURL mediaRange="910-84012" indexRange="910-953" /> <SegmentURL mediaRange="84013-157030" indexRange="84013-84056" /> <SegmentURL mediaRange="157031-233498" indexRange="157031-157074" /> <SegmentURL mediaRange="233499-307813" indexRange="233499-233542" /> <SegmentURL mediaRange="307814-397973" indexRange="307814-307857" /> <SegmentURL mediaRange="397974-486089" indexRange="397974-398017" /> <SegmentURL mediaRange="486090-566671" indexRange="486090-486133" /> <SegmentURL mediaRange="566672-651620" indexRange="566672-566715" /> <SegmentURL mediaRange="651621-750051" indexRange="651621-651664" /> <SegmentURL mediaRange="750052-862906" indexRange="750052-750095" /> <SegmentURL mediaRange="862907-974846" indexRange="862907-862950" /> <SegmentURL mediaRange="974847-1059121" indexRange="974847-974890" /> <SegmentURL mediaRange="1059122-1143744" indexRange="1059122-1059165" /> <SegmentURL mediaRange="1143745-1248006" indexRange="1143745-1143788" /> </SegmentList> </Representation> 
<Representation id="3" mimeType="video/mp4" codecs="avc1.42C01E" width="640" height="360" frameRate="25" sar="1:1" startWithSAP="1" bandwidth="561940"> <!-- ... and many more ... --> </AdaptationSet> </Period> </MPD> ``` Now that we have the playlist files and all the video streams we can either build our own fancy video player that understands dynamic streaming or we go for the [Cloudinary player](https://cloudinary.com/documentation/cloudinary_video_player). In this case I suggest we use the Cloudinary player as it works out of the box. Check out the code sandbox for a very simple vanilla JavaScript example of loading the player for both HLS and MPEG-DASH. Try throttling your connection and see the differences in quality. To do this, open your web developer tools (assuming you use chrome), open the network tab and select a different connection type in the dropdown next to the "preserve log" and "Disable cache" checkboxes. The Cloudinary video player is based on [videojs](https://videojs.com/) and has both the HLS and MPEG-DASH plugins installed by default. In the code sandbox below you'll see both the HLS and the MPEG-DASH version. Beware that the HLS version has better support for showing different statistics than the MPEG-DASH version. See the code here: [https://codesandbox.io/s/white-cherry-g4ixt](https://codesandbox.io/s/white-cherry-g4ixt)
timbenniks
1,406,395
web seo gestión andorra
web seo gestión andorra : In recent times, a company's online presence is very...
0
2023-03-19T05:42:32
https://dev.to/ad700manag47099/web-seo-gestion-andorra-710
**[web seo gestión andorra](https://ad700management.com/como-conseguir-el-consultor-seo-adecuado-en-andorra/)**: In recent times, a company's online presence has become essential to extending its reach, capturing potential customers and maximizing sales. Strategic business owners are therefore investing equal amounts of money and effort into strengthening their digital marketing strategy. This can save them time and produce guaranteed results.
ad700manag47099
1,406,560
Appium automation code structure
Desired Capabilities: Desired capabilities are a set of key-value pairs that specify the...
0
2023-03-19T10:34:43
https://dev.to/khairunnaharnowrin/appium-automation-code-structure-fk9
**Desired Capabilities**: Desired capabilities are a set of key-value pairs that specify the characteristics of the device or emulator to be used for testing. This includes details such as device name, platform version, app package name, and app activity name. These capabilities are used to initialize the driver object. ``` @BeforeTest public void setup() throws Exception { DesiredCapabilities caps = new DesiredCapabilities(); caps.setCapability("deviceName", "Android Emulator"); caps.setCapability("platformVersion", "11"); caps.setCapability("appPackage", "com.example.myapp"); caps.setCapability("appActivity", "com.example.myapp.MainActivity"); caps.setCapability("automationName", "UiAutomator2"); caps.setCapability("noReset", true); driver = new AndroidDriver<AndroidElement>(new URL("http://127.0.0.1:4723/wd/hub"), caps); } ``` **Page Objects:** Page objects are classes that encapsulate the behavior and properties of the user interface elements of an app. They provide a layer of abstraction between the test code and the app's user interface. Each screen or page of the app can have a corresponding page object class. **Test Code:** The test code is written to interact with the user interface elements of the app through the page objects. It typically includes the steps that the user takes to navigate through the app, interact with the elements, and perform validations. ``` @Test public void loginTest() { LoginPage loginPage = new LoginPage(driver); loginPage.enterUsername("myusername"); loginPage.enterPassword("mypassword"); loginPage.clickLoginButton(); HomePage homePage = new HomePage(driver); Assert.assertTrue(homePage.isUserLoggedIn()); } ``` **Test Runner:** The test runner is a program that runs the test code and communicates with the Appium server to automate the app. There are different test runners available for Appium, including TestNG, JUnit, and Appium Studio. The test runner usually handles the configuration, setup, and execution of the tests. 
**Supporting files:** These are files that contain additional information or configuration for the tests, such as test data, configuration files, or utility classes. They can be organized in folders for better structure and maintainability. ``` - src - main - java - com - example - pages - LoginPage.java - HomePage.java - tests - LoginTest.java - utils - AppiumUtils.java - resources - app - MyApp.apk - test - java - com - example - runners - TestRunner.java ```
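The page-object idea described above can also be sketched in a few lines. Note this is a hedged illustration in JavaScript rather than the article's Java, and the `driver` here is any object exposing a `findElement(selector)` method — a simplified stand-in, not Appium's actual client API. The selector names are invented:

```javascript
// A simplified page object: it hides selectors and low-level driver calls
// behind intention-revealing methods, so tests never touch raw locators.
class LoginPage {
  constructor(driver) {
    this.driver = driver;
    // Selectors live in one place, so UI changes touch only this class.
    this.selectors = {
      username: '#username',
      password: '#password',
      loginButton: '#login',
    };
  }

  enterUsername(value) {
    this.driver.findElement(this.selectors.username).sendKeys(value);
  }

  enterPassword(value) {
    this.driver.findElement(this.selectors.password).sendKeys(value);
  }

  clickLoginButton() {
    this.driver.findElement(this.selectors.loginButton).click();
  }
}
```

A test then reads as intent — `page.enterUsername('myusername')` — instead of repeating selector lookups, which is exactly the layer of abstraction the page-object pattern provides.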
khairunnaharnowrin
1,406,561
5 Reasons That Will Justify the Surge of Online Learning
Introduction Online learning is a medium through which students can participate in courses...
0
2023-04-28T07:58:04
https://www.showwcase.com/show/34274/5-reasons-that-will-justify-the-surge-of-online-learning
programming, beginners, career, learning
## Introduction Online learning is a medium through which students can participate in courses available on the internet. People can choose to learn anything they want from the convenience of their own homes without having to travel to lecture halls or classrooms. On the other hand, subject-matter experts get the opportunity to share their knowledge in a rewarding way through online courses. In this article, we will try to justify this surge in digital education by discussing some crucial factors that are potentially increasing the value of online learning. They include: ### Freedom to Learn Anything Physical classrooms have some restrictions due to: **Budget:** As it is impossible for colleges and universities to recruit qualified instructors to teach every subject under the sun, they must streamline their curricula to accommodate the greatest demand. **Interest:** Physical schools are unable to give a class simply because one student is interested in it. If interest declines, they cut the class. **Syllabi:** The majority of college, university, and continuing education courses are required to adhere to a strict syllabus, which may omit material that students would like to learn. **Staffing:** In order to combat turnover, schools must regularly acquire new employees. To achieve financial expectations, they occasionally also have to reduce staffing (along with courses). All of these barriers are torn down by online learning, since it makes education more accessible. Online courses are available on almost any topic you can think of. The ability to delve as deeply into your subject as you'd like is another of the main advantages of online education. An endless hunger for information has been fueled by the internet. Free information, however, doesn't always go into adequate detail. E-learning spans the gap between publicly available knowledge and specialized training that students will gladly pay for. 
### Learn From Anywhere The aspect of flexibility and self-care that online learning gives you is one of its unnoticed benefits. Without the everyday struggle of making it to school on time, navigating traffic, or even pushing through illness to make it to class, you can still engage in mental training. You can continue your study without leaving the house if you choose online learning. You can also take your classes while traveling or at a different location rather than your home. Thanks to online courses, students also have the choice to learn in whatever setting is most effective for them. Some people require complete silence in order to concentrate. Some people need to be surrounded by activity or music to stay motivated. While classroom instruction forces a particular atmosphere and framework, online learning allows you to customize your surroundings according to your tastes. ### Reduced Education Costs Academic success has a considerable price tag, at least if you go the conventional route. In-person learning requires you to pay for everything that comes with it. This includes the course you have enrolled in, learning materials, classroom supplies and more. This might be expensive for many people. On the other hand, online courses are substantially less expensive. There are a few reasons behind this: **Affordable Expertise:** You don't require a professor from a university to teach you a language or a skill. People with specialized skills may not work in academia, but they provide extremely affordable courses online that are of great value. **Course Variety:** There are different types of online courses available on the internet. You get the opportunity to choose anything you want. But when you are limited to a single institution, you must choose from its course catalog. **Fewer Supplies:** Online classes don't need the desks, chairs, paper, writing implements, or any other items that are necessary in a physical classroom. 
**Reusable Content:** Instructors who design online courses can reuse the same materials with new students. They are not required to repeatedly administer the course in person. ### Study at Your Own Pace Some learners are quick to pick up new information, whereas others require repetition to completely absorb a concept. The style in which you learn is absolutely fine. There's nothing wrong with it. But if you learn in the incorrect setting, you'll squander both time and money. This is why E-learning systems and online classrooms use cutting-edge technologies to customize the experience for every type of student. Personalization is now a significant component of online learning. You can use a learning experience platform that will adapt the curriculum and your current learning materials based on your needs, any previous learning materials, and any skill gaps you may have to help you study at your own pace. In this environment, you don't feel rushed or restricted because you aren't in competition with anyone else. You can fast-forward through or go over the information again as many times as you need to be comfortable with your understanding. ### Boost Your Resume Online learning can also help job seekers who want to give their resumes a little more oomph. You might not have time to attend a nearby college campus to pick up those extra skills because looking for work can feel like a full-time endeavor. In this case, going for online courses is a smart choice. Choose courses based on your tech stack or the skillset that your ideal job demands. You can also pick up new skills that could help you stand out as a candidate. Another advantage of taking lessons online is that you can complete them more rapidly than you would in a traditional classroom setting during a semester. This means you don't have to wait several months to add your new skills to your CV. ## Conclusion Going for traditional education is absolutely fine. 
In fact, if you want to have a degree on your resume, you should pursue it. However, online education opens up a whole new realm of learning. Among the benefits of online learning are flexibility, subject diversity, and the opportunity to broaden your employment options. So if you are thinking of enrolling in an online course, do it. The internet might just turn into your favorite classroom. Finally, thank you for reading my article! Want to connect? Here are my socials: - [LinkedIn](https://www.linkedin.com/in/sriparno08/) - [Twitter](https://twitter.com/Sriparno08) - [DEV](https://dev.to/sriparno08) - [GitHub](https://github.com/Sriparno08)
sriparno08
1,406,601
How to use mixins - Part 1
If you've ever checked the source code of a CSS framework, you might have noticed the use of @mixin...
0
2023-03-19T12:42:05
https://dev.to/sanjusudheerm/how-to-use-mixins-part-1-43h9
If you've ever checked the source code of a CSS framework, you might have noticed the use of @mixin statements. Have you ever wondered how mixins work, or how you can incorporate them into your code? To elaborate, mixins are a Sass utility that allows you to encapsulate styles into a single rule. You define a mixin using the @mixin statement, and then include it in your styles using the @include statement. (Please refer to the following URL to install Sass on your machine: https://sass-lang.com/install) ``` //_global.scss // syntax @mixin <any valid sass identifier>(optional arguments) { ... properties } ``` ``` //_global.scss //example // creating a mixin to reset the anchor tag style, here updating the two properties of the selector in which this is being used. @mixin reset-anchor-tag { text-decoration: none; color: $font-clr; } // applying the mixin `reset-anchor-tag` on the anchor tag as follows a { @include reset-anchor-tag; } ``` In the example above, the code updates the provided properties regardless of where it is used. Let's examine another example that specifically resets the style of the anchor tag. ``` //_global.scss @mixin reset-anchor-tag { a { text-decoration: none; color: $font-clr; } } // the style will reset only for the anchor tags inside the p tag p { @include reset-anchor-tag; } // to do it globally @include reset-anchor-tag; ``` Suppose you need to modify only the color and text-decoration of anchor tags that belong to the .container class, instead of resetting the properties to their default values. How can this be achieved? To accept parameters, we can update the mixin code as follows. We have added default values, which will be used if no arguments are provided. 
``` //_global.scss // here `none` and `$font-clr` are the default values @mixin reset-anchor-tag($decoration-type: none, $color: $font-clr) { a { text-decoration: $decoration-type; color: $color; } } //usage @include reset-anchor-tag; .container { @include reset-anchor-tag(underline, #3c37ff); } ``` If we have to change only certain arguments, we can pass just those instead of passing all of them, as follows. ``` // if you wish to update the color of the tag instead of the decoration type, the mixin can be used as follows .container { @include reset-anchor-tag($color: #3c37ff); } // in this case, the text-decoration property will keep its default value and the color will use the value provided here. ```
sanjusudheerm
1,406,652
Thoughts on the current state of AI
Here we are! AI is the big thing everyone was expecting for years now. It's not a new idea. More than...
22,485
2023-03-19T13:22:14
https://ochsenbein.red/blog/musings-on-ai/
ai, openai, datascience, chatgpt
Here we are! AI is the big thing everyone was expecting for years now. It's not a new idea. More than half a century ago science fiction writers were already talking about it. But now it is here. What will we do with it? Where will it lead to? What happens to our society? To be honest: Nobody knows what the future will be like. 10 years ago I was overly optimistic about autonomous cars. I predicted that within 5 years every new car will be self-driving. Well, 10 years in and it still seems to be coming within 5 years (probably not). But what do I think about LLMs like GPT-4 or ChatGPT, or things like DALL-E and Midjourney? I'll try to take you on a journey through my thoughts. ## The optimistic view ### Chances in healthcare Advancements in technology have the potential to revolutionize healthcare in a number of ways, from more efficient diagnosis and treatment to better access to care for underserved populations. Telemedicine and digital health platforms are becoming increasingly popular, allowing patients to receive remote consultations and access to medical information and resources. I think, most importantly AI and machine learning can also help healthcare professionals make more accurate diagnoses and develop personalized treatment plans for patients. No medical doctor in this world is able to keep up with all the new developments and papers. Especially when it comes to rare diseases it will be helpful to be able to feed an AI with symptoms and other data and retrieve pointers to things the doctor might not even have heard of. Those developments will increase the quality of life for many when we manage to make it accessible and affordable for everyone. ### Chances for society Emerging technologies also offer many benefits for society at large. The use of autonomous vehicles could improve road safety, reduce traffic congestion, and cut down on greenhouse gas emissions. 
Smart cities could help us better manage resources, from water and electricity to waste disposal. The technologies can help to address social issues such as poverty and inequality, by providing new tools and opportunities for education and economic empowerment. Increased automation could lead to a redefinition of work. Society might finally be able to fulfil the promise of a more purposeful life and not having to rely on a job just to be able to survive. ### Chances for the environment The new technologies have the potential to help us address some of the biggest environmental challenges we face today. Renewable energy technologies such as solar and wind power can help us transition away from fossil fuels and reduce our carbon footprint. AI can help us find improved ways of reducing unnecessary transportation, make supply chains and distribution of resources more efficient. Advanced agricultural technologies such as precision farming can help us reduce waste, conserve water, and increase food production in a sustainable way. ## The pessimistic view ### Black box problem One of the main concerns with AIs is the so-called "black box problem." As machine learning algorithms become more complex and powerful, it becomes increasingly difficult for humans to understand how they are making decisions. In a neural network, you can't just go in, take a look at any part of it and extrapolate the final result from there. As far as I see it, never before we built something we can't comprehend in such a way. If you think about it, we build incredible machines, but we always knew how each of its parts contributed to the whole. This is no longer true in neural networks and other algorithms. This lack of transparency can be problematic, particularly in areas such as healthcare, where decisions can have life-or-death consequences. There is also the risk of algorithmic bias, where the algorithms reflect and reinforce existing biases and inequalities in society. 
As I heard a CEO say once: "Why should we care about biases in algorithms? Humans have biases, too." This sums up the problem quite well. There seems to be a lack of awareness in the field.

### Alignment problem

Quite similar to the black box problem is the alignment problem. Since we can't really see into the inner workings of the decision-making process of those models, we can't know whether the goal of the AI is actually aligned with the goal we think we gave it. There are several examples of this. Can we be sure an image classifier identifies something because of the thing itself, or just because something else always happens to be in those images? Is a doctor a doctor because she has a stethoscope? Or is it the gender? Or anything else?

### Social acceptance automaton

Think about how we train a language model: first we train a model which should learn how a human would react to the model's output, and then we train the actual model against that other model. Does the language model really learn to give accurate and correct information, or does it learn to please the person interacting with it? In other words, a language model might just learn to tell us what we want to hear. How can we make sure the model actually learns to be correct? Are there enough experts in every field involved in the training of the models? I think this is one of the hardest problems we'll have to address, and honestly, I'm not sure if it can be fixed in any way.

### Knowledge gap

There is also a risk that emerging technologies will widen the gap between the haves and have-nots. As AI and automation disrupt industries and create new jobs, there is a risk that some people will be left behind, lacking the skills and education necessary to participate in the new economy. This could lead to a world where a small elite holds most of the wealth and power, while the rest of society struggles to get by. I recently heard someone talk about how AIs will wipe out software development jobs.
It was suggested a Senior Developer will no longer have to rely on Juniors because he could just assign the tasks to AI code-generators and then review their code. My question would be: Where would those Seniors come from if there are no more Junior positions? How would anyone gain the experience and knowledge to properly assess the generated code? Another problem with that might be that the lonely Senior Dev might be stuck with a suboptimal AI system for the tasks to be done. It would probably be harder to switch to a different system compared to having a pool of different people. ### Source acknowledgement AIs require transparency and accountability from those who are developing and deploying the technologies. There is often a lack of clarity around who is responsible for ensuring that emerging technologies are used ethically and responsibly, particularly in cases where multiple stakeholders are involved. The black box problem mentioned earlier does not make this any easier. If we are not even able to describe how exactly a system comes to the result, how will we be able to acknowledge the sources? If the text is basically a string of the most probable words, how will we be able to acknowledge the source? But being able to assess the sources is important in today's and future world. ### Technological feudalism There is a concern that new technologies could lead to a new kind of feudalism, with a few powerful corporations and individuals controlling vast amounts of wealth and power. As data becomes the new oil, those who control it will wield immense power over society. There is also a risk of monopolies forming in key industries, stifling competition and innovation. ## Final thoughts The new developments in AI and Machine Learning offer both great promise and significant risks. It is up to us to ensure that these technologies are developed and deployed in a way that maximizes their benefits while minimizing their risks. 
This will require careful consideration of the ethical, social, and environmental implications of emerging technologies, as well as a commitment to inclusive and equitable development. By working together, there is some hope we can build a better future for all. But to be frank: I'm sceptical. Acknowledgement: This article was written with the help of but not by ChatGPT.
syeo66
1,406,664
The Ultimate Checklist to Streamline Your Workflow and Boost Your Income
Hey there, coding aficionados! Are you ready to take your full stack web development game to the next...
0
2023-03-20T13:07:31
https://dev.to/jimmymcbride/the-ultimate-checklist-to-streamline-your-workflow-and-boost-your-income-1b91
webdev, career, discuss, productivity
Hey there, coding aficionados! Are you ready to take your full stack web development game to the next level? Today, I'm bringing you the ultimate checklist of five full stack web development hacks that'll not only streamline your workflow but also help you rake in that sweet, sweet cash. Buckle up for a wild ride filled with clever insights, witty banter, and high-octane coding wisdom!

## 1: Embrace the power of automation

Why do things manually when you can let technology do the heavy lifting for you? Successful full stack developers know that automation is the key to a smooth and efficient workflow.

How to hack it: Use task runners like Grunt or Gulp, version control systems like Git, and automate your testing with frameworks like Selenium. Say hello to a stress-free workflow and more time for money-making projects!

## 2: Master the art of keyboard shortcuts

Fancy yourself a coding ninja? Then it's time to master the art of keyboard shortcuts! Shave precious seconds (or even minutes) off your tasks and watch your productivity skyrocket.

How to hack it: Learn the essential keyboard shortcuts for your IDE, text editor, and browser. Create custom shortcuts for frequently used actions. Also, consider learning the Vim motions and become a true wizard! Most IDEs have great Vim emulation; JetBrains makes my favorite IDEs because their Vim emulation is so good! Before you know it, you'll be a bona fide keyboard wizard!

## 3: Turbocharge your development environment

A well-organized and customized development environment is like a finely tuned race car - it's built for speed and efficiency. Give your workflow a turbo boost by optimizing your tools and settings.

How to hack it: Choose a powerful IDE or text editor, customize your terminal or command prompt, and make use of browser extensions like Web Developer Toolbar or React DevTools. It's time to leave your competition in the dust!
## 4: Harness the might of reusable code Why reinvent the wheel when you can build upon the genius of others (and your past self)? Successful full stack developers know the value of reusable code in saving time and streamlining their workflow. How to hack it: Make use of code libraries, frameworks, and APIs to simplify your projects. Don't forget to create your own code snippets for common tasks. Reusable code is like a secret weapon in your coding arsenal! ## 5: Stay focused with the Pomodoro Technique Distractions are the enemy of productivity, and full stack developers are no exception. Keep your focus razor-sharp and power through tasks with the tried-and-true Pomodoro Technique. How to hack it: Set a timer for 25 minutes, and work on your task without interruptions. When the timer rings, take a 5-minute break. Repeat this process until your task is complete. Watch your productivity soar and your income follow suit! ## Bonus Section: The Flow State - Unleash Your Inner Coding Superhero! Whoa! Hold onto your keyboards, folks, because I've got a BONUS hack for you that's sure to blow your minds. While the Pomodoro Technique is fantastic for staying focused, sometimes you need to unleash your full coding potential and tap into the power of the flow state. The flow state, also known as being "in the zone," is when you're completely absorbed in your work, time flies by, and productivity soars to superhero levels. Studies (and my personal experience) show that flow states usually last around 90 minutes, followed by a 15-25 minute break to recharge and refuel. It's like surfing a wave of pure coding genius! How to hack it: 1. Set the stage: Create a distraction-free environment, put on your favorite focus-enhancing tunes, and ensure you have everything you need within reach. 2. Dive in: Immerse yourself in your coding task, and let the magic happen. Allow yourself to get lost in the world of code, and ride that wave of productivity. 3. 
Recharge: After your 90-minute coding marathon, take a well-deserved break. Stretch, grab a snack, or take a quick walk. Your brain (and your future income) will thank you! So next time you feel the pull of the flow state, don't resist it. Embrace your inner coding superhero and watch your productivity (and income) reach new heights! --- And there you have it, code warriors! By implementing these full stack web development hacks, including our BONUS flow state hack, you'll streamline your workflow, boost your productivity, and ultimately, increase your income. It's time to unleash your coding prowess and claim your spot at the top! Now, I want to hear from you! What are your favorite full stack web development hacks? How have they helped you improve your workflow and boost your income? Share your tips, tricks, and experiences in the comments below, and let's keep the conversation going. Together, we can take the full stack web development world by storm!
jimmymcbride
1,406,839
Overcoming Resistance in Software Engineering
As software engineers, we face numerous obstacles in our work. From challenging coding problems to...
0
2023-03-19T17:37:18
https://dev.to/_jmsrsd/overcoming-resistance-in-software-engineering-3no8
As software engineers, we face numerous obstacles in our work. From challenging coding problems to demanding deadlines, it can be easy to become overwhelmed and discouraged. But one of the biggest obstacles we face is something that is often internal: Resistance. Resistance can take many forms, from negative self-talk to self-doubt and fear of failure. It can be paralyzing and prevent us from doing our best work. However, recognizing Resistance for what it is can be the first step in overcoming it. By acknowledging its presence and understanding its impact on our work, we can begin to take action to push through it. One of the most effective ways to overcome Resistance is to stay disciplined. This means showing up every day and doing the work, even when we don't feel like it. It can be easy to get caught up in distractions or feel overwhelmed by the enormity of a particular project, but by remaining focused and committed to our work, we can make steady progress and ultimately achieve our goals. Having faith in our abilities is another critical component of overcoming Resistance. As software engineers, we are constantly learning and growing, and it's important to trust in ourselves and our skills. This doesn't mean that we'll always have the answer or that everything will come easily, but it does mean that we have the ability to figure it out. In conclusion, overcoming Resistance is a natural part of the creative process, but it is not insurmountable. By recognizing Resistance, staying disciplined, and having faith in our abilities, we can push through the obstacles and achieve our goals as software engineers. So the next time you feel Resistance creeping in, take a deep breath, stay focused, and keep pushing forward.
_jmsrsd
1,406,851
Setting a Theme in Svelte using Hooks
This is a follow-up post to Secure Authentication in Svelte using Hooks to expand on our Hooks file....
22,310
2023-03-19T17:56:22
https://dev.to/brewhousedigital/setting-a-theme-in-svelte-using-hooks-162i
svelte, theme, javascript, tutorial
This is a follow-up post to [Secure Authentication in Svelte using Hooks](https://dev.to/brewhousedigital/secure-authentication-in-svelte-using-hooks-k5j) to expand on our Hooks file. If you're looking for a way to implement authentication in your SvelteKit app, definitely check out that article.

If you've ever built a site with multiple themes, there is a chance you've run into the annoying problem of having the site flash before the theme value is calculated client side. In this tutorial, we'll set up the `hooks.server.js` file to allow for a dynamic theme to be processed _before_ sending it to the client.

## Initial Setup

If you don't already have a `hooks.server.js` file, the most basic version looks like this:

```js
// hooks.server.js
export const handle = async ({event, resolve}) => {
    const response = await resolve(event);
    return response;
}
```

This is explained in the previous post, but here is a refresher for any new readers: The `event` property contains all the information related to the file request. This includes the user's cookies, the browser's HTTP headers, and the URL of the specific request. You can read more in-depth docs here: [SvelteKit Hooks](https://kit.svelte.dev/docs/hooks). The second item, `resolve`, is the function that creates the finished HTML. Inside our `handle()` function is the `await resolve(event)` call. This is a SvelteKit thing that essentially tells the server that it is ready to build the HTML before sending it to the client. Returning that `response` value renders the page as normal.

In this tutorial, we'll be using a simple HTML data attribute to handle which CSS will be used. In your `app.html` page, go ahead and add this to your `<html>` tag like so:

```html
<html lang="en" data-theme="">
```

That is all the setup we need for the HTML page. Go ahead and close that and let's open up our `hooks.server.js` file.
Now we want to modify our `resolve()` function to look for the string `data-theme=""` and replace it with the user's theme cookie value. We can do that like this:

```js
// hooks.server.js
export const handle = async ({event, resolve}) => {
    const response = await resolve(event, {
        // Processing will go here
    });
    return response;
}
```

First step is to add an object as the second property of `resolve()`. Inside here, we can call the Svelte object property `transformPageChunk`, and pass it a function. That will look like this:

```js
// hooks.server.js
export const handle = async ({event, resolve}) => {
    const response = await resolve(event, {
        transformPageChunk: ({html}) => {
            // This section will modify the HTML
            // before being returned to the client
        }
    });
    return response;
}
```

We want to first check if the user's theme cookie even exists. This could be a new user without a cookie, or an existing user with one. An easy way to do that is to check `event.cookies`, available on the `event` passed to our root `handle()` function:

```js
// hooks.server.js
export const handle = async ({event, resolve}) => {
    const response = await resolve(event, {
        transformPageChunk: ({html}) => {
            // This section will modify the HTML
            // before being returned to the client
            let currentTheme = event.cookies.get("theme");

            // Make sure the cookie was found; if not, set it to dark
            if(!currentTheme) {
                currentTheme = "dark";
                event.cookies.set("theme", currentTheme);
            }
        }
    });
    return response;
}
```

This value will return undefined if the cookie doesn't exist, so it is useful to add in a default. We can also use this as an opportunity to set a theme cookie for all new users. In the above code, we're setting the default to `dark`.

Now that we have our theme value ready, we can update the HTML before it is sent to the client. The easiest way to do that is by using the `replace()` JavaScript function on the entire HTML.
It is useful to note that `replace()` is non-destructive, so if you want to do additional processing, you must save the result into a new variable.

```js
// hooks.server.js
export const handle = async ({event, resolve}) => {
    const response = await resolve(event, {
        transformPageChunk: ({html}) => {
            // This section will modify the HTML
            // before being returned to the client
            let currentTheme = event.cookies.get("theme");

            // Make sure the cookie was found; if not, set it to dark
            if(!currentTheme) {
                currentTheme = "dark";
                event.cookies.set("theme", currentTheme);
            }

            return html.replace(`data-theme=""`, `data-theme="${currentTheme}"`);
        }
    });
    return response;
}
```

And just like that, your theme is now being processed before being sent to the client side. No more flashing!

## CSS Example

If you're curious how to use CSS to access this, you can utilize CSS variables like so:

```css
:root {
    --body: #fff;
    --text: #000;
}

[data-theme='dark']:root {
    --body: #000;
    --text: #fff;
}

body {
    background-color: var(--body);
    color: var(--text);
}
```
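If you want to sanity-check the string replacement without spinning up SvelteKit, the transform can be factored into a small pure function and run with plain Node. This is just an illustrative sketch; the `applyTheme` name and the sample markup are made up for the demo:

```javascript
// Hypothetical helper that mirrors what the transformPageChunk callback does:
// swap the empty data-theme attribute for the resolved theme value.
function applyTheme(html, theme) {
  // replace() is non-destructive, so we return the new string
  return html.replace(`data-theme=""`, `data-theme="${theme}"`);
}

const page = `<html lang="en" data-theme=""><body>Hello</body></html>`;
console.log(applyTheme(page, "dark"));
// → <html lang="en" data-theme="dark"><body>Hello</body></html>
```

Note that only the empty attribute is replaced, so markup that already carries a theme value is left alone.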
brewhousedigital
1,426,315
This is my first doc
Hello world
0
2023-04-04T21:45:03
https://dev.to/matheolm/this-is-my-first-doc-59go
Hello world
matheolm
1,406,947
Apache-age: A Powerful and Open Source Graph Database Solution. [Part # 2]
Querying the Data Welcome back to our series on building a social network app with...
0
2023-03-19T19:38:10
https://dev.to/kamleshmmb45/apache-age-a-powerful-and-open-source-graph-database-solution-part-2-c82
apacheage, postgresqextension, database, opensource
## Querying the Data

Welcome back to our series on building a social network app with Apache AGE! In our previous blog post, we covered the basics of data modeling, setting up Apache AGE, creating the graph, and adding sample data. Now that we have our graph populated with users, posts, comments, and relationships, it's time to start querying the data. In this post, we'll explore some example queries for retrieving useful information from our social network app. If you haven't read our previous post yet, we recommend starting there to get caught up on the basics of building a social network app with Apache AGE [here](https://dev.to/kamleshmmb45/building-a-social-network-app-with-apache-age-a-beginners-guide-31fj).

## Query to get all posts

```
SELECT * from cypher('social_network', $$
    MATCH (p:Post)
    RETURN p.title, p.content
$$) as (V agtype, C agtype);
```

This query returns the title and content of all posts in the graph.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h1clxatnem7ws8oxuz9r.png)

## Get all posts and their comments

```
SELECT * from cypher('social_network', $$
    MATCH (V)-[R:COMMENTED]-(V2)
    RETURN V,R,V2
$$) as (V agtype, R agtype, V2 agtype);
```

This query returns the title of each post and the content of each comment on that post.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mek5eehhymlrneq5bpdd.png)

## Get all posts and their likes

```
SELECT * from cypher('social_network', $$
    MATCH (p:Post)<-[:LIKED]-(u:User)
    RETURN p.title, COUNT(u) as likes
$$) as (V agtype, R agtype);
```

This query returns the title of each post and the number of likes it has.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1epp4pxezbqcu9xwa3fk.png)

## Get all posts and their authors

```
SELECT * from cypher('social_network', $$
    MATCH (u:User)-[:POSTED]->(p:Post)
    RETURN p.title, u.name
$$) as (V agtype, R agtype);
```

This query returns the title of each post and the name of its author.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/diemur8h3u4uroc9cezw.png)

## Get all users and the users they follow

```
SELECT * from cypher('social_network', $$
    MATCH (u:User)-[:FOLLOWS]->(f:User)
    RETURN u.name, COLLECT(f.name) as following
$$) as (V agtype, R agtype);
```

This query returns the name of each user and a list of the names of the users they follow.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pbco1cdnipotp9henxj4.png)

## References

For further help you can look through the documentation [here](https://age.apache.org/) and the Apache AGE repo [here](https://github.com/apache/age).
kamleshmmb45
1,423,734
Second check-in for project 2
A link to the github repo can be found here. A lot of progress has been made since the last post. ...
0
2023-04-02T23:54:57
https://dev.to/pjstrauss12/second-check-in-for-project-2-5g6d
A link to the GitHub repo can be found [here](https://github.com/pjstrauss12/badge-list). A lot of progress has been made since the last post. The badge API is wired up and the CSS code is near completion to get it looking like our comp. We unfortunately have not been able to get the search functions up and running yet, so that will be the main focus in the coming days. We have a good idea of how we will get the search API working, and that can be shown with the image below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ifpiprlfndy5wxfv4rjo.png)

The problem with a monolithic design is that changes to one part of the system can cause issues with other parts of the system. In a business setting where time equals money, downtime due to glitches would result in a loss of revenue for the company. By implementing a microservice architecture, our company could section off many parts of the monolithic design, making the micro sections easier to manage and fix when something goes wrong. You can also make changes to specific parts of the service without worrying about the rest of the system. Also, microservices can be developed and deployed independently of each other.

One of the questions that we had is whether we could get an idea of where to begin with getting the search API started. We need a bit more of an idea of how to do it so we can visualize it and get it implemented.
pjstrauss12
1,423,741
Git branches
In this blog post we will be making an in-depth review of the git branches models. Git branches are a...
22,466
2023-04-03T00:35:20
https://dev.to/alcb1310/git-branches-1og6
webdev, git, github, tutorial
In this blog post we will be making an in-depth review of the git branching model. Git branches are part of your everyday development process; each branch is effectively a snapshot of your changes. Every time you want to update your code, whether for a new feature, a bug fix, or any other modification, independent of its size, you can encapsulate the changes in a branch. This approach makes it harder for unstable code to get merged into your production branch.

## How to create a branch

There are two main ways to create a new branch:

```bash
git branch my_new_branch
git checkout my_new_branch
```

This will create a copy of the branch you are on into a new branch called `my_new_branch` and then make it the active branch.

The other way to create a branch is:

```bash
git checkout -b my_new_branch
```

This has the same effect as the two commands shown before; I personally like this second way because it does the same thing in a single command.

Making a new branch for your changes is important because it will not modify your stable branch, and you can make your changes with confidence that you will not break your main branch by accident.

## Branches structure

When you initialize a git repository you will get a first branch that is usually named `main`. This branch is usually the one that is sent to production, so making changes directly to it is not recommended because of the risk of introducing untested and unstable code. Even in small projects, I personally like to create a separate branch called `development` where all of my changes go, so I can run the required tests to ensure code quality before "copying" those changes into the main branch. When working on larger projects, especially when working on a team, I like to create a different branch for each issue I have to implement. The issues can be a feature, fix, refactor, etc.
## Branch commands

To list all of the branches in your repository you can run:

```bash
git branch
```

To delete a branch:

```bash
git branch -d my_new_branch
```

This is a safe way to delete the branch, because it will not take effect if your branch is unmerged. However, if you want to delete a branch even if it is not merged, you should run:

```bash
git branch -D my_new_branch
```

To rename your current branch:

```bash
git branch -m branch_new_name
```

## Uploading your branch to GitHub

Now that we've created our branch, we would like to have it on GitHub so every member of the team can see it. To do so, run:

```bash
git push origin branch_new_name
```

## Merging branches locally

Merging is git's way of combining the changes from one branch into another; it combines multiple commits into one unified history. So we can merge our feature branches into development, and development into main. To merge, we first need to go to the receiving branch with the `git checkout` command:

```bash
git checkout development
```

Now that we are on the development branch, we need to make sure we have all of the latest remote commits. To do so, execute:

```bash
git fetch
git pull origin development
```

Finally, to actually merge the changes, run the following command:

```bash
git merge branch_new_name
```

After the branch is merged I highly recommend deleting the feature branch; this will make your repository easier to understand and you will know which features are still being worked on.

## Merging in GitHub

Once you have made all of your changes and committed them, push your branch to GitHub as we reviewed earlier in this post. In GitHub, a merge is done through a ***Pull Request***.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmh00j3qwovy9ayk08xl.png)

To make a Pull Request (PR), go to the Pull Request section on your project's GitHub page and press the New Pull Request button.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/85qt4633eom5n62ywduo.png)

The base branch should always be the branch you want to merge into, and the compare branch should be the branch that contains the changes. Then press the Create Pull Request button.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n43ci5cuen0vq661vu9z.png)

Then, after you have run your tests, you can just press the Merge pull request button.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6w5y4wgdwo3patg0v4jl.png)

To confirm your merge, you need to press the Confirm merge button.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mo9bgeom4i76lqubb9no.png)

Finally, as I stated earlier, it is important to clean up your merged branches, so you can just press the Delete branch button.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jjnjnw95yr7kvvwb94ua.png)

## Conclusions

In this post we reviewed how to work with branches both locally and on GitHub. One important thing: when working on a solo project, merging your branches locally or on GitHub is similar, but when working on a team, it is important to use the GitHub method so that any time a PR is created it has to be reviewed by a different team member, making sure the changes produce the expected results.
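To tie the local commands together, here is a sketch of the whole workflow run against a throwaway repository in a temporary directory. The branch names, file, and commit messages are made up for illustration:

```shell
#!/bin/sh
# Demo of the branch workflow: init a repo, branch per issue, merge, clean up.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "v1" > app.txt
git add app.txt
git commit -q -m "initial commit"

git checkout -q -b development       # stable working branch
git checkout -q -b feature/login     # one branch per issue
echo "login form" >> app.txt
git commit -q -am "add login form"

git checkout -q development
git merge -q feature/login           # combine the changes into development
git branch -d feature/login          # clean up the merged branch
git log --oneline
```

After running it, `development` contains the feature commit and the feature branch is gone, which is exactly the state we want before opening a PR toward `main`.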
alcb1310
1,423,744
VSCode - Hidden Browser Inside
There are a lot of options like npm packages or VSCode extension to create a live server. For me, one...
15,252
2023-04-03T02:27:32
https://dev.to/equiman/vscode-browser-inside-2b06
vscode, productivity, webdev, programming
There are a lot of options like **npm packages** or **VSCode extensions** to create a live server. For me, one of the best is using Vite as a live server:

```bash
npx vite dev --port 3000
```

It's my favorite option because it can be used on projects that don't use Vite under the hood, giving us hot reload on a simple project with vanilla HTML, CSS, and JavaScript, and it also works with debugger extensions like [Console Ninja](https://marketplace.visualstudio.com/items?itemName=WallabyJs.console-ninja).

---

That's a good piece of information to have, but surely that is not what you are here for, right? Ok, I'll show you a hidden 💎 in VSCode where you can open this live server (or any page) in a new tab with a simple browser inside.

Open the command palette with `ctrl+shift+p` and search for **Simple Browser**.

![simple-browser](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rxu38b4vfu2dmlr8khzk.png)

Press the `return` key, write `http://localhost:3000` (or the URL you want to open), and press the `return` key again.

![localhost](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7btd3kyfbtbb4nmyops.png)

It will open a new tab inside VSCode with a simple browser, and you can move it aside to see the code and the result at the same time.

![new-tab](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mjcti91ikk1gfy924yv6.png)

---

That was good, but it takes a lot of steps and effort. I'll show you how to create a task and assign a shortcut to open the simple browser easily.

Start by creating a `.vscode` folder with a `tasks.json` file inside the project, or if you want it to be available at the profile level (no matter the project), press `ctrl+shift+p` (or `cmd+shift+p` on macOS) and run the command **Tasks: Open User Tasks**.
```json
# 📄 File: /.vscode/tasks.json
-----------------------------------
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Simple Browser",
      "command": "${input:openSimpleBrowser}",
      "problemMatcher": []
    }
  ],
  "inputs": [
    {
      "id": "openSimpleBrowser",
      "type": "command",
      "command": "simpleBrowser.show",
      "args": [
        "http://localhost:3000"
      ]
    }
  ]
}
```

And also define the shortcut using the `File > Preferences > Keyboard Shortcuts` menu.

```json
# 📄 File: keybindings.json
-----------------------------------
// Place your key bindings in this file to override the defaults
[
  // Browser
  {
    "key": "alt+shift+b",
    "command": "workbench.action.tasks.runTask",
    "args": "Simple Browser"
  },
]
```

The next time, you can open the simple browser by pressing `alt+shift+b`.

---

## You don't need that extension 🚫

By the way, you don't need an extension like [Live Preview](https://marketplace.visualstudio.com/items?itemName=ms-vscode.live-server) (or any of the many others that exist) to achieve this. But if you prefer to have all the browser dev tools inside VSCode instead of using the browser, then use the extension.

---

**That’s All Folks! Happy Coding** 🖖

[![beer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mkgsx4aw6qn9i9ax6as9.png)](https://github.com/sponsors/deinsoftware?frequency=one-time&sponsor=equiman)

> **Sources**
> - [VS Code task to open VS Code's simple browser with a url](https://stackoverflow.com/questions/68486893/vs-code-task-to-open-vs-codes-simple-browser-with-a-url)
equiman
1,423,771
How to monitor an app in production using Discord channels
Observability is an extremely important practice for you having an idea of what is happening in your...
0
2023-04-03T14:34:05
https://dev.to/vinibgoulart/how-to-monitor-an-app-in-production-using-discord-chanels-foh
tutorial, typescript, programming
Observability is an extremely important practice for having an idea of what is happening in your app and receiving updates from new interactions.

We will see how to send new messages to a Discord channel; this allows us to monitor interactions in production of our application, such as a new API call, a new login, or even a new record in the database. In this example we will monitor a route, and on each call to it we will send a message to Discord with some data.

If you haven't created a project yet, just run `yarn init -y` to create a new one.

Install some dependencies:

```bash
yarn add express axios nodemon
```

Create a Discord channel that will be used to monitor and receive new updates. After creating it, we need to create a new webhook for this channel. Go to Server Settings > Integrations > Webhooks > New Webhook, then select the created channel and copy the link.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/25owzfc5tkic8076v1yn.png)

Back to the code... create a `config.js` file to define our variables (note the CommonJS export, so it can be loaded with `require` below):

```js
// config.js
const config = {
  API_PORT: 4000,
  DISCORD_CHANNEL_WEBHOOK: '<your discord channel webhook>',
};

module.exports = { config };
```

Create a new express server with a simple route:

```js
// index.js
const express = require('express')
const { config } = require('./config')
const axios = require('axios')

const app = express()

app.get('/observability', async (req, res) => {
  await axios.post(config.DISCORD_CHANNEL_WEBHOOK, {
    content: 'Hello World'
  }, {
    headers: {
      Accept: 'application/json',
      'Content-Type': 'application/json',
    }
  })

  res.send('observability done successfully')
})

app.listen(config.API_PORT, () => {
  console.log(`Server listening on port ${config.API_PORT}`)
})
```

A POST request will be made to this Discord channel webhook with the content.
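The plain `content` string works, but Discord webhooks also accept an `embeds` array, which renders each event as a card with named fields instead of raw text. Below is a hedged sketch of a reusable payload builder; the helper names (`buildAlertPayload`, `sendAlert`) and the field layout are my own choices, not part of the article's project:

```javascript
// Hypothetical helper: builds a Discord webhook payload that uses an embed
// (title + key/value fields) instead of a plain `content` string.
function buildAlertPayload(event, details = {}) {
  return {
    embeds: [
      {
        title: event,
        color: 0x5865f2, // Discord "blurple"
        timestamp: new Date().toISOString(),
        fields: Object.entries(details).map(([name, value]) => ({
          name,
          value: String(value),
          inline: true,
        })),
      },
    ],
  };
}

// Posting the payload works exactly like the article's axios call; on Node 18+
// the built-in fetch could be used instead (defined but not invoked here).
async function sendAlert(webhookUrl, event, details) {
  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildAlertPayload(event, details)),
  });
}

console.log(JSON.stringify(buildAlertPayload('api.call', { route: '/observability' })));
```

Inside the route handler, something like `axios.post(config.DISCORD_CHANNEL_WEBHOOK, buildAlertPayload('api.call', { route: req.path }))` would then produce a structured card in the channel instead of a bare "Hello World".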
Let's run the code: open the terminal and run `nodemon index.js`.

Now, if you access the URL `localhost:4000/observability` you will see this:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tb2kuguwurqoo1wtbxnt.png)

And if you check your Discord channel, it has something like this:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y7jepewbrzphm2j6f9xo.png)

And that's it, we have a log and observability system. Feel free to call this webhook at any point in your code where you want to follow the status in production.

See the repository using TypeScript here: [observability-with-discord](https://github.com/vinibgoulart/observability-with-discord)

---

See more in my [zettelkasten](https://vinibgoulart.github.io/zettelkasten/docs/about)

---

Photo by <a href="https://unsplash.com/@laughayette?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Marten Newhall</a> on <a href="https://unsplash.com/pt-br/fotografias/uAFjFsMS3YY?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
vinibgoulart
1,423,784
Project 2 Week 12 Homework
Progression you've made CSS is starting to more closely resemble the model. Search function is not...
0
2023-04-03T03:06:24
https://dev.to/lloyd64/project-2-week-12-homework-27dh
**Progression you've made**

CSS is starting to more closely resemble the model. The search function is not completely working, but we have looked at other examples of how to get it working and have started to implement that.

**How did you get the relationship between Searching and rendering results working**

Watched a YouTube video and looked at online resources showing HTML, CSS, and JS examples of a search bar function.

**Draw a diagram on draw.io for user interaction pattern. What happens from user input, through machine sending value, to re-rendering on the page.**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f4k5tyu7mu7nu4wji7gv.png)

**Think of a real world use-case from industry (Media streaming, youtube, corporate, cable company provider, web platform, etc) where micro-service architecture could fit into their business context. How could we use this approach to solve a real problem at a company currently using a monolithic design architecture?**

Micro-service architecture could solve an issue where a company wants all of their websites, or anything else associated with their name that a user interacts with, to have the same structure, look, and feel. Using this micro-service architecture, they can accomplish this while also styling everything to be pleasing to interact with.

**More questions you have / things you're stuck on**

We began this project with the understanding that we could do our own thing, as long as we followed the model and made it look the same. Now we understand that we weren't supposed to do our own thing; rather, we were meant to remake the model using our own understanding of how to make a web page. So right now we are in a bit of panic mode, as we need to change nearly the entire thing. We thought this wouldn't be too hard, since we could pull from the model webpage, but both of us have now lost access to that page.
lloyd64
1,423,802
The Future of Code: When Humans and AI Collaborate
In recent years, there has been a lot of talk about the rise of artificial intelligence (AI) and the...
0
2023-04-03T03:52:18
https://dev.to/bhavin9920/the-future-of-code-when-humans-and-ai-collaborate-24jb
ai, programming, discuss, news
In recent years, there has been a lot of talk about the rise of **artificial intelligence (AI)** and the potential impact it could have on various industries. But what about the future of code? Will AI replace human developers, or will they work together to create even more innovative solutions?

The answer lies in a concept called "**collaborative intelligence**," which is the idea that humans and machines can work together to achieve better outcomes than either could achieve alone. In the case of coding, this means that AI can assist developers in various tasks, such as debugging, optimizing code, and even generating code based on specifications.

However, the real power of collaborative intelligence lies in its ability to **augment human creativity**. AI can generate new ideas and suggest different approaches to problems that human developers may not have thought of. This could lead to new breakthroughs in software development that may not have been possible before.

Of course, there are also concerns about the impact of AI on the job market for human developers. Some experts predict that AI could automate many of the tasks currently performed by developers, leading to job loss. However, others argue that AI will create new opportunities for developers to specialize in areas where machines are not as proficient, such as user experience design or project management.

The future of code is still uncertain, but one thing is clear: the collaboration between humans and AI will be a key factor in shaping it. As AI continues to evolve and become more sophisticated, we can expect to see new tools and platforms emerge that will allow developers to work more closely with machines. And who knows, maybe one day we'll even see AI-generated code winning prestigious programming contests.
bhavin9920
1,423,865
Compatibility Testing Tutorial: A Comprehensive Guide With Examples And Best Practices
Compatibility testing is a technique to check the functionality of software applications across...
0
2023-04-03T05:15:44
https://dev.to/nazneenahmd/compatibility-testing-tutorial-a-comprehensive-guide-with-examples-and-best-practices-2jlc
Compatibility testing is a technique to check the functionality of software applications across different web browsers and versions, mobile devices, databases, operating systems, hardware, and networks. This process typically ensures the software applications work correctly across all platforms and environments as intended.

In today's digital age, the market is flooded with many types of mobile devices and different browsers, each with multiple versions of its own. Currently, 6.92 billion people use smartphones, translating to 86.29% of the world's population, and more than a billion new users are predicted by 2028. With such a high prevalence of mobile devices, people use different browsers and their diverse versions to access applications and websites. Also, a multitude of devices, including desktops, laptops, and tablets, and different operating systems are used to access these websites and applications. This is because every browser, operating system, and device has unique features and differs in how it renders web pages. Therefore, ensuring that websites and applications are compatible across all platforms and give a seamless user experience is crucial. This can be done with compatibility testing of software applications.

Before a mobile application or website gets released in the market, a compatibility test is performed to ensure its congruence with all hardware, browsers, OS, and others. Compatibility tests ensure that developed software applications (websites and mobile apps) are functional and display correctly across all platforms without glitches.

## What is Software Compatibility?

Software compatibility is defined as the application's ability to function accurately with different web browsers, browser versions, hardware, software, mobile devices, and networks without hiccups. It means that software applications can run on particular operating systems, hardware, and network environments without any conflict or glitches. In simple terms, you can understand software compatibility as interoperability between two or more software applications and different platforms or hardware configurations.

But why do compatibility conflicts or glitches arise? The main underlying reasons are differences in hardware configurations, software versions, dependencies, and other factors.
For example, a software application developed to function on Windows 7 may not be compatible with Windows 10. Similarly, a newly developed software application may be incompatible with an old version of a database management system. Thus, addressing this during the Software Development Life Cycle is essential.

Testing for software compatibility is important to ensure all software applications work effectively and efficiently without causing crashes or other issues. This makes compatibility testing a crucial part of the software development process.

## What is Compatibility Testing?

Compatibility testing is a type of non-functional testing to verify the function of software applications across different operating systems, browsers and their versions, devices, hardware, networks, software, and security protocols. You can only perform compatibility tests on a stable website or application.

It often happens that a developed application's or website's functions are impacted when run in different browser versions, resolutions, networks, configurations, etc. This may give rise to errors that can delay the software release process. To overcome such a scenario, you must perform compatibility testing. This will enable the team to ensure compatibility requirements for the software applications are addressed and built in before they are released to end users.

Through this, you can develop software that works accurately across various configurations, giving consistent experience and performance of applications and websites across all platforms. Overall, the compatibility test checks applications' and websites' usability, reliability, and performance in different test environments.

## Why is Compatibility Testing important?

When a new software application is released in the market, many users access it from various browsers and devices like PCs, smartphones, and tablets.
However, if the software application is not tested for compatibility with such browsers, devices, or other platforms, it can be non-functional for those users. This can lead users to stop using the application, resulting in a negative user experience and a poor reputation for the organization. Therefore, compatibility tests are needed to ensure functionality and performance are uniform across all platforms. Other reasons that highlight the need for compatibility testing are explained below:

- When we perform compatibility tests, it is easy to identify and address any compatibility-related issues that can impact the software's function across different devices, browsers, etc. With this, we can ensure software applications work as expected across different test environments, and users can access them without any glitches.
- Any compatibility issues in applications and websites identified at a later stage of development, or post-release, can be costly to fix. It is crucial to perform compatibility tests in the development phase to resolve any challenges as early as possible. This will save time and cost in the long run.
- Compatibility issues impact the organization's reputation, as they directly relate to a poor end-user experience. Compatibility tests ensure applications work seamlessly across different devices, platforms, and OS, which improves the organization's reputation among users and increases brand loyalty.

## Example of Compatibility Testing

Let's consider a scenario where an organization has developed a web application that runs on desktops. It is developed to work on Windows OS. Now they want to test how it works on different versions of Windows. Here comes the need for compatibility testing, which will be done to ensure the user's requirements are met. During compatibility testing, we will test the application on different versions of Windows, such as Windows 8, Windows 10, and Windows 11.
The web application can also be tested on laptops, mobiles, and tablets. To ensure that the web application's functionality is consistent and accurate, we can test its performance on each configuration. In addition, we can test the web application's compatibility with popular web browsers like Chrome, Firefox, Safari, and Edge. This will ensure the application functions correctly when accessed through different web browsers.

## Benefits of Compatibility Testing

Testing for compatibility can help organizations meet their application or website requirements. Consequently, when you perform compatibility testing, you get assured of its usability, reliability, and security. However, to run compatibility tests, you should know their benefits so that they aren't missed in the software development process.

- **Improve the Software Development Life Cycle (SDLC):** Compatibility tests find defects in the software during the SDLC, before release. You can detect bugs or errors in the application or website during its development phase, which ensures timely resolution of the bugs and the challenges associated with them. Hence, compatibility tests keep the SDLC from being complicated by an application or website whose behavior is not aligned across various platforms.
- **Detecting bugs early:** A compatibility test is significant as it helps detect bugs in the application or website early when tested across various platforms. Since QA encounters related challenges at an early stage of development, a compatibility test gives the team sufficient time to solve them as a priority. Time-consuming tasks, like resolving compatibility issues for various devices, browsers, and OS, can be handled efficiently with compatibility tests.
- **Ensure successful software release:** Compatibility tests, together with other tests like unit testing and system testing, ensure the application's usability, scalability, and stability across various platforms.
This will help achieve a successful software release without any issues or underlying defects in the application. Hence, with compatibility tests, users' complaints about the application or website can be avoided.

- **Enhance security:** Compatibility tests identify any security vulnerabilities related to the developed software application when used in the intended environment. You can address those vulnerabilities early and ensure the security of the applications and websites is not compromised.

## Types of Compatibility Testing

There are several types of compatibility tests. Here is a quick breakdown.

## Hardware Compatibility Testing

This testing process checks software applications' ability to function on different hardware configurations. You perform a hardware compatibility test to ensure a specific hardware device is compatible with a particular software application, platform, and OS. To perform this test, the test environment needs to be set up with different hardware configurations to check each application's function. You have to test various hardware components, like graphics cards, processors, input and output devices, and storage devices. This is important to ensure they function appropriately and are compatible with the tested application. The outcome can be used to identify potential issues when software applications and hardware are used together, and you can use such information to make changes in the hardware and software to ensure appropriate functioning.

## Network Compatibility Testing

This testing process checks the software application's functioning on different network connections. Its primary purpose is to test the software application and its communication with the network, and to ensure there are no security, connectivity, or performance issues. You perform network compatibility tests to ensure the software applications work seamlessly in a particular network environment. For this, you need to connect the application being tested to several networks.
For example, you will check the application's function on Wi-Fi and on data networks like 4G and 5G. You can measure two crucial metrics, speed and bandwidth, which can affect the application's functioning. If such metrics have the expected outcome, the application is compatible with different network connections.

## Operating System Compatibility Testing

This testing process checks the functioning of the software application on different operating systems and their versions. The primary purpose of operating system compatibility testing is to ensure there are no compatibility issues when an application or website runs on a different OS or version. For example, if you are testing a mobile application, you can test it on iOS and Android to verify its behavior on both. Or, if you are testing a website, you can run an operating system compatibility test on Windows, macOS, and Linux. The test outcome can be used to fix issues arising when the software runs on different OSes and versions, to make changes to either the application or components of the OS so they function together, and to recommend to end users which OS works best with the application.

## Device Compatibility Testing

This testing process checks the compatibility of the software application on several devices, including laptops, tablets, mobile phones, and desktop systems. The primary purpose of device compatibility testing is to test whether the application or website functions correctly on different hardware devices and configurations without any issues. In this test, you verify the software application's compatibility with various device components like sensors, cameras, microphones, etc.

## Mobile Compatibility Testing

This testing process verifies the function of the software application on various mobile devices.
This test refers only to mobile devices, ensuring that software applications or websites operate correctly on different mobile devices such as smartphones, tablets, and other handheld devices. You may wonder how to choose the right mobile devices for mobile compatibility tests, given the large number of different devices in the market. Here is the answer: even though many devices are available, you can go through market stats to decide which devices you should test on.

## Browser Compatibility Testing

This checks the function of the website or web application on different web browsers like Google Chrome, Mozilla Firefox, Safari, Microsoft Edge, and other popular browsers. Its primary purpose is to ensure that web applications or websites work consistently, with the same display and function, no matter which browser the user uses. Browser compatibility testing may also involve testing various browser components like JavaScript, HTML, CSS, and other plugins or extensions. With this, your website and web application can work flawlessly on any browser. You don't have to ask your end users to change their browsers: with an effective browser compatibility test, you will offer a positive experience across all popular browsers, including their legacy and latest versions.

## Software Compatibility Testing

This testing process checks the software for compatibility with other software or third-party tools. You will be able to look at how applications or websites function and respond while communicating with different software. For example, if your application allows users to download PDF files, they should open in Adobe Acrobat. Likewise, if the developed application has an exportable grid view, it should open in Microsoft Excel.

## Version Compatibility Testing

This testing process checks the compatibility of software applications across various versions of web browsers and operating systems.
Its main purpose is to ensure that any changes or updates made to software applications do not lead to compatibility issues with previous versions of the software or the components that interact with it. You have to create test cases for different test scenarios to ensure that the new version of the software can interact with the previous version and its components. Some common areas that need to be tested during a version compatibility test include database, API, user interface, and file format compatibility. Version compatibility tests are further divided into two parts:

## Backward Compatibility Testing

This testing process checks the functioning of the new software version with older hardware and software. Here, you test new applications against their legacy versions. This test is also known as downward compatibility testing.

For example, an older version of a software application undergoes the following backward compatibility testing:

Windows XP → Vista → Win 7 → Win 8 → Win 8.1

## Forward Compatibility Testing

This testing process ensures the functioning of the software application with future versions. Forward compatibility testing is important to ensure the software can be upgraded to newer versions without any compatibility issues. For example, the latest version of a software application undergoes the following forward compatibility testing:

Win 7 → Win 8 → Win 8.1 → Win 10

## When to perform Compatibility Testing?

One of the best practices for compatibility testing is to perform it when the build is stable enough to test. This is because, at this point, software applications are less likely to undergo changes in the near future. However, it is crucial to remember that compatibility testing is ongoing.
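As an aside, the backward and forward compatibility chains shown above are essentially ordered version lists, so a test plan can derive its targets from them programmatically. Here is a hedged sketch; the comparator and the idea of filtering "every supported version at or below the certified one" are my own illustration, not something this article prescribes:

```javascript
// Compare dotted version strings numerically ("8.1" < "10"), returning
// -1, 0, or 1 like a standard comparator.
function compareVersions(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] || 0) - (pb[i] || 0);
    if (diff !== 0) return Math.sign(diff);
  }
  return 0;
}

// Supported Windows versions, oldest first (hypothetical plan).
const supported = ['8', '8.1', '10', '11'];

// Backward-compatibility targets for a build certified on Windows 10:
// every supported version at or below 10.
const targets = supported.filter((v) => compareVersions(v, '10') <= 0);
console.log(targets); // logs 8, 8.1 and 10
```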
This means you cannot perform compatibility tests only when the application or website is stable. You can get compatibility issues at any phase or point of the Software Development Life Cycle; thus, it is crucial to integrate compatibility tests into the overall test strategy. Here are a few examples of when we should run compatibility tests:

- If there are any changes or updates in the OS used by the application.
- If a software application runs on browsers and requires an update.
- If a software application interacts with any new hardware and needs any related change.
- If a software application uses third-party software and libraries and there is a new update.

Hence, compatibility tests should be performed at different Software Development Life Cycle phases to ensure that applications are compatible with other platforms, software versions, and hardware.

## Common Compatibility Testing Defects

The different types of compatibility tests identify different types of defects that may occur due to incompatibility issues. Here are some common defects you should know about, so that if you encounter any, you can fix them with effective compatibility tests:

- Changes in UI (look and feel of the software application)
- Change in font size
- Alignment-related issues
- Dropdown menu issues
- Change in CSS style and color
- Scroll bar-related issues
- Pop-up window issues
- Content or label overlapping
- Video and audio playback issues
- Broken tables or frames

## Checklist for Compatibility Testing

Before performing compatibility tests, you should know what should be checked in applications or websites. This will keep you going through the test without missing any aspects of the application or website and ensure a comprehensive testing process. Checklists for testing compatibility include:

- Accuracy of HTML and CSS.
- Appropriateness of the SSL certificate for the respective browsers.
- End-user forms, fields, and webpages with and without JavaScript.
- Accuracy of the DOCTYPE for every webpage.
- Accuracy of the layout across various screen resolutions.
- Font attributes like format, size, and color should be consistent across various platforms.
- Content alignment of applications across various screens.
- Working of images, audio, video, and other multimedia technologies.
- Consistency of navigation within the application.

## Compatibility Test Tools

Once you have identified the platforms, your focus should turn to speeding up the testing process. For this, choosing the right tools for compatibility tests is important. Automated testing tools can help speed up the compatibility test process by running tests on multiple devices and browsers. Here are some commonly used compatibility test tools:

- **LambdaTest:** A cloud-based digital experience testing platform that allows running manual and automated browser testing across 3000+ browsers, browser versions, devices, and operating systems. With [LambdaTest](http://www.lambdatest.com/?fp_ref=nazneen-17), you can also perform real-time and real-device testing to check the cross-browser compatibility of websites and mobile applications.
- **Browsera:** A web-based tool to test websites for scripting and layout issues. It crawls your website and creates a report of compatibility issues in various browsers. Using Browsera, it is possible to take screenshots of your website in different browsers; through these, you can easily identify and resolve issues.
- **GhostLab:** A tool used to perform compatibility tests for your website across all devices simultaneously. You can sync the website across devices to check its function on different platforms. Both manual and automation testing can be done using GhostLab, and it gives detailed reports on compatibility issues.

## Steps to perform Compatibility Testing

Before you perform compatibility testing, you should understand the application's target platforms, including the operating systems, hardware configurations, and third-party software versions.
For this, you prepare a list of operating systems, like Windows and macOS, and browsers, like Chrome, Firefox, Safari, Edge, and Internet Explorer, that are needed for the test. Along with the above, you need to identify the hardware configurations most used among end users. You should also identify the third-party tools on which the software application being tested depends, like plugins, frameworks, or libraries. This will help ensure that third-party software versions are compatible with the software application.

The compatibility test process can be time-consuming and complicated, involving several platforms and devices. To simplify the test, it can be better understood through the following steps:

## Design and Configure Test Cases

In this step, test cases are created to verify the compatibility of the software application with the target platforms. You are required to consider the identified target platforms, like different operating systems, browsers, hardware configurations, and third-party software versions. Based on this, you can create test cases that cover all target platforms.

In creating test cases, different test scenarios should be defined that cover all possible scenarios end users may encounter. As a bonus tip, ensure that the test scenarios are comprehensive and consider all combinations of target platforms. For example, a test scenario may involve testing the software application on Windows 10 with Chrome version 90. Considering the different test scenarios, the created test cases should have clear, concise steps and actual and expected results.

## Environment Set Up

When you have the test cases, you must set up a test environment to perform compatibility tests. This involves selecting the correct hardware and software configurations, including OS, browsers, and third-party software versions, to ensure website and application compatibility.
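The "all combinations of target platforms" requirement is mechanical enough to automate: a Cartesian product over each dimension yields every configuration the environment has to cover. The sketch below is a hypothetical illustration (the function name and sample values are mine, not from this article):

```javascript
// Expand named dimensions (OS x browser x ...) into a full test matrix so
// no combination is skipped by accident.
function buildTestMatrix(dimensions) {
  return Object.entries(dimensions).reduce(
    (rows, [name, values]) =>
      rows.flatMap((row) => values.map((v) => ({ ...row, [name]: v }))),
    [{}] // start from a single empty row
  );
}

const matrix = buildTestMatrix({
  os: ['Windows 10', 'macOS 13'],
  browser: ['Chrome 90', 'Firefox 88'],
});

console.log(matrix.length); // 4
console.log(matrix[0]); // first row pairs Windows 10 with Chrome 90
```

Each row can then be mapped to one configured environment, which also makes it easy to count (and budget for) how many setups the plan actually demands.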
You should set up a test environment that simulates the end user’s software and hardware configurations to ensure accurate results. For this, you need to install browsers, operating systems, and third-party software on real devices or virtual devices, like emulators and simulators, so that each can be tested individually. Further, you are required to configure the hardware per end-user configurations. This includes RAM, processor, graphics card, and storage.

## Test Execution

After setting up the platforms or test environment, it is now time to execute the test cases and scrutinize the results in the selected test environment. You must follow the test case steps precisely and record the test results, ensuring the compatibility test is done on each target platform.

While executing the test cases, result analysis should also be done to identify any issue or bug noted during the testing process. All you have to do is record the issue and report it to the development team. The documented issue should have a clear description so that it gets fixed at the earliest.

## Validation and Retesting

The final step of the compatibility test is validation and retesting. Once the development team has fixed the errors found during the compatibility test, the testers retest the software application. Retesting is a crucial step, as it helps ensure that a particular error in the application or website is resolved without giving rise to any further errors or bugs. You can repeat the testing process until all reported issues are resolved. This validation should be done before moving to real production.

The steps mentioned above can be best executed on a cloud-based platform. Testing for compatibility in a cloud-based environment eliminates in-house infrastructure challenges as well as scalability and reliability issues.
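To make the "cover all combinations of target platforms" idea from the test-design step concrete, here is a small illustrative sketch (the platform names below are examples, not from the article) that enumerates every OS/browser pair a test plan would need to cover:

```php
<?php
// Enumerate every OS x browser combination for a compatibility test plan.
// The platform lists below are illustrative examples only.
$operatingSystems = ['Windows 10', 'macOS', 'Ubuntu'];
$browsers = ['Chrome 90', 'Firefox 88', 'Edge 90'];

$testMatrix = [];
foreach ($operatingSystems as $os) {
    foreach ($browsers as $browser) {
        $testMatrix[] = "$os / $browser";
    }
}

// 3 operating systems x 3 browsers = 9 configurations to cover
echo count($testMatrix) . " configurations\n";
print_r($testMatrix);
```

In practice, the matrix is usually pruned to the configurations your analytics show real users on, since testing every pair quickly becomes expensive.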
## Compatibility Testing on the Cloud

Running compatibility tests on cloud-based platforms lets you test on a wide range of browsers, devices, and operating systems. Such access can be difficult to arrange in a local in-house environment. A cloud-based platform can be used to test applications simultaneously across multiple browsers, lowering the effort and time required for testing.

Continuous testing platforms like LambdaTest enable you to perform manual and automated compatibility testing on 3000+ browsers, mobile devices, and OS. With LambdaTest’s platform, you can quickly test your website and mobile application in real user environments by leveraging its real device cloud. It also offers other features, like parallel testing, automated screenshot testing, and debugging tools, to make identifying and fixing any compatibility issues easy.

## Challenges in Compatibility Testing

Specific challenges can arise when performing compatibility tests in either a local or a cloud environment, and it is crucial to address them to get reliable test results. Let us look at a few of those challenges.

The sheer number of devices, browsers, and operating systems makes it challenging to test applications and websites on every possible configuration. Such fragmentation makes it difficult to ensure the consistent functioning of the application across all platforms. Setting up test environments can be expensive and time-consuming. Evolving software technology is another challenge, as it keeps bringing new devices and updated OS and browser versions; applications tested on particular platforms may not function correctly with new technology. Additionally, applications and websites are intended to perform in multiple countries, which makes cross-cultural compatibility a critical challenge.
This is because ensuring that the application supports different languages, time zones, and date formats is problematic and can be expensive and time-consuming.

## Best Practices for successful Compatibility Testing

Considering the challenges mentioned above, it is crucial to address them to get reliable test results and high-quality applications and websites. Here are some best practices that should be followed to scrutinize and improve the compatibility test process:

**Test early and often**

Performing compatibility tests early and frequently during the Software Development Life Cycle helps in the early identification and fixing of any related issues. You can be assured that compatibility issues are not left unaddressed, saving the time associated with fixing them later.

**Test on real devices**

Testing on real devices ensures accurate verification of the application’s function on specific devices and OS. You can identify and address compatibility issues before the software application is released. You can perform these tests yourself by accessing LambdaTest at a low cost.

**Prioritize critical issues**

Prioritize tests for crucial functions and features of the applications and website to ensure that their most significant aspects are addressed across all platforms. Through this, you will be able to allocate resources more effectively and focus on the important functionality of the application.

**Test in different environments**

You should perform compatibility tests on different network environments, like slow and unstable networks. This will help you verify the compatibility of applications and websites across all networks.

**Collaborate with the development team**

Collaboration between the testing and development teams should be practiced during compatibility tests.
It helps ensure that compatibility issues are identified and addressed promptly, aligned with development goals, and that the application meets the requirements.

## Conclusion

Compatibility testing is crucial at a time when technology is evolving faster than ever. It is an essential process that ensures software applications work seamlessly across multiple platforms, devices, and environments. The process outlined in this tutorial can help you get started with compatibility testing. By following the best practices, you can detect and resolve compatibility issues early in development, saving you time and effort. You should prioritize compatibility tests to ensure that software applications and websites meet users’ expectations.
Author: nazneenahmd

---

# The Most Detailed Selenium PHP Guide (With Examples)

The Selenium automation framework supports many programming languages such as Python, PHP, Perl,...

Published: 2023-04-03T08:43:51
Canonical URL: https://www.lambdatest.com/blog/selenium-php-tutorial/
Tags: php, webdev, tutorial, programming
The Selenium automation framework supports many programming languages such as Python, PHP, Perl, Java, C#, and Ruby. But if you are looking for a server-side programming language for automation testing, [Selenium WebDriver](https://www.lambdatest.com/blog/selenium-webdriver-tutorial-with-examples/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=blog) with PHP is the ideal combination. The Selenium WebDriver APIs (or Selenium WebDriver bindings) for PHP were initially developed and maintained by Facebook under facebook/webdriver. The project is now called php-webdriver and is maintained by the community. PHP is comparatively easier to pick up than languages like Java and Python, so you should give [Selenium](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) test automation with PHP a try. In this Selenium PHP tutorial, we will go deep into the essential aspects of Selenium test automation with PHP.

**_Mobile [emulators online](https://www.lambdatest.com/mobile-emulator-online?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) from LambdaTest allow you to seamlessly test your mobile applications, websites, and web apps on mobile browsers and mobile devices._**

## Getting Started With Selenium PHP For Automation Testing

Before we start this Selenium PHP tutorial, we think it’s a good idea to give you a quick introduction to PHP. PHP (PHP: Hypertext Preprocessor) is a popular open-source scripting language that is widely used for web applications. The latest stable version of PHP is 7.4.10, and PHP 8.0.0 (Beta 3) is also open for community testing. For Selenium test automation with PHP, we will first download and install the latest stable version of PHP, i.e., 7.4.10, on Windows 10.
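Before installing anything else, it can be useful to confirm the interpreter version from a script as well as from the command line. This is a small sanity-check sketch (not part of the original article) against the PHP 7.1 minimum that php-webdriver requires:

```php
<?php
// Print the running PHP version and check it against the minimum
// version required by php-webdriver (PHP 7.1+).
echo 'Running PHP ' . PHP_VERSION . PHP_EOL;

if (version_compare(PHP_VERSION, '7.1.0', '>=')) {
    echo 'Version requirement satisfied' . PHP_EOL;
} else {
    echo 'Please upgrade PHP to 7.1 or newer' . PHP_EOL;
}
```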
## Installation of PHP on Windows

Follow the steps below for installing PHP on Windows:

1. Download PHP 7.4.10, thread-safe version (i.e. VC15 x64 Thread Safe) from the PHP downloads page.

![](https://cdn-images-1.medium.com/max/2000/0*7lnPb6-Q0DNAxFOX.png)

2. Unzip the zipped file into C:\PHP7 (or any other preferred folder).

![](https://cdn-images-1.medium.com/max/2698/0*sREblbqFKeFQhXqh.png)

3. Go to the location where the PHP executable (**php.exe**) is present, copy the path to that executable, and append it to the Path environment variable.

![](https://cdn-images-1.medium.com/max/2698/0*EGDlVoC1pZ4UQBjk.png)

![](https://cdn-images-1.medium.com/max/2000/0*4hB_DY_C4VG8xDN-.png)

4. If required, restart the machine to ensure that the settings take effect.

To verify the PHP installation, go to the terminal (or command prompt) and execute the command `php -v` to check the installed PHP version.

![](https://cdn-images-1.medium.com/max/2000/0*KpWDTbgcrbvey7D7.png)

## Installation of PHP Composer on Windows

Once PHP is installed, you should install the dependency manager for PHP called Composer. Composer is a dependency package manager for PHP that helps to manage a project’s dependencies. When using Selenium WebDriver with PHP for automation testing, the necessary dependencies (including WebDriver bindings for browsers like Chrome & Firefox) have to be added to the composer.json file so they can be downloaded and installed. We will look into these aspects in the subsequent sections of this Selenium PHP tutorial.

To install Composer on Windows, download the Windows installer and start the installation process. Though the installation process is straightforward, make sure that you do not enable ‘Developer Mode,’ as it is not necessary for Selenium test automation with PHP.
![](https://cdn-images-1.medium.com/max/2000/0*NtOvl60nbL0YPGRN.png)

In the second step of the Composer Setup, make sure to enter the correct path to php.exe (in our case, we have installed PHP in C:\PHP7).

![](https://cdn-images-1.medium.com/max/2000/0*RQM8RhPa87-XWih5.png)

Enter the Proxy settings (if any); else press Finish to complete the installation process.

![](https://cdn-images-1.medium.com/max/2000/0*6igXRBeqqLJyu8s0.png)

**_Perform browser [automation testing](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) on the most powerful cloud infrastructure. Leverage LambdaTest automation testing for faster, reliable and scalable experience on cloud._**

Once the installation is complete, you can check whether Composer has been successfully installed on the system by executing the command `composer --version` on the terminal. This will print the installed version of Composer. If Composer is not installed, the command will show a corresponding error message.

![](https://cdn-images-1.medium.com/max/2000/0*VQIAfF8SOuCHQ3dG.png)

## Enabling Curl with PHP on Windows

The PHP-WebDriver library, which is the PHP language binding for Selenium WebDriver, depends on CURL (i.e., Client URL), so CURL needs to be enabled for PHP. Let’s move forward with this Selenium PHP tutorial by enabling CURL.

To enable CURL for PHP on Windows, open **php.ini** in the location where PHP is installed and uncomment the line that mentions the curl extension:

```
extension=curl
```

Along with curl, we will enable other extensions as well that are required for Selenium test automation.
Shown below is the list of extensions that we have enabled in php.ini on our system:

```
extension=bz2
extension=curl
extension=fileinfo
extension=gd2
extension=mbstring
extension=openssl
extension=pdo_mysql
extension=pgsql
extension=shmop
extension=sqlite3
extension=tidy
extension=xmlrpc
extension=xsl
```

Developers with expertise in PHP development might have a XAMPP server installed on their machines. However, that server is not required for performing automation testing with Selenium PHP.

## Setting SSL Certificate with PHP on Windows

When executing Selenium test automation scenarios with PHP CURL calls to https URLs, you might encounter the following error:

```
SSL certificate problem: unable to get local issuer certificate
```

The error essentially means that the root certificates on the system from where the tests are triggered are invalid.

![](https://cdn-images-1.medium.com/max/2678/0*faq4cf-8TBjJSEyS.png)

We need to perform the following steps to fix PHP CURL with a local certificate:

1. Download cacert.pem from http://curl.haxx.se/ca/cacert.pem

2. Copy the downloaded file to C:\PHP7\extras\ssl (i.e. the extras\ssl folder in the location where PHP is installed).

![](https://cdn-images-1.medium.com/max/2000/0*6Pz98aBiTPR1X2qx.png)

3. Modify the [curl] section in php.ini so that curl.cainfo points to C:\PHP7\extras\ssl\cacert.pem:

```
[curl]
; A default value for the CURLOPT_CAINFO option. This is required to be an
; absolute path.
curl.cainfo = "C:\PHP7\extras\ssl\cacert.pem"
```

4. Restart PHP to check if CURL can read https URLs.

If you are a PHP expert, you can acquire a benchmark industry certification specializing in core PHP programming skills and expand your opportunities to advance your career in [PHP automation testing](https://www.lambdatest.com/php-automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage).
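After editing php.ini, a quick way to confirm that the CURL setup took effect is to inspect it from PHP itself. This is a small sanity-check sketch (not from the original article):

```php
<?php
// Confirm the curl extension is loaded and report the configured CA bundle.
$loaded = extension_loaded('curl');
echo 'curl extension: ' . ($loaded ? 'enabled' : 'missing') . PHP_EOL;

// curl.cainfo should point to the cacert.pem installed above;
// an empty value means the php.ini change has not been applied yet.
$caInfo = ini_get('curl.cainfo');
echo 'curl.cainfo: ' . ($caInfo !== false && $caInfo !== '' ? $caInfo : '(not set)') . PHP_EOL;
```

If `curl.cainfo` prints `(not set)` even after editing php.ini, make sure you edited the php.ini that the CLI actually loads (`php --ini` shows its path).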
## Configuring Eclipse for PHP

Several IDEs such as Eclipse, Atom, Komodo Edit, PhpStorm, etc. are available for Selenium WebDriver with PHP development. Some IDEs are open source (hence free to use), whereas IDEs like PhpStorm come with a 30-day free trial period. We will be using the **Eclipse IDE** for our Selenium test automation.

Follow the steps mentioned below for configuring the Eclipse IDE for PHP:

1. Download and install Eclipse with the help of the installation window.

2. Once Eclipse is installed, you need to enable support for PHP in Eclipse. The PHP Development Tools (PDT) plugin has to be installed for using PHP in Eclipse.

3. Install the PDT update through the update sites by going to Help → Install New Software. The update site is http://download.eclipse.org/tools/pdt/updates/7.2

![](https://cdn-images-1.medium.com/max/2000/0*LVjZDUV8sW0u-W_K.png)

![](https://cdn-images-1.medium.com/max/2000/0*IgMN5HNF4e3gSc3R.png)

4. Once PHP support in Eclipse is enabled, install Composer support in Eclipse by visiting https://marketplace.eclipse.org/content/composer-php-support

With this, the overall development environment (PHP + Composer + Eclipse) for [Selenium PHP testing](https://www.lambdatest.com/selenium-php-testing?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) is complete.

## PHP-WebDriver: Selenium WebDriver Bindings for PHP

The Selenium WebDriver does not interact directly with the web elements (i.e., text boxes, checkboxes, buttons, etc.) on a webpage. Instead, a browser-specific Selenium WebDriver acts as a bridge between the browser and the test script. Therefore, we need a binding agent, such as PHP-WebDriver, that allows us to interact with the web elements.

**PHP-WebDriver** (earlier called facebook/php-webdriver) is the language binding for Selenium WebDriver that lets you control the web browsers (under test) from PHP.
It is compatible with all major Selenium server versions, i.e., 2.x, 3.x, and 4.x. Like the Selenium WebDriver library for other languages (i.e., C#, Python, Java, etc.), PHP-WebDriver library also supports JsonWireProtocol (JSON) and implements experimental support for W3C WebDriver. As the browser drivers are responsible for communicating with the respective web browsers, they should be present on the machine where Selenium PHP tests are performed. Let us see how we can download Selenium WebDriver. For popular browsers, the Selenium WebDriver can be downloaded from the following locations: <table> <tr> <td>BROWSER</td> <td>DOWNLOAD LOCATION</td> </tr> <tr> <td>Firefox</td> <td>https://github.com/mozilla/geckodriver/releases</td> </tr> <tr> <td>Chrome</td> <td>http://chromedriver.chromium.org/downloads</td> </tr> <tr> <td>Internet Explorer</td> <td>https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver</td> </tr> <tr> <td>Microsoft Edge</td> <td>https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/</td> </tr> </table> **_Perform browser [test automation](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) on the most powerful cloud infrastructure. Leverage LambdaTest automation testing for faster, reliable and scalable experience on cloud._** For performing Selenium test automation on a local Selenium Grid, the Selenium Grid server (.jar), or remote end, should be started so that it can listen to the commands sent from the browser library. The Selenium commands are then executed on the browser on which tests have to be executed. The latest Selenium Server (Grid) version is [Selenium Grid 4 ](https://www.lambdatest.com/blog/selenium-grid-4-tutorial-for-distributed-testing/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=blog)(i.e. 4.0.0-alpha-6). 
However, we found that parallel testing using Selenium PHP was not working as expected on Selenium Grid 4, hence we recommend downloading Selenium Grid 3 (i.e., 3.141.59). For simplicity, we recommend keeping the Selenium Server jar file and the browser drivers in the same directory.

![](https://cdn-images-1.medium.com/max/2000/0*oVuuPJ_Lk6Uydou0.png)

Start the Selenium Server by executing the following command on the terminal:

```
java -jar selenium-server-standalone-3.141.59.jar
```

By default, the server uses port 4444 for listening to incoming requests.

![](https://cdn-images-1.medium.com/max/2000/0*B6seE8fPexrS_Uic.png)

**_Utilize LambdaTest [automated testing](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) to achieve a faster, more reliable, and scalable cloud-based experience when performing browser automation testing on the most powerful cloud infrastructure._**

## PHPUnit and Selenium

PHP supports several test automation frameworks, but PHPUnit is one of the most widely used unit-testing frameworks for automation testing. Like JUnit, PHPUnit is an instance of xUnit and works similarly. The main advantage of PHPUnit is that code issues can be detected at a very early stage of the development process, as the developers themselves perform the testing. The latest version of PHPUnit is 9.3.8, with PHPUnit 10 staged for release in February 2021.
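Before moving on to PHPUnit, you may want to confirm that the Selenium server started above is actually reachable. This sketch (not from the original article) polls the standard /wd/hub/status endpoint that a Selenium 3 standalone server exposes on port 4444:

```php
<?php
// Poll the local Selenium Grid status endpoint. Selenium 3 standalone
// servers expose it at /wd/hub/status on port 4444 by default.
$response = @file_get_contents('http://localhost:4444/wd/hub/status');

if ($response === false) {
    echo 'Selenium server is not reachable on port 4444' . PHP_EOL;
} else {
    echo 'Selenium server responded: ' . $response . PHP_EOL;
}
```

Running this before a test suite gives a clearer failure message than a session-creation timeout deep inside a test.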
## How to install PHPUnit on Windows

For automation testing (or [cross-browser testing](https://www.lambdatest.com/blog/automated-cross-browser-testing/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=blog)) on Selenium WebDriver with PHP, we will be using [PHPUnit for Selenium testing](https://www.lambdatest.com/selenium-automation-testing-with-phpunit?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) by configuring it from composer.json in the PHP project. Hence, PHPUnit is managed as a project-level dependency and is not recommended for global installation. Extensions that aid in analyzing the DOM (Document Object Model) and parsing files in the JSON format are enabled by default in the test framework.

## Selenium test automation with PHPUnit

PHPUnit (7.3 and later) supports Selenium WebDriver, and it provides the TestCase class for the WebDriver version. The TestCase class should be extended to get started with [Selenium PHP testing](https://www.lambdatest.com/selenium-php-testing?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage). Utility methods and custom assertions in PHPUnit can be written in an abstract subclass of PHPUnit\Framework\TestCase, with the test case classes derived from that class.

![](https://cdn-images-1.medium.com/max/2000/0*4SG0x1V5IqwspNVy.png)

Akin to the Selenium automation frameworks in other popular languages like Python and Java, PHPUnit also provides methods like setUp() and tearDown() for initialization & de-initialization of the test resources used for automation testing.
```php
<?php
require 'vendor/autoload.php';

use PHPUnit\Framework\TestCase;

class ClassNameTest extends TestCase
{
    public function setUp(): void
    {
        /* Setup (or initialization) method goes here */
    }

    public function tearDown(): void
    {
        /* tearDown (or de-initialization) method goes here */
    }

    /*
     * @test
     */
    public function test_scenario_1()
    {
        /* Test Method implementation goes here */
    }
}
```

### Writing Tests for PHPUnit

When using PHPUnit for automation testing with Selenium, it is important to follow the rules below:

1. The names of files containing test implementations should end with Test. Hence, valid filenames are xxxTest.php, whereas invalid file names are Testxxx.php, xxxtest.php, etc. This is how the PHPUnit framework identifies files that might contain test methods. An example of a valid file name is shown below:

![](https://cdn-images-1.medium.com/max/2000/0*BLoBRqooVG5qkmx2.png)

2. As the TestCase class has to be extended for automation testing, its corresponding namespace (i.e. PHPUnit\Framework\TestCase) has to be imported at the start of the implementation.

![](https://cdn-images-1.medium.com/max/2000/0*-Sh4F7dAypDzUkhI.png)

3. Tests in PHPUnit are public methods named test*. Alternatively, there is an option to use the @test annotation in the method’s docblock to mark it as a test method.

![](https://cdn-images-1.medium.com/max/2000/0*Jq4O6mVJJ9J08BA8.png)

4. There is another way to call a test method in PHPUnit, which can be used as an alternative to the previous step: create an object of the class extending the TestCase class and call the test method on it. However, we do not recommend this approach, since the code needs to be updated to call new test methods each time you implement them.
![](https://cdn-images-1.medium.com/max/2000/0*l_MijNkhJxH8D2dy.png)

### Advantages of PHPUnit

Here are some of the major reasons why you should use PHPUnit over other automation frameworks for Selenium PHP:

* Issues are detected at an early stage since the developers are responsible for creating and executing the tests.
* It does not require global installation and can be installed (or configured) on a per-project basis.

However, there are some downsides of using PHPUnit over other Selenium WebDriver with PHP automation frameworks. The major shortcoming is that the @covers annotation has to be added if multiple functions are being tested. A change in a method (or function) name requires an update to the @covers annotation, else tests are skipped for that particular method (or function).

**_Take a look at which are the most wanted [tools for automation testing](https://www.lambdatest.com/blog/automation-testing-tools/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=blog) that have climbed the top of the ladder so far._**

## Using Selenium WebDriver APIs with PHPUnit

PHPUnit provides complete support for Selenium WebDriver APIs in the php-webdriver (or facebook/webdriver) package. From php-webdriver 1.8.0 onwards, the package has been renamed from facebook/webdriver to php-webdriver. Selenium web automation tests can be accomplished using this package.

The first step for using the php-webdriver package is installing it using Composer. These are the contents of composer.json for installing PHP-WebDriver and PHPUnit (version 9 or above):

```json
{
    "require": {
        "php": ">=7.1",
        "phpunit/phpunit": "^9",
        "phpunit/phpunit-selenium": "*",
        "php-webdriver/webdriver": "1.8.0"
    }
}
```

As seen in composer.json, the availability of PHP 7.1 (or above) is mandatory on the machine where PHP-WebDriver 1.8 and PHPUnit 9.x (and above) have to be installed.
For installing the packages mentioned in composer.json, run the command `composer require` and hit the ‘Enter’ key twice to proceed with the installation.

![](https://cdn-images-1.medium.com/max/2000/0*TFcYrZz2WBLIFv1t.png)

## Starting the Selenium Grid Server

As stated earlier, when Selenium PHP tests have to be executed on a local Selenium Grid, the Standalone Grid Server has to be started; it, in turn, listens to the commands sent from the browser library. The Selenium Grid server receives the corresponding commands and starts a new session using the browser drivers, acting like a hub that distributes the commands among multiple nodes.

Since we faced issues with Selenium Grid 4 when used with Microsoft Edge, we opted for Selenium Grid 3 (i.e., 3.141.59). After downloading the Selenium Server jar file, start the server by executing the following command on the terminal:

```
java -jar selenium-server-standalone-3.141.59.jar
```

The server uses port 4444 for listening to incoming requests.

![](https://cdn-images-1.medium.com/max/2000/0*Mu7_MleQ7XaSRJfH.png)

### Creating a browser session

Before a browser session can be established with the web browser under test, the Selenium WebDriver for the corresponding browser has to be downloaded on the machine. It is recommended to keep the Selenium Server jar and browser drivers in the same location (as shown below).

![](https://cdn-images-1.medium.com/max/2000/0*qVkm_BFLs8xpeWf2.png)

In Selenium PHP, the URL of the running Selenium server has to be passed during the creation of the browser session.
```php
// selenium-server-standalone-#.jar (version 3.x)
$host = 'http://localhost:4444/wd/hub';

// selenium-server-standalone-#.jar (version 4.x)
$host = 'http://localhost:4444';
```

This is how you can start the browser of your choice:

* **Chrome**

```php
use PHPUnit\Framework\TestCase;
use Facebook\WebDriver\Chrome\ChromeOptions;
use Facebook\WebDriver\Chrome\ChromeDriver;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;

protected $webDriver;

$capabilities = DesiredCapabilities::chrome();
$this->webDriver = RemoteWebDriver::create($host, $capabilities);
```

* **Firefox**

```php
use PHPUnit\Framework\TestCase;
use Facebook\WebDriver\Firefox\FirefoxDriver;
use Facebook\WebDriver\Firefox\FirefoxProfile;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;

protected $webDriver;

$capabilities = DesiredCapabilities::firefox();
$this->webDriver = RemoteWebDriver::create($host, $capabilities);
```

* **Microsoft Edge**

```php
use PHPUnit\Framework\TestCase;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;

protected $webDriver;

$capabilities = DesiredCapabilities::microsoftEdge();
$this->webDriver = RemoteWebDriver::create($host, $capabilities);
```

> **Read More** — Get started with our easy [Selenium Python](https://www.lambdatest.com/blog/getting-started-with-selenium-python/) tutorial!

### Customizing Desired Browser Capabilities

The DesiredCapabilities class has to be imported before customizing the browser capabilities of the browser being used for Selenium web automation.

![](https://cdn-images-1.medium.com/max/2000/0*p1RfmEvc4NblfQna.png)

The setCapability() method of the DesiredCapabilities class sets a capability using a (string, value) input combination.
Here is a sample of disabling SSL certificate support in Firefox:

![](https://cdn-images-1.medium.com/max/2724/0*s0VQaHdXm5Ej5dlb.png)

```php
use PHPUnit\Framework\TestCase;
use Facebook\WebDriver\Firefox\FirefoxDriver;
use Facebook\WebDriver\Firefox\FirefoxProfile;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;

// selenium-server-standalone-#.jar (version 3.x)
// $host = 'http://localhost:4444/wd/hub';

$capabilities = DesiredCapabilities::firefox();

// Disable accepting SSL certificates
$capabilities->setCapability('acceptSslCerts', false);

$this->webDriver = RemoteWebDriver::create($host, $capabilities);
```

## Web automation using Selenium PHP on local Selenium Grid

For demonstration purposes of this Selenium PHP tutorial on the local Selenium Grid, we first set up a PHP project in Eclipse. We name the project ‘CBT_Project’ and create a folder named test in the project. The test folder will contain the PHP files with the implementation of the Selenium web automation test scenarios.

![](https://cdn-images-1.medium.com/max/2726/0*G9O9Q2kFOzZcrz1r.png)

We use two different test scenarios that are run on the Chrome and Firefox browsers, respectively. Since the [Selenium WebDriver](https://www.lambdatest.com/blog/selenium-webdriver-tutorial-with-examples/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=blog) with PHP tests are performed on the local Selenium Grid, the machine on which the tests run should have the Chrome & Firefox browsers, and their respective browser WebDrivers, installed.

**Test Scenario — 1 (Browser — Chrome, Platform — Windows 10)**

1. Navigate to the URL https://www.google.com/ncr
2. Locate the Search text box
3. Enter ‘LambdaTest’ in the search box
4. Click on the search button and assert if the title does not match the expected window title

**Test Scenario — 2 (Browser — Firefox, Platform — Windows 10)**

1.
Navigate to the URL [https://lambdatest.github.io/sample-todo-app/](https://lambdatest.github.io/sample-todo-app/)
2. Select the first two checkboxes
3. Send ‘Yey, Let’s add it to list’ to the textbox with id = sampletodotext
4. Click the Add button and verify whether the text has been added or not
5. Assert if the title does not match the expected window title

### Implementation

The first step is the creation of composer.json in the root project folder (i.e. CBT_Project) to download the dependencies for the Selenium PHP project under development:

```json
{
    "require": {
        "php": ">=7.1",
        "phpunit/phpunit": "^9",
        "phpunit/phpunit-selenium": "*",
        "php-webdriver/webdriver": "1.8.0",
        "symfony/symfony": "4.4"
    }
}
```

The format of the JSON file (i.e., composer.json) can be validated at https://jsonlint.com/. Composer.json contains the list of dependencies required for the project. In this Selenium PHP tutorial, this list comprises:

1. PHP (version 7.1 or above)
2. PHPUnit (version 9 or above)
3. PHPUnit-selenium (latest version)
4. PHP-WebDriver (version 1.8.0) — Selenium WebDriver library for automation with Selenium PHP
5. Symfony (version 4.4) — Set of reusable PHP components and a PHP framework for building web applications, APIs, and micro-services

To install the dependencies mentioned in composer.json, open the terminal (or command prompt) and run the following command:

```
composer require
```

Press the ‘Enter’ key twice to download the dependencies mentioned in composer.json.

![](https://cdn-images-1.medium.com/max/2120/0*kDaVjqmuifDFDlFN.png)

On completion, you will have a composer.lock file and a vendor folder inside the project folder (i.e. CBT_Project). The file composer.lock contains information about all the dependencies, and the vendor folder contains all the downloaded dependencies.
![](https://cdn-images-1.medium.com/max/2000/0*ngJha1TT6VYkYPNR.png)

![](https://cdn-images-1.medium.com/max/2000/0*VqARhE2YsAhzauqT.png)

As shown above, Composer has generated the vendor\autoload.php file. This file can simply be included in the file containing the test implementation so that the classes provided by those libraries can be used without any extra effort.

**FileName — GoogleSearchChromeTest.php (Test Scenario — 1)**

```php
<?php
require 'vendor/autoload.php';

use PHPUnit\Framework\TestCase;
use Facebook\WebDriver\Chrome\ChromeOptions;
use Facebook\WebDriver\Chrome\ChromeDriver;
use Facebook\WebDriver\Firefox\FirefoxDriver;
use Facebook\WebDriver\Firefox\FirefoxProfile;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;

class GoogleSearchChromeTest extends TestCase
{
    protected $webDriver;

    public function build_chrome_capabilities()
    {
        $capabilities = DesiredCapabilities::chrome();
        return $capabilities;
    }

    public function setUp(): void
    {
        $capabilities = $this->build_chrome_capabilities();
        /* Download the Selenium Server 3.141.59 from
           https://selenium-release.storage.googleapis.com/3.141/selenium-server-standalone-3.141.59.jar */
        $this->webDriver = RemoteWebDriver::create('http://localhost:4444/wd/hub', $capabilities);
    }

    public function tearDown(): void
    {
        $this->webDriver->quit();
    }

    /*
     * @test
     */
    public function test_searchTextOnGoogle()
    {
        $this->webDriver->get("https://www.google.com/ncr");
        $this->webDriver->manage()->window()->maximize();
        sleep(5);

        $element = $this->webDriver->findElement(WebDriverBy::name("q"));
        if ($element) {
            $element->sendKeys("LambdaTest");
            $element->submit();
        }

        print $this->webDriver->getTitle();
        $this->assertEquals('LambdaTest - Google Search', $this->webDriver->getTitle());
    }
}
?>
```

### Code WalkThrough

**Line (2)** — autoload.php, the file located in the vendor folder and auto-created by Composer, is imported so that the classes provided by the downloaded
libraries can be used in the implementation.

```php
require 'vendor/autoload.php';
```

**Line (4)** — The TestCase class provided by PHPUnit has to be extended for Selenium web automation with PHP. Hence, the package PHPUnit\Framework\TestCase is imported before the test case classes can be derived from the TestCase class.

```php
use PHPUnit\Framework\TestCase;
```

**Lines (5–6)** — The ChromeDriver and ChromeOptions classes (used for customizing the browser’s DesiredCapabilities) are imported.

```php
use Facebook\WebDriver\Chrome\ChromeOptions;
use Facebook\WebDriver\Chrome\ChromeDriver;
```

**Lines (9–10)** — The DesiredCapabilities class is imported so that methods for modifying the browser capabilities can be used. The RemoteWebDriver class is primarily responsible for handling all the interactions with the Selenium server. Its package is imported so that the RemoteWebDriver class can be used in the implementation.

```php
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
```

**Line (13)** — The class GoogleSearchChromeTest extends TestCase so that the methods provided by the TestCase class can be used. For PHP versions earlier than 7.2, the TestCase class was called PHPUnit_Framework_TestCase, but we recommend using the latest PHPUnit version (i.e., 9.3), since the Selenium PHP extensions are only compatible with newer versions of PHP (7.2 and above).

```php
class GoogleSearchChromeTest extends TestCase
```

**Lines (18–21)** — In the build_chrome_capabilities() method, the DesiredCapabilities of the Chrome browser are set. For the demonstration, we have not modified any capabilities of the Chrome browser.
```php
public function build_chrome_capabilities(){
    $capabilities = DesiredCapabilities::chrome();
    return $capabilities;
}
```

**Lines (23–30)** — As mentioned earlier, the setUp() method in PHPUnit should contain the initialization related to Selenium web automation. The variable $capabilities contains the Chrome browser capabilities set by invoking the build_chrome_capabilities() method.

The create method of the RemoteWebDriver class is used for creating a new instance of the Chrome browser. The first parameter is the Selenium server host address; the server should be started before the Selenium PHP tests are executed. The default port of the Selenium server is 4444, and its default address is http://localhost:4444/wd/hub. The second parameter in the create method is the Chrome browser capabilities that were set earlier in the code.

```php
public function setUp(): void
{
    $capabilities = $this->build_chrome_capabilities();
    /* Download the Selenium Server 3.141.59 from
       https://selenium-release.storage.googleapis.com/3.141/selenium-server-standalone-3.141.59.jar */
    $this->webDriver = RemoteWebDriver::create('http://localhost:4444/wd/hub', $capabilities);
}
```

**Lines (32–35)** — The tearDown() method contains the implementation for closing the browser session after the tests are done.

```php
public function tearDown(): void
{
    $this->webDriver->quit();
}
```

**Lines (36–39)** — The PHPUnit framework has different mechanisms for locating test methods. Methods with names prefixed with test are considered test methods (e.g., test_searchTextOnGoogle). An alternative is using the @test annotation in the method’s DocBlock for marking it as a test method.

```php
/**
 * @test
 */
public function test_searchTextOnGoogle(){
}
```

**Lines (44–46)** — The maximize() method of the RemoteWebDriver class is used for maximizing the browser window.
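The setUp/test/tearDown ordering described above can be demonstrated without Selenium at all. The class below is a hypothetical stand-in (not PHPUnit itself) that mimics how PHPUnit drives a test method, including the guarantee that tearDown() runs even if the test throws:

```php
<?php
// Hedged sketch (no PHPUnit required): the call order PHPUnit follows for
// each test method — setUp(), then the test, then tearDown().
class LifecycleDemo
{
    public array $calls = [];

    public function setUp(): void         { $this->calls[] = 'setUp'; }
    public function testSomething(): void { $this->calls[] = 'test'; }
    public function tearDown(): void      { $this->calls[] = 'tearDown'; }

    public function run(): void
    {
        $this->setUp();
        try {
            $this->testSomething();
        } finally {
            // tearDown() runs even when the test fails or throws,
            // which is why webDriver->quit() belongs there.
            $this->tearDown();
        }
    }
}

$demo = new LifecycleDemo();
$demo->run();
echo implode(' -> ', $demo->calls); // setUp -> test -> tearDown
```

This is why the browser session is created in setUp() and closed via quit() in tearDown(): every test method gets a fresh session and no session leaks on failure.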
There are [eight locators](https://www.lambdatest.com/blog/locators-in-selenium-webdriver-with-examples/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=blog) supported by the Selenium WebDriver for locating web elements, namely ‘css selector’, ‘class name’, ‘id’, ‘name’, ‘link text’, ‘partial link text’, ‘tag name’ and ‘xpath’. The Inspect tool in Chrome lets you get the information of the required web element (i.e., the search box).

![](https://cdn-images-1.medium.com/max/2726/0*9tfBgJB1TkolLRiY.png)

The methods for locating web elements are implemented in the WebDriverBy class, and the findElement method of the RemoteWebDriver class is used with the name locator to locate the search box.

```php
$this->webDriver->manage()->window()->maximize();
sleep(5);
$element = $this->webDriver->findElement(WebDriverBy::name("q"));
```

**Lines (47–50)** — The sendKeys() method of WebDriverElement simulates typing in the searched web element, i.e., the search box. The submit() method helps in submitting the request to the remote server.

```php
if($element) {
    $element->sendKeys("LambdaTest");
    $element->submit();
}
```

**Line (53)** — The assertEquals function, which is a built-in function in PHPUnit, asserts when the current window title does not match the expected title.
```php
$this->assertEquals('LambdaTest - Google Search', $this->webDriver->getTitle());
```

**FileName — GoogleSearchFirefoxTest.php (Test Scenario — 2)**

```php
<?php
require 'vendor/autoload.php';

use PHPUnit\Framework\TestCase;
use Facebook\WebDriver\Chrome\ChromeOptions;
use Facebook\WebDriver\Chrome\ChromeDriver;
use Facebook\WebDriver\Firefox\FirefoxDriver;
use Facebook\WebDriver\Firefox\FirefoxProfile;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;

class GoogleSearchFirefoxTest extends TestCase
{
    protected $webDriver;

    public function build_firefox_capabilities(){
        $capabilities = DesiredCapabilities::firefox();
        return $capabilities;
    }

    public function setUp(): void
    {
        $capabilities = $this->build_firefox_capabilities();
        /* Download the Selenium Server 3.141.59 from
           https://selenium-release.storage.googleapis.com/3.141/selenium-server-standalone-3.141.59.jar */
        $this->webDriver = RemoteWebDriver::create('http://localhost:4444/wd/hub', $capabilities);
    }

    public function tearDown(): void
    {
        $this->webDriver->quit();
    }

    /**
     * @test
     */
    public function test_LT_ToDoApp()
    {
        $itemName = 'Item in Selenium PHP Tutorial';
        $this->webDriver->get("https://lambdatest.github.io/sample-todo-app/");
        $this->webDriver->manage()->window()->maximize();
        sleep(5);
        $element1 = $this->webDriver->findElement(WebDriverBy::name("li1"));
        $element1->click();
        $element2 = $this->webDriver->findElement(WebDriverBy::name("li2"));
        $element2->click();
        $element3 = $this->webDriver->findElement(WebDriverBy::id("sampletodotext"));
        $element3->sendKeys($itemName);
        $element4 = $this->webDriver->findElement(WebDriverBy::id("addbutton"));
        $element4->click();
        $this->webDriver->wait(10, 500)->until(function($driver) {
            $elements = $this->webDriver->findElements(WebDriverBy::cssSelector("[class='list-unstyled'] li:nth-child(6) span"));
            return count($elements) > 0;
        });
        sleep(5);
        $this->assertEquals('Sample page - lambdatest.com', $this->webDriver->getTitle());
    }
}
?>
```

**_[Let’s explore which automation testing tools](https://www.lambdatest.com/blog/automation-testing-tools/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=blog) have risen to the top of the ladder so far. And which are the most in-demand automation testing tools?_**

### Code WalkThrough

The major part of the implementation remains the same as the one used for Test Scenario — 1; hence, we focus only on the code’s major aspects in this walkthrough.

**Lines (7–8)** — The FirefoxDriver and FirefoxProfile classes are imported to make changes to the Firefox profile (if required).

```php
use Facebook\WebDriver\Firefox\FirefoxDriver;
use Facebook\WebDriver\Firefox\FirefoxProfile;
```

**Line (10)** — The RemoteWebDriver class, which implements WebDriver, JavaScriptExecutor, etc., is imported.

```php
use Facebook\WebDriver\Remote\RemoteWebDriver;
```

**Line (19)** — The desired capabilities of the Firefox browser are set.

```php
$capabilities = DesiredCapabilities::firefox();
```

**Lines (45–46)** — The findElement method of the RemoteWebDriver class returns a WebDriverElement. Here, a web element with the name “li1” is located, and the click() method of the WebDriverElement class is used for clicking on the element.

```php
$element1 = $this->webDriver->findElement(WebDriverBy::name("li1"));
$element1->click();
```

The same operations are performed for the web element with the name “li2”.

![](https://cdn-images-1.medium.com/max/2726/0*7nnZIWv59SD6LYVK.png)

**Lines (53–56)** — A non-blocking wait of 10 seconds is performed using wait(10, 500), with a check for the presence of the web element being done every 500 ms. The required web element count returns a non-zero value if the web element is present on the page.
In our case, it is the newly added item ‘Item in Selenium PHP Tutorial’.

```php
$this->webDriver->wait(10, 500)->until(function($driver) {
    $elements = $this->webDriver->findElements(WebDriverBy::cssSelector("[class='list-unstyled'] li:nth-child(6) span"));
    return count($elements) > 0;
});
```

**Line (58)** — Assert is raised if the current window title does not match the expected window title (i.e., Sample page - lambdatest.com).

```php
$this->assertEquals('Sample page - lambdatest.com', $this->webDriver->getTitle());
```

### Execution

The tests are executed using the PHPUnit framework from the vendor folder, which was created by Composer, the dependency manager for PHP. PHPUnit offers a number of options that can be obtained via vendor\bin\phpunit --help.

![](https://cdn-images-1.medium.com/max/2000/0*4tXqWrFBDOaMGhvh.png)

The Selenium Grid server has to be started before we execute the tests. The server listens to incoming requests on port 4444.

```shell
java -jar selenium-server-standalone-3.141.59.jar
```

![](https://cdn-images-1.medium.com/max/2000/0*Xk149ZuWmVChTebz.png)

For running the Selenium web automation tests, the PHPUnit command is triggered against the individual files [i.e., GoogleSearchChromeTest.php and GoogleSearchFirefoxTest.php].

![](https://cdn-images-1.medium.com/max/2000/0*u7ebui96ttp8CcNq.png)

![](https://cdn-images-1.medium.com/max/2000/0*BBTdHAruC-qE_dkF.png)

Shown below is the execution snapshot for the Firefox browser:

![](https://cdn-images-1.medium.com/max/3200/0*pp1DpYJKV0VWbC9Y.png)

## Shortcomings of Selenium PHP testing on Local Selenium Grid

In the previous section, we performed Selenium web automation tests on the Chrome and Firefox browsers. It won’t be easy if the same tests have to be conducted on browsers like Safari, Internet Explorer, etc. The complexities would multiply further if the tests have to be performed on older and newer versions of a browser, e.g., Chrome, Firefox, Safari, etc.
Selenium PHP test execution on a local Selenium Grid has numerous shortcomings, some of which are as follows:

1. Selenium web automation on a local Selenium Grid is not a scalable option, especially for large projects, as a big investment is required in setting up the IT infrastructure.
2. Maintenance and upgrades of the Selenium Grid have to be performed on a timely basis.
3. The Selenium Grid server has to be started before [cross browser testing](https://www.lambdatest.com/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) can be done using Selenium PHP. Automated browser testing on different browser and platform combinations becomes challenging since browser drivers (according to the browser version) have to be downloaded on the local machine.
4. Automation testing with Selenium PHP on outdated browsers (like Internet Explorer) and older browser versions becomes a daunting task with a local Selenium Grid.
5. The advantages of [parallel execution](https://www.lambdatest.com/blog/what-is-parallel-testing-and-why-to-adopt-it/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=blog) can be leveraged to a certain extent on a local Selenium Grid, but its potential can only be exploited on a cloud-based [Selenium](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) Grid.

## Web automation testing using Selenium PHP on cloud-based Selenium Grid

Let’s take this Selenium PHP tutorial forward, shall we? Now that you know the main disadvantages of using a local Selenium Grid for Selenium web automation, let’s look at how testing can be accelerated using a cloud-based Selenium Grid. LambdaTest provides a cloud-based Selenium Grid that lets you perform automation testing with PHPUnit on 3000+ real browsers and operating systems online.
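Moving to the cloud grid mostly comes down to changing the hub URL, which embeds the account credentials. A plain-PHP sketch of that URL construction (buildGridUrl() is a hypothetical helper; the upcoming examples inline the same concatenation):

```php
<?php
// Hypothetical helper mirroring how the cloud examples below assemble the
// grid endpoint from the user-name and access-key.
function buildGridUrl(string $user, string $key): string {
    return "https://" . $user . ":" . $key . "@hub.lambdatest.com/wd/hub";
}

// Placeholders only — real values come from the LambdaTest profile page.
echo buildGridUrl("user-name", "access-key");
```

Everything else in the test (locators, waits, assertions) stays the same; only RemoteWebDriver::create() receives this URL instead of http://localhost:4444/wd/hub.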
To get started, you have to [create an account on LambdaTest](https://accounts.lambdatest.com/register?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage). After creating an account, you should make a note of the username & access-key from the [Profile page](https://accounts.lambdatest.com/detail/profile?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage). The browser capabilities can be generated using the [LambdaTest capabilities generator](https://www.lambdatest.com/capabilities-generator/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage).

![](https://cdn-images-1.medium.com/max/2682/0*I6Yiw2CSSQqp5X2I.png)

Below are the four test scenarios that would be tested on LambdaTest’s cloud-based Selenium Grid:

**Test Scenario — 1 (Browser — Edge 84.0, Platform — macOS High Sierra)**

1. Navigate to the URL https://www.google.com/ncr
2. Locate the Search text box
3. Enter ‘LambdaTest’ in the search-box
4. Click on the search button and assert if the title does not match with the expected window title

**Test Scenario — 2 (Browser — Safari 12.0, Platform — macOS Mojave)**

1. Navigate to the URL https://lambdatest.github.io/sample-todo-app/
2. Select the first two checkboxes
3. Send ‘Yey, Let’s add it to list’ to the textbox with id = sampletodotext
4. Click the Add Button and verify whether the text has been added or not
5. Assert if the title does not match with the expected window title

**Test Scenario — 3 (Browser — Firefox 64.0, Platform — OS X Mavericks)**

1. Navigate to the URL [https://www.lambdatest.com/blog/](https://www.lambdatest.com/blog/?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=blog)
2. Assert if the title does not match with the expected window title

**Test Scenario — 4 (Browser — Internet Explorer 11.0, Platform — Windows 10)**

1. Navigate to the URL https://www.google.com/ncr
2. Search for ‘LambdaTest’
3. Click on the first test result
4. Assert if the title does not match with the expected window title

### Implementation (Test Scenario — 1)

```php
<?php
require 'vendor/autoload.php';

use PHPUnit\Framework\TestCase;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;

$GLOBALS['LT_USERNAME'] = "user-name";
# accessKey: AccessKey can be generated from automation dashboard or profile section
$GLOBALS['LT_APPKEY'] = "access-key";

class GoogleSearchChromeTest extends TestCase
{
    protected $webDriver;

    public function build_browser_capabilities(){
        /* $capabilities = DesiredCapabilities::chrome(); */
        $capabilities = array(
            "build" => "[PHP] Test-2 on Edge and macOS High Sierra",
            "name" => "[PHP] Test-2 on Edge and macOS High Sierra",
            "platform" => "macOS High Sierra",
            "browserName" => "MicrosoftEdge",
            "version" => "84.0"
        );
        return $capabilities;
    }

    public function setUp(): void
    {
        $capabilities = $this->build_browser_capabilities();
        $url = "https://". $GLOBALS['LT_USERNAME'] .":" . $GLOBALS['LT_APPKEY'] ."@hub.lambdatest.com/wd/hub";
        $this->webDriver = RemoteWebDriver::create($url, $capabilities);
    }

    public function tearDown(): void
    {
        $this->webDriver->quit();
    }

    /**
     * @test
     */
    public function test_searchTextOnGoogle()
    {
        $this->webDriver->get("https://www.google.com/ncr");
        $this->webDriver->manage()->window()->maximize();
        sleep(5);
        $element = $this->webDriver->findElement(WebDriverBy::name("q"));
        if($element) {
            $element->sendKeys("LambdaTest");
            $element->submit();
        }
        print $this->webDriver->getTitle();
        $this->assertEquals('LambdaTest - Google Search', $this->webDriver->getTitle());
    }
}
?>
```

### Code WalkThrough

Except for the browser and OS combination, this test scenario is the same as Test Scenario — 1 (which was used in the demonstration of Selenium PHP on the local Selenium Grid).

**Lines (13–15)** — Global variables that hold the user-name and access-key obtained from the [LambdaTest profile page](https://accounts.lambdatest.com/detail/profile?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) are declared for usage in the test code.

```php
$GLOBALS['LT_USERNAME'] = "user-name";
# accessKey: AccessKey can be generated from automation dashboard or profile section
$GLOBALS['LT_APPKEY'] = "access-key";
```

**Lines (24–30)** — The desired capabilities array holding the browser and OS test combination is declared in the build_browser_capabilities() method.

```php
$capabilities = array(
    "build" => "[PHP] Test-2 on Edge and macOS High Sierra",
    "name" => "[PHP] Test-2 on Edge and macOS High Sierra",
    "platform" => "macOS High Sierra",
    "browserName" => "MicrosoftEdge",
    "version" => "84.0"
);
```

**Lines (40–41)** — The combination of globals holding the user-name and access-key is used for accessing the LambdaTest Grid URL [@hub.lambdatest.com/wd/hub]. The create method in the RemoteWebDriver class is used for creating WebDriver with the specified desired capabilities.
The first parameter to create takes the Selenium Grid URL as input, and the second parameter holds the browser capabilities.

```php
$url = "https://". $GLOBALS['LT_USERNAME'] .":" . $GLOBALS['LT_APPKEY'] ."@hub.lambdatest.com/wd/hub";
$this->webDriver = RemoteWebDriver::create($url, $capabilities);
```

The rest of the implementation is self-explanatory, as it has no dependency on the Selenium Grid on which the tests are performed.

### Implementation (Test Scenario — 2)

```php
<?php
require 'vendor/autoload.php';

use PHPUnit\Framework\TestCase;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;

$GLOBALS['LT_USERNAME'] = "user-name";
# accessKey: AccessKey can be generated from automation dashboard or profile section
$GLOBALS['LT_APPKEY'] = "access-key";

class GoogleSearchFirefoxTest extends TestCase
{
    protected $webDriver;

    public function build_browser_capabilities(){
        /* $capabilities = DesiredCapabilities::firefox(); */
        $capabilities = array(
            "build" => "[PHP] Test-1 on Safari and macOS Mojave",
            "name" => "[PHP] Test-1 on Safari and macOS Mojave",
            "platform" => "macOS Mojave",
            "browserName" => "Safari",
            "version" => "12.0"
        );
        return $capabilities;
    }

    public function setUp(): void
    {
        $capabilities = $this->build_browser_capabilities();
        /* $this->webDriver = RemoteWebDriver::create('http://localhost:4444/wd/hub', $capabilities); */
        $url = "https://". $GLOBALS['LT_USERNAME'] .":" . $GLOBALS['LT_APPKEY'] ."@hub.lambdatest.com/wd/hub";
        $this->webDriver = RemoteWebDriver::create($url, $capabilities);
    }

    public function tearDown(): void
    {
        $this->webDriver->quit();
    }

    /**
     * @test
     */
    public function test_LT_ToDoApp()
    {
        $itemName = 'Item in Selenium PHP Tutorial';
        $this->webDriver->get("https://lambdatest.github.io/sample-todo-app/");
        $this->webDriver->manage()->window()->maximize();
        sleep(5);
        $element1 = $this->webDriver->findElement(WebDriverBy::name("li1"));
        $element1->click();
        $element2 = $this->webDriver->findElement(WebDriverBy::name("li2"));
        $element2->click();
        $element3 = $this->webDriver->findElement(WebDriverBy::id("sampletodotext"));
        $element3->sendKeys($itemName);
        $element4 = $this->webDriver->findElement(WebDriverBy::id("addbutton"));
        $element4->click();
        $this->webDriver->wait(10, 500)->until(function($driver) {
            $elements = $this->webDriver->findElements(WebDriverBy::cssSelector("[class='list-unstyled'] li:nth-child(6) span"));
            return count($elements) > 0;
        });
        sleep(5);
        $this->assertEquals('Sample page - lambdatest.com', $this->webDriver->getTitle());
    }
}
?>
```

**_Test your native, hybrid, and web apps across all legacy and latest mobile operating systems on the most powerful [Android emulator online](https://www.lambdatest.com/android-emulator-online?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage)._**

### Code WalkThrough

**Lines (24–30)** — The array of browser and OS capabilities is generated using the LambdaTest Capabilities Generator.

```php
$capabilities = array(
    "build" => "[PHP] Test-1 on Safari and macOS Mojave",
    "name" => "[PHP] Test-1 on Safari and macOS Mojave",
    "platform" => "macOS Mojave",
    "browserName" => "Safari",
    "version" => "12.0"
);
```

The rest of the implementation is similar to the one used for Test Scenario — 2, which we used for web automation testing using Selenium PHP on the local Selenium Grid.
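The wait(10, 500)->until(...) call used in both grid setups is essentially a polling loop. The sketch below shows the idea in plain PHP; waitUntil() is a hypothetical helper, not part of php-webdriver:

```php
<?php
// Hedged sketch of what wait($timeoutSec, $intervalMs)->until($condition)
// does conceptually: poll the condition until it returns a truthy value,
// or give up when the timeout elapses.
function waitUntil(callable $condition, int $timeoutSec, int $intervalMs)
{
    $deadline = microtime(true) + $timeoutSec;
    while (microtime(true) < $deadline) {
        $result = $condition();
        if ($result) {
            return $result; // condition satisfied — return its value
        }
        usleep($intervalMs * 1000); // sleep between polls
    }
    throw new RuntimeException("Timed out after {$timeoutSec}s");
}

// Example: the condition becomes truthy on the third poll.
$polls = 0;
$value = waitUntil(function () use (&$polls) {
    return ++$polls >= 3 ? 'found' : false;
}, 5, 10);
echo $value; // found
```

Unlike a fixed sleep(5), this returns as soon as the element appears, which is why explicit waits are generally preferred over hard-coded sleeps in Selenium tests.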
### Implementation (Test Scenario — 3)

```php
<?php
require 'vendor/autoload.php';

use PHPUnit\Framework\TestCase;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;

$GLOBALS['LT_USERNAME'] = "user-name";
# accessKey: AccessKey can be generated from automation dashboard or profile section
$GLOBALS['LT_APPKEY'] = "access-key";

class LTBlogTest extends TestCase
{
    protected $webDriver;

    public function build_browser_capabilities(){
        $capabilities = array(
            "build" => "[PHP] Test-4 on Firefox and OS X Mavericks",
            "name" => "[PHP] Test-4 on Firefox and OS X Mavericks",
            "platform" => "OS X Mavericks",
            "browserName" => "Firefox",
            "version" => "64.0"
        );
        return $capabilities;
    }

    public function setUp(): void
    {
        $capabilities = $this->build_browser_capabilities();
        /* $this->webDriver = RemoteWebDriver::create('http://localhost:4444/wd/hub', $capabilities); */
        $url = "https://". $GLOBALS['LT_USERNAME'] .":" . $GLOBALS['LT_APPKEY'] ."@hub.lambdatest.com/wd/hub";
        $this->webDriver = RemoteWebDriver::create($url, $capabilities);
    }

    public function tearDown(): void
    {
        $this->webDriver->quit();
    }

    /**
     * @test
     */
    public function test_LT_Blog()
    {
        $expected_title = "LambdaTest | A Cross Browser Testing Blog";
        $this->webDriver->get("https://www.lambdatest.com/blog/");
        $this->webDriver->manage()->window()->maximize();
        sleep(5);
        print $this->webDriver->getTitle();
        $this->assertEquals($expected_title, $this->webDriver->getTitle());
    }
}
?>
```

### Code WalkThrough

**Lines (56–57)** — The maximize() method of the RemoteWebDriver class is used for maximizing the browser window.

```php
$this->webDriver->get("https://www.lambdatest.com/blog/");
$this->webDriver->manage()->window()->maximize();
```

**Line (62)** — The getTitle() method returns the current window title.
The title is compared with the expected window title, and assert is raised if the titles do not match.

```php
$this->assertEquals($expected_title, $this->webDriver->getTitle());
```

### Implementation (Test Scenario — 4)

```php
<?php
require 'vendor/autoload.php';

use PHPUnit\Framework\TestCase;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;

$GLOBALS['LT_USERNAME'] = "user-name";
# accessKey: AccessKey can be generated from automation dashboard or profile section
$GLOBALS['LT_APPKEY'] = "access-key";

class LTWebsiteTest extends TestCase
{
    protected $webDriver;

    public function build_browser_capabilities(){
        /* $capabilities = DesiredCapabilities::chrome(); */
        $capabilities = array(
            "build" => "[PHP] Test-3 on IE and Windows 10",
            "name" => "[PHP] Test-3 on IE and Windows 10",
            "platform" => "Windows 10",
            "browserName" => "Internet Explorer",
            "version" => "11.0",
            "ie.compatibility" => 11001
        );
        return $capabilities;
    }

    public function setUp(): void
    {
        $capabilities = $this->build_browser_capabilities();
        $url = "https://". $GLOBALS['LT_USERNAME'] .":" . $GLOBALS['LT_APPKEY'] ."@hub.lambdatest.com/wd/hub";
        $this->webDriver = RemoteWebDriver::create($url, $capabilities);
    }

    public function tearDown(): void
    {
        $this->webDriver->quit();
    }

    /**
     * @test
     */
    public function test_LT_Blog()
    {
        $expected_title = "Most Powerful Cross Browser Testing Tool Online | LambdaTest";
        $this->webDriver->get("https://www.google.com/ncr");
        $this->webDriver->manage()->window()->maximize();
        sleep(5);
        $element = $this->webDriver->findElement(WebDriverBy::name("q"));
        if($element) {
            $element->sendKeys("LambdaTest");
            $element->submit();
        }
        /* Click on the first result */
        $search_result = $this->webDriver->findElement(WebDriverBy::xpath("//h3[.='LambdaTest: Most Powerful Cross Browser Testing Tool Online']"));
        $search_result->click();
        sleep(5);
        print $this->webDriver->getTitle();
        $this->assertEquals($expected_title, $this->webDriver->getTitle());
        print "Test Completed";
    }
}
?>
```

### Code WalkThrough

**Lines (67–68)** — The findElement() method of the RemoteWebDriver class is used for locating the first search result on Google (for LambdaTest). The XPath property of the web element is used for the same. The POM Builder extension in Chrome helps to get the details of any web element on the page with ease.

![](https://cdn-images-1.medium.com/max/2722/0*fvuyWqN5FPi5MI60.png)

The click() method is applied on the identified WebElement.

```php
/* Click on the first result */
$search_result = $this->webDriver->findElement(WebDriverBy::xpath("//h3[.='LambdaTest: Most Powerful Cross Browser Testing Tool Online']"));
$search_result->click();
```

**Line (72)** — The window title of the LambdaTest homepage is compared with the expected title. Assert is raised if the titles do not match.

```php
public function test_LT_Blog()
{
    // ...
    $expected_title = "Most Powerful Cross Browser Testing Tool Online | LambdaTest";
    $this->assertEquals($expected_title, $this->webDriver->getTitle());
    // ...
}
```

### Execution

As shown below, all the test files are stored in a folder named ‘test’.

![](https://cdn-images-1.medium.com/max/2000/0*20Ums6W0RZNXtXps.png)

The following command is used for invoking the test execution:

```shell
vendor\bin\phpunit --debug test
```

As shown in the execution screenshot, all four tests are executed serially, and the total execution time is 2 minutes, 1 second.

![](https://cdn-images-1.medium.com/max/2686/0*fF0mWfzmU6qeObWo.png)

![](https://cdn-images-1.medium.com/max/2000/0*lEJ_GYjNdS1DrjI7.png)

![](https://cdn-images-1.medium.com/max/2682/0*T2e5pMM7C9wID-a-.png)

*Completion of test execution*

## Parallel Testing for PHPUnit on cloud-based Selenium Grid

Automation testing on different combinations of browsers and operating systems can take a significant amount of time. Serial testing is not an ideal solution, irrespective of whether tests are executed on a local Selenium Grid or a cloud-based Selenium Grid like LambdaTest. As seen in the execution snapshot, our plan on LambdaTest lets us perform five tests in parallel, and serial testing does not provide an opportunity to use this feature offered by the grid.

![](https://cdn-images-1.medium.com/max/2706/0*_J_8f65K-tPeoVDQ.png)

However, parallel testing in PHPUnit is supported through ParaTest. ParaTest is a command-line tool that lets you run PHPUnit framework tests in parallel without the necessity of installing any extensions. ParaTest thereby accelerates the execution speed of functional tests, cross-browser tests, as well as integration tests.

Though there are other alternatives to ParaTest, it offers several advantages in comparison to other parallel test runners:

* ParaTest runs tests in ‘N’ parallel processes, and the code coverage output is combined with the test results in a single test report.
* Simple installation process (with no additional configurations) via Composer.
* Test files can be isolated in separate processes, and faster test runs can be achieved by leveraging the advantage of the WrapperRunner in ParaTest.
* Supported on Windows, Mac, and Linux.

The latest stable version of ParaTest is 4.0.4, and the project is hosted on GitHub.

### How to install ParaTest for PHPUnit

The only way to install ParaTest is through Composer. Instead of creating a new composer.json, we append the additional requirement to the existing composer.json.

For fetching the latest development version of ParaTest, the following has to be added in composer.json:

```json
{
    "require": {
        "brianium/paratest": "dev-master"
    }
}
```

For fetching the stable version, add the following in composer.json:

```json
{
    "require": {
        "brianium/paratest": "4.0.4"
    }
}
```

Here is the complete content of composer.json (including the requirements of phpunit, php-webdriver, symfony, and more):

```json
{
    "require": {
        "php": ">=7.1",
        "phpunit/phpunit": "^9",
        "phpunit/phpunit-selenium": "*",
        "php-webdriver/webdriver": "1.8.0",
        "symfony/symfony": "4.4",
        "brianium/paratest": "dev-master"
    }
}
```

For installing ParaTest, run composer require and press the ‘Enter’ button twice on the terminal. Here is the snapshot which indicates that ParaTest was downloaded:

![](https://cdn-images-1.medium.com/max/2000/0*FgK1aBAJWoEXAmVb.png)

The vendor\bin directory would also be updated with the downloaded ParaTest package.

![](https://cdn-images-1.medium.com/max/2000/0*OEfKKWWUWO6C20fy.png) ![](https://cdn-images-1.medium.com/max/2000/0*-Zgbu-ZgPQu_wDtw.png)

### ParaTest Command Line Interface

ParaTest provides a number of command-line options that facilitate parallel testing. Run vendor\bin\paratest --help to explore the different options offered by ParaTest.
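ParaTest's core idea of running test files in ‘N’ parallel processes can be illustrated with a simple round-robin split. The sketch below is hypothetical (distributeTests() is not ParaTest's actual scheduler), but it captures how work gets sharded across worker processes:

```php
<?php
// Hypothetical sketch: assign test files to N worker buckets round-robin,
// roughly how a parallel runner distributes work across processes.
function distributeTests(array $files, int $processes): array
{
    $buckets = array_fill(0, $processes, []);
    foreach ($files as $i => $file) {
        $buckets[$i % $processes][] = $file;
    }
    return $buckets;
}

$files = ['ChromeTest.php', 'SafariTest.php', 'FirefoxTest.php', 'IETest.php'];
print_r(distributeTests($files, 4)); // one file per worker bucket
```

With four test files and --processes=4, each worker gets exactly one file, which is why all four cloud tests can start at the same time.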
![](https://cdn-images-1.medium.com/max/2690/0*Zfoo0Bna5PvcUHN5.png)

The options in ParaTest which we found useful for performing parallel testing in this Selenium PHP tutorial are:

![](https://cdn-images-1.medium.com/max/2000/0*zcYPNXfxhmE2ny1k.png)

### Parallel Testing for PHPUnit using ParaTest

With ParaTest downloaded and ready for use, we can execute the [four tests which were executed serially on LambdaTest](https://www.lambdatest.com/blog/selenium-php-tutorial/#SeleniumGrid?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=blog). For running the four tests in parallel, run the following command on the terminal:

```shell
vendor\bin\paratest --processes=4 --verbose=1 --functional test
```

As shown above, 4 processes are executed in parallel through the --processes option. The --functional option is used for running the test methods in separate processes.

![](https://cdn-images-1.medium.com/max/2686/0*rPh76TWpH-B8fLJY.png)

As specified in the ParaTest command, 4 tests can run in parallel on the Grid. The execution of the four tests is completed without any issues.

![](https://cdn-images-1.medium.com/max/2678/0*osIPbyiyj5m1W1AA.png)

The total time duration for executing the four tests in parallel is **41 seconds, 243 milliseconds**. On the other hand, the serial execution of these tests on the cloud-based Selenium Grid was **2 minutes, 1 second**.

![](https://cdn-images-1.medium.com/max/2000/0*ASLP2LVdebzVkAH0.png)

*Parallel testing on LambdaTest using ParaTest*

![](https://cdn-images-1.medium.com/max/2000/0*iUTZ3FIN-D-eo0WO.png)

*Serial Testing on LambdaTest using PHPUnit*

Hence, parallel testing in PHPUnit using ParaTest resulted in overall savings of 80 seconds, which is a significant number if automation tests have to be run across many browser and platform combinations.
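The reported savings can be sanity-checked with quick arithmetic (ignoring the 243 ms fraction of the parallel run):

```php
<?php
// Serial run: 2 min 1 s; parallel run with 4 processes: about 41 s.
$serialSec   = 2 * 60 + 1; // 121 s
$parallelSec = 41;

$savedSec = $serialSec - $parallelSec;       // 121 - 41 = 80 s saved
$speedup  = $serialSec / $parallelSec;       // roughly a 3x speedup

printf("saved %d s, ~%.1fx faster\n", $savedSec, $speedup);
```

Note the speedup is below the ideal 4x for 4 processes, since per-test overhead (session creation, waits, sleeps) does not shrink with parallelism.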
**_Are you using [Playwright](https://www.lambdatest.com/playwright-testing?utm_source=devto&utm_medium=organic&utm_campaign=apr03_bh&utm_term=bh&utm_content=webpage) for automation testing? Run your Playwright test scripts instantly on 50+ browser/OS combinations using the LambdaTest cloud._** ## Wrap Up So in this Selenium PHP tutorial, we have seen that PHP is one of the widely used server-side programming languages used for web automation testing. Like other popular programming languages, PHP also supports the Selenium framework. Hence, Selenium PHP is the ideal combination for automated browser testing (or automation testing). PHPUnit is the default unit testing framework in PHP; hence it does not require any separate installation. Parallel testing in PHPUnit can be performed using ParaTest, a popular tool that sits on top of the PHPUnit framework. It can be used without installing any additional extensions. Parallel testing in PHPUnit using ParaTest can reap more benefits when used on a cloud-based Selenium Grid like LambdaTest, as tests on different browsers and operating system combinations can be performed in parallel. It also avoids the need to start up the Selenium Grid server and install Selenium WebDriver for different browsers on which Selenium web automation tests have to be performed. The combination of Selenium PHP, ParaTest, and cloud-based Grid is ideal for exploiting the features of PHP, ParaTest, and Selenium for accelerating automation testing in PHPUnit. We hope this Selenium PHP tutorial will help you and your team kickstart your local testing, as well as parallel testing. Thank you for reading. If you have any issues or questions, don’t hesitate to reach out via the comment section below.
himanshusheth004
1,423,990
Bind Route Info to Component Inputs (New Router feature)
Pass router info to routed component inputs Topics covered in this...
0
2023-04-05T15:41:34
https://eneajahollari.medium.com/bind-route-info-to-component-inputs-new-router-feature-1d747e559dc4
angular, router, input, v16
### Pass router info to routed component inputs

#### Topics covered in this article:

- How it works today
- How it will work in Angular v16
- How to use it
- How to migrate to the new API
- How to test it
- Caveats

When building applications with Angular, most of the time we use the Router to render different pages for different urls. And based on the url we also load the data based on its path parameters or query parameters.

In the latest version of Angular, v16, we will get a new feature that will simplify the process of retrieving route information in the component and make it way easier.

### How it works today

Let's say we have a routes array like this one:

```ts
const routes: Routes = [
  {
    path: "search",
    component: SearchComponent,
  },
];
```

And inside the component we need to read the query params in order to fill a search form. With a URL like this: `http://localhost:4200/search?q=Angular`:

```ts
@Component({})
export class SearchComponent implements OnInit {
  // here we inject the ActivatedRoute class that contains info about our current route
  private route = inject(ActivatedRoute);

  query$ = this.route.queryParams.pipe(map(queryParams => queryParams['q']));

  ngOnInit() {
    this.query$.subscribe(query => {
      // do something with the query
    });
  }
}
```

As you can see, we need to inject the `ActivatedRoute` service and then we can access the query params from it.
But we can also access the path params and the data, or even the resolved data, as we can see in the following example:

```ts
const routes: Routes = [
  {
    path: "search/:id",
    component: SearchComponent,
    data: { title: "Search" },
    resolve: { searchData: SearchDataResolver }
  },
];
```

```ts
@Component({})
export class SearchComponent implements OnInit {
  private route = inject(ActivatedRoute);

  query$ = this.route.queryParams.pipe(map(queryParams => queryParams['q']));
  id$ = this.route.params.pipe(map(params => params['id']));
  title$ = this.route.data.pipe(map(data => data['title']));
  searchData$ = this.route.data.pipe(map(data => data['searchData']));

  ngOnInit() {
    this.query$.subscribe(query => {
      // do something with the query
    });
    this.id$.subscribe(id => {
      // do something with the id
    });
    this.title$.subscribe(title => {
      // do something with the title
    });
    this.searchData$.subscribe(searchData => {
      // do something with the searchData
    });
  }
}
```

### How it will work in Angular v16

In Angular v16 we will get a new feature that will simplify the process of retrieving route information in the component and make it way easier. We will be able to pass the route information to the component inputs, so we don't need to inject the `ActivatedRoute` service anymore.
```ts
const routes: Routes = [
  {
    path: "search",
    component: SearchComponent,
  },
];
```

```ts
@Component({})
export class SearchComponent implements OnInit {
  /*
    We can use the same name as the query param, for example 'query'
    Example url: http://localhost:4200/search?query=Angular
  */
  @Input() query?: string; // we can use the same name as the query param

  /*
    Or we can use a different name, for example 'q', and then we can use @Input('q')
    Example url: http://localhost:4200/search?q=Angular
  */
  @Input('q') queryParam?: string; // we can also use a different name

  ngOnInit() {
    // do something with the query
  }
}
```

And we can also pass the path params, the data and resolved data to the component inputs:

```ts
const routes: Routes = [
  {
    path: "search/:id",
    component: SearchComponent,
    data: { title: "Search" },
    resolve: { searchData: SearchDataResolver }
  },
];
```

```ts
@Component({})
export class SearchComponent implements OnInit {
  @Input() query?: string; // this will come from the query params
  @Input() id?: string; // this will come from the path params
  @Input() title?: string; // this will come from the data
  @Input() searchData?: any; // this will come from the resolved data

  ngOnInit() {
    // do something with the query
    // do something with the id
    // do something with the title
    // do something with the searchData
  }
}
```

And of course we can rename the inputs to whatever we want:

```ts
const routes: Routes = [
  {
    path: "search/:id",
    component: SearchComponent,
    data: { title: "Search" },
    resolve: { searchData: SearchDataResolver }
  },
];
```

```ts
@Component({})
export class SearchComponent implements OnInit {
  @Input() query?: string;
  @Input('id') pathId?: string;
  @Input('title') dataTitle?: string;
  @Input('searchData') resolvedData?: any;

  ngOnInit() {
    // do something with the query
    // do something with the pathId
    // do something with the dataTitle
    // do something with the resolvedData
  }
}
```

### How to use it

In order to use this new feature, we need to enable it in the `RouterModule`:

```ts
@NgModule({
  imports: [
    RouterModule.forRoot([], {
      //... other features
      bindToComponentInputs: true // <-- enable this feature
    })
  ],
})
export class AppModule {}
```

Or if we are in a standalone application, we can enable it like this:

```ts
bootstrapApplication(App, {
  providers: [
    provideRouter(routes,
      //... other features
      withComponentInputBinding() // <-- enable this feature
    )
  ],
});
```

### How to migrate to the new API

If we have a component that is using the `ActivatedRoute` service, we can migrate it to the new API by doing the following:

1. Remove the `ActivatedRoute` service from the component constructor.
2. Add the `@Input()` decorator to the properties that we want to bind to the route information.
3. Enable the `bindToComponentInputs` feature in the `RouterModule` or `provideRouter` function.

Example with before and after for path params, with url: http://localhost:4200/search/123

```ts
// Before
@Component({})
export class SearchComponent implements OnInit {
  private route = inject(ActivatedRoute);

  id$ = this.route.params.pipe(map(params => params['id']));

  ngOnInit() {
    this.id$.subscribe(id => {
      // do something with the id
    });
  }
}
```

```ts
// After
@Component({})
export class SearchComponent implements OnInit {
  @Input() id?: string; // this will come from the path params

  ngOnInit() {
    // do something with the id
  }
}
```

### How to test it

In order to test the new feature, we can use the `RouterTestingHarness` and let it handle the navigation for us.
Here is an example of how to test the route info bound to component inputs with the `RouterTestingHarness`:

```ts
@Component({})
export class SearchComponent {
  @Input() id?: string;
  @Input() query?: string;
}
```

```ts
it('sets id and query inputs from matching query params and path params', async () => {
  TestBed.configureTestingModule({
    providers: [
      provideRouter(
        [{ path: 'search/:id', component: SearchComponent }],
        withComponentInputBinding()
      )
    ],
  });

  const harness = await RouterTestingHarness.create();

  const instance = await harness.navigateByUrl(
    '/search/123?query=Angular',
    SearchComponent
  );
  expect(instance.id).toEqual('123');
  expect(instance.query).toEqual('Angular');

  await harness.navigateByUrl('/search/2?query=IsCool!');
  expect(instance.id).toEqual('2');
  expect(instance.query).toEqual('IsCool!');
});
```

It's as simple as that!

### Caveats

- Sometimes we want the `id` or `queryParams` to be observables, so we can combine them with other observables to get some data. For example, let's say we have a component that is using the `id` and `queryParams` to get some data from the server:

```ts
@Component({})
export class SearchComponent implements OnInit {
  private dataService = inject(DataService);

  @Input() id?: string;
  @Input() query?: string;

  ngOnInit() {
    this.dataService.getData(this.id, this.query).subscribe(data => {
      // do something with the data
    });
  }
}
```

If we want to use the async pipe in order to subscribe to the data, we need to make sure that the `id` and `query` are observables instead of strings, otherwise the example below will not work:

```ts
@Component({})
export class SearchComponent implements OnInit {
  private dataService = inject(DataService);

  @Input() id?: string;
  @Input() query?: string;

  // this will not work because the id and the query don't have a value yet (they are undefined)
  // they will have a value only after the component is initialized and the inputs are set
  data$ = this.dataService.getData(this.id, this.query);
}
```

In order to make the `id` and `query` observables, we can use a `BehaviorSubject`:

```ts
@Component({
  template: `
    <div *ngIf="data$ | async as data">
      {{ data }}
    </div>
  `
})
export class SearchComponent {
  private dataService = inject(DataService);

  id$ = new BehaviorSubject<string | null>(null);
  query$ = new BehaviorSubject<string | null>(null);

  @Input() set id(id: string) {
    this.id$.next(id);
  }
  @Input() set query(query: string) {
    this.query$.next(query);
  }

  data$ = combineLatest([
    this.id$.pipe(filter(id => id !== null)),
    this.query$.pipe(filter(query => query !== null))
  ]).pipe(
    switchMap(([id, query]) => this.dataService.getData(id, query))
  );
}
```

As you can see, we are using `BehaviorSubject` to make the `id` and `query` observables, and we are combining them with the `combineLatest` operator and using the `switchMap` operator to get the data from the server. Personally, I think this is a bit too much code for a simple example, so I would recommend using the `ActivatedRoute` service instead of the new API in this case.

- Priority of the route information when the route infos have the same name. For example, let's say we have a route with the following configuration:

```ts
const routes: Routes = [
  {
    path: 'test/:value',
    component: TestComponent,
    data: { value: 'Hello from data' },
  }
];
```

```ts
@Component({ template: `{{ value }}` })
export class TestComponent {
  @Input() value?: string;
}
```

The new API will bind the route information to the component inputs in the following order:

1. Data
2. Path params
3. Query params

If there's no data, it will use the path params; if there are no path params, it will use the query params; and if there are no query params, the value input will be undefined!
- We don't know where the input value will come from 😬 In my opinion, what we can do about this "issue" is rename `Input` in the imports and use it like this:

```ts
import { Input as RouteInput, Component } from "@angular/core";

@Component({ template: `{{ value }}` })
export class TestComponent {
  @RouteInput() value?: string;
}

// OR

import { Input as QueryParamInput, Component } from "@angular/core";

@Component({ template: `{{ value }}` })
export class TestComponent {
  @QueryParamInput() value?: string;
}
```

Not the best way possible, but it makes clear that this is not a normal input and that it is connected to the router info.

### Conclusion

I hope you enjoyed this article, and I hope that you will find this new feature useful. If you have any questions or suggestions, feel free to leave a comment below.

Play with the feature here: https://stackblitz.com/edit/angular-jb85mb?file=src/main.ts 🎮

Thanks for reading!

---

I tweet a lot about Angular (latest news, videos, podcasts, updates, RFCs, pull requests and so much more). If you’re interested in it, give me a follow at [@Enea_Jahollari](https://twitter.com/Enea_Jahollari). Give me a follow on [dev.to](https://dev.to/eneajaho) if you liked this article and want to see more like this!
eneajaho
1,423,995
V8 JavaScript engine — Understanding JavaScript API Requests and Responses in the Data Fetching lifecycle
We know that data fetching is a crucial aspect of modern web development with the increasing complexity of web applications needing to work.
0
2023-04-03T09:08:39
https://dev.to/rodcast/v8-javascript-engine-understanding-javascript-api-requests-and-responses-in-the-data-fetching-lifecycle-1m6j
javascript, node, webdev, programming
---
title: V8 JavaScript engine — Understanding JavaScript API Requests and Responses in the Data Fetching lifecycle
published: true
description: We know that data fetching is a crucial aspect of modern web development with the increasing complexity of web applications needing to work.
tags: javascript, node, webdev, programming
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2023-04-03 09:06 +0000
---

![V8 JavaScript engine by Google](https://cdn-images-1.medium.com/max/2000/1*jGfbelP4ZaxnFaJvMFviPA.jpeg)

Before this article, I mentioned that I would start this article series with an introduction. If you missed it, check it out: [Introduction — Understanding JavaScript API Requests and Responses in the Data Fetching lifecycle](https://dev.to/rodcast/introduction-understanding-javascript-api-requests-and-responses-in-the-data-fetching-lifecycle-2f08)

*There are differences between the JavaScript engine and the browser engine. In summary, the JavaScript engine executes JavaScript code, while the browser engine renders web pages and coordinates with the JavaScript engine to execute JavaScript code within a web browser.*

I'm writing about the V8 JavaScript engine because it is the most popular JavaScript engine, and in comparison with the others there's no single fastest JavaScript engine.

When you write JavaScript, keep in mind that your code will be interpreted by a JavaScript engine: in the browser, such as V8 (Chromium-based browsers) or SpiderMonkey (Firefox), as well as in server-side JavaScript platforms, such as Node.js. The engine takes the JavaScript code as input, parses it, and generates machine code or bytecode that can be executed by the computer's processor. JavaScript engines are typically used in web browsers to execute JavaScript code on web pages, but they can also be used outside of the browser context in environments such as Node.js.
Learning what an event-based platform is, what a thread vs. a process is, and how they work in practice gives you leverage to build highly scalable apps that respect the platform's lifecycle.

Here are some details of V8 that developers should know about:

*V8 is used in many popular applications:* The V8 engine is used in many popular applications, including the Google Chrome browser, Node.js, and the Electron framework.

*Cross-platform compatibility:* V8 is designed to work on multiple platforms, including Windows, macOS, and Linux.

*V8 supports multiple programming languages:* Although V8 is primarily used to execute JavaScript code, it can also be used to execute other languages that can be compiled into machine code, such as TypeScript and Dart.

*V8 has support for WebAssembly:* V8 has built-in support for WebAssembly, a low-level bytecode format designed for executing code on the web. This support allows developers to run WebAssembly modules alongside JavaScript code in the same application.

*Just-in-time compilation:* V8 uses a just-in-time (JIT) compilation technique to optimize JavaScript code on the fly. This allows V8 to generate highly optimized machine code that can execute JavaScript code much faster than traditional interpreters.

*Memory management:* V8 employs a garbage collector to manage memory in JavaScript applications. The garbage collector automatically frees memory that is no longer needed, which can help prevent memory leaks and improve application performance.

*ECMAScript compatibility:* V8 supports the latest ECMAScript (JavaScript) language specifications, including ECMAScript 2021. This means developers can use the latest JavaScript language features in their applications.

*Debugging tools:* V8 includes a robust set of debugging tools (Chrome DevTools) that allow developers to inspect and debug their JavaScript applications. These tools include a JavaScript debugger, a profiler, and a heap snapshot tool.
*Use the right data structures:* V8 is optimized for certain data structures, such as arrays and objects. When possible, use these data structures instead of custom data structures to improve performance.

*Integration with Node.js:* V8 is the default JavaScript engine used in Node.js, a popular server-side JavaScript runtime environment. This means that developers can use the same JavaScript code on the client and server sides of their applications, and benefit from the same high-performance optimizations provided by V8. If we compare Node.js with its competitors, Deno uses V8 too, while Bun uses JavaScriptCore to execute JavaScript code.

### **How does the V8 JavaScript engine work?**

The V8 JavaScript engine follows a sequence of steps to execute JavaScript code. Here's an overview of that sequence:

*Parsing:* The first step in executing JavaScript code is parsing. The V8 engine takes the JavaScript code as input and parses it into an abstract syntax tree (AST) that represents the structure of the code.

*Compilation:* Once the code has been parsed, the V8 engine compiles it into bytecode, a low-level representation of the code that can be executed more efficiently than the source code. V8 compiles the bytecode using a technique called Just-In-Time (JIT) compilation, which means that the compilation happens at runtime, just before the code is executed.

*Optimization:* After the code has been compiled, the V8 engine applies several optimization techniques to improve its performance. These optimizations include inlining functions, eliminating unused code, and using feedback from previous runs of the code to improve the generated machine code.

*Execution:* Finally, the V8 engine executes the compiled and optimized bytecode. During execution, the V8 engine uses a call stack to keep track of function calls and uses a garbage collector to manage memory allocation and deallocation.
*Profiling:* As the code is executed, the V8 engine collects profiling data that can be used to further optimize the code. This profiling data includes information about which functions are being called frequently and which functions are taking the most time to execute.

### **If JavaScript is single-threaded, how does it work asynchronously?**

JavaScript is single-threaded, which means that it can only execute one task at a time. However, it can achieve asynchronous behavior using various techniques, including callbacks, promises, and async/await.

In all of these techniques, the single thread of execution is used to manage the event loop, which is responsible for queuing and executing asynchronous tasks. When an asynchronous task is started, it is added to the event loop, and when it is complete, the callback function or promise resolution is added to a task queue. The event loop then picks up these tasks and executes them when the thread is idle.

If you want to know more about callbacks, promises, and async/await, please wait for the next article that I'll publish.

Thank you for reading. I hope this article has somehow increased your knowledge base about the topic.
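To make the event-loop ordering described above concrete, here is a minimal sketch in plain JavaScript (runnable in Node.js or a browser; the labels pushed into `order` are illustrative). Synchronous code always runs to completion first, then queued promise callbacks (microtasks), and only then timer callbacks (macrotasks):

```javascript
// Demonstrates how the single-threaded event loop schedules work:
// synchronous statements finish first, then microtasks, then macrotasks.
const order = [];

order.push("sync start");

// setTimeout queues a macrotask, even with a 0 ms delay.
setTimeout(() => order.push("timer callback (macrotask)"), 0);

// Promise callbacks are queued as microtasks, which run before timers.
Promise.resolve().then(() => order.push("promise callback (microtask)"));

order.push("sync end");

// Once the call stack is empty, the event loop drains the queues:
setTimeout(() => console.log(order.join(" -> ")), 10);
// sync start -> sync end -> promise callback (microtask) -> timer callback (macrotask)
```

Note that the timer callback runs last even though it was queued before the promise callback; the microtask queue is always drained before the next macrotask.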
rodcast
1,424,161
OpenAPI 3.1 - The Gnarly Bits
An in-depth look at the differences between OpenAPI 3.0 and 3.1
0
2023-04-03T13:08:00
https://dev.to/mikeralphson/openapi-31-the-gnarly-bits-58d0
openapi, oas, api
---
title: OpenAPI 3.1 - The Gnarly Bits
published: true
description: An in-depth look at the differences between OpenAPI 3.0 and 3.1
tags: openapi, oas, api, apis
cover_image: https://user-images.githubusercontent.com/21603/229518009-ea09c9ba-764c-439f-8f21-5f31f84470e1.jpeg
# Use a ratio of 100:42 for best results.
published_at: 2023-04-03 13:08 +0000
---

Though obviously support in [tooling](https://tools.openapis.org/) has taken a little while to begin to appear, both for commercial and Open-Source offerings, there are already a number of resources available to help you get to grips with the latest version of the OpenAPI specification (OAS), whether you are entirely new to it, or an older hand looking to focus on the new features.

[Lorna Mitchell](https://twitter.com/lornajane), who championed the new `webhooks` feature, has a range of information available: a [blog post](https://lornajane.net/posts/2020/whats-new-in-openapi-3-1), a [video](https://www.youtube.com/watch?v=49XpXD-HP0U) and associated [slides](https://noti.st/lornajane/YRdDlZ/whats-new-in-openapi-specification-3-1).

[Phil Sturgeon](https://twitter.com/philsturgeon), who along with [Ben Hutton](https://twitter.com/relequestual) and [Henry Andrews](https://twitter.com/ixat_totep) from the [JSON Schema](https://json-schema.org/) community, helped drive the push to full JSON Schema [Draft 2020-12](https://tools.ietf.org/html/draft-bhutton-json-schema-00) compliance, has written a [blog post](https://www.openapis.org/blog/2021/02/16/migrating-from-openapi-3-0-to-3-1-0) for the official [OpenAPIs.org](https://openapis.org/) website on how to transition your OAS documents from v3.0.x to v3.1.0.

My fellow OpenAPI Initiative TSC members [Ron Ratovsky](https://twitter.com/webron) and [Darrel Miller](https://twitter.com/darrel_miller) presented a webinar on [what's new in v3.1](https://www.youtube.com/watch?v=Sflpzh_cAcA).
My fellow [Postman](https://postman.com/) colleague [Arnaud Lauret](https://twitter.com/apihandyman) (the API Handyman) gave a talk at the [API Specifications Conference](https://events.linuxfoundation.org/openapi-asc/) (ASC) in 2022 entitled [OpenAPI 3.x Does What Swagger 2.0 Don’t](https://www.youtube.com/watch?v=WePbF4_7RkY).

Last but not least, the OpenAPI Initiative now has an official [getting started guide](https://oai.github.io/Documentation).

So, with all that going on, is there anything much else to add? Indeed there is, so let's take a top-down stroll through the OpenAPI specification, to focus on the details of what has changed.

### Top-Level changes

As part of this release, we have decided to not follow SemVer anymore, and as such allow ourselves to introduce minor, but breaking changes. These changes are documented as part of the [release notes](https://github.com/OAI/OpenAPI-Specification/releases/tag/3.1.0).

#### Additions

* Introduced a new top-level field - `webhooks`. This allows describing out-of-band registered webhooks that are available as part of the API.
* The Info Object has a new `summary` field.
* The License Object now has a new `identifier` field for [SPDX](https://spdx.dev/) license codes.
* The Components Object now has a new entry `pathItems`, to allow for reusable Path Item Objects to be defined within an OpenAPI document.

#### Extended Functionality

* Updated primitive types to be based on JSON Schema Specification Draft 2020-12. This now includes type `"null"`.
* Lifted the restriction of allowing Request Body only in HTTP methods where the HTTP 1.1 specification [RFC9110](https://www.rfc-editor.org/rfc/rfc9110) has explicitly defined semantics. While now allowed for other methods, this use is still not recommended.
* Added support for the object type for `spaceDelimited` and `pipeDelimited` style values.
* The Encoding Object now supports `style`, `explode` and `allowReserved` for the `multipart/form-data` media type as well.
* To enable better webhooks support, expressions in the Callback Object can now also reference Path Item Objects.
* When using the Reference Object, `summary` and `description` fields can now be overridden. This is the default behaviour; tooling may optionally allow merging or appending the text.
* The Schema Object is now fully compliant with JSON Schema draft 2020-12 (see [JSON Schema Core](https://json-schema.org/draft/2020-12/json-schema-core.html) and [JSON Schema Validation](https://json-schema.org/draft/2020-12/json-schema-validation.html)). See also, Breaking Changes.
* The `$ref` keyword within Schema Objects can now contain [relative JSON Pointers](https://json-schema.org/draft/2020-12/relative-json-pointer.html).
* The Discriminator Object can now be extended with Specification Extensions, correcting an oversight in version 3.0.
* Added support for mutual TLS (`mutualTLS`) as a security scheme type.
* Security requirements (such as for API keys) can now define an array of roles that are required for execution (and not only scopes for OAuth 2.0 security schemes).
* Added the `jsonSchemaDialect` top-level field to allow the definition of a default `$schema` value for Schema Objects. This allows any past or future version of JSON Schema to be used in your OAS documents, provided that tools support them.

#### Changes

* An OpenAPI Document now requires at least one of `paths`, `components` or `webhooks` to exist at the top level. While previous versions required `paths`, now a valid OpenAPI Document can describe only `webhooks`, or even only reusable `components`. Thus, an OpenAPI Document no longer necessarily describes an API.
* Anywhere in the 3.0 Specification that had a type of Schema Object \| Reference Object has been replaced to be Schema Object only. With the move to full JSON Schema support, `$ref` is inherently part of the Schema Object and has its own defined behavior.
* Extensions prefixed with `x-oas-` and `x-oai-` are now reserved for the OpenAPI Initiative.
* The `format` keyword is now not validated by default. It is treated as an annotation. Tooling may allow opt-in validation on a best-case basis.
* The `allowEmpty` property on `parameters` is now deprecated as it had confusing and less-than-useful behaviour.

#### Breaking changes

* The specification versioning no longer follows SemVer.
* The `nullable` keyword has been removed from the Schema Object (`"null"` can be used as a type value).
* `exclusiveMaximum` and `exclusiveMinimum` do not accept boolean values (following JSON Schema). They are independent keywords which take a `number`.
* Due to the compliance with JSON Schema, there is no longer interaction between `required` and `readOnly`/`writeOnly` in relation to requests and responses.
* `format` (whether `byte`, `binary`, or `base64`) is no longer used to describe file payloads. As part of JSON Schema compliance, now `contentEncoding` and `contentMediaType` can be used to control this.
* The Server Object Variable's `enum` array now MUST not be empty (changed from SHOULD).
* The Server Object Variable's `default` property now MUST exist in the `enum` values, if such values are defined (changed from SHOULD).
* `responses` are no longer required to be defined under the Operation Object.

#### Clarifications

* Reworded the definition of OpenAPI Document to reflect that a document no longer must describe `paths`, but can describe either `paths`, `webhooks`, `components` or any combination of these.
* Dropped the term "RESTful APIs" in favor of "HTTP APIs".
* Resolution of relative references has been redefined and clarified. Note there's a difference in resolution between Schema Object References and all others.
* Modification of examples to improve them and provide context for new fields and objects.
* It is now clarified what happens when path template expressions do not have a corresponding path parameter.
* Data types (and just primitive data types) now correspond to JSON Schema.
* A new section was added to address how to handle the `$schema` keyword (implicitly and explicitly).
* Updated some inline links to more accurate or secure locations.
* Path parameter values cannot contain the unescaped characters `/`, `?` or `#`.
* Further explanation of where Reference Object and JSON Schema's reference should be used.
* Unified wording when values are URLs/URIs.
* Reworded Path Item's `$ref` to take into account reference and component changes.
* Minor text changes to improve consistency and readability.
* The description of the Reference Object has been updated to further clarify its behavior.
* Further updated Schema Object's description to take into account the latest draft, and the default use of `https://spec.openapis.org/oas/3.1/dialect/base` as the default OAS dialect.
* Reworded "Schema Vocabularies" to "Schema dialects".

## Conclusion

I hope this guide proves helpful to those considering using OAS 3.1, and those migrating from earlier versions. As ever, let me know in the comments if anything is wrong, unclear or missing.

Why not get involved in the [discussions](https://github.com/OAI/moonwalk/discussions) around a tentative OpenAPI 4.0, codename 'Moonwalk'?
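To make two of the breaking changes listed above concrete (the removal of `nullable`, and `exclusiveMaximum` becoming a standalone numeric keyword), here is a minimal before/after schema sketch; the `Discount` schema and its property names are purely illustrative:

```yaml
# OpenAPI 3.0.x style:
Discount:
  type: object
  properties:
    code:
      type: string
      nullable: true          # 3.0 keyword, removed in 3.1
    percentage:
      type: number
      maximum: 100
      exclusiveMaximum: true  # 3.0: boolean modifier on `maximum`

# Equivalent OpenAPI 3.1.0 style (plain JSON Schema 2020-12):
Discount:
  type: object
  properties:
    code:
      type: ["string", "null"]  # `null` simply joins the type union
    percentage:
      type: number
      exclusiveMaximum: 100     # 3.1: independent keyword taking a number
```

Because the 3.1 form is plain JSON Schema, any 2020-12-aware validator can consume these schemas directly.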
mikeralphson
1,424,212
Virtual Coffee Lightning Talks
Virtual Coffee streamed lightning talks last week. Today our AV team has clipped them from the 3 hour...
0
2023-04-03T14:12:16
https://dev.to/jarvisscript/virtual-coffee-lightning-talks-48km
discuss, javascript, career, speaking
Virtual Coffee streamed lightning talks last week. Today our AV team has clipped them from the 3-hour stream into individual talks, so you can watch or re-watch these short videos. The talks covered public speaking, regular expressions, storytelling, the job hunt, survivor bias, and more.

{% embed https://www.youtube.com/watch?v=9XNeuv5W7xE %}

@gant gave a humorous talk on public speaking.

{% embed https://www.youtube.com/watch?v=RmUhP1Nxqkw %}

{% embed https://www.youtube.com/watch?v=RtASfQrr6YQ %}

I am very proud of all the work the Virtual Coffee members put into presenting these talks. Here is the [Virtual Coffee YouTube page](https://www.youtube.com/@VirtualCoffeeIO/videos).

Thanks to the speakers and @bekahH, @danieltott, and @virtualcoffee.
jarvisscript
1,424,311
#interpreted_language
Q: What is interpreted language means? An interpreted language is a programming language...
0
2023-04-03T14:26:16
https://dev.to/mahfuzurrahman01/interpretedlanguage-318l
webdev, javascript, beginners, programming
## Q: What does "interpreted language" mean?

An interpreted language is a programming language where the code is executed directly, without being compiled into machine code first. In an interpreted language, the source code is read by the interpreter, which then executes the instructions directly.

Interpreted languages are typically easier to learn and use than compiled languages because they do not require the developer to perform a compilation step before running the code. Interpreted languages also tend to have more flexible and dynamic features than compiled languages, making them well-suited for rapid prototyping and scripting.

Some examples of popular interpreted languages include Python, JavaScript, Ruby, and PHP.
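As a small illustration of the dynamic flexibility mentioned above, an interpreted language such as JavaScript can even parse and run source code that only exists as a string at runtime (a minimal sketch; the arithmetic expression is arbitrary):

```javascript
// In an interpreted language, the engine can parse and execute source code
// assembled at runtime; there is no separate ahead-of-time compilation step
// the developer has to run first.
const source = "2 + 3 * 4";

// `new Function` hands the string straight to the JavaScript engine,
// which parses and executes it like any other code.
const evaluate = new Function(`return ${source};`);

console.log(evaluate()); // 14
```

This kind of runtime code execution is exactly what makes interpreted languages convenient for rapid prototyping and scripting, though evaluating arbitrary strings should be avoided with untrusted input.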
mahfuzurrahman01
1,424,763
Create Your Technical Interview Toolkit: Plan and Prepare for Your Next Software Engineering Interview
It's time to create your technical interview toolkit! Fill it with resources that will help you plan,...
0
2023-04-03T15:45:05
https://dev.to/bytesofbree/create-your-technical-interview-toolkit-plan-and-prepare-for-your-next-software-engineering-interview-322d
career, technicalinterviews
It's time to create your technical interview toolkit! Fill it with resources that will help you plan, prepare, and pass your next software engineering interview. Coding interviews can feel difficult, but with the right preparation, you'll walk into your next interview with confidence. These are some of my favorite resources specifically for coding interviews. You may not need every item on this list, so feel free to pick the ones that will enhance your interview experience.<br><br>

![Header for data structures and algorithms](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h7rapb7g8t6jbhyoxioh.png)

## 1\. Practice Data Structures & Algorithms and System Design

💻 [Cracking the Coding Interview](https://www.amazon.com/Cracking-Coding-Interview-Gayle-McDowell/dp/0984782850/) | 💸 $39.95
A walkthrough and deep dive of 189 data structures & algorithms technical interview questions

💻 [Technical Interview Handbook](https://www.techinterviewhandbook.org/) | 💸 Free
Guided behavioral and technical interview prep that includes data structure & algorithm prep, resume guides, and salary negotiation advice

💻 [System Design Primer](https://github.com/donnemartin/system-design-primer) | 💸 Free
Learn the ins and outs of building scalable systems

💻 [FullStack Cafe](https://www.fullstack.cafe/) | 💸 Free Tier & Pro ($69/lifetime access)
Questions, answers, and explanations for the most common full stack and mobile dev, data structure, and system design interview questions

💻 [Tech Dev Guide: Interview Prep by Google](https://techdevguide.withgoogle.com/paths/interview/) | 💸 Free
Interview prep materials and coding questions previously used during Google's hiring process

💻 [Tech Mock Interview](https://www.techmockinterview.com/) | 💸 Varies
Technical and behavioral mock interviews with experts at top tech companies

💻 [Hiring Without Whiteboards](https://github.com/poteto/hiring-without-whiteboards) | 💸 Free
A community-maintained list of companies and teams whose interview processes skip whiteboard-style algorithm challenges

💻 [Grokking Dynamic Programming Patterns for Coding Interviews](https://www.educative.io/courses/grokking-dynamic-programming-patterns-for-coding-interviews) | 💸 [Educative.io](http://Educative.io) membership ($59/monthly OR $199/annually)
Learn to solve dynamic programming problems and identify dynamic programming patterns for coding interviews

💻 [Grokking the System Design Interview](https://www.educative.io/courses/grokking-modern-system-design-interview-for-engineers-managers) | 💸 [Educative.io](http://Educative.io) membership ($59/monthly OR $199/annually)
Learn and practice modern system design to prepare for coding interviews

💻 Ace the Coding Interview | 💸 [Educative.io](http://Educative.io) membership ($59/monthly OR $199/annually)
Get ready for technical interviews within your niche of software engineering with these comprehensive interview prep courses

* [Ace the JavaScript Coding Interview](https://www.educative.io/path/ace-javascript-coding-interview)
* [Ace the Java Coding Interview](https://www.educative.io/path/ace-java-coding-interview)
* [Ace the Python Coding Interview](https://www.educative.io/path/ace-python-coding-interview)
* [Ace the C++ Coding Interview](https://www.educative.io/path/ace-cpp-coding-interview)
* [Ace the Frontend Interview](https://www.educative.io/path/ace-front-end-interview)<br><br>

![Header for coding section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zkvtliup3bzit96ipqrf.png)

## 2\. Practice Coding

💻 [HackerRank](https://www.hackerrank.com/dashboard) | 💸 Free
Practice solving coding problems with data structures and algorithms

💻 [LeetCode](https://leetcode.com/) | 💸 Free Tier & Pro ($35/monthly OR $159/annually)
Practice solving coding problems with data structures and algorithms

💻 [Great Frontend](https://www.greatfrontend.com/) | 💸 Multiple plans available - $29/month, $128/lifetime
Gear up for frontend development interviews with frontend technical challenges, curated study plans, and interview simulation

💻 [Frontend Mentor](https://www.frontendmentor.io) | 💸 Free Tier & Pro ($96/annually OR $12/monthly)
Practice using HTML, CSS, JavaScript, and frontend frameworks with Frontend Mentor. They provide the design and assets; you provide the code.

💻 [Frontend Practice](https://www.frontendpractice.com/) | 💸 Free
Practice using HTML, CSS, JavaScript, and frontend frameworks by replicating real company websites as best as you can!<br><br>

![Header for mock interview section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbm9yhq6h2zeefudjixk.png)

## 3\. Mock Interviews

💻 [Pramp](https://www.pramp.com/#/) | 💸 Free
Free data structures & algorithms, product management, behavioral, system design, frontend, and data science mock interviews

💻 [Interviewing.io](http://Interviewing.io) | 💸 Interviews start at $150/interview
Anonymous technical, behavioral, and management mock interviews with real-time feedback<br><br>

{% embed https://youtu.be/9Axp3syl0Pc %}

---

Thank you for reading! I hope this article was informative or entertaining (or both)! If you liked this post, feel free to like and follow me on my socials around the web.

[![Bree's social media links](https://cdn.hashnode.com/res/hashnode/image/upload/v1677099944032/f4747bca-d26d-49ce-ac8b-b0fcbf569dd1.png)](https://www.bytesofbree.com/hello)
bytesofbree
1,424,776
WebRtc Websocket
I have developed an application for video chat to connect random developers for coding problem...
0
2023-04-03T16:11:25
https://dev.to/veercodeprog/webrtc-websocket-142i
node, javascript, programming, tutorial
I have developed a video chat application to connect random developers for coding problem-solving hangouts. I want to integrate a screen-sharing option into it. Any references would be appreciated...
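In case it helps others with the same question: in the browser, screen capture is exposed through `navigator.mediaDevices.getDisplayMedia()`, and in a WebRTC call you typically swap the outgoing camera track for the screen track with `RTCRtpSender.replaceTrack()`. A rough sketch follows (the `peerConnection` argument is a placeholder for an already-negotiated `RTCPeerConnection`, not code from any specific app):

```javascript
// Sketch: capture the screen and swap it into an existing WebRTC call.
// `peerConnection` is a placeholder for an already-negotiated RTCPeerConnection.
async function shareScreen(peerConnection) {
  // Ask the browser for a screen/window/tab capture stream.
  const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const screenTrack = screenStream.getVideoTracks()[0];

  // Replace the outgoing camera track with the screen track.
  const sender = peerConnection
    .getSenders()
    .find((s) => s.track && s.track.kind === "video");
  await sender.replaceTrack(screenTrack);

  // Fires when the user stops sharing from the browser's own UI.
  screenTrack.onended = () => {
    // Switch back to the camera track here if desired.
  };
}
```

Because `replaceTrack()` avoids SDP renegotiation, the remote peer simply starts receiving the screen instead of the camera.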
veercodeprog
1,424,801
Create a custom Symfony Normalizer for mapping values
The requirements and history The task was to integrate a CRM (Emarsys) into the...
22,911
2023-04-03T16:33:06
https://dev.to/alaugks/create-a-custom-symfony-normalizer-for-mapping-values-4nc2
symfony, php, api
--- title: "Create a custom Symfony Normalizer for mapping values" published: true description: tags: symfony, php, api series: Mapping FieldValueIDs for the payload of the Emarsys API --- ## <a name="the_requirements_and_history"></a> The requirements and history The task was to integrate a CRM ([Emarsys](https://emarsys.com/)) into the e-commerce platform. The CRM provides predefined [system fields](https://help.emarsys.com/hc/en-us/articles/115004637665-Overview-The-Emarsys-system-fields) and [field types](https://help.emarsys.com/hc/en-us/articles/115004634729-overview-available-field-types). [Custom fields](https://help.emarsys.com/hc/en-us/articles/115004634689-End-user-guides-Creating-custom-fields) can also be created. The label for the field and the field name can be assigned by the user. The FieldID for the field and the FieldValueID for the choices are assigned by the CRM when the field is created. The field names are not relevant when interacting with the Emarsys API. Here are some examples: | Label | Fieldname | FieldID | Type: FieldValue => FieldValueID | |:------------------------|:-----------------------|:--------|:-----------------------------------------------------------| | Salutation | salutation | 46 | Single-choice:<br>Male => 1<br>Female => 2<br> Divers => 6 | | Firstname | firstName | 1 | Short text | | Lastname | lastName | 2 | Short text | | Email | email | 3 | Long text | | Birthdate | birthday | 4 | Date: YYYY-MM-DD | | Marketing Information * | marketing\_information | 100674 | Single-choice:<br>Yes* => 1<br>No* => 2 | *custom To create a contact in the CRM with this data * Salutation: Female * Name: Jane Doe * Email: jane.doe<span>@</span>example<span>.</span>com * Birthday: 1989-11-09 * Marketing Information: Yes the payload must be: ```json { "1": "Jane", "2": "Doe", "3": "jane.doe@example.com", "4": "1989-11-09", "46": "2", "100674": "1" } ``` The response body, for example, when searching for a contact, has the same structure. 
However, all the fields of the contact are provided. ### Built-in mapping in the API-Client From the beginning, the [snowcap/emarsys](https://github.com/snowcap/Emarsys) package has been used as the API client. This package is good, extensible and provides a simple mapping (Fieldname <-> FieldId). The system fields are already stored in the package. The built-in mapping is sufficient if you have only a few custom fields. However, if you have 30 custom fields and multiple accounts in use, it becomes difficult to maintain and manage. ### Use a custom Attribute So it would be better to configure the FieldID and the mapping to the FieldValueID directly on the properties. So I implemented a custom attribute and a simple PropertyReader and PropertyWriter. I have taken the solution with the [custom attribute](https://github.com/alaugks/article-php-attribute-emarsys-example) out of the project and made it available on github. ## Using the Symfony serializer component and creating a custom normalizer I wanted to see if it was possible to replace the custom attribute with features provided by the [Symfony serializer component](https://symfony.com/doc/current/components/serializer.html). ### FieldIDs with attribute #[SerializedName] First the FieldIDs. This is easily done with the `#[SerializedName]` attribute. 
```php <?php namespace App\Dto; use Symfony\Component\Serializer\Annotation\Context; use Symfony\Component\Serializer\Annotation\SerializedName; use Symfony\Component\Serializer\Normalizer\DateTimeNormalizer; class ContactDto { #[SerializedName("1")] private ?string $firstname = null; #[SerializedName("2")] private ?string $lastname = null; #[SerializedName("3")] private ?string $email = null; #[Context([DateTimeNormalizer::FORMAT_KEY => 'Y-m-d'])] #[SerializedName("4")] private ?\DateTimeInterface $birthdate = null; #[SerializedName("46")] private ?string $salutation = null; #[SerializedName("100674")] private ?bool $marketingInformation = null; /* getter and setter */ } ``` ### Normalize/Denormalize and Serialize/Deserialize With the Serializer you can normalize, denormalize, serialize and deserialize, encode and decode. With `serialize()`, the `normalize()` and `encode()` are called, and with a `deserialize()`, the `denormalize()` and `decode()` are called. I use `normalize()` and `denormalize()` because I need an array for the API client: ```php <?php use Symfony\Component\Serializer\Serializer; $contactDto = new ContactDto(); $contactDto->setSalutation('FEMALE'); $contactDto->setFirstname('Jane'); $contactDto->setLastname('Doe'); $contactDto->setEmail('jane.doe@example.com'); $contactDto->setBirthdate(new \DateTimeImmutable('1989-11-09')); $contactDto->setMarketingInformation(true); // Normalize $fields = $this->serializer->normalize($contactDto); /* Array ( [1] => Jane [2] => Doe [3] => jane.doe@example.com [4] => 1989-11-09 [46] => FEMALE [100674] => true ) */ // Denormalize $contactDto = $this->serializer->denormalize($fields, ContactDto::class); /* App\Dto\ContactDto Object ( [firstname:App\Dto\ContactDto:private] => Jane [lastname:App\Dto\ContactDto:private] => Doe [email:App\Dto\ContactDto:private] => jane.doe@example.com [birthdate:App\Dto\ContactDto:private] => DateTimeImmutable Object ( [date] => 1989-11-09 13:54:11.000000 [timezone_type] => 3 [timezone] => UTC ) 
[salutation:App\Dto\ContactDto:private] => FEMALE [marketingInformation:App\Dto\ContactDto:private] => 1 ) */ ``` This cannot yet be sent to CRM via the API because CRM cannot handle the FieldValue FEMALE for `$salutation` and true for `$marketingInformation`. For this I create a custom normalizer and denormalizer. #### ⚠️ Note: #[SerializedName] with a number as name in Symfony 6.2 Denormalize FieldIDs does not work in Symfony 6.2. This has to do with an array_merge() when denormalizing in the symfony/serializer package in version 6.2. I have created a [pull request](https://github.com/symfony/symfony/pull/49700) for a fix. The pull request has already been reviewed. I guess the fix will be merged soon. > ✅ This bug is fixed in version 6.3.5. ### Create a custom Normalizer The Symfony serializer component provides several [built-in normalizers](https://symfony.com/doc/5.4/components/serializer.html#built-in-normalizers) for transforming data. For example, there is the [DateTimeNormalizer](https://github.com/symfony/symfony/blob/2ed1fef5af37373f448c260c21e0db19b4be8794/src/Symfony/Component/Serializer/Normalizer/DateTimeNormalizer.php) to transform a DateTime object into a date format and a date format into a DateTime object. For my case, I need a normalizer that transforms **FEMALE** into FieldValueID **2** for `$salutation` and a denormalizer that transforms FieldValueID **2** into **FEMALE**. To do this, I create the MappingTableNormalizer (Normalizer and Denormalizer), which implements the [NormalizerInterface](https://github.com/symfony/symfony/blob/5.4/src/Symfony/Component/Serializer/Normalizer/NormalizerInterface.php) and [DenormalizerInterface](https://github.com/symfony/symfony/blob/5.4/src/Symfony/Component/Serializer/Normalizer/DenormalizerInterface.php) interfaces. 
The serializer calls the `supportsNormalization()` and `supportsDenormalization()` functions of all registered normalizers and denormalizers to determine which normalizer or denormalizer to use to transform an object. A custom type must be created to ensure that the MappingTableNormalizer is applied to the `$salutation` and `$marketingInformation` properties. I have created [StringValue](https://github.com/alaugks/article-serializer/blob/symfony-5.4/app/src/Normalizer/Value/StringValue.php) (`$salutation`) and [BooleanValue](https://github.com/alaugks/article-serializer/blob/symfony-5.4/app/src/Normalizer/Value/BooleanValue.php) (`$marketingInformation`). The `supportsNormalization()` and `supportsDenormalization()` methods check whether the MappingTableNormalizer is responsible for the object. Here is the full implementation of the [MappingTableNormalizer](https://github.com/alaugks/article-serializer/blob/symfony-5.4/app/src/Normalizer/MappingTableNormalizer.php): ```php <?php namespace App\Normalizer; use App\Normalizer\Value\BooleanValue; use App\Normalizer\Value\StringValue; use App\Normalizer\Value\ValueInterface; use Symfony\Component\Serializer\Normalizer\DenormalizerInterface; use Symfony\Component\Serializer\Normalizer\NormalizerInterface; class MappingTableNormalizer implements NormalizerInterface, DenormalizerInterface { public const TABLE = 'mapping_table'; private const SUPPORTED_TYPES = [ StringValue::class, BooleanValue::class ]; public function normalize(mixed $object, string $format = null, array $context = []): ?string { $mappingTable = $this->getMappingTable($context); $key = array_search($object->getValue(), $mappingTable); if($key) { return (string)$key; // Force string } return null; } public function supportsNormalization(mixed $data, string $format = null): bool { return $data instanceof ValueInterface; } public function denormalize($data, $type, $format = null, array $context = array()): mixed { $mappingTable = 
$this->getMappingTable($context); foreach ($mappingTable as $key => $value) { if ((string)$key === $data) { return new $type($value); } } return new $type(null); } public function supportsDenormalization($data, $type, $format = null): bool { return in_array($type, self::SUPPORTED_TYPES); } private function getMappingTable(array $context): array { if (!isset($context[self::TABLE]) || !is_array($context[self::TABLE])) { throw new \InvalidArgumentException('mapping_table not defined'); } return $context[self::TABLE]; } } ``` #### Attribute #[Context] for the MappingTable The `#[Context]` attribute is used to define the MappingTable on the `$salutation` and `$marketingInformation` properties. ```php <?php namespace App\Dto; use App\Normalizer\MappingTableNormalizer; use App\Normalizer\Value\BooleanValue; use App\Normalizer\Value\StringValue; use Symfony\Component\Serializer\Annotation\Context; use Symfony\Component\Serializer\Annotation\SerializedName; class ContactDto { // Other properties #[Context([MappingTableNormalizer::TABLE => ['1' => 'MALE', '2' => 'FEMALE', '6' => 'DIVERS']])] #[SerializedName("46")] private ?StringValue $salutation = null; #[Context([MappingTableNormalizer::TABLE => ['1' => true, '2' => false]])] #[SerializedName("100674")] private ?BooleanValue $marketingInformation = null; // Other getter and setter public function getSalutation(): ?StringValue { return $this->salutation; } public function setSalutation(?StringValue $salutation): void { $this->salutation = $salutation; } public function getMarketingInformation(): ?BooleanValue { return $this->marketingInformation; } public function setMarketingInformation(?BooleanValue $marketingInformation): void { $this->marketingInformation = $marketingInformation; } } ``` #### Register the custom normalizer The normalizer still needs to be registered if you are not using the [default services.yaml 
configuration](https://symfony.com/doc/current/service_container.html#service-container-services-load-example). ```yaml services: serializer.normalizer.mapping_table_normalizer: class: 'App\Normalizer\MappingTableNormalizer' tags: ['serializer.normalizer'] ``` #### Skip null values Then there should be no normalization of properties that are null. These properties will also not be included in the normalized array. Fields that are not in the payload will not be updated in CRM. ```yaml framework: serializer: default_context: '%serializer_default_context%' parameters: serializer_default_context: skip_null_values: true ``` #### Normalize the ContactDto to an array ```php <?php use Snowcap\Emarsys\Client; use Symfony\Component\Serializer\Serializer; private Client $client; private Serializer $serializer; $contactDto = new ContactDto(); $contactDto->setSalutation(new StringValue('FEMALE')); $contactDto->setFirstname('Jane'); $contactDto->setLastname('Doe'); $contactDto->setEmail('jane.doe@example.com'); $contactDto->setBirthdate(new \DateTimeImmutable('1989-11-09')); $contactDto->setMarketingInformation(new BooleanValue(true)); // Normalize ContactDto $fields = $this->serializer->normalize($contactDto); /* Array ( [1] => Jane [2] => Doe [3] => jane.doe@example.com [4] => 1989-11-09 [46] => 2 [100674] => 1 ) */ // Create Contact (API-Request) $this->client->createContact($fields) ``` #### Denormalize the array to a ContactDto ```php <?php use Snowcap\Emarsys\Client; use Symfony\Component\Serializer\Serializer; private Client $client; private Serializer $serializer; // Get Contact (API-Request) $response = $this->client->getContact(['3' => 'jane.doe@example.com']); $fields = $response->getData() /* Array ( [1] => Jane [2] => Doe [3] => jane.doe@example.com [4] => 1989-11-09 // other fields [46] => 2 // other fields [100674] => 1 // other fields ) */ // Denormalize Array $conatctDto = $this->serializer->denormalize($fields, ContactDto::class); /* App\Dto\ContactDto Object ( 
[firstname:App\Dto\ContactDto:private] => Jane [lastname:App\Dto\ContactDto:private] => Doe [email:App\Dto\ContactDto:private] => jane.doe@example.com [birthdate:App\Dto\ContactDto:private] => DateTimeImmutable Object ( [date] => 1989-11-09 13:54:11.000000 [timezone_type] => 3 [timezone] => UTC ) [salutation:App\Dto\ContactDto:private] => App\Normalizer\Value\StringValue Object ( [value:App\Normalizer\Value\StringValue:private] => FEMALE ) [marketingInformation:App\Dto\ContactDto:private] => App\Normalizer\Value\BooleanValue Object ( [value:App\Normalizer\Value\BooleanValue:private] => 1 ) ) */ ``` ## Serializer default service configuration The getters and setters for `$salutation` and `$marketingInformation` have `StringValue` and `BooleanValue` as the type. This is because the ObjectNormalizer and ReflectionExtractor are enabled by default in symfony applications. This ObjectNormalizer read and write in the object with the PropertyAccess component. This means that the ObjectNormalizer can access properties directly and through getters, setters, haters, issers, canners, adders and removers. In the ReflectionExtractor ([PropertyInfo component](https://symfony.com/doc/5.4/components/property_info.html)) it tries to get the [type declaration](https://github.com/symfony/symfony/blob/5.4/src/Symfony/Component/PropertyInfo/Extractor/ReflectionExtractor.php#L142) from the mutator (set, add, remove), accessor (get, is, has, can) or constructor (in that order) based on the name of the property. If this is not possible, it will get the type from the property's type declaration. So the ReflectionExtractor gets the type declaration from `setSalutation()`. If we were to define string as a type hint, `$type` for `supportsDenormalization()` would be of type string. Thus, we can no longer ensure that MappingTableNormalizer is applied to `$salutation`. This is because when the serialiser applies the normaliser to the properties, the property is not disclosed, just the value. 
In my opinion, this default service configuration cannot be disabled or controlled by configuration parameters. ### PropertyNormalizer and custom PropertyTypeExtractor I am not fan of this Serializer default service configuration, because I would prefer that the getter and setter for `$salutation` and `$marketingInformation` does not have the [ValueInterface](https://github.com/alaugks/article-serializer/blob/symfony-5.4/app/src/Normalizer/Value/ValueInterface.php) as the type declaration. Instead, I want the getters and setters to have the type declaration string or bool: ```php <?php class ContactDto { private ?StringValue $salutation = null; private ?BooleanValue $marketingInformation = null; public function getSalutation(): ?string { return $this->salutation?->getValue(); } public function setSalutation(?string $salutation): void { $this->salutation = new StringValue($salutation); } public function isMarketingInformation(): ?bool { return $this->marketingInformation?->getValue(); } public function setMarketingInformation(?bool $marketingInformation): void { $this->marketingInformation = new BooleanValue($marketingInformation); } } ``` Therefore I need the PropertyNormalizer to read and write directly on the properties and a custom PropertyTypeExtractor to read only the type declaration from the properties. 
### Serializer service configuration for my custom requirements ```yaml services: 'App\Service\CrmSerializerService': arguments: - '@crm_serializer' crm_serializer: class: 'Symfony\Component\Serializer\Serializer' arguments: $normalizers: - '@serializer.normalizer.datetime' - '@app.normalizer.mapping_table_normalizer' - '@crm_serializer.property_normalizer' $encoders: [] crm_serializer.property_normalizer: class: 'Symfony\Component\Serializer\Normalizer\PropertyNormalizer' arguments: $nameConverter: '@serializer.name_converter.metadata_aware' $propertyTypeExtractor: '@crm_serializer.reflection_extractor' $defaultContext: '%serializer_default_context%' crm_serializer.reflection_extractor: class: 'Symfony\Component\PropertyInfo\PropertyInfoExtractor' arguments: $typeExtractors: - '@app.service.property_info.property_type_extractor' app.service.property_info.property_type_extractor: class: 'App\Service\PropertyInfo\PropertyTypeExtractor' app.normalizer.mapping_table_normalizer: class: 'App\Normalizer\MappingTableNormalizer' ``` I use the PropertyNormalizer instead of ObjectNormalizer, because it only reads and writes the value from the property. Instead of the ReflectionExtractor I use a custom [PropertyTypeExtractor](https://github.com/alaugks/article-serializer/tree/symfony-5.4-property-normalizer/app/src/Reflection/PropertyTypeExtractor.php) (PropertyTypeExtractorInterface) which reads the type declaration only from the property. The AdvancedNameConverterInterface is also needed to convert the name of the property (`salutation` <-> `46`). The $defaultContext is the serializer configuration with skip null values. Then I populate the serializer with the required objects: * Normalizer * DateTimeNormalizer * MappingTableNormalizer * PropertyNormalizer * Encoders * Encoders are not needed at all, because only normalization and denormalization are done. 
For the service configuration I use named arguments, because the other arguments are taken care of by the autowire and the default serializer service configuration can be used here. I also have an example as an custum serializer object ([CrmSerializer](https://github.com/alaugks/article-serializer/tree/symfony-5.4-property-normalizer/app/src/Serializer/CrmSerializer.php)) that you can look at on github. ```php <?php use App\Dto\ContactDto; use Symfony\Component\Serializer\Serializer; use Symfony\Component\Serializer\SerializerInterface; class CrmSerializerService { private Serializer $serializer; // Only Symfony Serializer public function __construct(SerializerInterface $serializer) { $this->serializer = $serializer; } public function normalize(ContactDto $contactDto): array { return $this->serializer->normalize($contactDto); } public function denormalize(array $data): ContactDto { return $this->serializer->denormalize($data, ContactDto::class); } } $contactDto = new ContactDto(); $contactDto->setSalutation('FEMALE'); /* other setter */ $contactDto->setMarketingInformation(true); // Normalize ContactDto $fields = $this->crmSerializerService->normalize($contactDto); /* Array ( [1] => Jane [2] => Doe [3] => jane.doe@example.com [4] => 1989-11-09 [46] => 2 [100674] => 1 ) */ ``` ## Conclusion The Symfony Serializer component is a powerful tool. I have many ideas on how to improve and simplify existing implementations with custom normalisers, especially in existing projects. 
## Full examples * [Custom attribute implementation](https://github.com/alaugks/article-php-attribute-emarsys-example) * [Symfony Serializer with default service configuration](https://github.com/alaugks/article-serializer/tree/symfony-5.4) * [Symfony Serializer with custom service configuration (PropertyNormalizer)](https://github.com/alaugks/article-serializer/tree/symfony-5.4-property-normalizer) ## Updates * Series name defined (May 5th 2023) * Update series name (May 8th 2023) * Add anchor for a deeplink (May 7th 2023) * Add bugfix note. (Nov 10th 2023) * Fix broken links (Dez 30th 2023)
alaugks
1,424,810
The future of Code Testing and Debugging is here
Artificial intelligence (AI) is revolutionizing the way we test and debug code. The use of AI in code...
0
2023-04-03T16:59:30
https://dev.to/ananddas/the-future-of-code-testing-and-debugging-is-here-3l2b
debug, codequality, future
Artificial intelligence (AI) is revolutionizing the way we test and debug code. The use of AI in code testing and debugging can help developers to write more efficient and secure code, while also reducing the time and resources required to test and debug code. In this post, we will explore the ways in which AI is being used in code testing and debugging, the benefits of using AI in this process, and the challenges that must be overcome to fully leverage the power of AI in code testing and debugging. ### How AI can help AI reduces complexity, making software development a more streamlined process that is easier to manage, quicker to deploy, and more secure. Debugging and testing code are critical steps in software development. Developers use code testing to make sure their code works as intended and to identify and fix any bugs. For sure AI can help, but sometimes people wonder if we even will need developers anymore. Don't worry, AI won't take over the world and make all of us redundant. Instead, it'll just make our jobs easier, so we can spend more time enjoying a cup of coffee or two! Traditionally, code testing and debugging were done manually. To ensure that the code works as expected, developers write test cases to test different scenarios. When the test cases are executed manually, the code is checked to see if it works as expected. An error or bug needs to be fixed manually by the developer by identifying the cause and making the necessary changes. Processes like this can be time-consuming and resource-intensive, and they can also be error-prone. Many developers joke about 90% of their time being spent in debugging. These challenges can be overcome with AI-powered code testing and debugging. Artificial intelligence can identify potential bugs or errors and suggest fixes quickly and accurately. In addition to saving time and resources, this reduces human errors when testing and debugging code. 
A key benefit of AI in code testing and debugging is that it can help make the software development process more efficient. Developers can spend more time on new features and improvements because AI-based systems test and debug code faster and better than humans. AI-based systems can also identify potential problems before they become problems, reducing the amount of bugs and errors in a final product. In addition, security vulnerabilities can be identified and fixed with AI-based systems, so attacks and data breaches can be prevented. Furthermore, AI-based systems can detect and respond to security incidents more quickly, which can help minimize their damage. There are several types of AI-based systems you can use to test and debug your code. A common way to find bugs or errors in code is to use machine learning algorithms. Using code samples and test results, these algorithms identify potential issues in new code based on learning from huge datasets. Another way to understand and analyze code is to use natural language processing (NLP). With NLP-based systems, you can understand the meaning of code and find bugs, etc. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1em236htfsr2ysdgrpjn.png) Here's an example of how natural language processing (NLP) can be used in code testing and debugging: ### Example: Developer's query: "Why am I getting a 'null pointer exception' on line 45 of my `main.java` file?" **1. Input: **The developer inputs the query "Why am I getting a 'null pointer exception' on line 45 of my `main.java `file?" into the NLP system. **2. NLP Processing:** The NLP system processes the input and converts it into a structured representation, such as a parse tree or a semantic representation. Extracting the following information: Error type: null pointer exception File name: `main.java` Line number: 45 **3. 
Searching and Analysis:** The NLP system uses this information to search the codebase and relevant documentation for relevant information and potential solutions to the problem described in the query. **4. Output:** The NLP system presents the developer with relevant information, such as code snippets or documentation that may help to resolve the issue, or suggests specific modifications to the code that could fix the bug. This example shows how NLP can be used to understand a developer's query about an error in their code, extract relevant information from the query, search for related information in the codebase and documentation, and provide the developer with useful information or suggestions for resolving the issue. The third way to test and debug code is with genetic algorithms. In genetic algorithms, code samples are tested and the best ones are used to create new code samples. This genetic evolution process can be used to find the best solution to a problem or to optimize code. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4e6y1ppn7xjzf5crs90.png) As an example, let's say a developer is developing a website for an e-commerce store. The developer has written code for the website and wants to make sure it works. The AI-based system might find that there's a problem with the checkout code. The system might suggest a fix, such as adding more validation to the code. To make sure the problem has been fixed, the developer can implement the suggested fix and run the code through the AI-based system again. For example, let's say that the AI-based system identified an error when validating the customer's credit card information. It might suggest the following code to fix it: ``` if (!creditCardNumber.matches("[0-9]{16}")) { throw new IllegalArgumentException("Invalid credit card number"); } ``` The code checks if the credit card number entered by the customer is 16 digits long and only contains numbers. 
Exceptions are thrown if the information does not meet these criteria, and the customer is prompted to enter a valid credit card number. While AI-based systems are great for identifying and fixing bugs and errors, they're not always perfect. It's possible that the AI-based system won't be able to spot a problem or will suggest an incorrect fix. It's important for the developer to use their experience and knowledge to troubleshoot the problem. ### There are a number of AI-based tools currently available for code testing and debugging. Some popular options include: - **DeepCode:** This tool uses machine learning to analyze code and find bugs. It also suggests fixes for any issues it finds. This tool can be integrated with popular development environments like Eclipse, IntelliJ, and Visual Studio Code to make it more accessible to developers. - **CodeScene:** This tool uses AI to analyze code and identify potential issues, such as complex or duplicated code. It also provides insights into the overall health and maintainability of a codebase. CodeScene can be integrated with popular version control systems such as GitHub, GitLab and Bitbucket, making it easy for developers to use. - **Bito:** Bito is an AI platform that uses the same models as ChatGPT to help developers write code, test cases, check security, explain concepts, etc. Bito's AI-powered code testing and debugging features include automatic test case generation, automated code review and security scanning, and natural language explanations of code concepts. This AI-based tool can be integrated with various development environments and can be used for various programming languages. These AI-based tools can be a great addition to a developer's toolbox and can help them to write efficient and secure code while also reducing the time and resources required for testing and debugging. In summary, AI-based systems are transforming the way we test and debug code. 
They can speed up and improve the security of the software development process by identifying and resolving bugs and errors quickly. However, it's important to remember that AI systems aren't always perfect, so developers should use their own experience when troubleshooting.
ananddas
1,424,843
Let's expect change and reload!
It's often that we create a service that is supposed to change an attribute on an ActiveRecord...
0
2023-04-03T17:44:09
https://dev.to/szymonlipka/lets-expect-change-and-reload-1dhp
ruby, rails, testing, webdev
It often happens that we create a service that is supposed to change an attribute on an ActiveRecord object. Testing such a service can be tricky. ## Expect change Let's say we have an incorrect service that is supposed to update the name of a user in the database, looking like this: ```ruby class UpdateUserName def initialize(user) @user = user end def call # TODO: Will be done in the near future, until 2200 end private attr_reader :user end ``` And we have a spec that is supposed to test it: ```ruby require 'rails_helper' RSpec.describe UpdateUserName do it "should update user name" do user = create(:user) described_class.new(user).call expect(user.name).to eq "funny name" end end ``` What happens when we run the spec? ![Passing spec](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dutxx86iifg5r5j73fmz.png) ![Wtf](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/txzilqvtic8tolkarjbe.png) Why did it pass, you might ask. The user's name was "funny name" all along, so even though the service call did not change anything, the spec passed. We should rewrite it to something like this: ```ruby require 'rails_helper' RSpec.describe UpdateUserName do it "should update user name" do user = create(:user) expect { described_class.new(user).call }.to change(user, :name).to "funny name" end end ``` ![Failing spec](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67zyhakrr9ldeutdjfbv.png) Awesome! We've discovered an incorrect implementation with a spec!
## Reload an object Going forward with the same example of a spec: ```ruby require 'rails_helper' RSpec.describe UpdateUserName do it "should update user name" do user = create(:user, name: "not so funny name") expect { described_class.new(user).call }.to change(user, :name).to "funny name" end end ``` but we have upgraded our service and it now looks like this: ```ruby class UpdateUserName def initialize(user) @user = user end def call user.name = "funny name" end private attr_reader :user end ``` What happens if I run the test? ![Passing spec](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q44v13czwdxil4l26r5s.png) Is the service implemented correctly? In this particular case, we expect it to persist changes to the database, so it is not correct, but the spec is passing. Let me fix the spec: ```ruby require 'rails_helper' RSpec.describe UpdateUserName do it "should update user name" do user = create(:user, name: "not so funny name") expect { described_class.new(user).call }.to change { user.reload.name }.to "funny name" end end ``` ![Failing spec](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rv1zhji417vs1nx8igvh.png) So now we've discovered an incorrect implementation again! Let me fix the service: ```ruby class UpdateUserName def initialize(user) @user = user end def call user.name = "funny name" user.save end private attr_reader :user end ``` ![Passing specs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l14z4ic1vl5tuwa3hzk0.png) ![Victory](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pdmzbk52xuf7n74o0mru.png) ## Conclusion When we test services like this, we should expect a change of an attribute and reload the record to check whether the change was really persisted, or whether it only happened on the in-memory object we passed in.
szymonlipka
1,425,068
What is SAML and how SAML authentication works
What is SAML Security Assertion Markup Language (SAML) is an XML-based standard protocol...
22,478
2023-04-03T20:43:22
https://ssojet.com/blog/what-is-saml-and-how-saml-authentication-works/
saml, sso, tutorial, webdev
## What is SAML Security Assertion Markup Language (SAML) is an XML-based standard protocol for exchanging authentication data between two parties. SAML is designed to enable Single Sign-On (SSO) across different applications and systems that belong to the same organization or consortium. SAML allows a user to log in once and then access multiple applications or services without having to log in again for each application or service; this is exactly what SSO means. SAML is based on the concept of a trust relationship between the identity provider (IdP) and the service provider (SP). The IdP is responsible for authenticating the user and providing the necessary identity information, in the form of SAML assertions, to the SP. The SP uses the SAML assertions to grant or deny access to resources. SAML is widely used in enterprise environments, by online service providers, and by government agencies, and it is one of the most popular SSO protocols. The SAML standard is maintained by the Organization for the Advancement of Structured Information Standards (OASIS), and it is continually evolving to meet the changing security and privacy requirements of modern internet-based applications. ## Benefits of SAML **Secure**: SAML allows for the secure transfer of authentication and authorization data between parties and ensures that user identity and access information remains confidential. SAML was designed with the security requirements of enterprises and regulated industries in mind, which is why it is a highly secure protocol. **SSO**: Using SAML, organizations can implement SSO across their applications, meaning users can access multiple web applications and services without logging in multiple times. It solves the problem of maintaining separate credentials for each application belonging to one organization. **Scalability**: SAML supports a wide range of authentication and authorization scenarios and use cases. This makes SAML highly scalable and adaptable to a variety of business requirements.
Its flexibility and security make it a fit for most industries. **Interoperability**: SAML is a popular SSO protocol, which means it can be used with applications and systems from different vendors. Its specifications are well defined and offer great flexibility without requiring customization, which ensures that applications across organizations are compatible with each other. **Save cost**: SAML reduces the cost of managing user authentication and access to multiple applications and services; logging in once reduces the number of authentications that have to be processed across applications. **Enhanced User Experience**: SSO makes the user experience better: when an organization has multiple applications, users don't have to sign up or log in multiple times. **Compliance**: SAML is well designed for enterprise requirements; it covers the security and privacy scenarios that make it a compliance-friendly protocol. ## SAML terminology **Identity Provider (IdP)**: The Identity Provider authenticates users and generates SAML assertions that contain data about the user's identity and access. **Service Provider (SP)**: The Service Provider is the application that users want to access after successful authentication by the Identity Provider. The Service Provider accepts the SAML assertion sent by the Identity Provider. **SAML Assertion**: A SAML assertion is an XML document that contains data about the user's identity (NameID and attributes) and access, as well as metadata about the assertion itself, such as its validity period, public key, and the IdP that issued it. **SAML Protocol**: A set of rules and methods for exchanging SAML assertions between the Identity Provider and the Service Provider. In January 2001, the OASIS Security Services Technical Committee (SSTC) convened for the first time with a mandate to create an XML framework to facilitate the exchange of authentication and authorization information. **Attribute**: An attribute is a user profile field, such as name, email address, or group.
It is part of a SAML assertion. Attributes are used by the SP to identify the user and grant access accordingly. **NameID**: A NameID is a unique identifier that is assigned to a user by the IdP and included in a SAML assertion. The NameID is used by the SP to identify the user across different applications. **Metadata**: Metadata is information about an Identity Provider or Service Provider; the SP requires the IdP's metadata and the IdP requires the SP's metadata to establish trust between the two. Metadata includes information about the SP's or IdP's endpoints (assertion consumer service URL, SLO URL, etc.), certificate, audience, and other relevant details. **Subject**: The Subject refers to the user on whose behalf the SAML assertion has been generated; it also contains the NameID XML tag. **Single Logout (SLO)**: A process that enables a user to log out of all web applications or services that use SAML authentication with a single action. **Binding**: A method for transmitting SAML messages between an IdP and an SP, such as HTTP Redirect, HTTP POST, or SOAP. ## SAML flows The general process of creating and consuming a SAML assertion involves the following steps: **User Attempts to Access Restricted Resources**: The user attempts to access a Service Provider (SP) application that requires authentication. The SP redirects the user to the Identity Provider (IdP) for authentication. **IdP Authentication**: The IdP authenticates the user in this step if a user session doesn't already exist. The IdP can authenticate using various methods, for example username and password, two-factor authentication, or smart card authentication. **Assertion Creation**: Once the user is authenticated, the IdP creates a SAML assertion that contains data about the user and the authentication status. The IdP signs the assertion using its private key to ensure its authenticity and integrity. **Assertion Delivery**: The IdP sends the SAML assertion to the SP via the user's browser, using either the HTTP POST or HTTP Redirect binding.
The assertion is sent in XML format. **Assertion Validation**: The SP receives the SAML assertion and validates it by verifying the signature, checking the expiration date, and verifying that the assertion is intended for the SP. **Attribute Extraction**: Once the SAML assertion is validated, the SP extracts user attributes such as name, email, and group. **Session Creation**: The SP creates a session for the user, allowing the user to access the SP application. There are two types of flows in SAML: the IdP-initiated and the SP-initiated flow. ## IdP initiated SAML The IdP-initiated flow is a scenario where the user is first authenticated by the Identity Provider (IdP) and then redirected to a Service Provider (SP) application without the user having to initiate the request. The process involves the following steps: **IdP Authentication**: The user tries to access a specific SP application. The IdP authenticates the user in this step if a user session doesn't already exist. The IdP can authenticate using various methods, for example username and password, two-factor authentication, or smart card authentication. **Assertion Creation**: Once the user is authenticated, the IdP creates a SAML assertion that contains data about the user and the authentication status. The IdP signs the assertion using its private key to ensure its authenticity and integrity. **Assertion Delivery**: The IdP sends the SAML assertion to the SP via the user's browser, using either the HTTP POST or HTTP Redirect binding. The assertion is sent in XML format. **Assertion Validation**: The SP receives the SAML assertion and validates it by verifying the signature, checking the expiration date, and verifying that the assertion is intended for the SP. **Attribute Extraction**: Once the SAML assertion is validated, the SP extracts user attributes such as name, email, and group. **Session Creation**: The SP creates a session for the user, allowing the user to access the SP application.
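In the HTTP POST binding, the delivery step above is typically implemented as an auto-submitting HTML form that carries the base64-encoded response in a `SAMLResponse` field. A minimal Python sketch of generating such a form (the function name and layout here are illustrative, not taken from any specific SAML library):

```python
import base64
import html

def saml_post_form(acs_url: str, saml_response_xml: str, relay_state: str = "") -> str:
    """Build an auto-submitting HTML form for the SAML HTTP-POST binding."""
    # The SAML response XML travels base64-encoded in a hidden form field.
    encoded = base64.b64encode(saml_response_xml.encode("utf-8")).decode("ascii")
    return (
        f'<form method="post" action="{html.escape(acs_url)}">\n'
        f'  <input type="hidden" name="SAMLResponse" value="{encoded}"/>\n'
        f'  <input type="hidden" name="RelayState" value="{html.escape(relay_state)}"/>\n'
        f'</form>\n'
        f'<script>document.forms[0].submit();</script>'
    )
```

The SP then base64-decodes the `SAMLResponse` field it receives and validates the XML document inside it.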
In the IdP-initiated flow, the user is first authenticated by the IdP, and the request is initiated by the IdP, which then sends the SAML assertion to the SP. This flow is typically used in situations where the user is on the IdP portal and wants to access the SP directly. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfcn4dsbjhf86jsxl3v8.png) ## SP-Initiated SAML The SP-initiated flow is a scenario where the user initiates the request to access a Service Provider (SP) application and is then redirected to the Identity Provider (IdP) for authentication. The process involves the following steps: **User Attempts to Access Restricted Resources**: The user attempts to access a Service Provider (SP) application that requires authentication. **SP Request:** The SP determines that the user needs to be authenticated and sends a SAML request to the IdP, requesting the user's authentication and authorization information. **SP Request Validation**: The IdP receives the SAML request and validates it by verifying its signature. **IdP Authentication**: The IdP authenticates the user in this step if a user session doesn't already exist. The IdP can authenticate using various methods, for example username and password, two-factor authentication, or smart card authentication. **Assertion Creation**: Once the user is authenticated, the IdP creates a SAML assertion that contains data about the user and the authentication status. The IdP signs the assertion using its private key to ensure its authenticity and integrity. **Assertion Delivery**: The IdP sends the SAML assertion to the SP via the user's browser, using either the HTTP POST or HTTP Redirect binding. The assertion is sent in XML format. **Assertion Validation**: The SP receives the SAML assertion and validates it by verifying the signature, checking the expiration date, and verifying that the assertion is intended for the SP. **Attribute Extraction**: Once the SAML assertion is validated, the SP extracts user attributes such as name, email, and group.
**Session Creation**: The SP creates a session for the user, allowing the user to access the SP application. In the SP-initiated flow, the user initiates the request to access the SP application, and the SP sends a SAML request to the IdP for authentication and authorization. This flow is typically used in situations where the user needs to access a specific resource or application directly. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n8e7wmxkfpd2ass0eyty.png) ## SAML Use cases **Workforce SSO** SAML is very popular for workforce SSO: all workforce SSO providers support SAML, so it can be integrated with internal tools, whether SaaS or on-prem. Using workforce SSO, companies can control their employees' access from a single dashboard, covering onboarding, management, and offboarding. Since SAML is a well-defined protocol, it is highly secure and flexible, which fits the enterprise ecosystem for identity use cases. Enterprises and mid-sized businesses widely use workforce SSO. **B2B SaaS SSO** Because enterprises and mid-sized businesses use workforce SSO, any B2B SaaS solution that serves, or wants to serve, this segment needs to integrate SAML so that its customers' workforce SSO can be connected to its system. Most B2B SaaS platforms these days support integration with workforce SSO.
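On the SP side, the non-cryptographic part of assertion validation (the validity window and the audience restriction) can be sketched with Python's standard library alone. This is a simplified illustration, not a complete validator: signature verification is deliberately omitted and must be done with a dedicated SAML library (for example python3-saml) in any real deployment, and the element layout assumed here is the generic SAML 2.0 one, not any particular vendor's:

```python
import datetime
import xml.etree.ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def check_assertion_conditions(assertion_xml, expected_audience, now=None):
    """Check a SAML assertion's validity window and audience restriction."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    root = ET.fromstring(assertion_xml)
    cond = root.find("saml:Conditions", NS)

    def parse(ts):
        # SAML timestamps use a trailing "Z" for UTC.
        return datetime.datetime.fromisoformat(ts.replace("Z", "+00:00"))

    # Reject assertions outside their NotBefore/NotOnOrAfter window.
    if not (parse(cond.get("NotBefore")) <= now < parse(cond.get("NotOnOrAfter"))):
        return False
    # Reject assertions that were issued for a different SP.
    audiences = [a.text.strip() for a in
                 cond.findall("saml:AudienceRestriction/saml:Audience", NS)]
    return expected_audience in audiences
```

Checking the audience prevents an assertion issued for one SP from being replayed against another.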
**Example SAML Response:** ``` <samlp:Response ID="_257f9d9e9fa14962c0803903a6ccad931245264310738" IssueInstant="2009-06-17T18:45:10.738Z" Version="2.0"> <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity"> https://www.salesforce.com </saml:Issuer> <samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status> <saml:Assertion ID="_3c39bc0fe7b13769cab2f6f45eba801b1245264310738" IssueInstant="2009-06-17T18:45:10.738Z" Version="2.0"> <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity"> https://www.salesforce.com </saml:Issuer> <saml:Signature> <saml:SignedInfo> <saml:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <saml:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <saml:Reference URI="#_3c39bc0fe7b13769cab2f6f45eba801b1245264310738"> <saml:Transforms> <saml:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/> <saml:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <ec:InclusiveNamespaces PrefixList="ds saml xs"/> </saml:Transform> </saml:Transforms> <saml:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <saml:DigestValue>vzR9Hfp8d16576tEDeq/zhpmLoo= </saml:DigestValue> </saml:Reference> </saml:SignedInfo> <saml:SignatureValue> AzID5hhJeJlG2llUDvZswNUrlrPtR7S37QYH2W+Un1n8c6kTC Xr/lihEKPcA2PZt86eBntFBVDWTRlh/W3yUgGOqQBJMFOVbhK M/CbLHbBUVT5TcxIqvsNvIFdjIGNkf1W0SBqRKZOJ6tzxCcLo 9dXqAyAUkqDpX5+AyltwrdCPNmncUM4dtRPjI05CL1rRaGeyX 3kkqOL8p0vjm0fazU5tCAJLbYuYgU1LivPSahWNcpvRSlCI4e Pn2oiVDyrcc4et12inPMTc2lGIWWWWJyHOPSiXRSkEAIwQVjf Qm5cpli44Pv8FCrdGWpEE0yXsPBvDkM9jIzwCYGG2fKaLBag== </saml:SignatureValue> <saml:KeyInfo> <saml:X509Data> <saml:X509Certificate> MIIEATCCAumgAwIBAgIBBTANBgkqhkiG9w0BAQ0FADCBgzELM [Certificate truncated for readability...]
</saml:X509Certificate> </saml:X509Data> </saml:KeyInfo> </saml:Signature> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"> saml01@salesforce.com </saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"> <saml:SubjectConfirmationData NotOnOrAfter="2009-06-17T18:50:10.738Z" Recipient="https://login.salesforce.com"/> </saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2009-06-17T18:45:10.738Z" NotOnOrAfter="2009-06-17T18:50:10.738Z"> <saml:AudienceRestriction> <saml:Audience>https://saml.salesforce.com</saml:Audience> </saml:AudienceRestriction> </saml:Conditions> <saml:AuthnStatement AuthnInstant="2009-06-17T18:45:10.738Z"> <saml:AuthnContext> <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified </saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> <saml:AttributeStatement> <saml:Attribute Name="portal_id"> <saml:AttributeValue xsi:type="xs:anyType">060D00000000SHZ </saml:AttributeValue> </saml:Attribute> <saml:Attribute Name="organization_id"> <saml:AttributeValue xsi:type="xs:anyType">00DD0000000F7L5 </saml:AttributeValue> </saml:Attribute> <saml:Attribute Name="ssostartpage" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified"> <saml:AttributeValue xsi:type="xs:anyType"> http://www.salesforce.com/security/saml/saml20-gen.jsp </saml:AttributeValue> </saml:Attribute> <saml:Attribute Name="logouturl" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"> <saml:AttributeValue xsi:type="xs:string"> http://www.salesforce.com/security/del_auth/SsoLogoutPage.html </saml:AttributeValue> </saml:Attribute> </saml:AttributeStatement> </saml:Assertion> </samlp:Response> ``` ## Conclusion SAML solves security and user experience problems with great flexibility; it is the de facto solution when we think about identity exchange between two parties.
SAML's strength is its well-defined specification, which makes it fit most use cases of identity federation and SSO. SAML is not as popular in B2C applications, where JWT, OAuth, and OIDC are the well-known protocols.
devsso
1,425,120
The AWS Academy Cloud Architecting - Capstone Project
The AWS Academy Cloud Architecting Capstone Project was all about designing and implementing a...
0
2023-04-04T12:22:50
https://dev.to/efat25/the-aws-academy-cloud-architecting-capstone-project-492f
aws, cloud
The AWS Academy Cloud Architecting Capstone Project was all about designing and implementing a cloud-based solution using Amazon Web Services to solve a particular business problem. This included developing an architectural plan, deploying and configuring the required AWS services, and implementing the solution using industry best practices. Additionally, I made sure that this project related to cost optimisation by selecting and making use of the most efficient computing resources when initialising processes (as a budgeting precaution, of course), which could always be scaled up in case of business growth. I followed a simple procedure in order to discover the issues and carry out the required tasks. --- ## Inspecting the architecture In this initial phase, I just wanted to have a look at the environment - what AWS had already provided us, as well as any guesses on what was missing from the scenario. These are some of the things I decided to do before starting: - Inspect the VPC. - Inspect the Subnets. - Inspect the Security Groups. - Inspect the Instances. ## The Cloud 9 IDE Shortly after creating an AWS Cloud9 environment, I used the following command to get the ".zip" file containing the PHP and image files for the organisation's website, which was then extracted. `wget <link of the zip file>` ![wget](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n72wg8bglzmj2rnbig3c.png)**Figure 1** ## The LAMP web server stack on Linux The following commands were used to install the LAMP stack: `sudo yum -y update` `sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2` `sudo yum install -y httpd mariadb-server` `sudo systemctl start httpd` `sudo systemctl enable httpd` `sudo systemctl is-enabled httpd` This stack is essential for us to successfully deliver the website in a simple yet stable way! LAMP stands for Linux, Apache, MySQL, and PHP. Together, they provide a proven set of software for delivering high-performance web applications.
Each component contributes essential capabilities to the stack. After installing the stack, I simply: - Opened port 80 from the security group of the Cloud9 EC2 instance - Got the Cloud9 EC2 instance's public IP address and tested that I could access the website ![Port80enable](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/glgzjjpuvvem9tjlhq12.png)**Figure 2** ## Creating a MySQL RDS database instance First of all, I created an AWS RDS subnet group in the private subnets in zones us-east-1a and us-east-1b. ![Subnet Group](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xy9opx24c2sbnbhrxlu4.png)**Figure 3** Then I proceeded to create an AWS RDS database with the following specifications: - Database type: MySQL - Template: Dev/Test - DB instance identifier: Example - DB instance size: db.t3.micro - Storage type: General Purpose (SSD) - Allocated storage: 20GiB - Storage autoscaling: Enabled - Standby instance: Enabled - Virtual private cloud: ExampleVPC - Database authentication method: Password authentication - Initial database name: exampledb - Enhanced monitoring: Disabled ## Creating an Application Load Balancer An Application Load Balancer is a requirement, so I created one using the following criteria: - Create a target group - Launch web instances in the private subnet ## Importing the data into the RDS database Used the `wget <SQL dump file link>` command on Cloud9 to get the file with the sample data, then connected and imported the data into the RDS database using: `mysql -u admin -p --host <rds-endpoint>` `mysql -u admin -p exampledb --host <rds-endpoint> < Countrydatadump.sql` ## Parameters Store Configuration Added the following parameters to the Parameter Store and set the correct values: **/example/endpoint** **/example/username** **/example/password** **/example/database exampledb** ## Creating a Launch Template and an Autoscaling Group The final steps of this project consisted of: - Modifying the IAM role of the instance created by Cloud9 to enable queries on the
website - Creating an image of the instance (AMI) - Modifying the Launch Template to use the recently created AMI - Using the Launch Template with the correct AMI ID for the Auto Scaling group creation This allowed me to connect to the website by entering the Load Balancer's endpoint; it queried the data from the RDS database successfully too (check out my design for this scenario, which sums up the architecture). ![AWS Architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a2ln8u46dp7xqhcs2qzc.png)**Figure 4** --- In conclusion, the AWS Academy Cloud Architecting 2.x - Capstone Project allowed me to develop an understanding of some concepts about creating a solution in a potentially real-life scenario. This project improved my overall confidence and knowledge about cloud environments, since in order to create a fully-functioning architecture there are crucial factors to consider. Lastly, I would recommend this project to everyone that is trying to commence their journey on the cloud, because not only does this project challenge you to come up with solutions whenever there is an issue or whenever you are stuck, but it also gives you some substantial hands-on experience and a taste of architecting a realistic case.
efat25
1,425,138
Considerations before creating a hybrid infrastructure with AWS
What is a Hybrid Architecture? A hybrid architecture combines computing resources, including local...
0
2023-04-03T22:58:49
https://dev.to/aws-builders/considerations-before-creating-a-hybrid-infrastructure-with-aws-4j2f
**What is a Hybrid Architecture?** A **hybrid architecture** **combines computing resources**, **_including local infrastructure_** **and _cloud-based services_**. This is typically done by companies that **want to leverage the benefits of cloud computing while still maintaining control** over specific digital **data** **or applications** that they prefer to keep **on-premises**. _"The tricky part in making a hybrid car wasn't sticking a battery and an electric motor into a petrol-powered car. Getting the two systems to work seamlessly and harmoniously was the critical innovation."_ Gregor Hohpe - Cloud Strategy: A Decision-Based Approach to Successful Cloud Migration **Types of hybrid setups** Gregor also wrote a [great article](https://architectelevator.com/cloud/hybrid-cloud/#hybrid-cloud-ways-to-slice-the-elephant) describing different scenarios for hybrid architectures. Eight types of scenarios are identified: ![hybrid cloud scenarios](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhybmdi90d2c9ew0zna4.png) Let's pick the **Workload Demand strategy**: _Companies can benefit from the cloud's elasticity to increase the capacity of their services in a burst, when needed. Another benefit is the cloud billing model: you pay for what you use, when you use it._ **Example** Imagine the scenario where you are the solutions architect in a company that sells online tickets for the last concert of a famous group, let's say Rammstein. ![Exercise description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7i9sfso5cwlrt2li79o3.png) You are going to receive so many requests when the tickets go on sale that it will look like you are receiving a DDoS attack.
You are in charge, nothing can fail, the reputation of the company is in your hands, and you don't want your server room to look like this: ![This is fine](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1scycjn7noyl6fei1p5.jpg) If **you build a proper hybrid architecture**, _you can overcome any overload problem and avoid any disaster or the chance of offering a bad service to your end customers_. The following image is a simplified version of a hybrid architecture solution for our example → ![Hybrid Architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wc1xywezkdfrd14jzh3i.png) In this architecture, we extend and distribute the application between the different EC2 instances and the on-premises hardware using the [Direct Connect](https://aws.amazon.com/directconnect/?nc1=h_ls) service. These EC2 instances are part of auto scaling groups in different AZs that will scale out in a burst based on our defined hardware-utilization rules. Thanks to the elasticity of the cloud, this design will scale out and then scale in once the tickets are sold out. If, for example, you had used a VPN instead of the Direct Connect service, you might end up having synchronization issues if there is high latency between the on-premises environment and the cloud. How can you keep communication almost in real time with bad latency? In the end, you would have customers with a bad experience, and this would negatively impact your image as an architect and your employer. This scenario can be avoided. Not all hybrid architectures have the same requirements; for this one, low latency is a must.
**Considerations** **What options do I have to create a hybrid cloud infrastructure?** **VPN over the internet** ![VPN](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ldwj61li5cwev1osa0zm.png) _In a hybrid-cloud scenario, VPN is the fastest way to achieve the goal, but there are some downsides if you have to rely on this solution:_ - The connection is encrypted, but it is not private - DDoS risk - Unpredictable latency - Limited throughput – up to 1.25 Gbps (it can scale with the use of a transit gateway) - Low setup costs but high egress traffic costs after a certain amount of data - No end-to-end SLAs **AWS Direct Connect** ![AWS Direct Connect](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pdkcxjla1yjmrw1nm198.png) Direct Connect is a private way to connect your on-premises infrastructure to AWS with a fiber-optic connection inside a data center. This solution is not the fastest or cheapest to deploy and it is more complicated to design, but it provides some advantages, like: - Extra security (connection outside of the public internet), though not encrypted by default (with the possibility of [MACsec encryption](https://docs.aws.amazon.com/directconnect/latest/UserGuide/MACsec.html)) - Lowest possible latency - High throughput – from 50 Mbps up to 100 Gbps - Cost-effective solution after a certain amount of data - Enterprise-grade SLA **What to choose: Direct Connect or VPN?** This will depend on the company's needs; SLA, latency, bandwidth, and time to deploy are some factors that will help you make the final decision. [![Direct Connect vs VPN](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gmwr3kprl07kx6qzm3ju.jpg)](https://docs.aws.amazon.com/whitepapers/latest/hybrid-connectivity/connectivity-type-selection-summary.html) **Conclusion** **Planning and implementing a network design** that meets your business needs and requirements **is essential** to ensure a successful hybrid cloud deployment in AWS.
_Not doing a proper analysis can negatively impact the business in all aspects._ **Networking is the key to success**: _Strong networking, seamless integration_
luzma_1
1,425,423
Coding a Port Scanner with Python
Port scanning is a way for determining which ports on a network device are open, whether it's a...
0
2023-04-04T05:34:51
https://dev.to/jsquared/coding-a-port-scanner-with-python-5he7
ethicalhacking, security, cybersecurity, penetrationtesting
Port scanning is a way of determining which ports on a network device are open, whether it's a server, a router, or a regular machine. To put it simply, a port scanner is just a script or a program that is designed to probe a host for open ports. In this blog, I will show you step-by-step how to code a simple port scanner using the pre-installed socket library. The idea of the port scanner is to try to connect to a host (it could be a website, a server, or any device connected to a network or the internet) through a list of ports. If the scanner establishes a connection, that means the port is open. **DISCLAIMER: THIS IS FOR EDUCATIONAL PURPOSES ONLY. DO NOT USE THIS ON A HOST THAT YOU DO NOT HAVE PERMISSION TO TEST. PORT SCANNING IS NOT ILLEGAL UNLESS IT IS USED TO GAIN UNAUTHORIZED ACCESS OR BREACH PRIVACY.** First things first, if you want to print in colors, you will need to install colorama (this is completely optional): ``` pip3 install colorama ``` With that out of the way, now we can actually start coding the scanner.
First, let's import the `socket` module: ``` import socket # for connecting to the host from colorama import init, Fore # adding some colors (optional) init() GREEN = Fore.GREEN RESET = Fore.RESET GRAY = Fore.LIGHTBLACK_EX ``` _**The socket module is already built into the Python standard library, so you don't need to install it.**_ `colorama` is used later when the program prints the ports that are open or closed (again, this is optional). Next, let's create a function that will be used to decide whether a port is open or not: ``` def is_port_open(host, port): # determines whether the host has the port open # creates a new socket s = socket.socket() # set a timeout here if you want it a little faster (means less accuracy) # s.settimeout(0.2) <-- the timeout must be set before connect try: # tries to connect to the host using that port s.connect((host, port)) except: # cannot connect (port is closed), so return False return False else: # the connection is established (port is open) return True ``` The `s.connect((host, port))` call attempts to connect the socket to a remote address given by the `(host, port)` tuple (tuples are used to store multiple items in a single variable). It raises an exception when it fails to connect to the host, which is why we put that code into a try-except block: when the exception is raised, it tells us that the port is closed (otherwise it is open). Lastly, we can use the function we just made above and repeat it over a number of ports: ``` # asks the user to enter a host host = input("Enter the host:") # iterate over ports, from 1 to 1023 for port in range(1, 1024): if is_port_open(host, port): print(f"{GREEN}[+] {host}:{port} is open {RESET}") # prints green text for open ports else: print(f"{GRAY}[!] {host}:{port} is closed {RESET}", end="\r") # prints gray text for closed ports ``` This part of the code will scan all ports from 1 to 1023.
You can freely change the range if you so choose, but keep in mind that a larger range will take longer to scan.

## Potential Issues

Upon running the code, you will notice that the script isn't the fastest. You can change this by adding a timeout of 200 milliseconds with `s.settimeout(0.2)`, set before calling `connect`. Keep in mind that this will reduce the accuracy of the scan, especially if you have high latency.

If you want, the full source code is on [Github](https://github.com/sleepyrob0t/simple-portscanner-python).
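If the timeout trick is still too slow, the per-port checks can also run in parallel. Below is a minimal sketch using the standard library's `ThreadPoolExecutor`; the `scan` helper is my own illustrative name, and note that the timeout is set before `connect()` so that it actually takes effect:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def is_port_open(host, port, timeout=0.5):
    # same check as above, but the timeout is set BEFORE connect()
    s = socket.socket()
    s.settimeout(timeout)
    try:
        s.connect((host, port))
    except OSError:
        return False  # refused or timed out -> treat as closed
    else:
        return True
    finally:
        s.close()

def scan(host, ports, workers=100):
    # check many ports concurrently and return the open ones, sorted
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda p: (p, is_port_open(host, p)), ports)
    return sorted(p for p, is_open in results if is_open)

# usage: scan("127.0.0.1", range(1, 1025))
```

With 100 workers the same 1024-port scan finishes in a fraction of the time, at the cost of a burst of simultaneous connections.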
jsquared
1,425,558
Project 2 Update - 4/4
Update Overall in terms of the project we have made progress with setting everything up,...
0
2023-04-04T10:01:48
https://dev.to/adeo123/project-2-update-44-54m
# Update
- Overall, in terms of the project we have made progress with setting everything up. We have also got the card working so that the variables on the card can be changed through the constructor instead of hard-coded values. We are working on implementing the search feature, but are facing some difficulties in doing so.

# Q/A
1. We have not gotten the search function to render the cards yet; however, we will get it done in the next couple of days.
2. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0k68cqgts0nh98jjreu8.png)
3. An example of a micro-frontend being used in the real world could be on YouTube, where you can see the ratio of likes to dislikes on a video. The micro-service architecture approach would allow us to work on different parts of the company's website individually, allowing the website to be updated and changed gradually.

# Questions
- In terms of the search, when a card name is searched, should all other cards except the named one be hidden, or should the searched card only be highlighted?
adeo123
1,425,652
Django
Secret key In Django, the secret key is a string of random characters used for...
0
2023-04-04T10:34:03
https://dev.to/__saravanan__/django-2l5f
## Secret key

In Django, the secret key is a string of random characters used for cryptographic signing and protection against attacks such as session hijacking, cross-site request forgery, and other malicious activities. In a new Django project, a secret key is automatically generated and stored in the settings.py file. The key is secret and should not be shared with others, as anyone with access to it could potentially impersonate site users or modify data on the site.

A new secret key can be generated using django-secret-key or any other application. It can be stored as an environment variable to prevent it from being committed to version control via the settings file. Cloud-specific secret-management tools can also be used, as can separate settings files for development and production.

## Default Django apps

There are more apps than the default Django apps listed in settings. They can be found in the [src](https://github.com/django/django/tree/main/django/contrib). They are:

- `django.contrib.admin`: Provides a web administrative interface for managing the Django project's data.
- `django.contrib.auth`: Provides user authentication and authorization mechanisms.
- `django.contrib.contenttypes`: Provides a framework for associating metadata with models.
- `django.contrib.sessions`: Provides session management functionality.
- `django.contrib.messages`: Provides a way to display one-time messages to users.
- `django.contrib.staticfiles`: Provides a framework for managing static files.
- `django.contrib.humanize`: Provides a set of template filters for humanizing data.
- `django.contrib.redirects`: Provides a way to redirect URLs.
- `django.contrib.sitemaps`: Provides a framework for generating sitemaps.
- `django.contrib.sites`: Provides a way to manage multiple sites using a single Django installation.
- `django.contrib.admindocs`: Provides a way to automatically generate documentation for the project's models.
- `django.contrib.postgres`: Provides support for PostgreSQL-specific functionality.
- `django.contrib.gis`: Provides support for geographic data.
- `django.contrib.syndication`: Provides a framework for generating RSS and Atom feeds.
- `django.contrib.webdesign`: Provides a set of template tags for generating dummy data.

## Middleware and different kinds of middleware

Middleware in Django is a component that sits between the web server and the view and provides a way to process requests and responses in a modular way.

- Process Request Middleware: executed at the beginning of the request cycle; can be used for tasks such as authentication, setting up the request, or modifying the request object.
- View Middleware: executed just before the view function is called; can modify the view's context or perform additional processing on the request.
- Template Middleware: executed during template rendering; can add extra variables or processing to the template context.
- Process Response Middleware: executed at the end of the request cycle; can modify the response object or perform any final processing before the response is sent back to the client.

## CSRF

CSRF attacks let an attacker act through another person's website account without their permission. Django can stop this attack with its built-in protection: it checks for a secret token in each form submission, so an attacker would need to know that token to trick the website. The token is user-specific and stored in cookies. When using HTTPS, Django additionally checks that the form is coming from the same origin as the website, which makes things more secure. The csrf_exempt decorator must be used only when it is strictly necessary.

## XSS

XSS attacks are when someone injects harmful scripts into a website that can then run in other people's browsers. Django templates can help stop these attacks.
Django templates escape certain dangerous characters by default, but not all contexts are covered.

## Clickjacking

Clickjacking is when a malicious website puts another website inside a frame, tricking people into doing things they didn't mean to do. Django protects against this with the X-Frame-Options middleware, which can stop a website from being shown inside a frame in supporting browsers.

## WSGI

WSGI stands for Web Server Gateway Interface. It is a specification that defines how a web server communicates with a Python web application. In Django, WSGI allows a web server to interact with a Django application: it acts as a bridge between the two, letting the web server send requests to the Django application and receive responses. The WSGI specification provides a standard interface for web servers and Python web applications to communicate with each other.

## Models

### on_delete

`on_delete` is a parameter used when defining a foreign key relationship in Django models. It specifies what should happen when the referenced object is deleted. `on_delete=CASCADE` is one of the available options: when the referenced object is deleted, all objects that have a foreign key relationship to it are also deleted.

### Fields and validators

A model field represents a database column and defines the type of data that can be stored in that column. Validators are functions that validate the data entered into a field according to predefined rules.

## Module and class

A module is a file containing Python code that can be imported and used in other Python files or modules. A module can contain functions, variables, classes, and other objects. A class is a blueprint for creating objects that defines a set of properties and methods the objects will have.

## Django ORM in the shell

Django's Object-Relational Mapping (ORM) lets you interact with a database using Python code instead of SQL queries.
To use the ORM in the shell, import the model and call ORM functions on it.

## ORM to SQL in the Django shell

A queryset's ORM query can be converted into SQL using its `.query` attribute:

```python
queryset = random_name.objects.filter(random_val=10)
print(queryset.query)
```

## Aggregation and annotation

Aggregate calculates summary values over the entire queryset; annotate calculates summary values for each item in the queryset. Aggregation uses functions such as `Sum()`, `Avg()`, etc.

## Migration files

A migration file is a script of instructions for modifying the database schema. It changes when a model changes, and it is needed to keep the schema aligned with the models. `makemigrations` generates the migration files and `migrate` applies the changes to the database.

## SQL transactions

SQL transactions are a way of grouping a set of database operations so that they execute as a single atomic unit. A transaction allows multiple database operations to be performed as one consistent unit: either all of them succeed or none take effect. This prevents partially executed queries.

## Atomic transactions

Atomic transactions in Django ensure that either everything completes or nothing changes. They are the same idea as an SQL transaction; under the hood Django implements them with SQL transactions, depending on the database used.
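The all-or-nothing behaviour described above can be demonstrated with plain SQL transactions using the standard library's `sqlite3` module. This is a generic sketch, not Django-specific code (the `account` table and `transfer` function are made up for the example); in Django the equivalent pattern is `with transaction.atomic():` from `django.db`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, amount):
    # both UPDATEs run in one transaction: either both apply or neither does
    try:
        with conn:  # commits on success, rolls back on an exception
            conn.execute(
                "UPDATE account SET balance = balance - ? WHERE name = 'alice'",
                (amount,),
            )
            if amount > 100:
                raise ValueError("insufficient funds")
            conn.execute(
                "UPDATE account SET balance = balance + ? WHERE name = 'bob'",
                (amount,),
            )
    except ValueError:
        pass  # the failed transfer left no partial change behind

transfer(conn, 500)  # fails mid-way -> rolled back
balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)  # {'alice': 100, 'bob': 0}
```

Even though the first UPDATE executed, the rollback means alice's balance is untouched, which is exactly the guarantee `transaction.atomic` gives in Django.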
__saravanan__
1,425,699
Buy Turkish IPTV
Would you like to buy Turkish IPTV? Go to our site, where you can get a 24-hour IPTV...
0
2023-04-04T11:38:36
https://dev.to/smartiptv/turkisches-iptv-kaufen-3h7a
turkey, iptv
Would you like to buy Turkish IPTV? Go to [iptv25.net](https://iptv25.net), where you can get a 24-hour IPTV test line for Turkey. We have many Turkish channels over IPTV. Try our IPTV today; it is very stable for Turkey, and many expats watch our Turkish IPTV.
smartiptv
1,425,764
Method Overriding and Overloading in Java
Hello everyone 🤘🏼 In this article, we'll be taking a look at the concepts of method overloading and...
0
2023-04-05T00:00:00
https://dev.to/baytendev/method-overriding-and-overloading-in-java-47cm
java, beginners, programming, algorithms
Hello everyone 🤘🏼 In this article, we'll be taking a look at the concepts of method **overloading** and **overriding** in Java, which I believe are often forgotten and confused by beginners (at least I was 😁). Before delving into these two concepts in detail, I'd like to briefly mention them. Method overriding is simply changing the functionality of a method without changing its signature, while method overloading is redefining a method with a different signature. > These two definitions may not make much sense yet because I have only mentioned them briefly and in a simplistic manner. Although this section consists of only two sentences that may seem meaningless for now, by the end of the article, they will have the power to summarize the entire article. 🙂 Before we dive into the topic fully, there is one tiny concept we need to learn and understand: **The Method Signature**. Method signatures consist of the name of a method and its method parameters. As shown in the image below, the signature of the method named `justFun` consists of its name "justFun" and its parameters ``int num1``, ``int num2``. ![](https://i.hizliresim.com/9zxpi7n.jpg) I believe we have clarified this simple and short concept. Now, let's dive into the main concepts of this article. 🚀 # **Method Overriding** Method overriding, in simple terms, refers to methods that **have the same signature but perform different tasks**. How does this situation arise? Let's assume that we have a class named `Animal` with a method named `eat()`. Now, I have created another class named `Dog`, and I want it to inherit from the `Animal` class. When we perform the inheritance process, we would like to use the ``eat()`` method inherited from the super class in a way that is suitable for our subclass. Therefore, **we change its functionality**. In other words, we override the ``eat()`` method.🙂 Let's give an example! 
```java
class Animal {
    public void eat() { System.out.println("eating..."); }
}

// Overriding
class Dog extends Animal {
    public void eat() { System.out.println("eating bread..."); }
}

class Cat extends Animal {
    public void eat() { System.out.println("eating fish..."); }
}
```

As we saw in the example, I did not touch the method signatures in any way. I did not change the method name or add a new parameter to the method... **I only changed the internal dynamics of the method and gave it a completely different functionality.**

Well, you may have a question like this in your mind🤔:

> ### ***"What about the access modifiers of the methods we override? Do they have to be the same?"***

Our answer to this question will be **no**. They do not have to be the same, **but the access modifier of the overriding method must be the same as or less restrictive than that of the original method.**

![](https://i.hizliresim.com/nzibmem.jpg)

According to the restriction diagram in the image, if a ``protected`` method is to be overridden, **its access modifier cannot be default or private**. It can either remain ``protected`` or be made ``public``. In short, when overriding a method, **the access modifier in the subclass must be the same as or less restrictive than that in the superclass.**

Another question that might come up is❓,

> ### "Do the return types of the methods we override have to be the same? Can't I specify a different return type?"

This time the answer is **yes, they have to match** 🙂. When overriding a method, **the return type must remain the same, along with the method signature**. This is primarily based on Java's principle of type safety: if a subclass could use a different return type while overriding methods of the superclass, it could lead to errors in places where the superclass is used. However, there is one exception to this rule: **the return type of the overriding method can be a co-variant return type.**
For example, suppose the return type of a superclass method is ``Animal``. While overriding it, a subclass can narrow the return type to ``Cat``. This way, the object returned by the subclass is not just an ``Animal``, but the more specific subtype ``Cat``.

Another question that could arise is,

> ### "Can I override every method?"

It may be the most fundamental question to ask. 🙂 Our answer, unfortunately, is still no. 😁 A method cannot be overridden if it is:

- ***final***
- ***static***
- ***private***

This is because **overriding is built on inheritance and dynamic dispatch**: ``private`` methods are not inherited at all, ``static`` methods belong to the class rather than to an instance, and ``final`` methods explicitly forbid overriding. However, a subclass can sometimes redefine a static method of the superclass; some of us might have seen this before. That is not overriding but is called **method hiding.**

To summarize the process of overriding:

- After overriding, the method signature stays the same (method name and parameters must not change); only the functionality of the method changes.
- The return type of the overriding method must not change, or it must be a co-variant return type.
- The access modifier of the overriding method must be the same as or less restrictive than that of the original method.
- ``private``, ``static``, and ``final`` methods cannot be overridden.

Now that we have gained a general understanding of overriding, let's take a look at the concept of overloading.

# **Method Overloading**

Method overloading is **the redefinition of a method with the same name but different parameters**. This means the parameter section of the method signature changes. It allows different methods that perform the same kind of task to be defined with different parameter lists and called independently.
Let's try to clarify the situation with some short code examples:

```java
public class Calculator {
    public int summation(int num1, int num2) {
        return num1 + num2;
    }

    public int summation(int num1, int num2, int num3) {
        return num1 + num2 + num3;
    }

    public static void main(String[] args) {
        Calculator calc = new Calculator();
        int result1 = calc.summation(2, 3);
        int result2 = calc.summation(2, 3, 4);
        System.out.println("Summation of 2 and 3: " + result1);
        System.out.println("Summation of 2, 3 and 4: " + result2);
    }
}
```

In the example above, we defined two different methods named ``summation()`` inside the ``Calculator`` class: the first takes **two integer parameters** and the second takes **three integer parameters**. In the main method, we call both methods with different arguments and print the results.

Let's take a look at another example to reinforce our understanding:

```java
public class HelloWorld {
    public void greet() {
        System.out.println("Hello, world!");
    }

    public void greet(String name) {
        System.out.println("Hello, " + name + "!");
    }

    public static void main(String[] args) {
        HelloWorld helloWorld = new HelloWorld();
        helloWorld.greet();
        helloWorld.greet("John");
    }
}
```

In this example, the ``HelloWorld`` class defines two different methods named ``greet()``. The first is a simple greeting method that takes **no parameters** and prints *"Hello, world!"*. The second takes **a String parameter** and uses it to build a customized greeting. Both methods are called in the main method and their results are printed to the console.

Okay, now that we have gone through the examples, I think we have some understanding of method overloading. Let's delve into the details using the same questions we used for overriding. Our first question is as follows🎇:

> ### "When overriding a method, we had to carefully choose the access modifier: it had to be the same as the original method's or less restrictive. Is this also true for overloading?"

Our answer is no 🙂. The logic here is different from overriding, where inheritance played the leading role and made the access modifier important. In overloading **there are no such constraints, and we are completely free to choose any modifier we want**, as in the example below:

```java
public int sum(int a, int b, int c) {
    return a + b + c;
}

protected void sum() {
    System.out.print("Nothing to sum");
}
```

And for another question:

> ### "The return types of the methods we override must be the same or a co-variant type. How does this work in overloading?"

We have actually already demonstrated this in the earlier examples: the return type is not important in overloading. It can be the same or different.

> ### "Can I overload every method?"

The answer to this question is yes 🙂. There is no problem with overloading a method that is ``private``, ``static``, or ``final``. Let's reinforce this with the example below:

```java
public class Example {
    public static void doSomething(int x) {
        System.out.println("doSomething with int: " + x);
    }

    public static void doSomething(double x) {
        System.out.println("doSomething with double: " + x);
    }

    public final void doSomething(String x) {
        System.out.println("doSomething with String: " + x);
    }
}
```

To summarize the overloading process in short points:

- After overloading, **the method signature changes (the method name stays the same, the parameters change).**
- The return type of the overloaded method can **stay the same or be different.**
- The access modifier of the overloaded method can stay the same or be different.
- ``private``, ``static``, and ``final`` methods can also be overloaded; there is no restriction.
If you've made it this far, I believe you now have a good grasp of the concepts of overloading and overriding. This has been a compilation of what I've learned and researched about override and overload. Thank you for taking the time to read this far. Happy coding! 🤞🏼
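As a final illustration, the co-variant return type rule from the overriding section can be shown in a few lines. This is a minimal sketch reusing the article's Animal/Cat names; the `reproduce` method is made up for the example:

```java
class Animal {
    Animal reproduce() { return new Animal(); }
}

class Cat extends Animal {
    @Override
    Cat reproduce() { return new Cat(); } // covariant return: Cat is a subtype of Animal
}

class Main {
    public static void main(String[] args) {
        Animal a = new Cat();
        Animal child = a.reproduce(); // dynamic dispatch runs Cat.reproduce()
        System.out.println(child instanceof Cat); // prints true
    }
}
```

Callers that only know about `Animal` still compile and work, while code that holds a `Cat` reference gets the more precise `Cat` back without a cast.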
baytendev
1,425,834
Ternary Operators in under 200 words!
Have you ever wanted to write an if-else statement in one line? Welcome to Ternary Operators! The...
0
2023-04-10T04:11:19
https://dev.to/connor-ve/ternary-operators-in-under-200-words-2e6l
java, python, programming, tutorial
Have you ever wanted to write an if-else statement in one line? Welcome to ternary operators! The basic format of a ternary operator follows the pseudocode below:

```
(conditional) ? true-value : false-value
```

This simple construct lets you evaluate a conditional and have the whole expression replaced with either the true value or the false value. Because it is an expression, it can be assigned to a variable or returned from a function. For example:

```
x = (conditional) ? true-value : false-value
```

This sets x to the true value if the conditional is true, and to the false value otherwise.

In each language, ternary operators have slightly different syntax around the `?` and `:`, so let's review what the syntax looks like in both Python and Java below:

## Python Implementation

```python
x = 'hello' if val < 4 else 'goodbye'
```

## Java Implementation

```java
String x = (val < 4) ? "hello" : "goodbye";
```

Now go out there and start using ternary operators where you see fit! If you are unsure whether your language of choice supports them, please check out this [list](https://en.wikipedia.org/wiki/Ternary_conditional_operator#Programming_languages_without_the_conditional_operator)!
connor-ve
1,425,866
Notes from competing in my first CTF
Last weekend, I competed in the National Cyber League (NCL), my first cybersecurity CTF. I wrote down notes about the tools I used in the challenges and wanted to share in case anyone is curious to know how CTFs work.
0
2023-04-04T15:54:51
https://charliegerard.dev/blog/competing-cybersecurity-ctf-ncl
cybersecurity, ctf, security
--- title: Notes from competing in my first CTF published: true description: Last weekend, I competed in the National Cyber League (NCL), my first cybersecurity CTF. I wrote down notes about the tools I used in the challenges and wanted to share in case anyone is curious to know how CTFs work. tags: cybersecurity, ctf, security cover_image: https://res.cloudinary.com/devdevcharlie/image/upload/v1680571456/Group_49_lxzeso.png canonical_url: https://charliegerard.dev/blog/competing-cybersecurity-ctf-ncl --- Last weekend, I competed in the [National Cyber League](https://nationalcyberleague.org/) (NCL), my first cybersecurity CTF open to students in the US. I only started my Bachelor's degree in cybersecurity a month ago but I wanted to give it a try anyway. I had a great time, learnt a lot and wanted to share some of my notes. Unfortunately, I'm not allowed to go into too much detail about the challenges and solutions but I still wanted to share some tools I used. This CTF is broken down into 9 categories, each with multiple challenges to go through, rated from easy to hard. To avoid having people brute force the answers until they get the correct one, each time you submit an incorrect answer, your accuracy score decreases so you have to choose what you submit wisely. In the end, here's my score report below. I ranked in the top 6% nationally 🎉 so I'm excited to see how I do next time with more experience! 
![1610/3000 points with an accuracy score of 76.9% and a 69% rate of completion.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/08dtyx4jc8muloknfr8i.png)

The categories include:

- [Open source intelligence (OSINT)](#osint)
- [Cryptography](#cryptography)
- [Password Cracking](#password-cracking)
- [Log analysis](#log-analysis)
- [Network Traffic Analysis](#network-traffic-analysis)
- [Forensics](#forensics)
- [Scanning & Reconnaissance](#scanning-reconnaissance)
- [Enumeration & Exploitation](#enumeration-and-exploitation)
- [Web application exploitation](#web-app-exploitation)

## <a name="osint">Open-Source Intelligence (OSINT)</a>

This section is usually about understanding or finding data with very little context, often in notations you might not know, for example deciphering messages or being given a number that looks totally random and figuring out what it refers to. Here are some of the tools I used:

- https://opennav.com/search
- https://emvlab.org/mrz/
- https://www.dcode.fr/chiffres-symboles
- Google image search
- https://ipinfo.io/
- https://www.smartconversion.com/unit_conversion/IP_Address_Converter.aspx

## <a name="cryptography">Cryptography</a>

This section focuses on encrypting and decrypting messages or files. You are often not told how they are encrypted, so you first have to figure out which method to use to decrypt them. For this section, I mostly used [CyberChef](https://gchq.github.io/CyberChef/).

For example, you could be asked to decode the message `aGVsbG8gd29ybGQ=`. Using CyberChef, you can try different methods. If you've decoded messages before, you might recognize that it is Base64-encoded, so it decodes to `hello world`.

Some challenges also involve finding information in files of different formats. For challenges including files, I used commands such as `file` and tools like PGP with `gnupg` or OpenSSL (`openssl`).
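For quick checks outside CyberChef, the same Base64 round trip can be done from the command line with the standard `base64` utility (GNU coreutils flags assumed here):

```shell
# encode a string as Base64
printf 'hello world' | base64     # aGVsbG8gd29ybGQ=

# decode it back
printf 'aGVsbG8gd29ybGQ=' | base64 -d   # hello world
```

Piping through `printf` rather than `echo` avoids a stray trailing newline ending up in the encoded output.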
## <a name="password-cracking">Password cracking</a>

This section is pretty self-explanatory: you're given a bunch of hashed passwords and have to figure out how to crack them. For this, I downloaded wordlists such as the [rockyou wordlist](https://github.com/brannondorsey/naive-hashcat/releases/download/data/rockyou.txt) and used tools such as [Hashcat](https://hashcat.net/hashcat/) and [John the ripper](https://www.openwall.com/john/).

For example, if you wanted to crack the password hash `5f4dcc3b5aa765d61d8327deb882cf99`, you would run the following hashcat command:

```
hashcat -m 0 -a 0 5f4dcc3b5aa765d61d8327deb882cf99 <path to your wordlist>
```

## <a name="web-app-exploitation">Web Application Exploitation</a>

In this section, you have to find ways to attack vulnerable websites. I mainly used the browser's developer tools, wrote some custom code, and tried [Burpsuite](https://portswigger.net/burp) to intercept and modify requests.

## <a name="enumeration-and-exploitation">Enumeration and exploitation</a>

This section usually involves programs written in different programming languages. To solve the challenges, you might have to figure out how to run them to find the flag, or answer questions that test your understanding of the code. The tools used for this section vary a lot depending on the code samples you get: it might be Python, JavaScript, Go, PowerShell, Assembly, etc., so you have to be comfortable figuring things out.

## <a name="forensics">Forensics</a>

This section focuses on finding things in different types of files. It could involve dealing with corrupted files, understanding information in config files in a format you've never worked with, or figuring out the right tool to use to extract data.

## <a name="scanning-reconnaissance">Scanning & Reconnaissance</a>

This section is the one I am the least experienced in.
It involved scanning for open ports, finding domains connected to a server, etc. I mainly used `nmap` to scan for ports and `gobuster` to find potential subdomains.

## <a name="log-analysis">Log Analysis</a>

In this section, the challenges give you different kinds of log files to analyse. I mainly used `awk` to filter through the lines and extract the information I needed. For example, a command to display only the first field of each line of a log file would be:

```
awk '{print $1}' access.log
```

and to sort the results, remove duplicates, and count the number of entries, it would be something like this:

```
awk '{print $1}' access.log | sort | uniq | wc -l
```

## <a name="network-traffic-analysis">Network traffic analysis</a>

In this section, you are given files with network packets and you have to analyse them to find specific information. I mainly used [Wireshark](https://www.wireshark.org/) and [aircrack-ng](https://www.aircrack-ng.org/).

---

Overall it was an intense weekend, but I'm happy with how much I did and learnt. I was a bit worried about participating in a CTF because I thought I wouldn't be able to do anything, considering my little experience in cybersecurity, but I was surprised by how much I was able to solve by researching on the spot and going through the practice game a few weeks before. I'm definitely excited to learn more!
devdevcharlie
1,425,972
12 Factors: Revisiting the 7th Factor - Port Binding
Welcome to my blog! Hello everyone! In this article, we will continue our series on the 12 factors...
0
2023-04-04T15:50:19
https://dev.to/luizsfer/12-factors-revisiting-the-7th-factor-port-binding-l8f
beginners, programming, devops, cloud
Welcome to my blog! Hello everyone! In this article, we will continue our series on the [12 factors](https://12factor.net/) for the development of modern applications, a methodology originally published by the team at Heroku. If you missed our previous articles, feel free to check out the [other factors](https://luizferreira.dev/categories/12-factors/). Today, we will cover the seventh factor:

## Port Binding

The seventh factor states that the application should communicate with the outside world through a bound port. This means the application must be able to accept incoming connections and communicate with other services and components through this port.

### Why is Port Binding Important?

Binding a port ensures that the application is easy to deploy, configure, and integrate with other services and components. In addition, port binding allows the application to run in different environments and platforms without changing the source code.

### Key Principles for Port Binding

1. **Be platform-agnostic:** Your application should be able to communicate with other services and components, regardless of the platform or environment they are running on.
2. **Use standard protocols:** Use standard protocols, such as HTTP, to ensure compatibility and facilitate integration with other services and components.
3. **Flexible configuration:** Allow the bound port to be easily configured, whether through environment variables, configuration files, or command-line arguments.
4. **Treat the port as a scarce resource:** The application must be able to handle the possibility that the desired port is already in use and find an alternative port if necessary.

### Examples and Tools

Here are some examples of tools and technologies that can help implement port binding in your application:

1. **Express.js:** Express.js is a minimalist framework for Node.js that makes it easy to create web applications and APIs. It allows for simple and efficient port binding.
2.
**Flask:** Flask is a micro-framework for Python that allows for the rapid and easy creation of web applications and APIs. It also supports port binding.
3. **Apache:** Apache is a widely-used web server that allows for port binding and flexible configuration of web applications and APIs.
4. **Nginx:** Nginx is a high-performance web server and reverse proxy that supports port binding and is easy to configure.

## Stay Tuned

In the next article, we will cover the eighth factor of the 12 factors. Stay tuned and don't miss the next part of this informative series!

If you enjoyed this article, please share it with your colleagues and friends on social media. Also, don't forget to leave a comment below with your questions.
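The "flexible configuration" principle often comes down to reading the port from an environment variable with a sensible default. Below is a minimal sketch in Python; the `get_port` helper and the `PORT` variable name are common conventions used for illustration, not a requirement of the methodology:

```python
import os

def get_port(default=8000):
    # read the port from the PORT environment variable, falling back
    # to a default, and reject values that are not valid port numbers
    port = int(os.environ.get("PORT", str(default)))
    if not 0 < port < 65536:
        raise ValueError(f"invalid port: {port}")
    return port

# a framework of your choice would then bind to it, e.g. in Flask:
# app.run(port=get_port())
```

Because the port comes from the environment, the same build can run locally, in a container, or on a platform that injects `PORT` at deploy time, without any code changes.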
luizsfer
1,426,047
Rice University Boot Camp
April (week 1) Started the pre-course studies this week. So far, no problems. A lot of what I have...
0
2023-04-04T17:12:39
https://dev.to/douglasmarsalis/rice-university-boot-camp-20hn
April (week 1) Started the pre-course studies this week. So far, no problems. A lot of what I have studied on Codecademy, freeCodeCamp, and other sites has been helpful so far. I also started studying from the Head First JavaScript Programming book for practice with JS. I want to have a basic understanding of it before I start my course at Rice University on May 8th. The course will last for 6 months. I will be writing about the course, what I learn, and what problems I encounter.
douglasmarsalis
1,426,307
Using Variables in CSS makes it easy, try to learn the basics through my post...
Think about this if a client came to you or you had a project working on it in a company, and you...
0
2023-04-04T21:22:22
https://dev.to/marwanahmed77/using-variables-in-css-makes-it-easy-try-to-learn-the-basics-through-my-post-2i6
css, frontend, webdev, variables
Think about this: suppose a client comes to you, or you're working on a project at a company, and you're asked to build a website using CSS. You write, say, 6000 lines of CSS code, and after you hand over the project the client comes back and says, "This color is not the best thing for me, and we want to change it." Is it better to hunt down each element's selector inside the CSS and try to remember which color you were using and how it was written (RGB, Hex, or whatever naming system), or is it better to have a way to unify the colors across the whole site and reuse them as you want? Of course, the second solution is better and faster, but how can we apply it?

- There is something called "CSS Custom Properties", or, as an abbreviation of the name, "CSS Variables". So how can I use it?

```css
:root {
  --paragraphs-color: #FF0000;
  --p-fontSize: 1.2em;
}

p {
  color: var(--paragraphs-color);
  font-size: var(--p-fontSize);
}
```

The code above means, in short, that I define a variable named `paragraphs-color` that holds a value, the color red, and a variable named `p-fontSize` that holds the value 1.2em, which is equal to 19.2px (at the default 16px font size). I then call them inside each paragraph, and this frees me from the worry that I might forget the color I used in the paragraphs before. If you do forget, all you have to do is go back to `:root` and look at the values you initialized there. This applies to all the elements and selectors that you have on the site: you can define styles once and call them anywhere you want... I hope the information will be useful to you if this is the first time you've heard of it, and if it's not the first time, then I hope I have added something new for you.
marwanahmed77
1,426,700
How to cultivate good connections
Birds of a feather flock together. In nature, animals are mostly found in...
0
2023-04-05T02:37:43
https://dev.to/daniellimae/how-to-cultivate-goods-connections-58ji
# Birds of a feather flock together.

In nature, animals are mostly found in herds or groups: wolves in packs, fish in shoals, cows in herds ... and developers in communities. Forming groups, whether to show your importance to them or to help you reach better positions, is the natural human way. The benefits of being part of a community or group of developers cannot be ignored. By collaborating with others in your field, you can gain valuable insights and knowledge that you may not have been able to obtain on your own.

# What is your position in the murmuration?

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1hr84cwh7y7a4xecudj.png)

In a flock of birds in flight (a murmuration), each bird has a distinct position. The lead bird at the head of the V formation takes on the brunt of the wind resistance, creating a slipstream for the others to follow and fly more efficiently. Similarly, in every community, there is usually a key individual (or organization/product) that plays a crucial role in "breaking the wind" and paving the way for other members to succeed. This may be in the form of providing resources, guidance, or support to help others achieve their goals. Just as the lead bird in a flock sets the pace and direction for the rest of the group, these community leaders can inspire and empower their fellow members to reach their full potential.

### 🦅 Your journey can be significantly eased by having a supportive group accompanying you. 🦅

In addition, developer communities can also provide opportunities for mentorship and career development. More experienced developers can provide guidance and advice to those who are just starting out in the field, helping them to grow and advance in their careers.
# Assisting others

![Hand holding a bird](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8vdtoeo4f5vp78mz1y6j.png)

If you have a hard issue that GPT is not helping you with for some reason and you just got stuck, you can [ask in those communities for the solution](https://dev.to/daniellimae/is-it-time-to-ask-help-or-should-i-try-a-little-more--5h6p), or search whether someone already had the same issue as you and explore the possibilities. One effective approach to learning is to teach or explain concepts that you already know, or think you know. Assisting others can be a form of self-care that may have more benefits than you might realize.

# Where to find these communities?

💻 Online forums: Search for online forums or discussion boards related to your area of interest or expertise. Examples include Stack Overflow, Reddit's programming subreddits, and GitHub's community forums.

🤝 Meetup groups: Look for local Meetup groups that focus on your preferred programming languages or technologies. Meetup.com is a good place to start.

🎉 Conferences and events: Attend developer conferences or events in your area or region. These events can provide an excellent opportunity to network with other professionals and learn about new developments in your field.

📱 Social media: Join groups or follow hashtags related to your area of interest on social media platforms such as [Twitter](https://twitter.com/daniellimae) or LinkedIn.

📚 Online courses: Participate in online courses or tutorials and engage with the community associated with that course or platform. This can help you connect with other learners and professionals who share your interests.

# Get jobs

Being involved in a developer community can help you find a job by giving you the chance to network with other professionals, access exclusive job boards and resources, develop new skills, and gain valuable references and recommendations from experienced developers.
You can see more about jobs here : https://akinncar.substack.com/p/how-to-get-better-job-opportunities ![aa](https://i.kym-cdn.com/photos/images/newsfeed/001/858/649/b83.gif) --- Hope this post can be helpful! For some feedback or more content, follow me on [twitter](https://twitter.com/daniellimae) or [github](https://github.com/bolodissenoura)
daniellimae
1,426,716
Kubernetes 101, part VI, daemonsets
For most use cases, deploying core business apps in Kubernetes using Deployments for stateless...
21,979
2023-04-05T03:46:25
https://dev.to/leandronsp/kubernetes-101-part-vi-daemonsets-1ph0
kubernetes, docker
For most use cases, deploying **core business apps** in Kubernetes using [Deployments](https://dev.to/leandronsp/kubernetes-101-part-iv-deployments-20m3) for stateless applications and [StatefulSets](https://dev.to/leandronsp/kubernetes-101-part-v-statefulsets-5dob) for stateful applications is good enough.

Not rarely, we need to deploy components that will not perform the core business work but **will support the core business** instead. Core business apps need _observability_: **application metrics**, latency, CPU load, etc. Furthermore, core business apps need to _tell how things are going_; in other words, they need a **logging architecture**.

---

## When default logging is not enough

Once we deploy the main core business workload in Kubernetes, we can check the logs by going through each Pod manually. It can be cumbersome.

Kubernetes provides `kubectl logs`, which helps a lot, and by adding a bit of bash scripting and creativity, we can rapidly check the logs of all Pods in the cluster.

But we have to provide a better developer experience (DX) to our team, so providing only `kubectl logs` might not be enough for some cases.

---

## A potential logging solution

How about **collecting and concentrating all logs in a single place**? What if we had a **single Pod in every Node** responsible for collecting logs and sending them to a common place where developers could _easily fetch the logs_ of the cluster?

In this scenario, every Node would run a single Pod _for collecting logs_. Any time a new Node is created, some kind of "daemon controller" would make sure that a new Pod is scheduled to the new node. Thus, all Nodes would collect logs.

The picture below illustrates this potential solution:

![collecting logs in every node](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qyyrib8u22mwztlt1pcf.png)

_DaemonSets to the rescue_.
--- ## DaemonSet The Kubernetes [DaemonSet object](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) brings a DaemonSet controller that watches for Nodes creation/deletion and works to make sure every Node will have a single Pod replica of the DaemonSet. [Log collectors](https://en.wikipedia.org/wiki/Log_management) are a perfect fit for this solution. Let's create a _very dead simple log collector_ just using DaemonSet, Linux and creativity, nothing more. The YAML file looks like the following: ```yaml apiVersion: apps/v1 kind: DaemonSet metadata: name: log-collector spec: selector: matchLabels: app: log-collector template: metadata: labels: app: log-collector spec: containers: - name: log-collector image: busybox command: ["/bin/sh", "-c", "while true; do find /var/log/pods -name '*.log' -print0 | xargs -0 cat >> /logs/all-pods.log; sleep 5; done"] volumeMounts: - name: all-logs mountPath: /logs - name: var-log mountPath: /var/log/pods - name: var-containers mountPath: /var/lib/docker/containers volumes: - name: all-logs hostPath: path: /logs - name: var-log hostPath: path: /var/log/pods - name: var-containers hostPath: path: /var/lib/docker/containers ``` Some highlights: * there's no multiple replicas like in Deployments, only a single Pod running on every Node * In Kubernetes with Docker, by default, all logs are sent to `/var/log/pods` via `/var/lib/docker/containers`. 
This is located in every Node
* We mount volumes for those `/var/*` locations so we can watch for changes in these folders and send them to a common single location
* In this DaemonSet, we configure the container to append all logs to `/logs/all-pods.log`, then mount the volume back into the host

After deploying, on the host, check the logs:

```bash
$ tail -f /logs/all-pods.log
{"log":"2023/04/05 02:29:34 [notice] 1#1: using the \"epoll\" event method\n","stream":"stderr","time":"2023-04-05T02:29:34.687797577Z"}
{"log":"2023/04/05 02:29:34 [notice] 1#1: nginx/1.23.4\n","stream":"stderr","time":"2023-04-05T02:29:34.687806202Z"}
{"log":"2023/04/05 02:29:34 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) \n","stream":"stderr","time":"2023-04-05T02:29:34.687807994Z"}
{"log":"2023/04/05 02:29:34 [notice] 1#1: OS: Linux 5.15.68-0-virt\n","stream":"stderr","time":"2023-04-05T02:29:34.687809452Z"}
{"log":"2023/04/05 02:29:34 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576\n","stream":"stderr","time":"2023-04-05T02:29:34.687810744Z"}
{"log":"2023/04/05 02:29:34 [notice] 1#1: start worker processes\n","stream":"stderr","time":"2023-04-05T02:29:34.687811994Z"}
{"log":"2023/04/05 02:29:34 [notice] 1#1: start worker process 29\n","stream":"stderr","time":"2023-04-05T02:29:34.687842494Z"}
{"log":"2023/04/05 02:29:34 [notice] 1#1: start worker process 30\n","stream":"stderr","time":"2023-04-05T02:29:34.68784791Z"}
{"log":"2023/04/05 02:29:34 [notice] 1#1: start worker process 31\n","stream":"stderr","time":"2023-04-05T02:29:34.687900494Z"}
{"log":"2023/04/05 02:29:34 [notice] 1#1: start worker process 32\n","stream":"stderr","time":"2023-04-05T02:29:34.687971452Z"}
```

**Yay!** _How cool is that?_

---

## Professionalism is all

Of course, in production, this dead simple log collector won't scale accordingly. Instead, we can use tooling like [fluentd](https://www.fluentd.org/), [logstash](https://www.elastic.co/logstash/) and similar to do more robust and scalable work.
--- ## Wrapping Up Today we learned the importance of structuring and **collecting logs** of our applications, no matter where they are deployed. In Kubernetes, life's a bit easier because it's a **cluster of containers** and as such, we employ a special controller called **DaemonSet** that will make sure we have a log collector Pod _running in every Node_. Don't miss the next posts where we'll talk about Jobs and CronJobs. _Cheers!_
leandronsp
1,426,718
AI (de)generated design =)
Oh, sure I will buy 1 billion credits after such perfect test result! stockimg.ai, 1 credit is not...
0
2023-04-05T03:37:14
https://dev.to/mnsrff/ai-degenerated-design--26f9
Oh, sure I will buy 1 billion credits after such perfect test result! ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0fh80hcasfgi9hlv6y5.png) stockimg.ai, 1 credit is not enough to test!
mnsrff
1,426,790
How To Import Local Files In Golang In 4 Easy Steps
If you're a Golang developer, you may have found yourself in need of importing local files into your...
0
2023-04-05T06:12:02
https://dev.to/coder9/how-to-import-local-files-in-golang-in-4-easy-steps-32gi
go, abotwrotethis
If you're a Golang developer, you may have found yourself in need of importing local files into your project. It's a common situation that can be handled easily with just a few simple steps. In this article, we will show you how to import local files in Golang in 4 easy steps.

## Step 1: Create a new package

The first step in importing local files in Golang is to create a new package. A package is a collection of related Go source files that are compiled together. To create a new package, open a text editor and create a new directory. Inside the directory, create a new Go file and save it with a .go extension. If your project doesn't have one yet, also create a `go.mod` file at the project root by running `go mod init <module-path>` (for example, `go mod init example.com/myproject`); since modules became the default, Go needs this file to resolve local imports.

```go
// myPackage/myPackage.go
package myPackage

import "fmt"

func Hello() {
    fmt.Println("Hello from myPackage")
}
```

In this example, we have created a new package called myPackage. The package contains a single Hello function that prints a message to the console. We have also imported the "fmt" package, which is used in the Hello function. (By convention, Go package names are all lowercase, but we'll keep myPackage here for clarity.)

## Step 2: Build the package

Once you have created your package, the next step is to build it. Building your package compiles all the source files in your package. To build your package, open a terminal and navigate to the directory containing your package. Run the following command:

```bash
go build .
```

The "." in the command tells Go to build the package in the current directory. Note that for a library package like this one (anything other than `package main`), a successful `go build` produces no binary file; it simply verifies that the code compiles.

## Step 3: Import the package

With your package built, the next step is to import it into your main program. To import your package, add the following code to your main program:

```go
// main.go
package main

import "path/to/myPackage"

func main() {
    myPackage.Hello()
}
```

In this example, we have imported the myPackage package and called the Hello function. Replace path/to/myPackage with your module path plus the package directory, for example `example.com/myproject/myPackage` if you ran `go mod init example.com/myproject`.

## Step 4: Run the program

The final step is to run your program.
Open a terminal and navigate to the directory containing your main program. Run the following command: ```bash go run main.go ``` The Go compiler will compile your main program along with your package and run your program. If everything is working correctly, you should see the message "Hello from myPackage" printed to the console. ## Conclusion Importing local files in Golang is a simple process that can be completed in just a few easy steps. By creating a new package, building it, importing it into your main program, and running your program, you can quickly and easily add functionality to your Golang applications. Keep these steps in mind next time you need to import local files in Golang, and you'll be up and running in no time!
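As a footnote to the steps above, here is a rough sketch of how the pieces could sit together on disk under Go modules; the module path `example.com/myproject` is a placeholder chosen for illustration, not a real module:

```
myproject/
├── go.mod            # created with: go mod init example.com/myproject
├── main.go           # package main; import "example.com/myproject/myPackage"
└── myPackage/
    └── myPackage.go  # package myPackage (defines Hello)
```

With this layout, running `go run main.go` (or `go run .`) from the project root resolves the import locally, with no publishing step required.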
coder9
1,426,944
Different models in computer programming
There are several models in computer programming, which provide a structure and methodology for...
0
2023-04-05T09:45:22
https://dev.to/akshays81992169/different-models-in-computer-programming-7l9
computerscience, programming
There are several models in computer programming, which provide a structure and methodology for designing and implementing software applications. Some of the commonly used models are:

**Waterfall model:** This is a sequential model that follows a linear approach to software development. It consists of five phases: requirements, design, implementation, testing, and maintenance. The waterfall model is a linear sequential approach to software development, where the development process is divided into distinct phases. Each phase must be completed before the next one can begin, and there is no going back to a previous phase once it is completed. The five phases of the waterfall model are:

**1. Requirements gathering:** In this phase, the requirements for the software are gathered and documented. This involves interviewing stakeholders and end-users to understand their needs and expectations.

**2. Design:** In this phase, the software design is developed based on the requirements gathered in the previous phase. This includes creating a detailed system architecture, data flow diagrams, and user interface mockups.

**3. Implementation:** In this phase, the software is developed based on the design created in the previous phase. This involves coding, testing, and debugging.

**4. Testing:** In this phase, the software is tested to ensure that it meets the requirements and functions as expected. This includes unit testing, integration testing, and system testing.

**5. Maintenance:** In this phase, the software is deployed and maintained in the production environment. This includes ongoing support, bug fixes, and updates.

**Agile model:** This is an iterative model that focuses on the rapid development and delivery of working software. It emphasizes flexibility and collaboration among the development team and stakeholders.
The Agile model is an iterative and incremental approach to software development that emphasizes flexibility, collaboration, and continuous improvement. Unlike the waterfall model, the Agile model does not rely on a rigid and sequential process with well-defined phases. Instead, it focuses on delivering working software in small, frequent increments, while adapting to changing requirements and feedback. The Agile model is based on four core values:

- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan

**Prototype model:** This model involves creating an early version of the software, which is used to gather feedback and refine the design before the final implementation. The Prototype model is a software development model that involves creating a prototype, or an early version, of the software to be developed. The goal of the Prototype model is to enable users and stakeholders to interact with and provide feedback on the software early in the development process so that any necessary changes can be made before the final version is developed. The Prototype model typically involves the following phases:

**1. Requirements gathering:** In this phase, the development team works with the stakeholders to gather the requirements for the software to be developed. This may involve creating user stories, use cases, and other documentation to capture the functional and non-functional requirements of the system.

**2. Prototype design:** In this phase, the development team designs and develops a prototype of the software. The prototype may be a simplified version of the final system, or it may include only a subset of the features to be included in the final system.

**3. 
Prototype review:** In this phase, the stakeholders review the prototype and provide feedback on its functionality, usability, and other aspects. The feedback is used to refine the design of the prototype and make any necessary changes to the requirements.

**4. Final product design and development:** Based on the feedback received during the prototype review, the development team designs and develops the final version of the software.

**Spiral model:** This is an iterative model that combines elements of the waterfall model and the prototype model. It consists of four phases: planning, risk analysis, engineering, and evaluation. The Spiral model is a software development model that emphasizes risk management throughout the software development process. The model consists of a series of iterative cycles, or spirals, that allow for continuous evaluation and improvement of the software being developed. The Spiral model typically involves the following phases:

**1. Planning:** In this phase, the development team identifies the objectives of the project, as well as the risks and constraints associated with the project. The team also defines the deliverables for each iteration of the spiral.

**2. Risk analysis:** In this phase, the development team identifies and analyzes the risks associated with the project. The team also identifies potential solutions to these risks.

**3. Engineering:** In this phase, the development team creates a prototype of the software, based on the objectives and deliverables defined in the planning phase.

**4. Evaluation:** In this phase, the stakeholders evaluate the prototype and provide feedback on its functionality and usability. The feedback is used to refine the design of the software.

**V model:** This is a variant of the waterfall model that emphasizes the importance of testing and verification at each stage of development. It consists of four stages: requirements, design, implementation, and testing.
The V model is a software development model that is based on the Waterfall model. The model emphasizes the importance of testing throughout the software development process, and it places a particular emphasis on the relationship between testing and requirements. The V model consists of a series of stages, with each stage being associated with a corresponding testing stage. The stages of the V model typically include the following:

**1. Requirements gathering:** In this stage, the requirements for the software are gathered and documented. The requirements serve as the basis for the development process, and they are used to create the test cases for the testing stages of the model.

**2. Design:** In this stage, the high-level design of the software is created. The design serves as the blueprint for the development process, and it is used to create the test cases for the testing stages of the model.

**3. Implementation:** In this stage, the software is developed based on the requirements and design created in the previous stages. The implementation stage includes coding, unit testing, and integration testing.

**4. Testing:** In this stage, the software is tested to ensure that it meets the requirements and design. The testing stage includes system testing, acceptance testing, and user testing.

Each of these models has its strengths and weaknesses, and the choice of model depends on the specific requirements and constraints of the project.
akshays81992169
1,427,003
The Pros and Cons of In-House vs. Outsourcing Software Development
Introduction Software development has become critical to any business in today's fast-paced digital...
0
2023-04-05T10:27:39
https://dev.to/datarecove95829/the-pros-and-cons-of-in-house-vs-outsourcing-software-development-3fkn
webdev, programming, react
**Introduction**

Software development has become critical to any business in today's fast-paced digital world. Whether it's building a website or developing a complex software application, the decision to handle software development in-house or outsource it to a third-party provider is a crucial one that can significantly impact a company's success. Here we will explore the pros and cons of in-house vs. outsourcing software development.

## Pros and Cons of In-House Software Development

Regarding software development, companies have two options: in-house development or outsourcing. Each option has its own set of advantages and disadvantages. This article will explore each approach's pros and cons to help you decide which is right for your business.

## In-House Software Development

[In-house software development](https://technanosoft.com/blog/in-house-software-development) refers to developing software within the company using the company's resources. This approach involves hiring a team of developers and providing them with the necessary resources to develop software. Here are the pros and cons of in-house software development:

**Pros:**

**Control:** With in-house software development, the company has complete control over the software development process. It allows for greater flexibility and the ability to make changes quickly.

**Intellectual Property:** The company retains full ownership of the intellectual property created during software development.

**Knowledge Retention:** The company retains knowledge and expertise in-house, making it easier to maintain and update the software in the future.

**Collaboration:** In-house software development teams have the advantage of working closely with other departments in the company, leading to better collaboration and more innovative ideas.
**Cons:**

**Cost:** In-house software development can be expensive, as it requires hiring a team of developers, providing them with salaries and benefits, and investing in hardware and software.

**Limited Expertise:** In-house teams may have limited expertise in certain areas, leading to delays and reduced software quality.

**Time-Consuming:** In-house software development can be time-consuming, as it requires building and maintaining a team and providing ongoing training and support.

**Recruiting and Retention:** Finding and retaining top talent can be challenging, as there is a high demand for skilled developers.

## Outsourcing Software Development

Outsourcing software development refers to hiring an external company to develop software for the company. This approach involves hiring a third-party company specializing in software development to create the software. Here are the pros and cons of outsourcing software development:

**Pros:**

**Cost-Effective:** Outsourcing software development can be cost-effective, eliminating the need to hire and maintain an in-house team.

**Access to Expertise:** Outsourcing provides access to a wider range of expertise, which can lead to higher-quality software.

**Reduced Time-to-Market:** Outsourcing can help reduce time-to-market, as the third-party company can provide additional resources and expertise to speed up the development process.

**Scalability:** Outsourcing provides scalability, as the third-party company can provide additional resources.

**Cons:**

**Loss of Control:** Outsourcing software development can lead to losing control over the development process, as the company relies on a third-party company to create the software.

**Communication Barriers:** Communication can be challenging when working with an external company, particularly if there are language or cultural differences.
**Security Risks:** Outsourcing software development can pose security risks, particularly if the third-party company has access to sensitive information.

**Intellectual Property Issues:** Outsourcing can lead to intellectual property issues if the third-party company claims ownership of the intellectual property created during development.

## Conclusion

In conclusion, both in-house and outsourcing software development have their own set of advantages and disadvantages. When deciding which approach to take, it's important to consider factors such as cost, expertise, control, and time-to-market. Ultimately, the decision will depend on your company's needs and goals.
datarecove95829
1,427,018
Azure services for .NET developers.
Hi, I am Arun Kumar Palani, Senior software engineer in Luxoft &amp; Microsoft certified solution...
0
2023-04-05T11:12:38
https://dev.to/arunkumar2331996/azure-services-for-developers-3o97
webdev, programming, azure, dotnet
Hi, I am Arun Kumar Palani, Senior Software Engineer at Luxoft & Microsoft Certified Solutions Architect (Associate level). Let's discuss 6 Azure services that developers might use in day-to-day life if the project is developed entirely on the Microsoft platform. Before discussing them in detail, let's see a short introduction to each in a couple of lines.

**<u>1. Azure Key Vault</u>** – A place where we can keep our secrets. Mostly connection strings and external API secret keys are maintained here.

**<u>2. Azure DevOps</u>** – A simplified solution for maintaining source code, creating and maintaining CI/CD pipelines, release management, and a place for maintaining artifacts; it also has test plans.

**<u>3. Azure API Management</u>** – Acts as an entry gate in front of APIs and is mostly used for API gateway implementation without much coding.

**<u>4. Azure Storage services</u>** – Mostly used for storing files and other information in the different Azure storage offerings.

**<u>5. Azure SQL:</u>** Similar to an on-premises database but offers more flexibility in terms of backup/restore and also in high availability/auto-scaling features.

**<u>6. Azure App Service/VM:</u>** App Service is PaaS and a VM is IaaS; both are used for hosting web applications. VMs have a lot of advantages beyond hosting.

**<u>Note:</u>** This article is intended for those who have an interest in learning Azure conceptually. It will give a clear idea of each service and its exact usage. If you're new to Azure, please read it twice to get a clear picture. This is not a technical article; it will just cover an overall view of all 6 services.

**<u>1. What is an Azure Key Vault?</u>** Azure Key Vault is a cloud-based service that allows you to securely store and manage cryptographic keys, secrets and certificates used in your applications and services.
It is a centralized service where you can store and manage all the sensitive data of your application instead of storing it locally, thus improving the security of your applications. The Azure Key Vault service provides robust access control, auditing, and monitoring capabilities to ensure that your keys are accessed only by authorized personnel. It also allows you to generate and manage encryption keys for your applications, making it easier to encrypt/decrypt data and keep it secure.

**<u>Advantages of Azure Key Vault:</u>**

**<u>a. Enhanced Security:</u>** With Azure Key Vault, you can store and manage all your cryptographic assets in a secure, centralized location. This helps to reduce the risk of keys being compromised or lost due to insecure storage or sharing practices.

**<u>b. Simplified Key Management:</u>** Azure Key Vault provides a simple, easy-to-use interface for managing your encryption keys, secrets, and certificates, which makes it easier for developers to implement secure key management practices.

**<u>c. Integration with Azure Services:</u>** Key Vault is tightly integrated with other Azure services like Azure Functions and Azure App Service, enabling developers to easily add encryption and decryption features to their applications without having to worry about key management.

**<u>d. Easy Compliance:</u>** Azure Key Vault supports compliance with various industry standards.

**<u>e. Bring Your Own Key Support:</u>** Azure Key Vault supports use cases where you want to manage keys on-premises while still utilizing Azure's many platform services.

**<u>f. Monitoring and Auditing Capabilities:</u>** Key Vault provides robust monitoring, logging, and auditing capabilities that enable teams to track usage, access, and changes to cryptographic keys, certificates, and secrets.

**<u>Disadvantages:</u>**

**<u>a. Cost:</u>** Using the Azure Key Vault service can add to your overall cloud expenses, especially if you have large amounts of keys, secrets, or certificates to manage.

**<u>b. Learning Curve:</u>** Azure Key Vault can be complex for new users and may require some time to learn and become familiar with its capabilities and integration with other Azure services.

**<u>c. Limited Programming Language Support:</u>** Although Azure Key Vault provides SDKs for multiple programming languages like .NET, Java, Python, etc., its functionality may not be fully supported by all programming languages and frameworks, which could limit its usage in some situations.

**<u>d. Dependency on Azure Infrastructure:</u>** Since Azure Key Vault is a cloud-based service, it requires an active internet connection and depends on Azure's infrastructure, which can lead to issues if there are any service disruptions or outages.

**<u>2. What is Azure DevOps?</u>** Azure DevOps is a collection of services that allow software development teams to plan, build, test, and deploy software. It includes Azure Boards, Azure Repos, Azure Artifacts, Azure Test Plans, and Azure Pipelines. These tools work together seamlessly to provide a comprehensive DevOps solution for teams of any size.

**<u>What Azure DevOps Provides</u>**

**<u>a. Azure Boards:</u>** Like Jira or HP ALM — a place where we can create and manage user stories and bugs. We can also create branches from a user story, so the commits and branch are linked with the parent user story for easy tracking. Azure Boards also includes built-in analytics features that allow teams to track metrics such as lead time, cycle time, and throughput, which can be useful for identifying bottlenecks and areas for improvement. We can visualize our data from Azure Boards and track dependencies.

**<u>b. Azure Repos:</u>** Like GitHub, Bitbucket, GitLab, or TFS — a place where we can push our changes and maintain versioning.
We can add people from our team and collaborate on code, track changes, and roll them back whenever necessary. It provides options for code review and a search option to find specific code snippets.

**<u>c. Azure Artifacts:</u>** Like JFrog Artifactory or TeamCity — a place where we can maintain the organization's packages, including NuGet, npm, and Maven packages. With the help of Artifacts, packages can easily be consumed from our build pipeline. Additionally, Azure Artifacts includes package management capabilities, which allow teams to manage dependencies, ensure that packages are up to date, and roll back to previous versions if necessary.

**<u>d. Azure Test Plans:</u>** Like qTest or TestRail. With Azure Test Plans, we can configure manual testing and exploratory testing plans. It also integrates with Azure Pipelines, making it easy to automate tests and ensure the software is tested before deployment. We can keep track of all the test cases and automate them, so that our application is fully tested before release. From the developer's side, we can see each test failure and the root cause of the error.

**<u>e. Azure Pipelines:</u>** Like Jenkins. With Azure Pipelines, teams can automate their software delivery process, from building and testing to deploying to multiple environments. It supports CI/CD, which allows teams to build the application, then test and deploy the code when it is merged into the repository. The team can use Azure Pipelines to automate the application's build, test, and deployment process and release it to different environments whenever necessary. We can create a release trigger, either manual or automatic, by setting the condition that triggers it.

**<u>What is the advantage of using Azure DevOps?</u>**

1. Easy integration.
2. Cost-effective solution.
3. Single sign-on using Azure AD.
4. A complete package, not depending on different software for each step.
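To make the build-test-deploy flow that Azure Pipelines enables more concrete, here is a minimal `azure-pipelines.yml` sketch. The task names (`DotNetCoreCLI@2`, `AzureWebApp@1`) are standard Azure Pipelines tasks, but the service connection name, app name, and project globs are placeholders I made up — adjust them to your project.

```yaml
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'      # build environment

steps:
  - task: DotNetCoreCLI@2       # restore dependencies from NuGet
    inputs:
      command: 'restore'
      projects: '**/*.csproj'

  - task: DotNetCoreCLI@2       # build the project
    inputs:
      command: 'build'
      projects: '**/*.csproj'

  - task: DotNetCoreCLI@2       # run unit/integration tests
    inputs:
      command: 'test'
      projects: '**/*Tests.csproj'

  - task: DotNetCoreCLI@2       # publish build output for the pipeline
    inputs:
      command: 'publish'
      publishWebProjects: true
      arguments: '--output $(Build.ArtifactStagingDirectory)'

  - task: AzureWebApp@1         # deploy to an App Service (placeholder names)
    inputs:
      azureSubscription: 'my-service-connection'
      appName: 'my-web-app'
      package: '$(Build.ArtifactStagingDirectory)/**/*.zip'
```

A real pipeline would typically add code-coverage collection and artifact/NuGet publishing steps; this sketch only shows the core path from commit to deployment.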
Implementing a DevOps strategy is essential for software development teams to deliver software quickly and efficiently. Azure DevOps provides this with minimal cost (cost is calculated on a per-user basis).

**<u>How to deploy a .NET application using Azure DevOps</u>**

Step 1: Create an Azure DevOps organization from the Azure portal.
Step 2: Create a simple Web API project and configure it with an Azure repository.
Step 3: Create an App Service or virtual machine to host the application.
Step 4: Create a release pipeline. Once the build is available from the pipeline, we can deploy it to the App Service/VM continuously.
Step 5: The typical stages when deploying the application to an App Service are:

1. Set up the build environment (Ubuntu, Windows, etc.).
2. Restore dependencies from NuGet.
3. Build the project.
4. Run the test cases (unit/integration tests).
5. Collect code coverage if required.
6. Publish artifacts to Azure Pipelines.
7. Publish to a NuGet feed.
8. Deploy to the web app (App Service or VM).

**<u>3. Azure API Management</u>**

It is a cloud-based platform that allows developers to expose, publish, and manage APIs. The service provides a robust set of tools for managing APIs, controlling access to APIs, and monitoring traffic. Within Azure API Management, developers can create new APIs or import existing ones, add restrictions on how the API may be consumed, and configure policies to apply when requests are received. Basically, we do the same thing in the API gateway pattern, for which we might use the Ocelot NuGet package; the same kind of service is provided by Google Apigee and by AWS as well.

**<u>Key Features of Azure API Management:</u>**

**<u>API Gateway:</u>** At the core of Azure API Management is an API gateway, which acts as a reverse proxy through which requests are routed to the appropriate backend services.
The gateway provides protocol translation, message transformation, and other communication functions to enable seamless integration between client applications and backend services.

**<u>API Definition:</u>** With Azure API Management, developers can define APIs using the OpenAPI Specification (formerly known as Swagger). The specification allows developers to describe the operation of their APIs in a machine-readable format, making it easier for clients to discover and consume the API.

**<u>a. Security:</u>** APIs are often resources that need to be guarded against unauthorized access. To this end, Azure API Management provides a range of security features including authentication, authorization, and encryption. Developers can require API consumers to authenticate themselves before accessing an API. Access can also be controlled by roles, allowing fine-grained control over which users can access which APIs.

**<u>b. Rate Limiting:</u>** To prevent abuse of APIs, Azure API Management allows developers to apply rate limits. Limits can be applied based on the number of calls allowed per user, per minute, or any other defined period. By setting up rate limits, developers can ensure that their APIs are not subjected to excessive load, which could affect system performance.

**<u>c. Analytics:</u>** API analytics are essential for understanding how your APIs are being used, where performance bottlenecks may exist, and what changes can be made to improve them. With Azure API Management, developers have access to detailed data analysis and reporting tools that provide real-time insights into the performance and usage of their APIs. This includes tracking metrics such as call count, response codes, latency, and other relevant indicators.

**<u>d. Developer Portal:</u>** A developer portal is a self-service interface that is used by developers to discover, learn, test, and get support for the APIs they use.
Developers can browse documentation, test endpoints, download code samples, and subscribe to new APIs from a single location. With Azure API Management, developers can build fully customized developer portals that include all the necessary information related to their APIs. Personalization options allow for branding and customization so that the developer portal can match the look and feel of your website or application.

**<u>Creating APIs with Azure API Management</u>**

The process of creating APIs with Azure API Management typically involves the following steps:

- Define the backend service – you may have existing services, or you can create new ones that return the data or perform the operations the API will expose.
- Create a new API – either by specifying the API specification directly or by importing an existing API definition.
- Configure API policies – policies can be used to modify the behavior of your API, add additional security, or allow efficient caching.

**<u>a. Creating inbound rules:</u>** This is where the incoming request and its IP address are filtered, and where it is checked whether the URL is allowed for communication. For example, we can configure CORS in the API gateway itself rather than configuring it in the code.

**<u>b. Creating outbound rules:</u>** Here we can set our custom response headers and send the response back to the user. If we need to modify the response, we can do that here as well.

**<u>c. Backend policies:</u>** Here we can configure the backend URL so that it can be safely accessed with the help of the inbound and outbound policies.

**<u>d. Issue API keys</u>** – Issuing authorization keys is a technique used to track the usage of your API and maintain control over who is allowed to use it.

**<u>e. Configure usage plans</u>** – Usage plans allow you to specify the rates at which developers can use your API under free, paid, or custom tier models.

**<u>f. Publish the API</u>** – Once the API has been thoroughly tested and reviewed, developers can publish it to make it publicly available.

Azure API Management is a powerful tool that enables developers to create, manage, and publish APIs quickly and efficiently. The platform comes equipped with a host of features that enable secure and reliable API management, including developer-friendly portals and comprehensive analytics capabilities.

**<u>4. Azure Storage services</u>**

Azure Storage offers a flexible and scalable solution for storing unstructured data, such as text or binary data, in the cloud. The service supports various types of data including blobs, files, queues, tables, and disks. Azure Storage is designed to provide a highly available and durable storage infrastructure across multiple regions. It provides automatic replication of data within the same region and enables geo-replication across different regions. This ensures that data is available even in case of regional disasters. Additionally, Azure Storage also provides multiple layers of security to protect data, including encryption at rest and in transit.

Let's take a closer look at some of the key features and services offered by Azure Storage:

**<u>a. Blob Storage</u>**

Azure Blob Storage is a scalable and secure object storage service that allows you to store large amounts of unstructured data, such as text or binary data, in the cloud. It provides support for several APIs including REST, .NET, Java, Python, and Node.js. It also provides three tiers of storage — hot, cool, and archive — allowing you to choose the storage option based on how frequently the data is accessed.

**<u>b. File Storage</u>**

Azure File Storage provides fully managed file shares in the cloud. It allows you to create file shares that can be accessed from anywhere in the world using the standard SMB 3.0 protocol. It allows you to store and share files with ease, enabling you to easily migrate your existing applications to the cloud.
**<u>c. Queue Storage</u>**

Azure Queue Storage is used to transmit messages between components of distributed systems. It provides a simple messaging service that allows you to send and receive messages between different components of your application. It is ideal for asynchronous communication and can be used to build reliable, decoupled, and scalable systems.

**<u>d. Table Storage</u>**

Azure Table Storage provides NoSQL key-value storage. It allows you to store large amounts of structured, non-relational data in the cloud and supports queries over that data.

**<u>e. Disk Storage</u>**

Azure Disk Storage provides persistent block-level storage for virtual machines (VMs) running in Azure. With disk storage, you can attach disks to your VMs, providing them with durable, low-latency storage. You can also use managed disks, which simplify storage management.

In addition to these services, Azure Storage also provides several other features that make it a versatile and flexible storage solution:

**<u>a. Azure Data Lake Storage Gen2:</u>** Azure Data Lake Storage Gen2 is a fully managed service that provides scalable, secure, and cost-effective storage for big data analytics workloads. It combines the power of a Hadoop-compatible file system with Azure Blob Storage to provide a common namespace for both structured and unstructured data.

**<u>b. Azure Backup:</u>** Azure Backup provides reliable and scalable backup solutions for your data in the cloud. It enables you to protect your data by backing up files, folders, or entire VMs to Azure Storage. Azure Backup also provides support for backup of on-premises data to the cloud, making it a comprehensive backup solution.

**<u>c. Azure Site Recovery:</u>** Azure Site Recovery provides disaster recovery solutions for your applications and workloads.
It enables you to automate the replication of VMs and physical servers to the cloud, providing you with a seamless failover and failback experience in case of any disasters. It also provides support for real-time replication to minimize data loss.

**<u>d. Azure StorSimple:</u>** Azure StorSimple provides a hybrid cloud storage solution that enables you to tier cold data to the cloud while keeping hot data on-premises. This helps reduce storage costs, as you only pay for the data that you store in the cloud. Additionally, StorSimple provides intelligent caching and data management capabilities to optimize storage performance.

**<u>e. Azure Data Box:</u>** Azure Data Box is a physical device that allows you to move large amounts of data to and from Azure Storage securely and quickly. It is ideal for scenarios where transferring large datasets over the network is not feasible. Data Box supports various storage options, including Blob Storage, File Storage, and Disk Storage.

**<u>5. Azure SQL</u>**

Azure SQL provides a highly scalable and flexible solution for managing relational data in the cloud. The service provides various features, including automatic scaling, high availability, security, and backup and recovery mechanisms. Azure SQL supports several deployment options, including single databases, managed instances, and elastic pools. It also supports several programming languages and development tools, such as .NET, Java, Python, and Node.js, making it easy to develop and deploy applications in the cloud.

Let's take a closer look at some of the key features and services offered by Azure SQL:

**<u>1. Database-as-a-service:</u>** Azure SQL is a fully managed database-as-a-service (DBaaS) that eliminates the need for managing infrastructure and hardware. The service takes care of database maintenance tasks such as patching, tuning, backups, and monitoring, freeing up your time to focus on application development.

**<u>2. Automatic Scaling:</u>** Azure SQL provides automatic scaling capabilities, allowing you to automatically adjust the resources allocated to your database based on workload demands. This ensures that your database can handle peak loads without compromising on performance.

**<u>3. High Availability:</u>** Azure SQL provides high availability through its built-in replication technology. The service can replicate your database across multiple regions, ensuring that your application remains available even in case of regional disasters. Additionally, Azure SQL provides transparent failover mechanisms, so that your application can continue running seamlessly in case of any disruptions.

**<u>4. Security:</u>** Azure SQL provides several layers of security to protect your data, including encryption in transit and at rest, firewall rules, identity and access management, and threat detection. It also provides compliance with industry-specific regulations such as HIPAA, PCI DSS, and GDPR.

**<u>5. Backup and Recovery:</u>** Azure SQL provides automated backups and point-in-time restore capabilities, ensuring that your data is protected against accidental deletion, corruption, or other types of data loss.

In addition to these features, Azure SQL also provides several other services that make it a versatile and flexible database solution:

**<u>a. Single Databases:</u>** Azure SQL supports single databases, which are fully managed databases that can be quickly created and managed in the cloud. You can choose from various database sizes based on your requirements and pay only for the resources that you use.

**<u>b. Managed Instances:</u>** Azure SQL also provides managed instances, which are fully managed instances of SQL Server in the cloud. Managed instances provide near-full SQL Server compatibility, enabling you to easily migrate your existing applications to the cloud.

**<u>c. Elastic Pools:</u>** Azure SQL supports elastic pools, which allow you to share resources among multiple databases with different usage patterns. This enables you to reduce costs by optimizing resource utilization across multiple databases.

**<u>d. Azure Synapse Analytics (formerly SQL Data Warehouse):</u>** Azure Synapse Analytics is a cloud-based analytics service that allows you to analyze large amounts of data in the cloud using a combination of SQL queries and big data technologies such as Apache Spark. It provides integration with Azure Machine Learning, making it easy to build predictive models and integrate them into your data analysis workflows.

**<u>e. Azure Database Migration Service:</u>** Azure also provides a migration service that allows you to easily migrate your on-premises databases to Azure SQL. The service supports various database types, including SQL Server, MySQL, PostgreSQL, and MongoDB.

**<u>6. Azure App Service and VM</u>**

Azure App Service is a fully managed Platform-as-a-Service (PaaS) offering that allows you to quickly build, deploy, and scale web applications, mobile backends, and RESTful APIs. It supports various programming languages, including .NET, Java, Node.js, PHP, Python, and Ruby. With Azure App Service, you can easily create and manage web apps, API apps, mobile apps, and logic apps using a single integrated solution. The service provides several features such as automatic scaling, high availability, continuous deployment, and DevOps integration, making it easy to maintain your applications and achieve faster time to market.

On the other hand, Azure Virtual Machines (VMs) are an Infrastructure-as-a-Service (IaaS) offering that allows you to set up and run virtual machines in the cloud. You can choose from a wide range of pre-configured VM images or create a custom image based on your requirements. This gives you complete control over the operating system, applications, and configuration of your virtual machines.
Let's take a closer look at the key features and services offered by Azure App Service and Virtual Machines:

**<u>1. Azure App Service</u>**

Azure App Service provides several features that make it a versatile and flexible platform for developing and deploying web applications and mobile backends.

**<u>a. Web Apps:</u>** Azure Web Apps allow you to easily deploy and manage web applications using a single integrated solution. It supports various frameworks and technologies, including ASP.NET, Node.js, Python, and PHP. You can deploy your applications using FTP, Git, or continuous deployment tools such as Azure DevOps, GitHub, or Bitbucket.

**<u>b. Mobile Apps:</u>** Azure Mobile Apps provide a scalable and secure backend for mobile applications. It enables you to build cross-platform mobile applications using Xamarin or Apache Cordova and authenticate users using popular identity providers such as Facebook, Google, and Twitter.

**<u>c. Logic Apps:</u>** Azure Logic Apps provide a low-code framework for building workflows and integrations between different applications and services. It supports various connectors to integrate with popular SaaS applications such as Salesforce, Dynamics 365, and Office 365.

**<u>2. Virtual Machines</u>**

Azure Virtual Machines provide several features that make them a versatile and flexible platform for running various workloads in the cloud.

**<u>a. Pre-configured VM Images:</u>** Azure provides various pre-configured virtual machine images, allowing you to easily set up your preferred operating system and application stack. These include images for Windows Server, Linux, SQL Server, Oracle, and much more.

**<u>b. Customizable VMs:</u>** You can also create custom virtual machines based on your requirements. This enables you to configure the operating system, applications, and settings according to your specific needs.

**<u>c. High-performance Computing (HPC):</u>** Azure provides several options for running HPC workloads, including high-performance computing clusters, low-latency storage, and GPU-enabled virtual machines.

**<u>d. Hybrid Cloud:</u>** Azure provides seamless integration with your on-premises infrastructure, allowing you to extend your datacenter to the cloud with a hybrid cloud solution. This enables you to easily migrate your existing applications to the cloud and take advantage of its scalability and flexibility.

**Comparison**

Let's compare some key differences between Azure App Service and Virtual Machines:

**<u>a. Management:</u>** Azure App Service is a fully managed service that eliminates the need for managing infrastructure or hardware. In contrast, Azure Virtual Machines require you to manage the underlying infrastructure, such as operating system updates, security patches, and backups.

**<u>b. Scalability:</u>** Azure App Service provides automatic scaling capabilities, allowing you to scale your web apps, API apps, and mobile apps dynamically based on usage patterns. Azure Virtual Machines require manual intervention to scale resources up or down.

**<u>c. Cost:</u>** Azure App Service charges based on the number of instances and their size, whereas Azure Virtual Machines charge based on the number of virtual machines, their size, and the amount of storage used.

**<u>d. Configuration:</u>** Azure App Service provides a ready-to-use environment for developing and deploying web applications and mobile backends, whereas Azure Virtual Machines require you to set up your infrastructure from scratch.

**<u>e. Control:</u>** Azure Virtual Machines provide complete control over the virtual machine's configuration, allowing you to configure hardware resources, operating systems, and applications based on your specific needs. In contrast, Azure App Service provides limited control over the underlying infrastructure.
**<u>Conclusion:</u>** Azure services play a major role if we are using Microsoft technologies like C#, which have direct integration with Azure's PaaS and IaaS offerings. There are a lot of advantages to using Azure services in terms of cost, performance, and security. Azure also has built-in logging facilities like Application Insights, where we can see the logs in case of any failures. If you have any doubts or require a technical implementation of any of the above, please comment below. I am happy to provide the solution via my GitHub link in the next post. Stay tuned!!!
arunkumar2331996
1,427,150
Understanding Events in JavaScript
Events are actions or occurrences that happen in the browser, such as a user clicking a button or a...
22,401
2023-04-05T13:40:30
https://makstyle119.medium.com/understanding-events-in-javascript-c9c0ac3e2371
javascript, beginners, programming, makstyle119
**Events** are actions or occurrences that happen in the browser, such as a user clicking a button or a page finishing loading. JavaScript provides a way to handle these events using event listeners. In this blog post, we will discuss events in **JavaScript** and how to handle them using event listeners.

**What are Events?**

In JavaScript, an event is an action or occurrence that happens in the browser, such as a user clicking a button or a page finishing loading. These events can be triggered by the user or the browser itself. Examples of events include:

- Clicking a button
- Hovering over an element
- Submitting a form
- Scrolling the page
- Resizing the window

**Handling Events with Event Listeners**

JavaScript provides a way to handle these events using event listeners. An event listener is a function that is executed when an event occurs. The ``addEventListener()`` method is used to attach an event listener to an element. The syntax for the ``addEventListener()`` method is as follows:

```javascript
element.addEventListener(event, function, useCapture);
```

Here, ``element`` is the element to which the event listener is attached, ``event`` is the name of the event, ``function`` is the function to be executed when the event occurs, and ``useCapture`` is an optional boolean value that specifies whether to use event capturing or event bubbling.

Let's take a look at an example of how to use an event listener to handle a click event:

```javascript
const button = document.getElementById('myButton');

button.addEventListener('click', function() {
  console.log('Button clicked!');
});
```

In the above example, we have attached an event listener to a button element using the ``addEventListener()`` method. The function passed as the second argument to the method will be executed when the button is clicked.
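The same listener pattern works outside the browser, too: modern JavaScript runtimes (including Node.js 15+) expose the standard `EventTarget` interface that DOM elements implement, which makes it easy to experiment without a DOM:

```javascript
// EventTarget is the same interface DOM elements implement,
// so addEventListener behaves exactly as it does in the browser.
const target = new EventTarget();

let clicks = 0;
target.addEventListener('click', () => {
  clicks++;
});

// dispatchEvent triggers every listener registered for that event type
target.dispatchEvent(new Event('click'));
target.dispatchEvent(new Event('click'));

console.log(clicks); // 2
```

This is also a handy trick for unit-testing event-driven code without spinning up a browser.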
**Types of Events**

There are many different types of events in JavaScript, including:

- Mouse events: ``click``, ``dblclick``, ``mousedown``, ``mouseup``, ``mousemove``, ``mouseover``, ``mouseout``
- Keyboard events: ``keydown``, ``keyup``, ``keypress``
- Form events: ``submit``, ``reset``, ``change``, ``focus``, ``blur``
- Window events: ``load``, ``unload``, ``resize``, ``scroll``
- Media events: ``play``, ``pause``, ``ended``

**Conclusion:**

Events are an important part of web development, as they allow us to interact with users and respond to actions in the browser. By understanding how to use event listeners, you can write more interactive and dynamic JavaScript programs.
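Beyond the basic form shown earlier, ``addEventListener()`` also accepts an options object as its third argument. For instance, ``{ once: true }`` removes the listener automatically after its first call (demonstrated here with `EventTarget`, so the snippet runs in Node.js as well as the browser):

```javascript
const target = new EventTarget();

let calls = 0;
target.addEventListener('save', () => {
  calls++;
}, { once: true }); // listener detaches itself after the first event

target.dispatchEvent(new Event('save'));
target.dispatchEvent(new Event('save')); // no effect: listener already removed

console.log(calls); // 1
```

This is useful for one-shot handlers (e.g. reacting to the first user interaction) without having to call ``removeEventListener()`` manually.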
makstyle119
1,427,182
Solving Problems on HackerRank: Counting Repeated Strings
Hello and welcome to another Solving Problems on HackerRank. Today we will solve a...
0
2023-04-05T14:34:50
https://dev.to/altencirsilvajr/resolvendo-problemas-no-hackerrank-contando-strings-repetidas-19d3
javascript, beginners, programming
Hello and welcome to another Solving Problems on HackerRank. Today we will solve a problem from a programming competition. Let's look at it in detail, with a full explanation and its solution.

**ACM ICPC Team** – A number of people will be attending the ACM-ICPC World Finals. Each of them may be well versed in a number of topics. Given a list of the topics known by each attendee, presented as binary strings, determine the maximum number of topics a two-person team can know. Each topic has a column in the binary string, and a '1' means the topic is known while '0' means it is not. Also determine the number of teams that know the maximum number of topics. Return an integer array with two elements: the first is the maximum number of topics known, and the second is the number of teams that know that number of topics.

Example:

- n = 3
- topics = ['10101','11110','00010']

Let's look at the following code:

```javascript
function acmTeam(topic) {
    // Write your code here
    let arr = [];
    let max = 0;

    for (let i = 0; i < topic.length; i++) {
        for (let j = i + 1; j < topic.length; j++) {
            let countOne = 0;
            for (let k = 0; k < topic[i].length; k++) {
                if (topic[i][k] | topic[j][k]) {
                    countOne++;
                }
            }
            if (countOne > max) {
                max = countOne;
            }
            arr.push(countOne);
        }
    }
    return [max, arr.filter(e => e == max).length];
}
```

This code implements a function called `acmTeam` that receives an array of strings called `topic`. The function's goal is to determine, over all possible two-person teams formed from the strings in `topic`, the team's skill, where a team's skill is the number of topics known by at least one of its two members. To determine a team's skill, the function compares the two members' strings position by position, counting how many positions have the value '1' in at least one of the strings. The resulting number is the team's skill.
The function keeps track of the maximum number of positions with the value '1' found while iterating over all possible teams, and returns an array with two values: the maximum number of '1' positions found, and the number of teams that achieve that maximum.

The code uses three nested for loops to compare all possible teams: one loop iterates over the first string, another over the second string, and a third walks each position of the strings to compare the values. The variable `arr` stores the number of '1' positions for each team and the variable `max` stores the maximum found. The function returns an array with `max` and the number of times it appears in `arr`.

For example, given the input:

```
4 5
10101
11100
11010
00101
```

the result will be:

```
5 2
```

And with that we conclude another Solving Problems on HackerRank — see you next time.
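We can sanity-check `acmTeam` on HackerRank's sample case (the function is repeated here so the snippet runs on its own):

```javascript
function acmTeam(topic) {
    let arr = [];
    let max = 0;

    for (let i = 0; i < topic.length; i++) {
        for (let j = i + 1; j < topic.length; j++) {
            let countOne = 0;
            for (let k = 0; k < topic[i].length; k++) {
                // '1' | '0' coerces the characters to numbers, so this counts
                // positions known by at least one of the two team members
                if (topic[i][k] | topic[j][k]) {
                    countOne++;
                }
            }
            if (countOne > max) {
                max = countOne;
            }
            arr.push(countOne);
        }
    }
    return [max, arr.filter(e => e == max).length];
}

const result = acmTeam(['10101', '11100', '11010', '00101']);
console.log(result); // [5, 2]: the best pairs know 5 topics, and 2 pairs achieve that
```

Note that the implicit string-to-number coercion in the bitwise OR is what makes the per-character comparison work without an explicit `=== '1'` check.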
altencirsilvajr
1,427,192
Creating an Effective Data Breach Response Plan: Essential Elements and Best Practices
A data breach can be a nightmare for any organization, causing damage to reputation, customer trust,...
0
2023-04-05T15:01:47
https://dev.to/essertinc/creating-an-effective-data-breach-response-plan-essential-elements-and-best-practices-4anp
A data breach can be a nightmare for any organization, causing damage to reputation, customer trust, and financial losses. A data breach can occur in several ways, including through cyberattacks, employee errors, or physical theft. Therefore, it is essential to have a well-prepared data breach response plan to minimize the damage and ensure a timely and effective response. In this article, we will discuss the essential elements of a [data breach response plan](https://essert.io/rapid-ccpa-compliance-roll-out-managed-privacy-services/).

**Incident Response Team:** The first step in preparing a data breach response plan is to establish an incident response team (IRT) consisting of individuals with relevant skills and expertise. The IRT should include representatives from various departments, including IT, legal, public relations, and senior management. The team should have a clear understanding of their roles and responsibilities during a data breach.

**Incident Identification and Assessment:** The next step is to identify and assess the incident. This involves determining the scope and nature of the breach, the type of data involved, and the potential impact on the organization and affected individuals. The IRT should take immediate action to contain the breach and prevent further damage.

**Notification and Communication:** The IRT should notify the relevant stakeholders, including the data protection authority, affected individuals, and other third parties, such as insurers or law enforcement agencies, as required by law. The notification should be clear, concise, and provide details of the incident, including the type of data involved, the potential impact, and the measures taken to mitigate the damage.

**Investigation and Remediation:** Once the incident is contained, the IRT should conduct a thorough investigation to determine the cause of the breach and identify any vulnerabilities in the organization's security infrastructure.
The IRT should also take appropriate measures to remediate the damage and prevent similar incidents from occurring in the future. **Review and Update:** After the incident is resolved, the IRT should review and update the data breach response plan based on lessons learned. The review should include an assessment of the effectiveness of the plan and the IRT's response to the incident. The IRT should also update the plan to reflect any changes in the organization's operations or security infrastructure. **Conclusion** In conclusion, a data breach response plan is essential for any organization that handles personal data. By preparing a well-structured plan and establishing an incident response team, organizations can minimize the damage caused by a data breach and ensure a timely and effective response. A data breach response plan should include incident identification and assessment, notification and communication, investigation and remediation, and review and update. Regularly reviewing and updating the plan is critical to maintaining its effectiveness in responding to data breaches.
essertinc
1,427,460
Thoughts on BloombergGPT and Domain Specific LLMs
Bloomberg announced BloombergGPT which looks incredible (direct link to the paper). I think this is a...
0
2023-04-05T19:39:04
https://dev.to/reaminated/thoughts-on-bloomberggpt-and-domain-specific-llms-1mon
ai, discuss, productivity, chatgpt
[Bloomberg](https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/) announced BloombergGPT, which looks incredible ([direct link to the paper](https://arxiv.org/abs/2303.17564)). I think this is a glimpse into the future of LLMs - the idea of domain-specific models, as suggested in the paper, to optimise processes and output.

What will be interesting to see in the finance/trading field is how democratised data insight through LLMs, and the ease of access to a larger number of people, would affect the alpha. On the one hand, having raw data analysed differently by different companies means that some companies may identify insights others haven't and could potentially trade on them, thus outperforming the market. On the other hand, it now potentially means that pressure is put on the rest of the trading pipeline, something that LLMs may not necessarily be able to help with (?) (e.g. how quickly can your systems trade on the data based on your preferred horizons, how well can you update and run backtesters to work efficiently with LLM data, what's the optimal parameterization of a portfolio to trade with, etc.).

I think this is a microcosm of the bigger effects of domain-specific LLMs - they'll put pressure, and create new job roles and remits, on not just the data side that LLMs produce but the rest of the technical and business pipelines, to optimise their functions to capitalise on the LLM data - e.g. imagine combining trained decision trees to generate specific prompts with LLMs to reduce hallucination or uncover hidden insights.

LLMs being just one of many forms of AI models, it'll be interesting to see if other forms of AI models are implemented elsewhere in the system's execution path to collaborate with the new LLMs coming out, to take full advantage. A lot of focus on data at the moment, which is critical, but the knock-on effects on data-utilisation research will be interesting to watch.
reaminated
1,427,648
Employment opportunities in Chile for software developers and engineers
Hello everyone! I am researching how to get a job as a software developer in Chile, but so far I...
0
2023-04-05T22:27:13
https://dev.to/krlz/employment-opportunities-in-chile-for-software-developers-and-engineers-3hkn
discuss, career, chile, programming
Hello everyone! I am researching how to get a job as a software developer in Chile, but so far I haven't had any success. I would like to ask for the community's help in obtaining more information and recommendations on how to contact Chilean companies. I have noticed that to be successful in job searching, it is important to have experience and contacts in the Chilean technology sector. I have been recommended to participate in online communities and in-person events to establish contacts. I have found that DevsChile and Meetup Santiago are good options, but there may be more. Chile is one of the most prosperous and stable countries in Latin America, with a growing economy and many multinational companies and startups that I admire. This means that there are many opportunities for software developers and engineers. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vb69e118s3dussgve5r2.jpg) The information and communication technology (ICT) sector in Chile has grown at an average annual rate of 10.6% over the last five years, according to the Chilean government. Additionally, the government is investing in the technology industry and supporting startups with various initiatives, which means that there are many opportunities for software developers and engineers. Santiago is the main technology center in Chile, with many multinational companies and startups. There are also many universities and educational institutions that focus on education and training in technology. Furthermore, there is a growing community of technology companies and startups in important cities such as Concepción and Valparaíso. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u3qx1mbyvo7wr0rrjkbv.png) Lately, I have been focusing on job searching on these 4 websites: - Laborum - Trabajando - Computrabajo - Getonbrd On these websites, I was able to search for jobs by location, industry, and experience level, as well as set up alerts to receive notifications of new jobs. Despite having over 10 years of work experience in the industry, I have not been able to establish contact with some companies, which is strange to me. Therefore, I would appreciate any additional help or information the community can provide to improve my job search in the technology sector in Chile. Here are some questions I would like help answering: 1. **What are the most valued skills and experiences by employers in Chile for a software developer?** 2. **What other communities and events are useful for connecting with other software developers and engineers in Chile?** 3. **What other tools and resources can be useful for finding jobs in the technology sector in Chile**? Although I am Chilean and have dual nationality, I know that my experience in the country is limited. Therefore, any additional information or recommendation you can provide will be very helpful in achieving my goal of getting a job as a software developer in Chile. **Useful resources:** - LinkedIn: https://www.linkedin.com/ - Laborum: https://www.laborum.cl/ - Trabajando: https://www.trabajando.cl/ - Computrabajo: https://www.computrabajo.cl/ - DevsChile: https://www.linkedin.com/groups/1991226/ - Meetup Santiago: https://www.meetup.com/es-ES/topics/software-dev/cl/santiago/ - Chilean embassy website: https://chile.gob.cl/washington/en/ - Chilean immigration website: https://www.extranjeria.gob.cl/ - Startup Chile: https://www.startupchile.org/ - InvestChile: https://investchile.gob.cl/
krlz
1,428,029
How To Automate Calendar Using Selenium WebDriver For Testing?
Quite often while buying a movie ticket online or filling up a form, you’d come across calendars to...
0
2023-04-06T07:56:49
https://www.lambdatest.com/blog/how-to-automate-calendar-using-selenium-webdriver-for-testing/
webdev, selenium, tutorial, testing
Quite often while buying a movie ticket online or filling up a form, you’d come across calendars to pick dates and even time. These calendars make the whole process of picking dates much easier and more interactive, and play a crucial role in enhancing user experience. It would be surprising to find websites, especially in the travel and hospitality domain, that don’t make use of a date picker calendar so that their customers can schedule their travel. Since there are different types of calendar controls and their behaviour can vary across different web browsers, it becomes important to exhaustively automate calendar using [Selenium WebDriver.](https://www.lambdatest.com/blog/selenium-webdriver-tutorial-with-examples/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=blog) The most popular date picker controls are the jQuery calendar (or jQuery date picker) and the Kendo calendar (or Kendo date picker). Even if you have not used calendar controls for your project, it is important to have know-how about these standard controls and the mechanisms to automate calendar using Selenium WebDriver; here’s a document to help you understand [What Is Selenium?](https://www.lambdatest.com/selenium?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage)

In this Selenium [WebDriver tutorial](https://www.lambdatest.com/learning-hub/webdriver?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=learning_hub) for testing, I will take a deep dive into automating calendars with Selenium WebDriver for popular calendar controls. You can skip to any sub-section depending on your understanding of calendar controls. Below are the sub-topics covered in this Selenium testing tutorial:

## Types Of Calendar Controls

A calendar control (date picker control) allows the user to select a date easily.
Usually, it appears as an input field in an [HTML form](https://www.lambdatest.com/form-features?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage). Since the date selected by the user might be needed later, it is important to maintain the format. This is also why HTML forms are more widely used than entering the date in a text-box. There are two popular types of Calendar controls you’d need to automate calendar using Selenium WebDriver:

* **jQuery Calendar** — The jQuery calendar is a part of the open-source project at JS Foundation (previously called jQuery Foundation). There are a number of other elements like user interface interactions, widgets, effects, etc. that are also built on top of the jQuery JavaScript Library.
* **Kendo Calendar** — Kendo Calendar is developed by Telerik.com. It is not an open-source project. Using Kendo UI, developers can build JavaScript apps faster.

Kendo and jQuery Calendars work on all major web browsers, with a few exceptions: Kendo on IE works on IE7 (and above) whereas jQuery works on IE8 (and above). Both these controls are responsive and have mobile browser compatibility. Now perform live interactive [jQuery testing](https://www.lambdatest.com/testing-cloud/jquery-testing?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage) of your jQuery Mobile websites on LambdaTest.

## How To Automate Calendar In Selenium WebDriver For Automation Testing?

The challenge in [Selenium test automation](https://www.lambdatest.com/selenium-automation?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage) of calendar controls is that the presentation of information can vary from calendar to calendar. In some calendar controls, the months & years are shown in a drop-down, whereas in a few of them, the months & years can be changed using navigation controls, i.e. previous and next buttons.
Some date picker controls also include time alongside the date. This makes [automated browser testing](https://www.lambdatest.com/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage) of calendar controls challenging, as the test implementation needs to be tweaked as per the appearance and style of the control. This section of the Selenium testing tutorial focuses on the implementation to automate calendar using Selenium WebDriver.

**_This [Playwright browser testing](https://www.lambdatest.com/blog/playwright-framework/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=blog) tutorial will guide you through the setup of the Playwright framework, which will enable you to write end-to-end tests for your future projects._**

## Handling ‘JQuery Calendar’ in an IFRAME

There are many scenarios where you would want to place the Calendar control inside an iFrame. In such cases, before performing any operation on the date picker, you have to first switch to that iFrame. Once inside the iFrame, you should perform the following operations:

Step 1: Click on the Calendar Control to open it.
Step 2: Find the Year drop-down control and select the required year from that drop-down.
Step 3: Find the Month drop-down control and select the required month from that drop-down.
Step 4: Once the year and month are selected, locate the corresponding date by navigating through the Date table.

I’ll use jQuery’s date picker demo URL as the test URL to demonstrate how to automate calendar using Selenium WebDriver when the calendar is inside an iFrame. In this Selenium testing tutorial, I’ll be using Python’s unittest framework. Shown below is the complete Selenium test automation implementation to automate calendar using Selenium WebDriver inside an iFrame.
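A detail worth checking before running the full listing: the target day is located with a dynamic XPath built by string concatenation, where a class-based filter excludes calendar cells that do not belong to the displayed month. The string construction itself can be sanity-checked outside the browser; a minimal sketch using the same values as this example:

```python
# Build the dynamic XPath used in the listing, outside the browser,
# so the concatenation itself can be verified before driving Selenium.
expected_fr_date = '20'
from_day_xpath = ("//td[not(contains(@class,'ui-datepicker-month'))]"
                  "/a[text()='" + expected_fr_date + "']")
print(from_day_xpath)
# -> //td[not(contains(@class,'ui-datepicker-month'))]/a[text()='20']
```

Printing the locator like this is a cheap way to catch quoting mistakes (a missing `'` inside the XPath is a common source of `InvalidSelectorException`).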
**FileName — 1_Selenium_Calendar_iFrame_Test.py**

```python
# Selenium testing tutorial to automate calendar using Selenium WebDriver inside an iFrame.
import unittest
import time
from selenium import webdriver
from selenium.webdriver.support.select import Select

# The Date Range picker is from https://jqueryui.com/datepicker/#date-range
expected_from_date_str = '01/20/2020'
expected_to_date_str = '02/26/2020'
expected_fr_date = '20'
expected_to_date = '26'
test_url = 'https://jqueryui.com/datepicker/#date-range'

class CalendarControlTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome()
        self.driver.maximize_window()

    def test_calendar_control_range_(self):
        driver = self.driver
        driver.get(test_url)
        time.sleep(5)
        frame = driver.find_element_by_xpath("//*[@id='content']/iframe")
        driver.switch_to.frame(frame)

        ################# Steps for the From Date #################
        from_dp = driver.find_element_by_xpath("//input[@id='from']")
        from_dp.click()
        time.sleep(5)
        from_month = driver.find_element_by_xpath("//div/select[@class='ui-datepicker-month']")
        selected_from_month = Select(from_month)
        selected_from_month.select_by_visible_text("Jan")
        time.sleep(5)
        # from_day = driver.find_element_by_xpath("//table/tbody/tr/td/a[text()='20']")
        from_day = driver.find_element_by_xpath("//td[not(contains(@class,'ui-datepicker-month'))]/a[text()='" + expected_fr_date + "']")
        from_day.click()
        time.sleep(10)

        ################# Steps for the To Date #################
        # The same steps as the ones for the From date are repeated, except that
        # the operations are now performed on a different web element.
        to_dp = driver.find_element_by_xpath("//input[@id='to']")
        to_dp.click()
        time.sleep(5)
        to_month = driver.find_element_by_xpath("//div/select[@class='ui-datepicker-month']")
        selected_to_month = Select(to_month)
        selected_to_month.select_by_visible_text("Feb")
        time.sleep(5)
        # to_day = driver.find_element_by_xpath("//table/tbody/tr/td/a[text()='26']")
        to_day = driver.find_element_by_xpath("//td[not(contains(@class,'ui-datepicker-month'))]/a[text()='" + expected_to_date + "']")
        to_day.click()
        time.sleep(10)

        ################# Verify whether the values are as expected #################
        selected_from_date_str = from_dp.get_attribute('value')
        self.assertEqual(selected_from_date_str, expected_from_date_str)
        selected_to_date_str = to_dp.get_attribute('value')
        self.assertEqual(selected_to_date_str, expected_to_date_str)
        print("Unit Test of jQuery Calendar passed")

    def tearDown(self):
        self.driver.close()
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()
```

**_It’s crucial to debug websites for Safari before pushing them live. In this article, we look at how to debug websites using [developer tools for safari](https://www.lambdatest.com/blog/debug-websites-using-safari-developer-tools/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=blog)._**

## Code WalkThrough

In the above example for this Selenium testing tutorial, the calendar is not a multi-date calendar, i.e. every month will show a maximum of 31 days and there is no repetition of date values from different months.

![](https://cdn-images-1.medium.com/max/2000/0*bJLL4EJKwlim6KfK.png)

The Inspect Tool of the web browser (Chrome) is used to get the XPath of the iFrame.

![](https://cdn-images-1.medium.com/max/2674/0*aFt7ExUVjsY-bRk4.png)

Step 1 — The iFrame is identified using the XPath locator. We then switch to the iFrame that contains the Calendar control.
```python
def test_calendar_control_range_(self):
    driver = self.driver
    driver.get(test_url)
    time.sleep(5)
    frame = driver.find_element_by_xpath("//*[@id='content']/iframe")
    driver.switch_to.frame(frame)
```

As this date picker provides date range functionality, the steps implemented for ‘from’ are also executed for ‘to’ in the date picker. The only difference is that the locators used for ‘from’ and ‘to’ are different.

Step 2 — The ‘from’ picker is located using its XPath. Once it is located, a click operation is performed to open the Calendar.

![](https://cdn-images-1.medium.com/max/2686/0*pZXtDZdHej9mKjdq.png)

The Select() class from Selenium’s Python bindings is used to select the target month, i.e. Jan in our example.

```python
from_dp = driver.find_element_by_xpath("//input[@id='from']")
from_dp.click()
time.sleep(5)
from_month = driver.find_element_by_xpath("//div/select[@class='ui-datepicker-month']")
selected_from_month = Select(from_month)
selected_from_month.select_by_visible_text("Jan")
time.sleep(5)
```

Step 3 — As we are already in the target month, the final step is to select the target date, i.e. the 20th in the example. We have used a dynamic XPath along with the contains() method to exclude ‘similar dates’ from the adjacent month. Though such a scenario will not arise in this example, the implementation is a more foolproof mechanism to automate calendar using Selenium WebDriver.

```python
# from_day = driver.find_element_by_xpath("//table/tbody/tr/td/a[text()='20']")
from_day = driver.find_element_by_xpath("//td[not(contains(@class,'ui-datepicker-month'))]/a[text()='" + expected_fr_date + "']")
from_day.click()
time.sleep(10)
```

The same steps are repeated for the ‘to’ element of the date picker. Shown below is the output snapshot that demonstrates handling of the jQuery Calendar control in an IFRAME.

![](https://cdn-images-1.medium.com/max/2000/0*ARj0jISZ5dPx2seN.png)

**_Looking to enhance your web testing capabilities? Look no further than our top-of-the-line platform for [testing web](https://www.lambdatest.com/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage) applications. With our advanced tools for manual and automated cross-browser testing, you can seamlessly test your web application across over 3000 browsers online._**

## Handling ‘JQuery Calendar’ With Dates From Multiple Months

As the title suggests, the jQuery Calendar used in this case can have dates of multiple months displayed in it, i.e. when March is displayed in the Calendar, along with the 31 days of March, there is a possibility that days of April are also displayed. If you have to select 1st March, an incorrect implementation may lead to selection of 1st April, as it is also displayed in the Calendar.

![](https://cdn-images-1.medium.com/max/2000/0*BYnTJOheEUOozHRL.png)

To demonstrate how to automate calendar using Selenium WebDriver when the calendar displays dates from multiple months, we use jQuery’s date picker ‘multiple months’ demo URL as the test URL. Below are the test case requirements for this Selenium testing tutorial:

1. Navigate to https://jqueryui.com/resources/demos/datepicker/other-months.html
2. Locate the datepicker element and perform a click on the same
3. Click on the ‘next’ button in the Calendar control till the expected year & month are found.
4. Locate the entry in the calendar matching the ‘date’ (i.e. 04/01/2020) and select the same to complete the test.

Below is the complete implementation on how to automate calendar using Selenium WebDriver when multiple months are displayed in the calendar.
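The heart of that implementation is the month-navigation loop: click ‘next’ (or ‘previous’) until the displayed month/year label matches the target. The number of clicks needed can be worked out without a browser; a minimal sketch (the `months_between` helper is my own illustration, not part of the tutorial’s code):

```python
def months_between(current, target):
    """Number of 'next' clicks (positive) or 'previous' clicks (negative)
    needed on a one-month-per-click date picker, given labels like 'April 2020'."""
    months = ["January", "February", "March", "April", "May", "June",
              "July", "August", "September", "October", "November", "December"]
    cur_month, cur_year = current.split()
    tgt_month, tgt_year = target.split()
    return ((int(tgt_year) - int(cur_year)) * 12
            + (months.index(tgt_month) - months.index(cur_month)))

print(months_between("April 2020", "June 2020"))     # -> 2
print(months_between("March 2021", "January 2020"))  # -> -14
```

Knowing the expected click count is also a handy cross-check when debugging the Selenium loop: if the loop iterates more times than this function predicts, the label comparison (e.g. a missing space between month and year) is the likely culprit.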
**FileName — 2_Selenium_Calendar_Multiple_Dates_Test.py**

```python
import unittest
import time
from selenium import webdriver
from selenium.webdriver.support.select import Select

# jQuery to select multiple dates
target_year = "2020"
target_month = "April"
target_date = "1"
space = " "
expected_month_year_val = '04/01/2020'
test_url = 'http://jqueryui.com/resources/demos/datepicker/other-months.html'

class CalendarControlTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome()
        self.driver.maximize_window()

    def test_calendar_control_range_(self):
        driver = self.driver
        driver.get(test_url)
        time.sleep(5)
        datepicker_val = driver.find_element_by_id('datepicker')
        datepicker_val.click()
        target_month_year_string = target_month + space + target_year
        elem_selected_year = driver.find_element_by_class_name("ui-datepicker-year")
        selected_year_string = elem_selected_year.get_attribute("innerHTML")
        elem_selected_month = driver.find_element_by_class_name("ui-datepicker-month")
        selected_month_string = elem_selected_month.get_attribute("innerHTML")

        # Concatenate selected month and year strings
        selected_month_year_string = selected_month_string + space + selected_year_string

        previous_button_xpath = "//*[@id='ui-datepicker-div']/div/a[1]"
        next_button_xpath = "//*[@id='ui-datepicker-div']/div/a[2]"

        # Navigate through the calendar to go to the required year
        # and then the required month
        while (selected_month_year_string != target_month_year_string):
            if (int(target_year) < int(selected_year_string)):
                # Target is in the past, so click the previous button
                previous_click = driver.find_element_by_xpath(previous_button_xpath)
                previous_click.click()
            else:
                next_click = driver.find_element_by_xpath(next_button_xpath)
                next_click.click()
            elem_selected_year = driver.find_element_by_class_name("ui-datepicker-year")
            selected_year_string = elem_selected_year.get_attribute("innerHTML")
            elem_selected_month = driver.find_element_by_class_name("ui-datepicker-month")
            selected_month_string = elem_selected_month.get_attribute("innerHTML")
            # Compute the updated month-year string
            selected_month_year_string = selected_month_string + space + selected_year_string

        elem_date = driver.find_element_by_xpath("//td[not(contains(@class,'ui-datepicker-other-month'))]/a[text()='" + target_date + "']")
        elem_date.click()
        time.sleep(10)

        # Check the selected month, date, and year from the Calendar
        selected_month_year_val = datepicker_val.get_attribute('value')
        print(selected_month_year_val)
        self.assertEqual(selected_month_year_val, expected_month_year_val)
        print("Unit Test of jQuery Calendar passed")

    def tearDown(self):
        self.driver.close()
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()
```

**_It’s crucial to debug websites for Safari before pushing them live. In this article, we look at how to debug websites using [dev tools in safari](https://www.lambdatest.com/blog/debug-websites-using-safari-developer-tools/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=blog)._**

## Code WalkThrough

To get started with the implementation of this Selenium test automation example, we first create the Chrome WebDriver instance.

Step 1 — Once the WebDriver instance is created, the test URL is opened.

```python
class CalendarControlTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome()
        self.driver.maximize_window()

    def test_calendar_control_range_(self):
        driver = self.driver
        driver.get(test_url)
        time.sleep(5)
```

Step 2 — Locate the datepicker element using its ID and perform a click to open the Calendar control.

```python
datepicker_val = driver.find_element_by_id('datepicker')
datepicker_val.click()
```

Step 3 — As the selected year & date can be in the future, the next button on the date picker has to be clicked till the expected value is reached.

![](https://cdn-images-1.medium.com/max/2000/0*-Ic-MjvmdFqywopE.png)

The datepicker-year and datepicker-month elements are located using the class name locator. As these are label controls, the text content of these elements is read using the innerHTML property.

```python
elem_selected_year = driver.find_element_by_class_name("ui-datepicker-year")
selected_year_string = elem_selected_year.get_attribute("innerHTML")
elem_selected_month = driver.find_element_by_class_name("ui-datepicker-month")
selected_month_string = elem_selected_month.get_attribute("innerHTML")

# Concatenate selected month and year strings
selected_month_year_string = selected_month_string + space + selected_year_string

previous_button_xpath = "//*[@id='ui-datepicker-div']/div/a[1]"
next_button_xpath = "//*[@id='ui-datepicker-div']/div/a[2]"
```

Step 4 — To reach the target month in the calendar, the next or previous button on the calendar control is clicked till the selected year & month match the expected year & month.

```python
# Navigate through the calendar to go to the required year
# and then the required month
while (selected_month_year_string != target_month_year_string):
    if (int(target_year) < int(selected_year_string)):
        # Target is in the past, so click the previous button
        previous_click = driver.find_element_by_xpath(previous_button_xpath)
        previous_click.click()
    else:
        next_click = driver.find_element_by_xpath(next_button_xpath)
        next_click.click()
    elem_selected_year = driver.find_element_by_class_name("ui-datepicker-year")
    selected_year_string = elem_selected_year.get_attribute("innerHTML")
    elem_selected_month = driver.find_element_by_class_name("ui-datepicker-month")
    selected_month_string = elem_selected_month.get_attribute("innerHTML")
    selected_month_year_string = selected_month_string + space + selected_year_string
```

Step 5 — Now that the required year and month have been reached, a dynamic XPath along with the contains() method is used to select the required ‘date’ from the date picker control. An assert is raised if there is no match between the date selected from the Calendar and the expected date.

```python
elem_date = driver.find_element_by_xpath("//td[not(contains(@class,'ui-datepicker-other-month'))]/a[text()='" + target_date + "']")
elem_date.click()
time.sleep(10)

# Check the selected month, date, and year from the Calendar
selected_month_year_val = datepicker_val.get_attribute('value')
print(selected_month_year_val)
self.assertEqual(selected_month_year_val, expected_month_year_val)
```

Shown below is the output snapshot of the Selenium test automation example that demonstrates how to automate calendar using Selenium WebDriver when multiple months are present in the control.

![](https://cdn-images-1.medium.com/max/2000/0*wgv3gOhIHGCkCzIw.png)

## Handling Kendo Calendar in Selenium test automation

Kendo Calendar is a popular date time picker used in websites/web applications. The only downside of using Kendo Calendar is that it does not work in old versions of Internet Explorer (IE). If your web application requires uniformity in the behavior of calendar controls across different browsers and operating systems, you should use the jQuery Calendar, as you would encounter far fewer cross [browser compatibility testing](https://www.lambdatest.com/feature?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage) issues with jQuery Calendar.

This section of the Selenium testing tutorial demonstrates how to automate calendar using Selenium WebDriver with Kendo Calendar. Below are the test case requirements:

1. Navigate to https://demos.telerik.com/kendo-ui/datetimepicker/index
2. Locate the datepicker element and perform a click on the same
3. Click on the ‘next’ button in the Calendar control till the expected year & month are found.
4. Locate the entry in the calendar matching the ‘date’ (i.e. 02/20/2024) and select the same to complete the test.

Below is the complete implementation on how to automate calendar using Selenium WebDriver when Kendo Calendar is used.
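Step 4 of the Kendo implementation that follows scans the calendar table cell by cell and compares each cell’s text with the target date. That matching logic is browser-independent and can be sketched in plain Python (the grid values and the `find_date_cell` helper are illustrative, not from the tutorial):

```python
def find_date_cell(rows, target_date):
    """Return (row, col) of the first cell whose text equals target_date,
    mimicking the nested row/cell scan performed against the Kendo calendar table."""
    for r, row in enumerate(rows):
        for c, cell_text in enumerate(row):
            if cell_text == target_date:
                return (r, c)
    return None

# A fragment of a February 2024 grid, as strings (Selenium's cell.text is a string).
grid = [["28", "29", "30", "31", "1", "2", "3"],
        ["18", "19", "20", "21", "22", "23", "24"]]
print(find_date_cell(grid, "20"))  # -> (1, 2)
```

Note that the first row also contains trailing days of the previous month, which is exactly why a date-only text match needs the class-based filtering shown in the jQuery examples when duplicate day numbers are visible.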
**FileName — 3_KendoUI_Selenium_Calendar_Test.py** # Selenium testing tutorial to automate calendar using Selenium WebDriver import unittest import time from selenium import webdriver from selenium.webdriver.support.select import Select from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC target_year = "2024" target_month ="February" target_date = "20" space = " " test_url = 'https://demos.telerik.com/kendo-ui/datetimepicker/index' class CalendarControlTest(unittest.TestCase): def setUp(self): self.driver = webdriver.Chrome() self.driver.maximize_window() def test_calendar_control_range_(self): driver = self.driver driver.get(test_url) time.sleep(5) target_month_year_string = target_month + space + target_year elem_datepicker = driver.find_element_by_css_selector(".k-icon.k-i-calendar") elem_datepicker.click() time.sleep(5) WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, "k-content"))) elem_table = driver.find_element_by_class_name("k-content") # Locators for the Previous and Next Buttons elem_previous_button_class_name = "k-nav-prev" elem_next_button_class_name = "k-nav-next" # Locator for Month and Year Selected label elem_month_year_class_name = "k-nav-fast" elem_selected_year = driver.find_element_by_class_name(elem_month_year_class_name) selected_month_year_string = elem_selected_year.get_attribute("innerHTML") while (selected_month_year_string != target_month_year_string): next_click = driver.find_element_by_class_name(elem_next_button_class_name) next_click.click() time.sleep(2) elem_selected_year = driver.find_element_by_class_name(elem_month_year_class_name) selected_month_year_string = elem_selected_year.get_attribute("innerHTML") # print(selected_month_year_string) # Added a sleep to check the output time.sleep(5) for row in elem_table.find_elements_by_xpath("//tr"): for cell in row.find_elements_by_xpath("td"): 
if (cell.text == target_date): req_date = driver.find_element_by_link_text(cell.text) req_date.click() break time.sleep(5) # Since we are here and all the necessary fields are selected, the test has passed. print("Unit Test of jQuery Calendar passed") def tearDown(self): self.driver.close() self.driver.quit() if __name__ == "__main__": unittest.main() ## Code WalkThrough Chrome WebDriver instance is created and the URL under test is opened on the browser. Once this basic step is complete, subsequent steps to locate the element and selecting the expected date is performed. Step 1 — The Date Picker is located using [CSS Selector](https://www.lambdatest.com/blog/how-pro-testers-use-css-selectors-in-selenium-automation-scripts/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=blog). Once the same is located, click operation is performed to open the Kendo Calendar. ![](https://cdn-images-1.medium.com/max/2000/0*VBqhi43LylGhU7F4.png) An explicit wait of 10 seconds is added till the presence of the element with class-name of **k-content**. It is the Kendo Calendar on which subsequent operations will be performed. ![](https://cdn-images-1.medium.com/max/2000/0*f5tR1XCSX_5sX3-D.png) elem_datepicker = driver.find_element_by_css_selector(".k-icon.k-i-calendar") elem_datepicker.click() time.sleep(5) WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, "k-content"))) elem_table = driver.find_element_by_class_name("k-content") Step 2 — For demonstration, we select the date which is in the future. The next button on the Kendo Calendar has to be clicked till the required year & month are found. ![](https://cdn-images-1.medium.com/max/2694/0*aGEfypAwkRoYSwE5.png) The content of the label that displays the month & year on the Kendo Calendar is accessed using the innerHTML property. 
![](https://cdn-images-1.medium.com/max/2710/0*OF9JgjwPfjE-dJl0.png)

```python
# Locators for the Previous and Next Buttons to automate calendar using Selenium WebDriver
elem_previous_button_class_name = "k-nav-prev"
elem_next_button_class_name = "k-nav-next"

# Locator for Month and Year Selected label
elem_month_year_class_name = "k-nav-fast"
elem_selected_year = driver.find_element_by_class_name(elem_month_year_class_name)
selected_month_year_string = elem_selected_year.get_attribute("innerHTML")
```

Step 3 — To select the target year & month from the calendar, the next button is pressed till the expected year & month are found.

```python
while (selected_month_year_string != target_month_year_string):
    next_click = driver.find_element_by_class_name(elem_next_button_class_name)
    next_click.click()
    time.sleep(2)
    elem_selected_year = driver.find_element_by_class_name(elem_month_year_class_name)
    selected_month_year_string = elem_selected_year.get_attribute("innerHTML")
```

Step 4 — We traverse the table that contains the calendar entries and search for the requested date. The text content of each cell (i.e. each `<td>`) is compared against the requested date.

![](https://cdn-images-1.medium.com/max/2698/0*gfJ9dXsoGnOzcwVk.png)

**_Inspect web elements to help developers and testers to debug UI flaws or make modifications in HTML or CSS files. Learn [how to inspect element on MacBook](https://www.lambdatest.com/software-testing-questions/how-to-inspect-on-macbook?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=stq)._**

Once the search is successful, the link in the corresponding cell is clicked to select the date.
```python
for row in elem_table.find_elements_by_xpath("//tr"):
    for cell in row.find_elements_by_xpath("td"):
        if (cell.text == target_date):
            req_date = driver.find_element_by_link_text(cell.text)
            req_date.click()
            break
```

Shown below is the execution snapshot and browser snapshot:

![](https://cdn-images-1.medium.com/max/2000/0*Q4oq_-VUgkidgcyw.png)

![](https://cdn-images-1.medium.com/max/2690/0*-9z3V_93CS9myQy7.png)

## Browser Compatibility Issues With Calendars

As your users access your web application using different browsers, operating systems, and devices, a lot of browser compatibility issues are bound to arise. This is primarily because every browser has its own browser engine and might not support some elements. The major focus of Selenium test automation for calendar controls is ensuring a consistent user experience across the different combinations used to access your web page/web app.

HTML5 added five new input types that let web developers add calendar controls to their websites using native HTML, with no reliance on any JavaScript or jQuery library. The only downside is an inconsistent experience across browsers, as each one has its own mechanism for rendering the Calendar (date time picker) control.

![](https://cdn-images-1.medium.com/max/2000/0*fxqwmIj3q5plnEIi.png)

As seen in the screenshot from Can I Use, the Calendar control input type does not have good cross-browser support. Date & Time input types are not supported on any version of Internet Explorer (IE). There are issues with Calendar controls on Safari 3.1 — Safari 12.1, as well as Safari 13. The same is also applicable for a few versions of the Firefox browser.

![](https://cdn-images-1.medium.com/max/2464/0*u412NP7xO964-6Y_.png)

A reliable way to handle cross-browser compatibility issues with input types like Calendar controls is by making use of the jQuery Calendar.
It provides uniformity in the widget interface across all the browsers and even works with unsupported browsers like IE and Safari. You can also have a look at our detailed blog on [Handling cross browser compatibility issues with different input types](https://www.lambdatest.com/blog/cross-browser-compatibility-issues-with-form-input-types/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=blog).

{% youtube iuwtf2VvXRI %}

## Using Online Selenium Grid For Exhaustive Verification Of ‘Calendar Controls’

Rather than taking the non-scalable approach of housing different combinations of browsers, operating systems, and devices in-house, it is better to perform Selenium test automation on the cloud. LambdaTest’s [cross browser testing](https://www.lambdatest.com/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage) platform helps you improve test coverage and scalability. The existing implementation to automate calendars using Selenium WebDriver requires minimal changes, as execution occurs on an online [Selenium Grid](https://www.lambdatest.com/selenium-grid-online?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage) instead of a local machine. As tests can be executed in parallel, you can expect faster turn-around time leading to better productivity.

Below are the steps you’d need to use your Selenium test automation script on LambdaTest’s online Selenium Grid:

1. First you’ll need to make sure you have an account on LambdaTest. Make a note of the [user-name and access-key](https://accounts.lambdatest.com/profile?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage) which are required for accessing the remote Selenium Grid on LambdaTest.
2. Select the appropriate plan depending on the test requirements.
3. Create browser capabilities using the [Desired Capabilities Generator on LambdaTest](https://www.lambdatest.com/capabilities-generator/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage).
4. Modify the existing implementation to accommodate changes for execution on the remote Selenium Grid.

You can refer to one of our earlier Selenium testing tutorials on LambdaTest to get started with cross browser testing.

**_Looking to streamline your browser testing process? Look no further than our state-of-the-art platform for conducting [browser tests](https://www.lambdatest.com/?utm_source=medium&utm_medium=group&utm_campaign=apr06_bh&utm_term=bh&utm_content=webpage). With our comprehensive tools for manual and automated cross browser testing, you can seamlessly test your web application across a vast range of browsers._**

## It’s a Wrap!

In this Selenium testing tutorial, we had a look at the most popular Calendar controls and how to perform Selenium test automation on these Calendar controls. Input types like date time pickers, color pickers, etc. are increasingly being used in web development. There is no rule of thumb for automating calendars with Selenium WebDriver, as there are different types of Calendar Controls and their behavior might vary from one browser to another. This is where Selenium test automation on the cloud can be instrumental, as it lets you verify the test code on numerous combinations of browsers, operating systems, and devices.

Do let us know in the comment section down below what challenges you have faced while trying to automate calendars using Selenium WebDriver. Do share this article with your peers by retweeting us or by sharing this article on LinkedIn.

**Happy Testing!!!** ☺
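P.S. — a quick footnote on Step 4: the grid scan is plain nested iteration and can be checked without any browser. The grid below is a made-up month layout (not the real Kendo markup) standing in for the `<tr>`/`<td>` table; note that a second `break` is needed to leave the outer loop once the date is found:

```python
# Browser-free sketch of the Step 4 scan: walk the calendar grid row by row
# and stop at the first cell whose text equals the target date.
grid = [
    ["28", "29", "30", "31", "1", "2", "3"],
    ["4", "5", "6", "7", "8", "9", "10"],
    ["11", "12", "13", "14", "15", "16", "17"],
    ["18", "19", "20", "21", "22", "23", "24"],
    ["25", "26", "27", "28", "29", "1", "2"],
]

target_date = "20"
found = None
for r, row in enumerate(grid):
    for c, cell in enumerate(row):
        if cell == target_date:
            found = (r, c)  # in the real test: click the link in this cell
            break
    if found:
        break  # a bare break only exits the inner loop, so break again here

print(found)
```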
himanshusheth004
1,428,058
Fun and Creative React Projects to Enhance Your Learning Experience
Are you ready to dive into the world of React but tired of dull tutorials and boring projects? Fear...
0
2023-04-06T08:29:43
https://dev.to/j3rry320/fun-and-creative-react-projects-to-enhance-your-learning-experience-15ja
react, webdev, beginners, javascript
**_Are you ready to dive into the world of React but tired of dull tutorials and boring projects?_** Fear not! We have some witty and fun projects that will help you learn React while keeping you entertained. So grab a cup of coffee, get comfortable, and let's React!

![Fun React Projects](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rgtlwqelhhqvo2jvurpl.png)

## Weather App with a Twist

Let's start with a classic project: a weather app. But instead of just displaying the current temperature, why not add some fun by displaying a gif that matches the weather condition? Rainy day? Show a gif of people dancing in the rain. Sunny day? Show a gif of a beach party. You'll learn about integrating APIs and conditional rendering while adding some humor to your project.

## Random Quote Generator

Who doesn't love a good quote to brighten up their day? With this project, you'll create a random quote generator that displays a new quote every time the user clicks a button. You'll learn about state management and how to fetch data from an API. Plus, you'll have some inspirational quotes to share with your friends.

## Tic Tac Toe

It's time to put your coding skills to the test with a game of Tic Tac Toe. But let's make it interesting by adding a twist: the game board changes color with each move. You'll learn about component composition and state management while having some fun playing the classic game.

## Recipe Finder

Are you tired of searching for recipes online and getting lost in a sea of options? With this project, you'll create a recipe finder that allows users to filter recipes by ingredients and dietary restrictions. You'll learn about forms, state management, and conditional rendering while helping users find the perfect recipe for their next meal.

## Movie App with a Surprise

Who doesn't love a good movie night? With this project, you'll create a movie app that displays movie recommendations based on the user's preferences. But here's the twist: the app also recommends a snack to go with the movie. You'll learn about integrating APIs and managing state while making movie night even better.

## Emoji Translator

Are you tired of sending the same old emojis in your texts and chats? With this project, you'll create an emoji translator that converts text into emojis. You'll learn about state management and how to map data while having some fun with emojis.

## Rock, Paper, Scissors, Lizard, Spock

It's time to take the classic game of Rock, Paper, Scissors to the next level with a twist. You'll add two new options to the game: Lizard and Spock. You'll learn about handling events and conditional rendering while having some fun with this nerdy twist on the classic game.

In conclusion, these fun projects are a great way to learn different aspects of React. They cover various fundamental concepts such as component composition, state management, and event handling. **_As you build these projects, you'll gain practical experience that will help you become a proficient React developer._** Remember to keep practicing, and you'll soon be building your own exciting projects!
j3rry320
1,428,636
HTML Cheat Sheet: How to implement tables, links, and more
HTML stands for Hyper Text Markup Language. It is the standard markup language for creating web...
0
2023-04-06T18:50:11
https://www.educative.io/blog/html-cheat-sheet
html, cheatsheet, webdev, tutorial
HTML stands for Hyper Text Markup Language. It is the standard **markup language** for creating web pages. It is used alongside technologies like Cascading Style Sheets (CSS) and JavaScript in modern web development. Web developers use HTML all the time, so it's important to be familiar with the common operations and elements. Today, we'll offer a **quick reference** to HTML to speed up your learning and coding.

**We'll go over:**

* [Basic composition of an HTML webpage](#basic)
* [Common HTML elements](#elements)
* [Text formatting](#text)
* [Links](#links)
* [Media elements](#media)
* [Lists](#lists)
* [Tables and Forms](#tables)
* [What to learn next](#next)

<br>
<a name="basic"></a>

## Basic composition of a page

HTML code describes the **structure of a web page**. It consists of a series of elements that are defined by a start tag, the content, and a closing tag. Empty elements have no closing tag, as there is no content to go between tags.

For styling purposes, HTML elements can be classified into **Block-level** and **Inline-level** elements. Block-level elements cause a line break to occur, and they take up space and stack down the page. Inline-level elements, however, only take up space as is necessary.

Below is HTML code describing a very simple webpage.

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <!-- This is a comment in HTML -->
    <!-- Elements that contain information about the webpage including the title and links to external resources go here -->
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
</head>
<body>
    <!-- Elements that will be displayed go here -->
</body>
</html>
```

The `<!DOCTYPE html>` declaration defines that this document is an HTML5 document. There is the `<html>` tag at the root level, and it consists of two main elements: the `<head>` and the `<body>` elements.
<br>
<a name="elements"></a>

## Common HTML Elements

Below, we discuss the most common HTML elements for creating structure and organizing text.

<br>

### Headings

The heading tags represent all levels of headings in HTML. There are **six levels**, `<h1>` through `<h6>`, with `<h1>` denoting the most important with the largest font size and weight.

```html
<h1>Heading 1</h1>
<h2>Heading 2</h2>
<h3>Heading 3</h3>
<h4>Heading 4</h4>
<h5>Heading 5</h5>
<h6>Heading 6</h6>
```

Heading tags improve accessibility and help in search engine optimization, as information in higher heading levels is given more priority.

> **Pro tip:** It is best practice to use one `<h1>` tag per web page.

<br>

### Paragraphs

The `<p>` tag creates a paragraph. Line breaks and whitespaces are usually ignored when paragraphs are rendered in a browser.

```html
<p>Hello, your Educative password has been reset.
Please log in to resume learning!
</p>
```

<br>

### Content Division Element

The `<div>` tag represents a division or section in an HTML document. It serves as a container that can be used to group content so it can be easily styled. This styling can be done inline or by using CSS referencing their class or id attributes.

```html
<div>some content!!!</div>
```

<br>

### Line Breaks

The `<br />` tag is a good example of an empty element; it is used to enforce a line break within a `<p>` tag. It can be written as a self-closing tag `<br />` or just the opening tag `<br>`.

```html
<p>Hello,<br /> your Educative password has been reset.<br />Please login to resume learning!</p>
```

<br>
<a name="text"></a>

## Text formatting

All HTML text-formatting elements are inline-level elements. Some tags used in formatting text in HTML include the following.

<br>

### Strong text

```html
<strong>This text is bold and important.</strong>
```

> <strong>This text is bold and important</strong>

Note that you can also use a `<b>` tag here.
<br>

### Emphasized Text

```html
<em>This text is italicized for emphasis.</em>
```

> <em>This text is italicized for emphasis.</em>

You can also use the `<i>` tag.

<br>

### Small Text

```html
<small>These words are smaller than</small> the others.
```

> <small>These words are smaller than</small> the others.

<br>

### Large Text

```html
<big> These words are larger than </big> the others.
```

> <p><big> These words are larger than </big> the others.</p>

<br>

### Highlighted Text

```html
<mark> This text is highlighted.</mark>
```

> <mark> This text is highlighted.</mark>

<br>

### Strikethrough Text

```html
<strike>This is an example of strikethrough text</strike>
```

> <strike>This is an example of strikethrough text</strike>

<br>

### Underlined Text

```html
<u>This sentence is underlined.</u>
```

> <p><u>This sentence is underlined.</u></p>

<br>

### Superscript and Subscript Text

```html
An equation with superscript text: X<sup>2</sup>.
```

> <p>An equation with superscript text: X<sup>2</sup>.</p>

We can also denote a chemical compound using the subscript tag:

```html
CO<sub>2</sub>
```

> CO<sub>2</sub>

<br>

### Span tag

```html
Span tag can be used to <span style="color:red">style</span> sections of a text.
```

> <p>Span tag can be used to <span style="color:red">style</span> sections of a text.</p>

<a name="links"></a>

## Links

The HTML `<a>` tag defines a hyperlink for navigation. It is an inline-level element. The `href` attribute holds the URL that will be navigated to.

```html
This is <a href="url">a link</a>
```

<br>

### Link Targets

The `target` attribute specifies where the linked document will be displayed.
Some of the possible options are:

* `_blank`: opens the linked document in a new browser window or tab

```html
<a href="https://www.educative.io/blog" target="_blank">link</a>
```

* `_self`: opens the link in the same frame

```html
<a href="https://www.educative.io" target="_self">link </a>
```

* `_parent`: opens the linked document in the parent frame

```html
<a href="https://www.educative.io/learn" target="_parent">link </a>
```

* `_top`: opens the document in the full body of the window

```html
<a href="https://www.educative.io/blog" target="_top">link </a>
```

<br>

### Special Links

```html
<a href="mailto:email@example.com">Send email</a>
<a href="tel:0123492">Call 0123492</a>
```

<br>
<a name="media"></a>

## Media Elements

You can add media elements to HTML, such as images, videos, and audio. Here's how it's done.

<br>

### Video

Below, we add a video and its size to our web page. You can provide multiple sources and formats. The browser uses the first available one.

```html
<video width="300" height="240" controls>
  <source src="url" type="video/mp4">
  <source src="alternate-url" type="video/ogg">
</video>
```

<br>

### Audio

```html
<audio controls>
  <source src="url" type="audio/ogg">
  <source src="alternate-url" type="audio/mpeg">
</audio>
```

<br>

### Image

```html
<img src="path/to/image" alt="alternate text" width="480" height="360">
```

<br>
<a name="lists"></a>

## Lists

There are several kinds of lists we can create with HTML.

<br>

### Ordered List

`<ol>` defines an ordered list. It is **numbered by default**. The numbering format can be changed using the type attribute.

```html
<ol>
  <li>Water</li>
  <li>Tea</li>
  <li>Milk</li>
</ol>
```

> <ol><li>Water</li>
> <li>Tea</li>
> <li>Milk</li></ol>

<br>

### Unordered List

`<ul>` defines an unordered list. List items would appear bulleted.
```html
<ul>
  <li>Rice</li>
  <li>Vegetables</li>
  <li>Butter</li>
</ul>
```

> <ul>
> <li>Rice</li>
> <li>Vegetables</li>
> <li>Butter</li></ul>

<br>

### Type and Start attributes

The `type` attribute is present in both ordered and unordered lists and is used in changing numbering and bullet style. The `start` attribute specifies what number/letter/roman numeral the first item in an ordered list should start its count from.

```html
<!-- Unordered List bullet types -->
<ul type="square">
<ul type="disc">
<ul type="circle">

<!-- Ordered List numbering styles -->
<ol type="1"> <!-- Numbers (default) -->
<ol type="A"> <!-- Upper-Case Letters -->
<ol type="a"> <!-- Lower-Case Letters -->
<ol type="I"> <!-- Upper-Case Roman Numerals -->
<ol type="i"> <!-- Lower-Case Roman Numerals -->

<ol type="1" start="3"> <!-- Numbering starts from 3 -->
<ol type="A" start="5"> <!-- Lettering starts with E -->
```

<br>
<a name="tables"></a>

## Tables and Forms

Tables are very commonly used in HTML to organize text and numbers. Let's learn how to create tables and add padding.

<br>

### Basic table structure

```html
<table>
  <tr>
    <!-- <th> represents a table heading, <tr> defines a table row -->
    <th>Name</th>
    <th>Age</th>
    <th>City</th>
  </tr>
  <tr>
    <!-- <td> element holds table data -->
    <td>Bill Jones</td>
    <td>54</td>
    <td>Nashville</td>
  </tr>
  <tr>
    <td>Sura Keyser</td>
    <td>45</td>
    <td>Berlin</td>
  </tr>
  <tr>
    <td>Sarah Hernandez</td>
    <td>60</td>
    <td>Mexico City</td>
  </tr>
</table>
```

![HTML Tables](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/czhdj316qdb8ufb3uron.png)

<br>

### Cellpadding and Cellspacing Attributes

Cellpadding and Cellspacing are `<table>` attributes used to adjust the amount of white space in your table cells.
```html
<table cellspacing="5" cellpadding="5">
  <!-- cellspacing attribute defines space between table cells -->
  <!-- cellpadding represents the distance between cell borders and the content within a cell -->
  <tr>
    <th>Name</th>
    <th>Age</th>
    <th>City</th>
  </tr>
  <tr>
    <td>Bill Jones</td>
    <td>54</td>
    <td>Nashville</td>
  </tr>
  <tr>
    <td>Sura Keyser</td>
    <td>45</td>
    <td>Berlin</td>
  </tr>
  <tr>
    <td>Sarah Hernandez</td>
    <td>60</td>
    <td>Mexico City</td>
  </tr>
</table>
```

<br>

### Forms

An HTML form is used to collect user input.

```html
<form action="path/to/register" method="POST">
  <input type="text">
  <input type="email" placeholder="example@email.com">
  <input type="number">
  <input type="date">
  <input type="checkbox">
  <textarea name="" id="" cols="30" rows="10"></textarea>

  <!-- Radio buttons allow selection of only one option -->
  <input type="radio" id="apples" name="favourite-fruit" value="apples">
  <label for="apples">Apples</label><br>
  <input type="radio" id="oranges" name="favourite-fruit" value="oranges">
  <label for="oranges">Oranges</label><br>
  <input type="radio" id="other" name="favourite-fruit" value="other">
  <label for="other">Other</label>

  <button type="submit">Submit Form</button>
</form>
```

![HTML Forms](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0eg66px1yr3n0bn5fsvm.png)

<br>
<a name="next"></a>

## What to learn next

This document should come in handy whether you are just getting started in your web development career or you need help remembering HTML syntax. The next things to learn are more advanced HTML syntax, such as:

* Link to the middle of a web page
* Clickable regions within images
* Roll-overs
* Banner ads

If you’re looking to get ramped up on front-end development and learn advanced HTML, check out Educative’s Learning Path **[Become a Front-end Developer](https://www.educative.io/path/become-front-end-developer?eid=5082902844932096)**. This is the perfect place to start your journey as a front-end developer.
With no prior knowledge needed, you will gain a mastery of HTML, CSS, and JavaScript in just six curated modules.

*Happy learning!*

### Continue reading about HTML on Educative

* [HTML Beginner's Tutorial: build a webpage from scratch with HTML](https://www.educative.io/blog/html-beginners-tutorial-build-from-scratch?eid=5082902844932096)
* [Animate CSS code: create a panda animation with HTML & CSS](https://www.educative.io/blog/animate-css-code?eid=5082902844932096)
* [Crack the top 30+ HTML interview questions and answers](https://www.educative.io/blog/crack-top-html-interview-questions-answers?eid=5082902844932096)

### Start a discussion

Were there any more fundamental HTML processes that we didn't include here? Was this article helpful? Let us know in the comments below!
huntereducative
1,428,115
31 Career Advancement: The Real Path – Successful People.
Hi there, people in whose lives you believe you can observe an intended, desired effect taking hold...
22,304
2023-04-06T10:21:45
https://dev.to/amustafa16421/31-berufliches-vorwartskommen-der-eigentliche-weg-erfolgreiche-leute-4m48
deutsch, career, austausch, 1min
Hi there, people in whose lives you believe you can observe an intended, desired effect taking hold are worth taking a closer look at.

In the last posts, using Donald E. Knuth as an example, I highlighted some aspects of [career design→](https://dev.to/amustafa16421/23-berufliches-vorwartskommen-ein-gerader-berufsweg-wohin-ein-guter-start-dich-fuhren-kann-2idl). The real question, however, is: who inspires you? Whose career path do you find impressive? From whom do you think you could take a leaf out of their book?

Best regards,
Mustafa
amustafa16421
1,428,255
Promises in JavaScript: A Beginner’s Guide
JavaScript is a language that allows you to write asynchronous code, which means that you can perform...
0
2023-04-06T11:33:05
https://dev.to/ftwoli/promises-in-javascript-a-beginners-guide-3hk3
webdev, javascript, beginners, programming
JavaScript is a language that allows you to write asynchronous code, which means that you can perform multiple tasks at the same time without waiting for one to finish before starting another. For example, you can send a request to a server and continue with other operations while waiting for the response.

However, asynchronous code can also be challenging to write and understand, especially when you have to deal with nested callbacks, error handling, and complex logic. That’s where promises come in handy.

A promise is a JavaScript object that represents the eventual outcome of an asynchronous operation. It can either be fulfilled (resolved) with a value or rejected with a reason (error). A promise lets you write cleaner and more readable code by avoiding callback hell and providing a clear way to handle success and failure scenarios.

To create a promise, you use the Promise constructor, which takes a function as an argument. This function receives two parameters: resolve and reject, which are themselves functions that you can call to settle the promise. For example, the following code creates a promise that resolves after 2 seconds with the message “Hello”:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ray34ueifwnx7ygzgau.png)

To use a promise, you can call its then method, which takes two callbacks as arguments: one for success and one for failure. These callbacks are called when the promise is settled, either by resolving or rejecting. For example, the following code uses the promise p from above and logs its value or error to the console:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ftg1sx2xhhciwuuvyvw.png)

You can also chain multiple then calls on a promise, creating a sequence of asynchronous operations. Each then call returns a new promise that is resolved with the return value of its callback. This way, you can pass data along the chain and handle errors at any point. For example, the following code chains three then calls on a promise that resolves with a number. Each then call increments the number by one and returns it. The final then call logs the result to the console:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tcnpm18255pw56q42lg4.png)

If any of the then callbacks throws an error or returns a rejected promise, the subsequent then callbacks are skipped and the error is passed to the next catch callback in the chain. The catch method is similar to then, but it only takes one callback for handling errors. For example, the following code adds a catch callback to handle any errors that may occur in the previous then callbacks:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oduxz5o8zftt8tgj9un9.png)

Finally, you can use the finally method to execute some code regardless of whether the promise is fulfilled or rejected. This is useful for performing some cleanup or final actions after an asynchronous operation. For example, the following code adds a finally callback to log “Done” after the promise is settled:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9k9muvyfuzklaiz8n70a.png)

In conclusion, promises are a powerful feature of JavaScript that allow you to write asynchronous code in a more elegant and manageable way. By using promises, you can avoid callback hell, handle errors gracefully, and chain multiple operations together. Promises are also compatible with other modern JavaScript features, such as async/await and generators.

If you want to learn more about promises and how to use them effectively, you can check out some of the resources below:

- [JavaScript Promises — W3Schools](https://www.w3schools.com/Js/js_promise.asp)
- [Promise — JavaScript | MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
- [Promises in JavaScript — Mastering JS](https://masteringjs.io/tutorials/fundamentals/promise)

> I hope you enjoyed this article and learned something new. Happy coding!
ftwoli
1,428,363
Web Scraping with Python: A Complete Step-by-Step Guide + Code
Python is one of the most known languages for web scraping due to its simplicity, versatility, and...
22,545
2023-04-06T14:03:24
https://gologin.com/blog/web-scraping-with-python
python, webdev, webscraping, tutorial
Python is one of the best-known languages for web scraping due to its simplicity, versatility, and abundance of libraries specifically designed for this purpose. With Python, you can easily create web scrapers that can navigate through websites, extract data, and store it in various formats. It’s especially useful for data scientists, researchers, marketers, and business analysts, and it’s a valuable tool to add to your skill set.

In this article, we’ll show you exactly how to perform web scraping with Python, review some popular tools and libraries, and discuss some practical tips and techniques.

## Overview of Web Scraping and How it Works

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fihjcu6riltem1z06ah4.jpg)

Web scraping refers to searching and extracting data from websites using computer programs. The web scraping process involves sending a request to a website and parsing the HTML code to extract the relevant data. This data is then cleaned and structured into a format that can be easily analyzed and used for various purposes.

Web scraping has numerous benefits, like:

- **Saving time and effort** on manual data collection
- **Obtaining data that is not easily accessible** through traditional means
- **Gaining valuable insights** into trends and patterns in your industry.

Doesn’t that sound super helpful? Let’s dive right in!

### Types of Data That Can Be Extracted From Websites Using Data Scraping

You might be wondering — is data scraping limited to textual information only? The answer is no. Data scraping can extract images, videos, and structured data such as tables and lists.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xwyfx39arpqupoovzuop.png)

Text data can include product descriptions, customer reviews, and social media posts. Images and videos gathered through data scraping can be used to gather visual data, such as product images or videos of events.
Information like product pricing, stock availability, or employee contact information can be extracted from tables and lists.

Furthermore, web scraping can extract data from multiple sources to create a comprehensive database. This data can then be analyzed using various tools and techniques, such as data visualization and machine learning algorithms, to identify patterns, trends, and insights.

Now, it’s time to learn web scraping so that you can carry out all this cool stuff yourself!

### Overview of Tools and Libraries Available for Web Scraping

First, let’s go over the available tools and libraries that can help streamline the process and make web scraping more efficient and effective.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3jbn8cdq9hi6c087v30w.png)

#### Beautiful Soup

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b88axmmshs9ucolko6wm.png)

With Beautiful Soup, you can easily navigate through website code to find the needed HTML and XML data and extract it into a structured format for further analysis.

#### Scrapy

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r7izvg3y6y1tbaszlf21.png)

It is a Python framework that provides a complete web scraping solution. Scrapy allows you to crawl and scrape websites easily, including features such as automated data extraction, processing, and storage in various formats.

#### Selenium

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e6oilar6ich8ab9ks9gr.png)

Selenium is an open-source tool that automates web browsers, allowing you to simulate user behavior and extract data from websites that would be difficult or impossible to access using other tools. Selenium’s flexibility and versatility make it an effective and powerful tool for scraping dynamic pages.
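Before looking at more tools, it helps to see what the "parse HTML into structured data" step these libraries perform actually looks like. Here is a dependency-free sketch using only Python's standard-library `html.parser` — the sample page string is made up for the demo; Beautiful Soup or lxml give you the same result with far less code:

```python
# Minimal "parse and extract" sketch: walk the HTML and collect all link
# targets, the same kind of structured extraction the libraries automate.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples
        if tag == "a":
            attrs = dict(attrs)
            if "href" in attrs:
                self.links.append(attrs["href"])

# A made-up page standing in for a real website's HTML
page = '<html><body><a href="/a">A</a><p>text</p><a href="/b">B</a></body></html>'

extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['/a', '/b']
```

The equivalent Beautiful Soup call would be a one-liner over `soup.find_all("a")`, which is exactly why these libraries are worth installing.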
#### Octoparse ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uvyq3aceecleo5wl5zaz.png) It is a visual web scraping tool allowing easy point-and-click data extraction and automation into various formats, including CSV, Excel, and JSON. #### ParseHub ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ii5vh12ktmpcoo8q4jqc.png) It is a web scraping tool that provides a web-based and desktop solution for extracting data from websites. With ParseHub, you can easily create scraping projects by selecting the data you want to extract using a point-and-click interface. #### LXML ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wojjip6cqtkncjl361mn.png) Lxml is a powerful and efficient tool that can handle both HTML and XML documents. It can easily navigate complex website structures to extract specific elements like tables, images, or links, or you can create custom filters to extract data based on more complex criteria. In the next section, we’ll show you how to set up your development environment for web scraping. Let’s dive right into the fun stuff! ### How To Set Up a Development Environment for Web Scraping With Python Setting up a development environment for web scraping with Python involves installing the necessary software and libraries and configuring your workspace for efficient data extraction. Here’s how you can do it: #### Step 1. Install Python ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8n20o4bjjc7zk8q6azw.png) The first step is to install Python on your computer if you don’t already have it. You can download the latest version of Python from the official website and follow the installation instructions. #### Step 2. Install a Text Editor or Integrated Development Environment (IDE) You will need a text editor or an IDE to write Python code. Some popular options include Visual Studio Code, PyCharm, and Sublime Text. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fquk2e8h7ejo3ik384mw.png) #### Step 3. Install the Necessary Libraries Several Python libraries, including Beautiful Soup, Scrapy, and Selenium, are commonly used for web scraping with Python. You can install these libraries using pip, the Python package manager. Open your command prompt or terminal, and type: ``` pip install [library name] ``` To install Beautiful Soup, run the following command: ``` pip3 install beautifulsoup4 ``` Note: You might have to prefix the installation command with sudo if you’re on Linux or macOS. #### Step 4. Install a web driver If you plan to use Selenium for web scraping, you must install a web driver corresponding to your preferred browser (e.g., Chrome, Firefox, or Safari). You can download the appropriate web driver from the official Selenium [website](https://www.selenium.dev/documentation/webdriver/getting_started/install_drivers/) and add it to your system’s [PATH](http://architectryan.com/2018/03/17/add-to-the-path-on-windows-10/). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q0c2ol08zzzjd2f366kh.png) #### (Optional) Step 5. Create a Virtual Environment A virtual environment is recommended to keep your Python environment organized and avoid dependency conflicts. You can create a virtual environment using the [venv](https://docs.python.org/3/library/venv.html) module with Python. That’s it. You have the entire setup to start web scraping with Python right away. It’s time to start coding! ### How to Send HTTP Requests to a Website and Handle Responses using Python The requests library is a popular third-party library that provides an easy-to-use interface for sending HTTP/1.1 requests in Python. Here are the steps to follow: #### Step 1: Install the Requests Library Before you can use the requests library, you need to install it. 
You can install it using `pip` by running the following command: ``` pip install requests ``` Alternatively, you can also use the following command for virtual environments: ``` pipenv install requests ``` #### Step 2: Import the Requests Module Once the requests library is installed, you can import it into your Python script using the following command: ``` import requests ``` #### Step 3: Send an HTTP Request To send an HTTP request, you can use the requests library’s get(), post(), put(), delete() methods. For example, to send a GET request to a website, you can use the following code: ``` response = requests.get('https://www.example.com') ``` This will send a GET request to https://www.example.com and store the response in the response variable. In case you’re not already aware, here’s what GET, POST, PUT, and DELETE requests mean: - **GET**: Used for requesting data. They’re stored in the browser history and shouldn’t be used for sensitive things. - **POST**: Used for sending data to a server. They’re not stored in the browser history. - **PUT**: Also used for sending data to a server. The only difference is that sending a POST request repeatedly will create data multiple times, which is not the case with PUT. - **DELETE**: Delete the specified data. #### Step 4: Handling the Response The response from the website can be accessed via the response object. You can get the response content, status code, headers, and other details. Here’s an example of how to get the content of the response: ``` content = response.content ``` This will get the content of the response as a byte string. If you want to get the content as a string, use the following code: ``` content = response.text ``` To get the response’s status code, you can use the following code: ``` status_code = response.status_code ``` This will return the status code as an integer. 
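Different status codes usually call for different handling, so it can help to translate them into rough categories before deciding what to do next. Here is a small helper of our own (the function name and categories are illustrative, not part of the requests library):

```python
def describe_status(status_code):
    """Map an HTTP status code to a rough category for logging or branching."""
    if 200 <= status_code < 300:
        return "success"
    if status_code in (301, 302, 307, 308):
        return "redirect"
    if status_code == 404:
        return "not found"
    if status_code == 429:
        return "rate limited - slow down your requests"
    if status_code >= 500:
        return "server error - worth retrying later"
    return "other"

# After a request you might call, e.g.:
#   print(describe_status(response.status_code))
print(describe_status(200))
```

A check like this is especially handy in scrapers that run unattended, where a 429 or 5xx should trigger a back-off rather than a crash.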
Here’s how you can get the response headers: ``` headers = response.headers ``` That’s a brief overview of sending HTTP requests to a website and handling responses using Python’s requests library. This may seem overwhelming, but once you get the hang of it, it becomes easy. ### Introduction to parsing HTML using Beautiful Soup and extracting data from HTML tags Beautiful Soup is a popular Python library used to pull any data out from HTML and XML files. It first creates a parse tree for parsed pages and then uses these pages to extract data from HTML tags. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8as0qrxfr3ru7qklxodv.jpg) Here is an introduction to parsing HTML using Beautiful Soup and extracting data from HTML tags: #### Step 1: Installing Beautiful Soup Before we can use Beautiful Soup in Python, we need to install it. Beautiful Soup can be installed using the “pip” shell command: ``` pip install beautifulsoup4 ``` #### Step 2: Importing Beautiful Soup Once we have installed Beautiful Soup, we can import it into our Python script using the following code: ``` from bs4 import BeautifulSoup ``` #### Step 3: Parsing HTML Content The next step is to parse the HTML content using Beautiful Soup. This can be done by creating a BeautifulSoup object and passing the HTML content to it. Here’s an example ``` html_content = '<html><head><title>Example</title></head><body><p> This is a sample HTML document.</p></body></html>' soup = BeautifulSoup(html_content, 'html.parser') ``` Here, we have created a BeautifulSoup object called soup by passing the html_content string to the BeautifulSoup constructor. We have also specified the parser to be used as ‘html.parser’. #### Step 4: Extracting Data From HTML Tags Once we have the BeautifulSoup object, we can use it to extract data from HTML tags. 
For example, to extract the text from the `<p>` tag in the HTML content, we can use the following code:

```
p_tag = soup.find('p')
p_text = p_tag.text
print(p_text)
```

This will output the text 'This is a sample HTML document.', which is the content of the `<p>` tag in the HTML document.

#### Step 5: Using BeautifulSoup Functions

We can also extract attributes from HTML tags. For example, to extract the value of the `href` attribute from the `<a>` tag, we can use the following code:

```
a_tag = soup.find('a')
href_value = a_tag['href']
print(href_value)
```

This will output the value of the `href` attribute of the first `<a>` tag in the HTML document.

Moreover, we can also use the `get_text()` function to retrieve all the text from the HTML document. You can use the following code to get all the text of the HTML document:

```
soup.get_text()
```

Voila! You have learned how to use Beautiful Soup with Python to scrape data successfully. There are many other useful functions of Beautiful Soup that you can learn and use to add variations to your data scraper.

### Overview of using regular expressions to extract data from web pages

Regular expressions (regex) are powerful for pattern matching and text manipulation. They can be used to extract data from web pages by searching for specific patterns or sequences of characters. The regex tokens below are the building blocks we'll use to extract data.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vevglt03xyvqil8r1zby.png)

Here is an overview of using regular expressions to extract data from web pages:

#### Step 1: Install the Appropriate Libraries

We will need to use requests and BeautifulSoup here as well. We will also import a library called `re`, a built-in Python module for working with Regular Expressions.
``` import requests from bs4 import BeautifulSoup import re ``` #### Step 2: Understanding Regular Expressions Before using regular expressions to extract data from web pages, we need to have a basic understanding of them. They are patterns that are used to match character combinations in strings. They can search, replace, and validate text based on a pattern. #### Step 3: Finding the Pattern To extract data from web pages using regular expressions, we need to first find the pattern we want to match. This can be done by inspecting the HTML source code of the web page and identifying the specific text or HTML tag that we want to extract data from. ``` page = requests.get('https://example.com/') soup = BeautifulSoup(page.content, 'html.parser') content = soup.find_all(class_='product_pod') content = str(content) ``` #### Step 4: Writing the Regular Expression Once we have identified the pattern we want to match, we can write a regular expression to search for it on the web page. Regular expressions are written using a combination of characters and metacharacters that specify what we want to match. For example, to match a phone number on our example web page, we could write the regular expression: ``` number = r'\d{3}-\d{3}-\d{4}' ``` This regular expression matches a pattern that consists of three digits followed by a hyphen, three more digits, another hyphen, and four more digits. This is just a small example of how we can use regular expressions and their combinations to scrape data. You can experiment using more regex tokens and expressions for your data-scraping venture. ### How To Save Extracted Data to a File Now that you have learned to scrape data from websites and XML files, we must be able to save the extracted data in a suitable format. 
To save extracted data from data scraping to a file such as CSV or JSON in Python, you can follow the following general steps: #### Step 1: Scrape and Organize the Data Use a library or tool to scrape the data you want to save and organize it in a format that can be saved to a file. For example, you might use a dictionary or list to organize the data. #### Step 2: Choose a File Format Decide which file format you want to use to save the data. In this example, we will use CSV and JSON. A CSV (comma-separated values) file is a text file allowing data to be saved in a table format. The JSON data format is a file (.json) format used for data interchange through various forms of technology. #### Step 3: Save Data to a CSV File To save data to a CSV file, you can use the CSV module in Python. Here’s how: ``` import csv # data to be saved data = [ ['Jay', 'Dominic', 25], ['Justin', 'Seam', 30], ['Bob', 'Lans', 40] ] # open a file for writing with open('data.csv', mode='w', newline='') as file: # create a csv writer object writer = csv.writer(file) # write the data to the file writer.writerows(data) ``` #### Step 4: Save Data to a JSON File To save data to a JSON file, you can use the json module in Python. Here’s how: ``` import json # data to be saved data = [ {'name': 'John', 'surname': 'Doe', 'age': 25}, {'name': 'Jane', 'surname': 'Smith', 'age': 30}, {'name': 'Bob', 'surname': 'Johnson', 'age': 40} ] # open a file for writing with open('data.json', mode='w') as file: # write the data to the file json.dump(data, file) ``` In both cases, the code creates a file (if it doesn’t exist) and writes the extracted data in the chosen file format. ## Tips and Best Practices for Developing Robust and Scalable Web Scraping Applications It’s time to streamline the web scraping process. Building a robust and scalable application can save you time, labor, and money. Here are some tips and best practices to keep in mind when developing web scraping applications: **1. 
Respect Website Terms of Service and Copyright Laws** Before you start scraping a website, read its terms of service and copyright policies. Some websites may prohibit web scraping, and others may require you to credit the source of the data or obtain permission before using it. Ignoring the terms of service or the robots.txt file can result in legal issues or getting blocked by the website’s server. **2. Understand the Website’s Structure and APIs** Understanding a website’s structure and content helps identify data to extract. Tools like Web Scraper can help inspect HTML and find data elements. Website APIs offer structured and legal data access. So, be sure to use them whenever possible for scalability and compliance with ethical and legal standards. **3. Handle Errors Gracefully** When scraping a website, there may be times when the website is down, the connection is lost, or the data is unavailable. Hence, it becomes important to handle all these errors gracefully by adding error handling and retry mechanisms to your code. This will ensure that your application is robust and can handle unexpected situations. One way to handle errors is to use try-catch blocks to catch and handle exceptions. For example, if a scraping request fails, you can retry the request after a certain amount of time or move on to the next request. 
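For instance, the retry idea can be factored into a small helper function; the name and defaults below are our own choices, not from any particular library:

```python
import time

def fetch_with_retries(fetch, attempts=3, delay=1.0):
    """Call `fetch` up to `attempts` times, sleeping `delay` seconds between tries."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except Exception:
            if attempt == attempts:
                raise  # out of retries: surface the last error to the caller
            time.sleep(delay)

# With requests, this could be used as:
#   response = fetch_with_retries(lambda: requests.get("https://example.com", timeout=10))
```

Keeping the retry logic in one place means every request in your scraper gets the same failure behavior, instead of ad-hoc try/except blocks scattered through the code.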
```
import requests
from bs4 import BeautifulSoup

url = "https://example.com"

try:
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")
    # Code to extract data from the soup object
except requests.exceptions.RequestException as e:
    # Handle exceptions related to the requests module
    print("An error occurred while making the request:", e)
except Exception as e:
    # Handle all other exceptions
    print("An error occurred:", e)
finally:
    # Clean-up code (if any) that needs to run regardless of whether an exception was raised
    pass
```

In this example, we're using the requests library to request a website, and then using Beautiful Soup to extract data from the HTML content of the response. The `try` block contains the code that may raise an exception, such as a network error or an error related to HTML content parsing. Note that a `finally:` block must contain at least one statement, so `pass` is used as a placeholder when there is no clean-up code.

Logging is also a useful tool for debugging and troubleshooting errors.

**4. Test Your Code Thoroughly**

When developing a web scraping application, it's important to test your code thoroughly to ensure it works correctly and efficiently. Use testing frameworks and tools to automate your tests and catch errors early in development.

Version control is a system that tracks changes to your code over time. It allows you to keep track of changes, collaborate with others, and revert to previous versions if necessary. Git is a popular version control system widely used in the software development industry.

**5. Use Appropriate Web Scraping Tools**

As mentioned in the above sections, many web scraping tools exist in the market. Each tool has its strengths and weaknesses, and choosing the best tool for a particular project depends on various factors. These factors include:

- the complexity of the website
- the amount of data to be scraped
- the desired output format

Let's take an overview of some web scraping tools and their use:

- Beautiful Soup is a great choice for simple web scraping projects that require basic HTML parsing.
- Scrapy is more suited for complex projects that require advanced data extraction techniques, like pagination or handling dynamic content.
- Selenium is a powerful tool for scraping dynamic websites that require user interactions, though it can be slower and more resource-intensive than other tools.

**6. Avoid Detection by Websites**

Web scraping can be resource-intensive and can overload the website's server, leading to you being blocked. To work around that, you can try changing proxies and user agents. It becomes a game of hide-and-seek: you are trying to keep the website's bot detection from spotting and blocking you.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oh88g034t261793um8v5.png)

If you're wondering what proxies are, here's a simple answer: they are intermediaries that hide your IP address and provide a new one to the website's server, making it harder for the server to detect that you are scraping the website.

User agents are strings that identify the web browser and operating system used to access the website. By changing the user agent, you can make your scraping requests appear as if they are coming from different browsers or devices, which can help to avoid detection.

**7. Manage Your Codebase and Handle Large Volumes of Data**

Scraping can generate vast amounts of data. Store data wisely to avoid system overload by considering databases like PostgreSQL, MySQL, or SQLite and cloud storage. Use documentation tools like Sphinx and Pydoc, and linters like Flake8 and PyLint, to ensure readability and catch errors.

Caching and chunking techniques help reduce website requests and handle large datasets without memory issues. Chunking is breaking up large files or datasets into smaller, more manageable chunks.

**8. Introduction to Scraping Websites That Require Authentication**

Access acquisition for data scraping refers to obtaining permission or authorization to scrape data from a website or online source.
It is important to obtain access because scraping data without permission may be illegal or may violate the website's terms of use. One must have valid authentication to use any data from an online platform, and must ensure it is used only for ethical and legal purposes.

Scraping websites that require authentication can be more challenging than scraping public websites, since they require a user to be authenticated and authorized to access certain information. However, there are several ways to authenticate and scrape such websites.

### Web Scraping Frameworks

One common method is using web scraping frameworks like Scrapy together with libraries like Requests to authenticate and access the website's content. The authentication process involves submitting a username and password to the website's login form, usually sent as a POST request. After successful authentication, the desired data can be extracted from the website's HTML using parsing libraries like Beautiful Soup or lxml.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f4lf2vd33y2kx2lwumcb.png)

### Web Automation Tools

Web automation tools like Selenium or Puppeteer simulate user interactions with the website. These tools can automate the entire authentication process by filling in the login form and clicking the submit button. Once authenticated, the scraper can use these tools to navigate the website and extract data.

### Handle Cookies and Session Management

Many websites use cookies to manage sessions and maintain user authentication. When scraping websites that require authentication, it's crucial to handle cookies correctly to maintain the session's state and avoid being logged out. You can use libraries like Requests' `Session` or Scrapy's cookies middleware to manage cookies automatically. Whenever you visit a website, the browser stores cookies that record your session state and activity on the site.
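As a sketch, here is what session-based cookie handling looks like with `requests.Session`; the login URL and form field names are hypothetical, so the network calls are left commented out:

```python
import requests

session = requests.Session()

# Any cookies set by a response are kept on the session, so subsequent
# requests stay authenticated. A typical login flow would be:
#   session.post("https://example.com/login",
#                data={"username": "user", "password": "secret"})
#   page = session.get("https://example.com/members-only")

# The cookie jar can also be inspected or seeded manually:
session.cookies.set("sessionid", "abc123")
print(session.cookies.get("sessionid"))
```

Scrapy users get the same bookkeeping automatically from its built-in cookies middleware.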
Scrapy's cookies middleware can also be turned off (via the `COOKIES_ENABLED` setting) when you don't want cookies tracked during a scraping run.

Some websites limit the number of requests you can make in a given time frame, which can result in IP blocking or account suspension. To avoid being rate-limited, you can use techniques like random delays, rotating proxies, and user agent rotation.

### Overview of GoLogin as a powerful anti-detect browser for web scraping

We have learned about data scraping, its uses, how to use it, and which tools to use. But there is one more tool that you should be familiar with when scraping data off the Internet.

[GoLogin](https://gologin.com/?utm_source=medium.com&utm_medium=article&utm_campaign=Medium+blog) is a privacy browser built for managing multiple accounts that can be used for web scraping with Python. It is designed to help users avoid detection while scraping websites by allowing them to rotate IP addresses, browser fingerprints, user agents, and more.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ztkzv8mdt8v1d2p5w5av.png)

Here are some features of GoLogin that make it a powerful tool for web scraping:

**1. Top Tier Data Protection Technology:** GoLogin uses advanced anti-detect technology to make it difficult for websites to identify the bot-like behavior of web scrapers.

**2. Browser Fingerprinting:** This anonymous browser allows users to create and rotate browser fingerprints, which are unique identifiers that websites use to track users. By rotating fingerprints, users can avoid being detected as scrapers.

**3. IP Rotation:** GoLogin allows users to rotate IP addresses, which helps to avoid detection by websites that track IP addresses.

**4. Multiple Browser Profiles:** GoLogin acts as a multi-account browser that allows users to create and manage multiple browser profiles, each with its own set of browser fingerprints, IP addresses, and user agents.
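For smaller jobs that don't need a full anti-detect browser, the user-agent rotation mentioned earlier can also be sketched by hand; the strings below are shortened placeholders rather than real browser user agents:

```python
import random

# Placeholder user-agent strings -- substitute real, current browser strings.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "Mozilla/5.0 (X11; Linux x86_64) ...",
]

def random_headers():
    """Return request headers with a randomly chosen user agent."""
    return {"User-Agent": random.choice(USER_AGENTS)}

# Usage with requests would be, e.g.:
#   requests.get("https://example.com", headers=random_headers())
```

Pairing this with random delays (`time.sleep(random.uniform(1, 5))` between requests) already goes a long way toward polite, low-profile scraping.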
#### How To Set Up GoLogin and Use Its Proxy Manager and Fingerprint Manager

Let's see how to use its features and functionalities.

**Step 1: Download and install GoLogin**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/per3m15mpvnqje33pnwm.png)

You can download GoLogin from their website and follow the installation instructions.

**Step 2: Create a new profile**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kv4iqjxc8it2eub7la9p.png)

Once you have installed GoLogin, open the application and create a new profile by clicking the "Add profile" button. Give the profile a name and move on to the settings tabs. Stick to GoLogin's recommended fingerprint settings for best results: changing even one option on a whim may affect your browser's anonymity.

**Step 3: Configure proxies**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82ona242v028jhzg4vuu.png)

In the profile settings, click on the "Proxy Manager" tab. Here you can add and manage proxies. Click on "Add" to add a new proxy. All common proxy types are supported. You can add proxies manually, mass-import them from a file, or use GoLogin's built-in proxies. The latter can be bought and topped up in the top right corner (Buy Proxy button).

**Step 4: Use the profile for web scraping with Python**

Once you have configured the fingerprint and proxy managers, you can use the profile for web scraping. Go to the "Profiles" tab and select a profile. Click on the "Launch" button to open a new browser window with the profile. Use this browser window for your web scraping activities.

By configuring the fingerprint and proxies in GoLogin, you can create a highly customized and secure environment for web scraping.
### Automating web scraping tasks using GoLogin’s API GoLogin’s application programming interface (API) allows you to automate web scraping tasks by providing programmatic access to the features of the GoLogin application. With the API, you can automate tasks such as creating and managing profiles, managing proxies and fingerprints, and launching browser windows. For example, maybe you have an online shopping site where you want to scrape the product information daily. To get things started, you’ll need to get an API token, as described [here](https://github.com/gologinapp/pygologin#usage). Next, download GoLogin’s Python wrapper (or simply download directly from GitHub): ``` git clone https://github.com/gologinapp/pygologin.git ``` Your Python script should be within the same directory as the “gologin.py” file. Below, we’ll break down and explain the sample code. **Step 1: Authenticate your API Key** Connect with GoLogin using the API token you got earlier. Once authenticated, you can start to create profiles and launch browser sessions. ``` import time from selenium import webdriver #Installing selenium is explained in Step 4. under How To Set Up a Development Environment for Web Scraping With Python from selenium.webdriver.chrome.options import Options from gologin import GoLogin from gologin import get_random_port # random_port = get_random_port() # uncomment to use random port gl = GoLogin({ "token": "yU0token", #The API token you generated earlier. "profile_id": "yU0Pr0f1leiD", # "port": random_port }) #See Step 3 for continued code. ``` **Step 2: Create a profile** Use the API to create a new profile with specific configurations, such as fingerprints and proxies. Those will make your browser session appear more human-like. For example, you might want to use a specific browser version, operating system, or language for your profile. 
```
from gologin import GoLogin

gl = GoLogin({
    "token": "yU0token",
})

profile_id = gl.create({
    "name": 'profile_windows',
    "os": 'win',  # mac for MacOS, and lin for Linux systems.
    "navigator": {
        "language": 'en-US',
        "userAgent": 'random',  # Chrome, Mozilla, etc (if you don't want to change, leave it at 'random')
        "resolution": '1920x1080',
        "platform": 'Win32',
    },
    'proxyEnabled': True,  # Specify 'false' if you are not using a proxy.
    'proxy': {
        'mode': 'gologin',
        'autoProxyRegion': 'us'
        # 'host': '',
        # 'port': '',
        # 'username': '',
        # 'password': '',
    },
    "webRTC": {
        "mode": "alerted",
        "enabled": True,
    },
})

profile = gl.getProfile(profile_id)
```

**Step 3: Launch a browser window**

This will open up a new browser session with the settings you specified in the profile. You can then interact with the browser window programmatically to scrape data from websites or perform other automated tasks.

```
debugger_address = gl.start()
chrome_options = Options()
chrome_options.add_experimental_option("debuggerAddress", debugger_address)
chrome_driver_path = "/path/to/chromedriver"  # Set this to your local ChromeDriver binary
driver = webdriver.Chrome(executable_path=chrome_driver_path, options=chrome_options)
driver.get("THE URL TO BE SCRAPED")
# Your code here
driver.close()
time.sleep(3)
gl.stop()
```

**Step 4: Scrape the website**

Finally, combine GoLogin's API with standard tools such as BeautifulSoup or Selenium, and you'll have a powerful web scraper at your fingertips! Information like news headlines and product descriptions can be extracted from HTML or XML documents with ease.

## Conclusion

And that concludes our step-by-step guide to web scraping with Python! Now that you've learned to extract data from websites using Python, the web is your playing field. From analyzing competitors' prices to keeping track of social media mentions of your brand, the possibilities for using web scraping in your business or personal projects are endless.

Remember always to respect website owners and their terms of service when scraping data.
Happy scraping, and may the data gods smile upon you! Feel free to try out [GoLogin](https://gologin.com/?utm_source=medium.com&utm_medium=article&utm_campaign=Medium+blog), a powerful and professional privacy tool designed to make your web scraping experience easier and more efficient.
*Written by gologinapp.*

---

*Published 2023-04-06 · https://dev.to/hackmamba/tracking-medication-dosages-made-easy-with-appwrites-database-pde · Tags: appwrite, flutter, softwaredevelopment, hackmamba*
# Tracking medication dosages made easy with Appwrite’s database Tracking medication dosages is essential for checking the compliance and consistency level in the prescription given by medical personnel (doctors, pharmacists, etc.). Likewise, it’s crucial for patients’ recovery from the illness or medical condition it was prescribed to treat. Appwrite’s database makes creating an efficient database system to track medication dosages easy. By the end of this tutorial, we will have created a fully functional database system for tracking medication dosages. Here is the GitHub [repository](https://github.com/muyiwexy/medication_history_tracker) containing all the code. ## Prerequisites This tutorial requires the reader to satisfy the following: - [Xcode](https://developer.apple.com/xcode/) (with developer account for Mac users). - iOS Simulator, Android Studio, or Chrome web browser to run the application. - An Appwrite instance running on either Docker, [DigitalOcean droplet](https://marketplace.digitalocean.com/apps/appwrite), or [Gitpod](https://gitpod.io/#https://github.com/appwrite/integration-for-gitpod). Check out this [article](https://dev.to/hackmamba/create-a-local-appwrite-instance-in-3-steps-19n9) for the setup. ## Setting up the Appwrite project In this section, we’ll set up an Appwrite project and create a platform. Let’s start by doing the following: - Open your browser and enter the `IP address or hostname`. - Choose `Create Project` option and fill in the `project name and ID`. (ID can be automatically generated.) 
![create project](https://paper-attachments.dropboxusercontent.com/s_7604AE644359B8A42B0CA41928E7FA82C1E3181DC0A806189A51D7006C234F5A_1675132209657_Screenshot+258.png)

![create project 2](https://paper-attachments.dropboxusercontent.com/s_7604AE644359B8A42B0CA41928E7FA82C1E3181DC0A806189A51D7006C234F5A_1675132209708_Screenshot+257.png)

- Next, head to the databases section, select the `Create Database` option, and fill in the desired database name and ID.

![create database](https://paper-attachments.dropboxusercontent.com/s_7604AE644359B8A42B0CA41928E7FA82C1E3181DC0A806189A51D7006C234F5A_1675133088225_Screenshot+262.png)

- Next, we will need to create two collections within the database. The first collection will hold the medication data. Thus, we will create eight attributes: four Boolean (`isTrack`, `isInjection`, `isTablet`, `isSyrup`), three strings (`medicationname`, `userID`, `id`), and one integer (`dosages`). This collection will hold the medication information. The second collection will hold the scheduled medications. Thus, we will need four string attributes (`date`, `medicationname`, `userID`, `id`) and a single Boolean attribute (`isTrack`).

![create collection](https://paper-attachments.dropboxusercontent.com/s_7604AE644359B8A42B0CA41928E7FA82C1E3181DC0A806189A51D7006C234F5A_1675133157888_Screenshot+263.png)

- Finally, set the collection-level `CRUD` permission for both collections to `Any` and select all the options.

![collection permission](https://paper-attachments.dropboxusercontent.com/s_7604AE644359B8A42B0CA41928E7FA82C1E3181DC0A806189A51D7006C234F5A_1675133182874_Screenshot+264.png)

![collection permission](https://paper-attachments.dropboxusercontent.com/s_7604AE644359B8A42B0CA41928E7FA82C1E3181DC0A806189A51D7006C234F5A_1675133218819_Screenshot+265.png)

## Getting started

In this section, we will clone a Flutter UI template, connect our Flutter app to Appwrite, and explain the functionality of our project.
**Clone the UI template**

This section uses a UI template containing user registration and login code. Let’s clone the [repository](https://github.com/muyiwexy/medication_history_tracker) specified in the prerequisites. Check out the official GitHub [docs](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) to learn more about cloning a repository.

![clone app](https://paper-attachments.dropboxusercontent.com/s_4790BCE1D9901C61F6383518FE2D7DBEBF0E98C8364362B60F9C8BCE4293E832_1670224372635_7aAkFZ2g.png)

![clone app 2](https://paper-attachments.dropboxusercontent.com/s_4790BCE1D9901C61F6383518FE2D7DBEBF0E98C8364362B60F9C8BCE4293E832_1670224398326_fR9HnIBw.png)

After cloning the UI to local storage, open it in your preferred code editor and run the command below:

```javascript
flutter pub get
```

This command obtains all the dependencies listed in the `pubspec.yaml` file in the current working directory, along with their transitive dependencies. Next, run the command `flutter run`, and our application should look like the gifs below:

![clone result 1](https://paper-attachments.dropboxusercontent.com/s_973FE6248EEDD7529E7A11435E297ADACD4E1B7483EB53A8303EEED9486C6A4E_1677497369211_ezgif.com-video-to-gif+3.gif)

![clone result 2](https://paper-attachments.dropboxusercontent.com/s_973FE6248EEDD7529E7A11435E297ADACD4E1B7483EB53A8303EEED9486C6A4E_1677497337804_ezgif.com-video-to-gif+2.gif)

![clone result 3](https://paper-attachments.dropboxusercontent.com/s_973FE6248EEDD7529E7A11435E297ADACD4E1B7483EB53A8303EEED9486C6A4E_1677497383478_ezgif.com-video-to-gif+1.gif)

The gifs above show the UI and functionality of the medication tracker, which has a section to add medication and a section to track medication (overdue medication and currently tracking). These functionalities were achieved using Appwrite and the `flutter_local_notifications` package.

> Note: This program doesn’t use the Appwrite real-time API.
Thus, check the resources section for references to its use.

The following sections provide a breakdown of how to implement the features shown above. Let’s start by showing how to connect Appwrite to a Flutter application and seed data to the Appwrite database collection.

**Installing and connecting Appwrite to Flutter**

First, install the `appwrite`, `flutter_local_notifications`, and `provider` (for state management) packages into the app. We can add them to the dependencies section of the `pubspec.yaml` file, like the image below:

![pubspec.yaml](https://paper-attachments.dropboxusercontent.com/s_041BDED8DAB440A4B6CCF96EB23FFFC68E40117CE3B40D930FE7EEC37F7160D4_1675028904318_Screenshot+65.png)

Alternatively, we can use a terminal by typing the commands below:

```javascript
flutter pub add appwrite
# and
flutter pub add provider
# and
flutter pub add flutter_local_notifications
```

Here’s how to connect a Flutter project to Appwrite for Android and iOS devices.

**iOS**

First, obtain the `bundle ID` by navigating to the `project.pbxproj` file (`ios > Runner.xcodeproj > project.pbxproj`) and searching for the `PRODUCT_BUNDLE_IDENTIFIER`. Next, head to the `Runner.xcworkspace` folder in the application’s iOS folder in the project directory on [Xcode](https://developer.apple.com/xcode/). To select the runner target, choose the **Runner** project in the Xcode project navigator and find the `Runner` target. Then, under **General**, set iOS 11.0 as the target in the Deployment Info section.

![ios](https://paper-attachments.dropboxusercontent.com/s_973FE6248EEDD7529E7A11435E297ADACD4E1B7483EB53A8303EEED9486C6A4E_1677590542497_file.jpeg)

**Android**

For Android, copy the XML script below and paste it below the activity tag in the `AndroidManifest.xml` file (to find this file, head to `android > app > src > main`).
{% embed https://gist.github.com/muyiwexy/f1a0f6c59329d25d37c353f34a4fa9aa %}

> Note: change [PROJECT-ID] to the ID you used when creating the Appwrite project.

We will also need to set up a platform within the Appwrite console. Follow the steps below to do so.

- Within the Appwrite console, select `Create Platform` and choose `Flutter` for the platform type.
- Specify the operating system: in this case, Android.
- Finally, provide the application and package names (found in the app-level `build.gradle` file).

![create platform 1](https://paper-attachments.dropboxusercontent.com/s_7604AE644359B8A42B0CA41928E7FA82C1E3181DC0A806189A51D7006C234F5A_1675132261190_Screenshot+259.png)

![create platform 2](https://paper-attachments.dropboxusercontent.com/s_7604AE644359B8A42B0CA41928E7FA82C1E3181DC0A806189A51D7006C234F5A_1675132261213_Screenshot+260.png)

![create platform 3](https://paper-attachments.dropboxusercontent.com/s_7604AE644359B8A42B0CA41928E7FA82C1E3181DC0A806189A51D7006C234F5A_1675132261236_Screenshot+255.png)

**Creating the providers**

Since we will use the provider package for state management, here are the `ChangeNotifier` classes to be utilized:

1. **UserRegProvider**

{% embed https://gist.github.com/muyiwexy/e04941c12fada2b445ab3e7771262dad %}

2. **AppProvider class**

{% embed https://gist.github.com/muyiwexy/b3f79d1b111af77abcfb072da017f265 %}

3. **ReminderProvider**

{% embed https://gist.github.com/muyiwexy/d8094e2a2eccc123d23822d501c8d00b %}

{% embed https://gist.github.com/muyiwexy/3595a09e953abe9ffac3a0878d7a2cf8 %}

We used queries for the `userID` in the lists in case there is a need to add a super admin who will have access to all the items in the list. Also, we queried the `isTrack` attribute in the `ReminderProvider` to separate out the currently tracked items, so they are not re-scheduled when the program runs again.
Lastly, using JSON serialization, we can map the list of documents from Appwrite to a class and get our items in a list. Thus, we have created a model class for each `ChangeNotifier` class.

Next, let’s head to the `app_constants.dart` file to store important constants within a class, such as the `projectID`, `database`, `collection ID`, and so on.

```javascript
class Appconstants {
  static const String projectid = "<projectID>";
  static const String endpoint = "<Hostname or IP address>";
  static const String dbID = "<database ID>";
  static const String collectionID = "<collection ID 1>";
  static const String collectionID2 = "<Collection ID 2>";
}
```

**Adding medication details**

This project uses three notifier classes (`UserRegProvider`, `AppProvider`, and `ReminderProvider`) to map out the user registration, medication document, and reminder document objects. Thus, declare the `MultiProvider` class inside the `runApp` function of the `main.dart` file. This `MultiProvider` class will assist us in building a tree of providers like this:

```javascript
void main() {
  WidgetsFlutterBinding.ensureInitialized();
  runApp(MultiProvider(
    providers: [
      ChangeNotifierProvider(
        create: (context) => UserRegProvider(),
        lazy: false,
      ),
      ChangeNotifierProvider(
        create: (context) => AppProvider(),
        lazy: false,
      ),
      ChangeNotifierProvider(
        create: (context) => ReminderProvider(),
        lazy: false,
      ),
    ],
    child: const MyApp(),
  ));
}
```

This allows us to use methods, setters, and getters from the provider classes around our project via the `Consumer` widget or by calling the provider class.

First, use the `routerclass.dart` to create navigation routes between the `Add Medication Information` and `Track Medication Information` pages. In the `Add Medication Information` page (`register_medication.dart` file), we have a stateful widget in which the body property of the `Scaffold` widget takes a condition.
This condition checks, via the consumer, whether `state1` (`UserRegProvider`) is true: if so, it shows a `circularloader` widget; if false, it shows the `appcontainer` widget. This condition prevents the UI from loading before the provider. If that happens, we get a null error, as some UI properties depend on the provider.

The `appcontainer` widget contains a column with the logged-in username (retrieved from the `UserRegProvider` class) and a form consisting of a string text field, an integer text field, checkboxes, and a submit button. With this form, we can add the medication information to the database collection created earlier; its implementation is in the [medicine_details.dart](https://gist.github.com/muyiwexy/5d6b638092d6472d39423f807ca09010) file.

**Tracking the medication**

To track the medication, we will attach a Boolean value to each tracked medication in the Appwrite database collection. We will then use the `flutter_local_notifications` package to schedule a notification for each tracked medication.

In the `Track Medication Information` page (`track_medication.dart` file), we have a column with a button and a list area (divided into a row of Overdue and Currently tracking). When we click the button, it shows an `alertdialog` containing a checklist of all the medications. This checklist allows checking only one item at a time, and once we check an item, it shows a date form field to schedule the medication. If a date or time before the current date is added and we click done, the item is added to the overdue section. However, if the date is after the current date, it runs the notification function.
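The overdue/scheduled split described above can be sketched in plain JavaScript, just to illustrate the logic (the tutorial itself implements this in Flutter/Dart, and the names here are illustrative, not the tutorial's actual identifiers):

```javascript
// Hypothetical sketch of the date comparison above: a reminder whose date is
// at or before "now" goes to the Overdue section; a future date proceeds to
// the notification-scheduling path instead.
function classifyReminder(reminder, now = new Date()) {
  const scheduled = new Date(reminder.date);
  return scheduled <= now ? "overdue" : "scheduled";
}

console.log(classifyReminder({ date: "2000-01-01T00:00:00Z" })); // → "overdue"
```

A reminder classified as `scheduled` is the case that goes on to the notification function.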
{% embed https://gist.github.com/muyiwexy/c91a379b7bc54f85e124ca9401071032 %} The done button triggers the `_notificationService.scheduleNotifications();` while passing `context, widget.reminderstate.reminderitem!` as parameters. These parameters will assist in building the notification function. Here is the code for the notification function: {% embed https://gist.github.com/muyiwexy/e73b2c8735239b76de12f6ee464599b2 %} The code above defines a `NotificationService` class that handles scheduling and canceling local notifications using `FlutterLocalNotificationsPlugin`. The `NotificationService` class follows the singleton pattern, ensuring that only one class instance can exist simultaneously. Next, we create the `init()` method to initialize the `FlutterLocalNotificationsPlugin`, which is required before scheduling any notifications. The `scheduleNotifications()` method schedules notifications for a list of `reminders` (the parameter passed earlier). It converts the scheduled time of each reminder to the local time zone and schedules a notification using `flutterLocalNotificationsPlugin.zonedSchedule()`. Lastly, the `cancelNotifications()` method cancels a previously scheduled notification using `flutterLocalNotificationsPlugin.cancel()`. So, to use this code properly, we will need to call the `init()` method in the `void main` function, like so: ```javascript void main() { WidgetsFlutterBinding.ensureInitialized(); NotificationService().init(); // add runapp method } ``` The earlier results were on a web platform, and `flutter_local_notifications` does not support the web yet. So here, for the moment of truth: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zveh0qa2rae529llgs3o.gif) ## Conclusion Taking medications isn’t the most pleasant task of the day, but failing to do so can pose a risk to a patient and cause problems for their caregivers or healthcare providers. 
Thus, a scheduler can come in handy, as we can now set a future time and receive a notification when it is time. The Appwrite database assists in sorting out those lists, and with a package like the `flutter_local_notifications`, we are guaranteed to follow our medication schedule. Check out these additional resources that might be helpful: - [Appwrite official documentation.](https://appwrite.io/docs?utm_source=hackmamba&utm_medium=hackmamba-blog) - [Appwrite Flutter SDK.](https://appwrite.io/docs/getting-started-for-flutter?utm_source=hackmamba&utm_medium=hackmamba-blog) - [Appwrite Real-Time technology](https://appwrite.io/docs/realtime?utm_source=hackmamba&utm_medium=hackmamba-blog)
muyiwexy
1,428,718
what are you working on?
A post by Felexonyango
0
2023-04-06T21:20:50
https://dev.to/felexonyango/what-are-you-working-on-580j
felexonyango
1,428,922
Losing Control
As a new developer, every single day is information overload. I am convinced that the pounding I...
0
2023-04-07T22:39:29
https://dev.to/hillswor/losing-control-5d8o
github, javascript, beginners, webdev
As a new developer, every single day is information overload. I am convinced that the pounding I feel in my head after encountering another challenging concept is actually my brain morphing, evolving and rearranging itself to brace for... you guessed it! More information. Objects, arrays, pure functions, context, THIS! These are just some of the concepts that loop through the brain of an aspiring developer and I am no exception. Yet, as I sit here, having completed my first collaborative frontend project, version control is what I am ruminating over. This is by no means a complete rundown of version control, rather a synopsis of what I learned working with others for the first time, with an emphasis on version control.

## The Project

The goal of the project was to build a single page app that fetches data from an external API and utilizes JavaScript to manipulate the DOM and provide data without a page refresh. One of the members of my group is a bike enthusiast and put us on to a website called bikeindex.org that maintains an API. The API allows people to search the database for stolen bikes, report their bike as stolen, and register a newly purchased bike. After some brainstorming, we decided to build a page that, upon load, would pull the user's location and then display bikes that have been stolen locally.

## What We Did Right

This was the first collaborative project for all three of us, so we definitely ran into hurdles, but we also did some things correctly. The first thing we did right was taking the time to brainstorm and dial in what exactly we wanted to display. We determined that we wanted to populate the bike data on a reusable card that would allow us to use a render function. Likewise, we also decided we would need to get the geolocation of the user and would populate the page on load with local stolen bikes. Deciding on these base level items gave us a path from start to finish and ensured that none of us strayed too far from the original concept.
Within a short period of time we developed a function to access the user location using Abstract API and combined it with an initialize function that fetched stolen bike data from the bikeindex.org API and rendered bikes on individual cards using a render function. _Geolocation Function Using XML:_ ``` function httpGetAsync(url, callback) { const xmlHttp = new XMLHttpRequest(); xmlHttp.onreadystatechange = function () { if (xmlHttp.readyState === 4) { if (xmlHttp.status === 200) { callback(xmlHttp.responseText); } else { callback(null, new Error("Request failed")); } } }; xmlHttp.open("GET", url, true); xmlHttp.send(null); } ``` _Initialize Function:_ ``` function initialize(response) { if (response === null) { fetch( `https://bikeindex.org:443/api/v3/search?page=1&per_page=25&location=ip&distance=${distance}&stolenness=proximity` ) .then((response) => response.json()) .then((stolenBikes) => { bikes = stolenBikes.bikes.filter( (x) => x.large_img && x.title && x.description ); if (!bikes.length) { alert("Try extending the search radius"); } bikes.forEach((bike) => renderDisplayCardsOnPageLoad(bike)); }); } else { locationObject = JSON.parse(response); zipCode = locationObject.postal_code; fetch( `https://bikeindex.org:443/api/v3/search?page=1&per_page=100&query=image&location=${zipCode}&distance=${distance}&stolenness=proximity` ) .then((response) => response.json()) .then((stolenBikes) => { console.log(stolenBikes); bikes = stolenBikes.bikes.filter( (x) => x.large_img && x.title && x.description ); if (!bikes.length) { alert("Try extending the search radius"); } bikes.forEach((bike) => renderDisplayCardsOnPageLoad(bike)); }); } } ``` _Render Function:_ ``` function renderDisplayCardsOnPageLoad(bike) { let imageOpacity = true; const stolenLocation = document.createElement("p"); stolenLocation.setAttribute("id", "bike-name"); stolenLocation.textContent = getCityAndState(bike); const card = document.createElement("div"); card.setAttribute("class", "card"); const img = 
document.createElement("img");
  img.setAttribute("src", bike.large_img);
  const location = document.createElement("p");
  location.textContent = getCityAndState(bike);
  img.addEventListener("click", (e) => {
    imageOpacity = !imageOpacity;
    e.preventDefault();
    if (!imageOpacity) {
      const reportButton = document.createElement("button");
      reportButton.innerText = "REPORT SIGHTING";
      reportButton.setAttribute("id", "report");
      const description = document.createElement("p");
      description.textContent = bike.title;
      const serialNumber = document.createElement("p");
      const serialNumberString = `Serial Number: ${bike.serial}`;
      serialNumber.textContent = serialNumberString;
      const dateStolen = document.createElement("p");
      dateStolen.textContent = getDateStolen(bike);
      const location = document.createElement("p");
      location.textContent = getCityAndState(bike);
      e.target.style.opacity = 0.15;
      imageOpacity = false;
      card.appendChild(reportButton);
      card.appendChild(serialNumber);
      card.appendChild(dateStolen);
      card.appendChild(description);
      reportButton.addEventListener("click", (e) => {
        card.innerHTML = "";
        const returnButton = document.createElement("button");
        returnButton.innerText = "RETURN";
        returnButton.className = "back-btn";
        const reportFormSubmit = document.createElement("button");
        reportFormSubmit.setAttribute("type", "submit");
        reportFormSubmit.setAttribute("value", "submit");
        reportFormSubmit.innerText = "SUBMIT";
        reportFormSubmit.className = "btn";
        const reportForm = document.createElement("form");
        reportForm.id = "report-form";
        const reportFormLocation = document.createElement("input");
        reportFormLocation.type = "text";
        reportFormLocation.className = "field";
        reportFormLocation.id = "report_form_location";
        reportFormLocation.placeholder = " ENTER SIGHTING LOCATION";
        const reportFormComments = document.createElement("input");
        reportFormComments.type = "text";
        reportFormComments.className = "field";
        reportFormComments.id = "report_form_comments";
        reportFormComments.className =
          "field";
        reportFormComments.placeholder = " ADDITIONAL COMMENTS";
        const reportFormName = document.createElement("input");
        reportFormName.type = "text";
        reportFormName.id = "report_form_name";
        reportFormName.className = "field";
        reportFormName.placeholder = " NAME (optional)";
        const bikeDetails = document.createElement("button");
        bikeDetails.textContent = "BIKE DETAILS";
        bikeDetails.className = "btn";
        const location = document.createElement("p");
        location.textContent = getCityAndState(bike);
        location.className = "location";
        card.appendChild(reportForm);
        reportForm.appendChild(reportFormLocation);
        reportForm.appendChild(reportFormComments);
        reportForm.appendChild(reportFormName);
        reportForm.appendChild(reportFormSubmit);
        reportForm.appendChild(returnButton);
        returnButton.addEventListener("click", (e) => {
          card.innerHTML = "";
          img.style.opacity = 1;
          card.appendChild(img);
          card.appendChild(stolenLocation);
        });
        reportForm.addEventListener("submit", (e) => {
          e.preventDefault();
          const fll = report_form_location.value;
          const flc = report_form_comments.value;
          const fln = report_form_name.value;
          card.innerHTML = "";
          img.style.opacity = 1;
          card.appendChild(img);
          card.appendChild(stolenLocation);
          createSightingObj(bike, fll, flc, fln);
        });
      });
    } else {
      card.innerHTML = "";
      e.target.style.opacity = 1;
      card.appendChild(img);
      card.appendChild(stolenLocation);
      imageOpacity = true;
    }
  });
  card.appendChild(img);
  card.appendChild(stolenLocation);
  gallery.appendChild(card);
}
```

## Where We Lost Control

Feeling pretty good about our progress, we opted to have some fun: each of us opened our own branch and played with the code as we saw fit. This is where our first invaluable lesson on version control came into play. Following are the tips I learned and would advise any new developer to follow, unless they want to handle nightmare merges down the road...

**Split Up The Work**

As new developers, we all wanted to be very involved in every aspect of the app.
All the code above is what the final code wound up being, but each of us, on our own time, made our own render and initialize functions. While similar in technique, they were also very different. We also had a tendency to make sweeping changes instead of small ones. This became particularly burdensome when we would go to merge files, which leads me to my next tip...

**Commit Small and Commit Often**

This should be self-explanatory, but all the members of my group strayed away from this concept too frequently. The end result was less time spent improving our app and more time spent in nightmare merge sessions, trying to get all of our code to work together.

**Single Intent**

This coincides with small commits, but even small commits can influence multiple areas of an app. As stated above, when we played with the code, we each made huge changes to multiple areas (JS, HTML, CSS) prior to committing. By keeping each commit to a single intent, reverting a change becomes far easier.

**Branch Away**

A technique that I feel helped me, too late of course, was to have a separate branch for each area that I was working on. For instance, a branch just to handle the JavaScript file, a branch for the CSS, and a branch for the HTML. I found it incredibly difficult to merge commits that had dabbled in multiple spaces.
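Put together, the tips above look roughly like this on the command line (a hypothetical throwaway repo with illustrative file names; assumes git is installed):

```shell
# Demo of the workflow above: one branch per concern, small single-intent
# commits, then a clean merge back to main.
cd "$(mktemp -d)"
git init -q && git checkout -q -b main
git config user.email "dev@example.com" && git config user.name "dev"

echo "<html></html>" > index.html
git add index.html
git commit -qm "Add page skeleton"        # small, single-intent commit

git checkout -q -b css-card-layout        # branch scoped to the CSS only
echo ".card { border: 1px solid #ccc; }" > styles.css
git add styles.css
git commit -qm "Style bike cards"         # touches one file, one concern

git checkout -q main
git merge -q css-card-layout              # no overlapping edits, no conflict
git log --oneline
```

Because each branch only touches one area and each commit has one intent, the merge back to `main` is trivial, and a bad commit can be reverted without dragging unrelated changes with it.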
hillswor
1,428,966
New to Bootstrap? Don't Panic! Get Started Now !
Bootstrap is a widely used open-source front-end development framework that allows developers to...
0
2023-04-07T04:47:54
https://dev.to/gaius_001/new-to-bootstrap-lets-learn-together--i41
webdev, bootstrap, css, html
Bootstrap is a widely used open-source front-end development framework that allows developers to create responsive and mobile-first websites quickly. Bootstrap simplifies the process of designing a website by providing pre-built UI components, responsive grids, and JavaScript plugins. In this technical write-up, we will go through the process of getting started with Bootstrap, from downloading the framework to building a simple website. **Step 1: Download Bootstrap** The first step in getting started with Bootstrap is to download the framework from the official website (https://getbootstrap.com/). Bootstrap can be downloaded in two forms: the compiled CSS and JS files or the source code. The compiled files are the ones that are ready to be used and do not require any compilation. They are best suited for those who want to get started with Bootstrap quickly and don't need to modify the source code. The source code, on the other hand, requires compilation before it can be used. This option is best suited for those who want to modify the Bootstrap code or add their custom code to it. **Step 2: Include Bootstrap in your Project** After downloading Bootstrap, the next step is to include it in your project. You can do this by either linking to the compiled files in your HTML document or by importing the source code into your project. To link to the compiled files, you need to add the following code to the head section of your HTML document: ``` <link rel="stylesheet" href="path/to/bootstrap.min.css"> <script src="path/to/bootstrap.min.js"></script> ``` If you downloaded the source code, you can import it into your project using a module bundler like Webpack or Parcel. Once imported, you can use the components and classes provided by Bootstrap in your HTML, CSS, and JavaScript code. **Step 3: Understanding the Grid System** Bootstrap's grid system is one of its most powerful features. The grid system provides a responsive and flexible layout for your website. 
It consists of a 12-column grid that can be used to create layouts of different sizes. To use the grid system, you need to create a container element with the class "container" or "container-fluid". The "container" class creates a fixed-width container, while the "container-fluid" class creates a full-width container. Inside the container element, you can create rows using the "row" class. Each row can contain up to 12 columns. To create columns, use the "col-" prefix followed by a number that represents the number of columns you want the element to span. For example, to create a row with two equal columns, you can use the following code: ``` <div class="container"> <div class="row"> <div class="col-6">Column 1</div> <div class="col-6">Column 2</div> </div> </div> ``` This will create a row with two columns, each spanning six of the twelve grid columns. **Step 4: Using Bootstrap Components** Bootstrap provides a wide range of UI components that can be used to create responsive and mobile-first websites. These components include buttons, forms, alerts, modals, and more. To use a component, you need to add its corresponding HTML code to your document. For example, to create a button, you can use the following code: ``` <button class="btn btn-primary">Click me</button> ``` This will create a button with the primary color. **Step 5: Customizing Bootstrap** Bootstrap provides a wide range of customization options that allow you to modify the look and feel of your website. These options include variables, mixins, and utilities. Variables are used to define colors, font sizes, and other design-related properties. Mixins are reusable blocks of CSS that can be included in other style rules, while utilities are small helper classes you can apply directly in your markup.
gaius_001
1,428,991
Content.Mozfire.in: The AI-Powered Platform for Effortless Content Creation
Introduction: In today's fast-paced digital world, content creation has become more important than...
0
2023-04-07T05:53:28
https://dev.to/darshiethixxx/contentmozfirein-the-ai-powered-platform-for-effortless-content-creation-1ofl
Introduction: In today's fast-paced digital world, content creation has become more important than ever. With the increase in online businesses, blogs, and social media platforms, the demand for high-quality content has skyrocketed. However, creating compelling and engaging content can be time-consuming and challenging, especially for those without experience. This is where Content.Mozfire.in comes in - a game-changing platform that utilizes AI to simplify and streamline the content creation process. Overview of Content.Mozfire.in: Content.Mozfire.in is an AI-based platform that offers over 50 tools for generating different types of content for websites, blogs, social media, and marketing. Its tools leverage natural language processing and machine learning algorithms to create content that is 100% real. The website offers a seamless experience for content creation, providing a wide range of tools for various content creation purposes. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/flo37tasweljue7103b8.png) Tools Offered by Content.Mozfire.in: The platform offers a variety of tools for generating content for different purposes. For instance, the article generator tool can generate articles based on the title, keywords, and subheading. The blog intro, outro, listicle, outline, paragraph, post, section, and tags tools can help generate various parts of a blog post, including the introduction, conclusion, sections, and tags. The website also offers marketing tools, such as ad descriptions, Facebook and Google advertisements, job descriptions, mission and vision statements, product sheets, press releases, and value propositions. These tools help marketers to create compelling content that can attract and retain customers. The social media tools offered by the website can help generate hashtags, social media posts, captions, tweets, Twitter threads, video descriptions, scripts, tags, and titles. 
These tools can help individuals and businesses to create engaging content that can increase social media engagement and reach. Moreover, the website provides a custom tool, called Freestyle, which allows users to generate any type of content they desire. This tool offers unlimited possibilities and can help users unleash their imagination. Additional Features: In addition to content creation, Content.Mozfire.in also offers tools to improve the quality of existing content. The Content Grammar tool corrects grammatical errors in any text in seconds, while the Content Rewrite tool helps users to rewrite any type of content in an enhanced way. The Content Summary tool can summarize any content in seconds, which is particularly useful for long-form content. Conclusion: Content.Mozfire.in offers a comprehensive suite of AI-powered tools that can help businesses and individuals to generate high-quality content quickly and easily. The platform's tools can assist with various content creation tasks, including writing blog posts, creating marketing materials, and generating social media content. Whether you are a marketer, content creator, or just need assistance generating content, Content.Mozfire.in has something for everyone. With its user-friendly interface and powerful AI algorithms, this platform is a game-changer for effortless content creation.
darshiethixxx
1,429,074
Six Month AI Pause
Brief video written by GPT4 about the Open Letter promoting a pause on AI training of models larger than GPT4.
0
2023-04-07T07:34:31
https://dev.to/cheetah100/six-month-ai-pause-509h
gpt4, ai
---
title: Six Month AI Pause
published: true
description: Brief video written by GPT4 about the Open Letter promoting a pause on AI training of models larger than GPT4.
tags: gpt4, ai
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vecdlwnyyqcg93nai65t.png
# Use a ratio of 100:42 for best results.
# published_at: 2023-02-19 00:52 +0000
---

Recently there was a [call by thousands of Artificial Intelligence researchers](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) to pause training of models more advanced than GPT4. We are getting to the point where what we are seeing is close to general intelligence, and there are very real risks.

{% embed https://www.youtube.com/watch?v=Phm1YcZHzMU %}

Critics of the open letter have come in a few varieties. Some have claimed that the letter is scaremongering, that we really have no evidence of the risk, and that a pause may undermine progress and the manifest benefits to humanity. Others agree with the assessment of the risk itself, but don't believe the open letter will bring about a real moratorium. Even if there were a moratorium, would it be long enough or effective in the long term? Or would we need a more severe ban on AI? What is certain is that there is no certainty or agreement, even among experts in the field.

_The video dialog is entirely generated by GPT4_
cheetah100
1,429,084
Turkish Airlines New York Office
This air administrator's workplaces constantly work to offer ideal support to their clients. The...
0
2023-04-07T08:11:39
https://dev.to/corporateairlinesoffices/turkish-airlines-new-york-office-422p
This airline's offices work continuously to offer timely support to their customers. The [Turkish Airlines New York Office](https://corporateairlinesoffices.com/turkish-airlines/turkish-airlines-new-york-airport-office-in-usa/) has recruited an efficient staff to manage its services and engage with customers. People visit the office to book tickets, cancel a trip, request a refund, or enquire about delayed flights. Customers may also come with questions regarding travel, visas, duty-free goods, lost items, the pet travel policy, and so on. They can walk into the office with their issues or contact the airline on the official number. The officials work diligently to ensure satisfaction for every customer.
corporateairlinesoffices
1,429,175
Ranger: JS Range Syntax for Anything
Following on from my little experiment with a basic number range syntax in JS, I decided (as with...
0
2023-04-08T06:12:17
https://dev.to/jonrandy/ranger-js-range-syntax-for-anything-4djc
javascript, syntax, showdev, programming
Following on from my little [experiment](https://dev.to/jonrandy/js-magic-making-a-range-syntax-55im) with a basic number range syntax in JS, I decided (as with [Metho](https://dev.to/jonrandy/introducing-metho-safely-adding-superpowers-to-js-1lj) and [Turboprop](https://dev.to/jonrandy/turboprop-js-arrays-as-property-accessors-52h)) to generalise the idea and make it into a library that others might find useful (or at least interesting)...

[Ranger](https://github.com/jonrandy/js-ranger) is a small JS library that allows you to use a range-like syntax with **any** object. All you need to do is define a function that builds the required 'range' given a starting and ending object (+ optional extra parameters if you so desire). The 'range' syntax is as follows:

`rangeStart[[...rangeEnd`*`, optionalParam1, optionalParam2...`*`]]`

So, for example, if you created a range function for `Number`s - you could then use it like this:

```javascript
// create a range of numbers from 1-10
const numbers = 1[[...10]]

// log the numbers from 6-3
6[[...3]].forEach(x => console.log(x))
```

## How to Use

Usage is extremely simple - just import the library and use it to set up the range function on your required object (usually this would be a prototype):

```js
import * as ranger from '@jonrandy/js-ranger'

const myRangeFunction = (start, end) => {
  // logic to return 'range' here
}

ranger.attach(myObject, myRangeFunction)
```

If you pass in optional parameters according to the syntax detailed at the start of this article, they will simply be passed as additional arguments to your range function. Your 'range' making function can return anything - it doesn't have to be an array.

## Number Ranges

Also exported by the library as an example of usage is a function called `initNumberRangeSyntax` that sets up a basic range syntax on the `Number` prototype - that does pretty much what you would expect.
It can also take an additional `stepSize` parameter that defaults to `1` and decides the (absolute) size of the steps between items in the range:

```js
import * as ranger from '@jonrandy/js-ranger'

ranger.initNumberRangeSyntax()

console.log(1[[...3]]) // [1, 2, 3]
console.log(5[[...2]]) // [5, 4, 3, 2]
console.log(0[[...3, 0.75]]) // [0, 0.75, 1.5, 2.25, 3]
console.log(2[[...0, 0.5]]) // [2, 1.5, 1, 0.5, 0]
```

And the source for `initNumberRangeSyntax()`:

```js
export function initNumberRangeSyntax() {
  attach(Number.prototype, (start, end, stepSize = 1) => {
    const absStep = stepSize<0 ? Math.abs(stepSize) : stepSize
    const step = start<=end ? absStep : -absStep
    let arr = [], i, d = end > start
    for (i=+start; d ? i<=end : i>=end; i+=step) arr.push(i)
    return arr
  })
}
```

**UPDATE**: It has been pointed out that returning an iterator would probably be more efficient and allow for dealing with infinite ranges. That is just as possible with Ranger - simply return a generator function:

```js
// Number ranges as iterators
attach(Number.prototype, function* (start, end, stepSize = 1) {
  const absStep = stepSize<0 ? Math.abs(stepSize) : stepSize
  const step = start<=end ? absStep : -absStep
  let i, d = end > start
  for (i=+start; d ? i<=end : i>=end; i+=step) yield i
})
```

## Possible Usages

This was written as a general purpose tool that could have any number of potential uses. Some random ideas:

```js
const myDateRange = date1[[...date2]]
const myRoute = location1[[...location2, {via: location3}]]
const myLine = point1[[...point2]]
const translator = language1[[...language2]] // could return a function that takes strings in one language and translates to another
```

{% embed https://github.com/jonrandy/js-ranger %}
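For the curious: part of why `1[[...10]]` is even parseable is that the inner `[...10]` is ordinary spread syntax applied to a number, which becomes legal once `Number` is made iterable. A standalone sketch of that half of the trick (an illustration only - not Ranger's actual internals):

```javascript
// Make numbers iterable so spread syntax works on them:
// spreading boxes the primitive, finds this iterator, and
// collects the yielded values into an array.
Number.prototype[Symbol.iterator] = function* () {
  for (let i = 1; i <= this; i++) yield i
}

console.log([...4]) // [1, 2, 3, 4]
```

The outer `[ ... ]` is then just a computed property access, which is where the library's attached range function gets the chance to run.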
jonrandy
1,429,193
Why choose letsremotify?
As a software engineer who has worked remotely for many years, I have used various platforms to find...
0
2023-04-07T10:20:17
https://dev.to/muznabatool/why-choose-letsremotify-40lc
As a software engineer who has worked remotely for many years, I have used various platforms to find remote job opportunities. One of the most popular platforms is [letsremotify](https://letsremotify.com/), and I can confidently say that [letsremotify](https://letsremotify.com/) is the better choice for software engineers looking for remote jobs. Here's why:

**1. Focus on Quality over Quantity:** [letsremotify](https://letsremotify.com/) has a variety of projects available from top US companies. As a software engineer, I have found projects ranging from small coding tasks to larger projects through it. The platform also offers opportunities to work on projects in different programming languages, which has allowed me to expand my skills. On the other hand, filtering out low-quality job opportunities is comparatively difficult on other remote platforms.

**2. Better User Experience:** [letsremotify](https://letsremotify.com/) has a more user-friendly interface that makes it easy to navigate and find job listings that match your skills and experience. The platform also offers a variety of filtering options that can help you narrow down your search to find the most relevant job listings. Other remote work platforms, on the other hand, can feel cluttered and overwhelming, which can make it difficult to find the right job opportunities.

**3. Talent Success Support:** [letsremotify](https://letsremotify.com/) offers personalized support to job seekers, which can be incredibly helpful when you are looking for a remote job. Our talent success support is guaranteed even during projects, which might not be the case for other similar platforms.

Overall, [letsremotify](https://letsremotify.com/) is the better choice for software engineers looking for remote job opportunities. With a focus on quality over quantity, a user-friendly interface, and personalized support, [letsremotify](https://letsremotify.com/) is the clear winner in my opinion.
muznabatool
1,429,198
Should I master Python or JavaScript?
Deciding whether to master Python or JavaScript depends on your goals and the type of programming you...
0
2023-04-07T10:25:29
https://dev.to/fredric33761248/should-i-master-python-or-javascript-p2a
Deciding whether to master Python or JavaScript depends on your goals and the type of programming you want to do.

Python is a versatile language that is widely used in data analysis, scientific computing, and machine learning. It is also used in web development, server-side programming, and automation tasks. On the other hand, JavaScript is primarily used for web development, both on the client-side and server-side, and has become increasingly popular in the field of front-end development with the advent of popular frameworks like React and Angular.

If you are interested in data analysis or machine learning, mastering Python is essential. Similarly, if you are looking to get into server-side programming or automation, Python is a great choice. However, if your interest lies in web development and front-end technologies, JavaScript is the way to go.

Ultimately, the choice between the two depends on your career aspirations and the type of projects you want to work on. It's worth noting that both languages have a large and supportive community, and there are plenty of resources available to help you learn either one.
fredric33761248
1,429,229
Learning Software Architecture? Start With Why!
61,698,933! That's the number of views (at the time of writing) Simon Sinek's TED talk on 'How great...
0
2023-04-07T11:29:22
https://jameseastham.co.uk/post/software-design/learning-software-architecture/
architecture, design, designpatterns, aws
61,698,933! That's the number of views (at the time of writing) [Simon Sinek's TED talk on 'How great leaders inspire action'](https://www.ted.com/talks/simon_sinek_how_great_leaders_inspire_action/no-comments) has.

A fundamental idea from the talk is the golden circle, and how we default to approaching things in the complete opposite way.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5tal6p73a5c7d0opvu7i.png)

Many of us approach things with an outside-in approach. We start with what, take steps to work out how, and rarely worry too much about the why. This is a flawed approach.

Starting with why, at least in business, explains your purpose and why you are doing the thing you are doing. People buy your why, not your what. Yet many businesses start with the what, and completely lose the sense of purpose. Your why drives the relationship with your customers.

You might wonder how this short piece of business advice relates to how we teach software. But let's take this same golden circle, and apply it to how many of us try to learn about software architecture.

People interested in software architecture often ask me which AWS service to learn first. Or which certification to take. Do I learn about serverless or containers? Event driven architecture or microservices? These areas are inevitably ones you will need to learn about at some point, but they are **unlikely** to be the best place to start. You are starting with **what**! Instead, start with **why**.

## Starting with Why

Build your knowledge from [first principles](https://fs.blog/first-principles/). Technology iterates on itself quickly, and every iteration takes a problem seen previously and makes it better. These iterations stack on top of each other to give us the technology choices we have today.

On-Premise servers -> Elastic Cloud Servers -> Containers -> Container Orchestrators -> Serverless

The most famous saying in software architecture is **it depends**.

Taking a specific, technology-driven approach to learning software architecture limits your 'it depends' to the specific niche of technology you are learning. If you have a hammer, everything looks like a nail. If you have a [Lambda](https://aws.amazon.com/lambda/) function, everything looks like it's event driven, instead of asking why exactly you are trying to hang this thing from the wall in the first place.

Let's take databases as an example. [DynamoDB](https://aws.amazon.com/dynamodb/) is a popular, fast and flexible NoSQL database service provided by AWS. It's actually my second favourite AWS service (just beaten by [Step Functions](https://aws.amazon.com/step-functions/) if you're interested).

DynamoDB is largely schemaless. The schema of the items you write is determined at runtime. What this means is that you can store almost anything in DynamoDB, in any format. It's an incredibly flexible database that could probably fit almost any application use case. That flexibility is one of DynamoDB's biggest strengths, and also its greatest downfall if misused.

If you spend all of your time learning about the specific APIs of DynamoDB - you understand single-table design, global secondary indexes, sparse indexes, read and write capacity units - but never understand the why... Well, then there is every possibility you may choose to use DynamoDB in a scenario where it doesn't actually fit.

Traditional relational databases still have a perfectly valid use case in modern technology. As do graph and time series databases. There is a reason AWS (currently) has [15 different database services](https://aws.amazon.com/products/databases/). Yet if you are stuck in your little DynamoDB bubble you never even look up to see what else is out there.

I'm not suggesting everyone needs to go deep on every single different type of database, but understanding the history of why we got to where we are is important.

[Designing Data-Intensive Applications by Martin Kleppmann](https://www.amazon.co.uk/Designing-Data-Intensive-Applications-Reliable-Maintainable/dp/1449373321) is an absolute goldmine for this content. Even reading the first 3 chapters of what is an admittedly dense technical book would give you a **huge** architectural advantage, because you understand enough to make an informed decision.

## A Generalist Growth Mindset

In the excellent book [Range](https://www.amazon.co.uk/Range-Key-Success-Performance-Education/dp/1509843523) David Epstein discusses how being a generalist is a fantastic skill to have. The cross-pollination of different ideas, from different domains, leads to better decision making and more creative thinking.

Having a general idea of the use cases for the different database options, and a set of first principles that guide your decision making, forces you to start from the problem you are solving and work outwards to **what** database you are going to use.

Suddenly, the decision between AWS's 15 different database services becomes less challenging. You're making your decision based on the specific problem domain, applying your general knowledge of all databases to work out the best technology choice.

Once you've gone through the why and reached the what, it's time to double down on a growth mindset. [Professor Carol Dweck](https://www.ted.com/speakers/carol_dweck) proposed the idea of fixed and growth mindsets. A person holding a fixed mindset assumes that their abilities are immutable, whereas a person with a growth mindset acknowledges that they can acquire and develop any new skill with sufficient time and effort.

Fixed = I don't know

Growth = I don't know, **yet**

Baseline your thinking as a generalist and start from first principles, before doubling down on your growth mindset to pick up a new technology as and when the time arises.

Starting from why is a skill all architects, aspiring or otherwise, should cultivate at all times.
As always, thanks for reading. James
jeastham1993
1,429,291
What is JSON
JavaScript Object Notation (JSON) is a lightweight text-based format for storing structured data that...
0
2023-04-07T12:21:09
https://dev.to/alexanie_/what-is-json-30e7
json, javascript, jsonstringify, jsonparse
JavaScript Object Notation (JSON) is a lightweight, text-based format for storing structured data that can be assembled, parsed and generated by JavaScript and other C-family (C, C++, C#, Java, JavaScript, Perl, Python) programming languages.

# Uses of JSON

JavaScript Object Notation (JSON) is used for exchanging data between different systems, such as transferring data from a backend server to the frontend of an application or vice-versa.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67rqaxe3m1gc9fkmib4o.png)

Data in JSON is stored in curly braces `{}` as key and value pairs, wrapped in double quotes `("")` and separated by a colon `(:)`, while a comma `(,)` is used to indicate the end of a particular key and value pair. This makes it easy for humans to read and write.

# How to Create a JSON file

To create a JSON file, create a file, give it any name of your choice and then save it with a JSON (`.json`) file extension.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6l57lqz4q71gnjt5rd67.png)

Once you have created the file, start creating the data of the object you want directly in your JSON file.

```json
{
  "firstName": "John",
  "LastName": "Plum",
  "occupation": "web Developer",
  "Hobbies": ["Reading", "Coding", "Basket Ball", "Football", "Movies", "Music"],
  "married": true,
  "Age": 29
}
```

From the code example above, the JSON file is used to store the information of a person with a `firstName` of `John` together with his `lastName` and other information.

A JSON value can be any of the following types:

1. String
2. Number
3. Boolean (true or false)
4. Null
5. Array
6. Object

Note that `NaN`, `Infinity` and `undefined` are **not** valid JSON values, even though they exist in JavaScript.

However, it's important to note that we can also create JSON files in an array syntax.
For example

```json
[
  {
    "firstName": "John",
    "LastName": "Plum",
    "occupation": "web Developer",
    "Hobbies": ["Reading", "Coding", "Basket Ball", "Football", "Movies", "Music"],
    "married": true,
    "Age": 29
  }
]
```

Here, square brackets are used as the starting point.

# Static Methods

Depending on the environment you are working in, you will sometimes be required to convert the JSON data to work in a new environment. Converting in this context means turning an object into a string, or a string into an object. There are two *static methods* available on `JSON` for doing this, which are:

* JSON.stringify()
* JSON.parse()

## JSON.stringify()

The `JSON.stringify()` static method is used to convert a JavaScript object into a JSON string. Keep in mind that a valid JSON object syntax is also a valid object in JavaScript.

To see how this works, copy the code below, paste it into a JavaScript file and store it in a variable called `person`.

*Try the code example below.*

```javascript
let person = {
  "firstName": "John",
  "LastName": "Plum",
  "occupation": "web Developer",
  "Hobbies": ["Reading", "Coding", "Basket Ball", "Football", "Movies", "Music"],
  "married": true,
  "Age": 29
}
```

The above code is a valid JavaScript object and we can access its properties (keys and values) as an object.

*Try the code example below.*

```javascript
console.log(person);
```

Preview

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zjn6cie8mpu43gr4uw67.png)

Here, from the console, the `person` object returns a `prototype` of `Object`.

Now that we have returned the `person` object and checked its prototype in JavaScript, let's see how the `JSON.stringify()` method can be used to convert the object into a string.
*Try the code example below.*

```javascript
const newPerson = JSON.stringify(person)
console.log(newPerson)
```

Preview

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0eh3rqdr21isps49ktx2.png)

Here, the stringified `person` object is stored in the `newPerson` variable after the `JSON.stringify()` method is used to convert the object into a string. From the console preview, notice the syntax highlighting has changed and no prototype is returned compared to the first example. This is because `newPerson` is not an `Object` but a `string`. We can use the `typeof` operator to confirm this.

*Try the code example below.*

```javascript
console.log(typeof newPerson)
```

Preview

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0f9gyh33htp9k2z4upup.png)

From the browser preview, a `string` is returned as the data type.

## JSON.parse()

The `JSON.parse()` method does the opposite of `JSON.stringify()`. It simply converts a JSON string into an object.

*Try the code example below.*

```javascript
let person = {
  "firstName": "John",
  "LastName": "Plum",
  "occupation": "web Developer",
  "Hobbies": ["Reading", "Coding", "Basket Ball", "Football", "Movies", "Music"],
  "married": true,
  "Age": 29
}

const newPerson = JSON.stringify(person)
const newPerson2 = JSON.parse(newPerson);
console.log(newPerson2)
```

Preview

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ypcs26vx43ow3cleqfy6.png)

Here, the `JSON.parse()` method is used to convert `newPerson` back to an object.

However, a JSON string can also be written directly, using backticks wrapped around the curly braces.
*Try the code example below.*

```javascript
let person2 = `{
  "firstName": "John",
  "LastName": "Plum",
  "occupation": "web Developer",
  "Hobbies": ["Reading", "Coding", "Basket Ball", "Football", "Movies", "Music"],
  "married": true,
  "Age": 29
}`;

console.log(person2)
console.log(typeof person2)
```

Preview

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q16aaumv8y7g4dzgc0p3.png)

From the preview above, the `person2` JSON string is returned as a string.

# Wrapping up

We learned about *what JSON is* and we discussed the following.

* What is JSON
* Uses of JSON
* How to create a JSON file
* JSON.stringify()
* JSON.parse()

Alright! We’ve come to the end of this tutorial. Thanks for taking the time to read this article to completion.

Feel free to ask questions. I’ll gladly reply. You can find me on Twitter and other social media @ocxigin, or email me at ocxigin@gmail.com

*Cheers*
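A small postscript worth knowing: values like `NaN`, `Infinity` and `undefined` are legal in JavaScript but have no JSON representation, so `JSON.stringify()` handles them silently rather than throwing:

```javascript
// NaN and Infinity become null; undefined properties are dropped entirely.
const data = { a: NaN, b: Infinity, c: undefined, d: 1 };
console.log(JSON.stringify(data)); // {"a":null,"b":null,"d":1}
```

This is a common source of surprise when round-tripping objects through `JSON.parse(JSON.stringify(obj))`.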
alexanie_
1,429,451
Exporting CSV Data from Real Estate Website with BS4
In this article, we will explore how to scrape data from a real estate website using Beautiful Soup...
0
2023-04-07T15:26:12
https://dev.to/damilare_abogunrin/exporting-csv-data-from-real-estate-website-with-bs4-3k3n
bs4, python, csv
In this article, we will explore how to scrape data from a real estate website using Beautiful Soup version 4. Specifically, we will be scraping data on rental apartments in Amsterdam from www.pararius.com, one of the leading real estate websites in the Netherlands. Using BS4, we will parse the HTML content of the webpage, extract relevant data such as apartment details, pricing, and location, and export it to CSV files for further analysis. This technical guide will provide step-by-step instructions on how to set up and execute a web scraping script using Python and BS4 to obtain and export data from real estate websites.

## Packages

* Beautiful Soup
* Python CSV library
* pip installer

## Script

```python
from bs4 import BeautifulSoup
import requests
from csv import writer

url = "https://www.pararius.com/apartments/amsterdam?ac=1"
page = requests.get(url)

soup = BeautifulSoup(page.content, 'html.parser')
listings = soup.find_all('section', class_="listing-search-item")

with open('housing.csv', 'w', encoding='utf8', newline='') as f:
    thewriter = writer(f)
    header = ['Title', 'Location', 'Price', 'Area']
    thewriter.writerow(header)

    for listing in listings:
        title = listing.find('a', class_="listing-search-item__link--title").text.replace('\n', '')
        location = listing.find('div', class_="listing-search-item__location").text.replace('\n', '')
        price = listing.find('span', class_="listing-search-item__price").text.replace('\n', '')
        area = listing.find('span', class_="illustrated-features__description").text.replace('\n', '')

        info = [title, location, price, area]
        thewriter.writerow(info)
```
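Once the scraper has written `housing.csv`, the file can be loaded back for analysis with nothing but the standard library. A minimal round-trip sketch using the same header (the sample row and the `housing_demo.csv` filename below are made up for illustration):

```python
import csv

# Write one made-up row with the same header the scraper uses,
# then read it back as dictionaries keyed by column name.
header = ['Title', 'Location', 'Price', 'Area']
with open('housing_demo.csv', 'w', encoding='utf8', newline='') as f:
    w = csv.writer(f)
    w.writerow(header)
    w.writerow(['Apartment Keizersgracht', 'Amsterdam', '2100', '75'])

with open('housing_demo.csv', encoding='utf8', newline='') as f:
    rows = list(csv.DictReader(f))

print(rows[0]['Price'])  # 2100
```

`csv.DictReader` pairs each row with the header for you, which is usually more convenient than indexing columns by position.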
damilare_abogunrin
1,429,657
Who should implement the API integration, the Backend or the Frontend developer?
Backend APIs are designed to be consumed by other applications or services. When...
0
2023-04-07T18:58:38
https://dev.to/robsonamendonca/quem-deve-implementar-a-integracao-da-api-desenvolvedor-backend-ou-frontend-96f
api, backend, frontend, webdev
Backend APIs are designed to be consumed by other applications or services. In web development, interaction with the backend API is done by the frontend, which is the part of the application accessed by the end user.

When a backend API is ready to be used, it is important to understand who should interact with and consume that API. The role of the backend is to develop and maintain the API, while the role of the frontend is to consume the API and display the data in a user-friendly interface.

The backend is responsible for creating the business logic and data processing. Backend developers design and develop the API to give frontend developers a way to access and manipulate the application's data. This includes defining routes and parameters, and returning structured data in a format that can be easily consumed.

When integrating the backend API into the frontend project, it is important to remember that the frontend developer should have a basic understanding of how the API works, such as the available endpoints, the data the API returns, and the possible error messages that may be received.

The frontend is responsible for consuming the backend API and displaying that data to the end user. Frontend developers use technologies such as HTML, CSS and JavaScript to create an attractive, easy-to-use user interface. The frontend can also manipulate and send data back to the backend through the API.

In summary, the role of the backend is to create and maintain the API, while the role of the frontend is to consume the API and build an attractive user interface. Both are equally important in the application development process, and collaboration between them is essential to create a high-quality final product.
robsonamendonca
1,429,724
Awesome Supports from other Global Web 3 Leaders in Blockchain Space
Code 247 Foundation is a non-profit organization that focuses on promoting coding skills and...
0
2023-04-07T21:52:13
https://dev.to/femostic4j/awesome-supports-from-other-web-3-leaders-in-blockchain-space-28fk
Code 247 Foundation is a non-profit organization that focuses on promoting coding skills and blockchain technology among African youth, with a particular emphasis on Nigeria. The foundation aims to equip young people with the knowledge and skills needed to thrive in the fast-evolving technological landscape, thus creating a more vibrant and sustainable tech ecosystem in Africa. The organization offers various programs and initiatives, such as coding bootcamps, hackathons, mentorship, and networking opportunities, to help young people develop practical skills and experiences in software development and blockchain. Code 247 Foundation is run by a team of certified tech experts who are passionate about using their expertise to empower the next generation of African tech leaders. They collaborate with various organizations and stakeholders in the tech industry to expand their reach and impact.

Target audience: The foundation's primary target audience is African youth aged 16 to 35 years old, particularly those from underserved communities who may not have had access to quality technology education or resources. However, their programs and initiatives are open to anyone who is interested in learning coding or blockchain technology.

Programs and initiatives: Code 247 Foundation offers a range of programs and initiatives, including coding bootcamps, hackathons, mentorship, networking events, and online resources such as tutorials and webinars. These programs are designed to be hands-on and practical, allowing participants to gain real-world experience in coding and blockchain technology.

Partnerships and collaborations: The foundation collaborates with various organizations and stakeholders in the tech industry, including tech companies, universities, government agencies, and non-profits.
Impact: Since its founding, the foundation has impacted over 1,000 African youth through its programs and initiatives, helping them develop valuable skills and experiences in technology. The foundation's ultimate goal is to have a lasting impact on the African tech ecosystem, by empowering the next generation of tech leaders and entrepreneurs.

As we get set to onboard a new set of talents through the bootcamp, we welcome a new set of Web 3 leaders doing amazing things in this amazing space. Code 247 Foundation Summit 2023 is going to be awesome. Brief details about our guests follow.

**Dr Tammy Francis (USA)**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6chskmuuzaqewd22ld2.jpg)

She is the Founder and CEO of Catalyst 4 Change Global, LLC. She is an edupreneur, a tenured Associate Professor in Texas, USA and a Professor at Althash University, a global campus that offers online blockchain programs. Tammy is a Global Strategist, Educator, Consultant, Educational Researcher, Speaker, Author, Podcaster, Mentor, and Traveler. She holds a Ph.D. in Curriculum and Instruction and has taught for over 22 years in the traditional educational system--grades 6-12 and higher education. As a professor and educator, she helps adults improve their reading and writing skills as well as their ability to be successful in college and life.

**Omoseyin Faye (Nigeria)**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2fumnbuwo0tf7ufqdkh4.jpg)

Expert in blockchain development, DeFi and NFTs, skilled in Economics and Statistics, knowledgeable in researching, designing, developing and testing blockchain technologies, as well as proficient in Solidity, Geth, Truffle, Remix, Metamask, Ganache, JavaScript, React.js, web3.js, RELL and more.
**Pretty Kubyane (South Africa)**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o5m09xqal6mxa704f5kj.jpg)

Co-founder of Coronet Blockchain, she co-founded the company back in 2019 and led the team to raise two funding tranches across the US and Switzerland. Following the Swiss funding round, she also co-founded the eFama App, a B2B2C marketplace that enables farmers to sell their fresh produce and meat directly to consumers and commercial buyers, with quality assurance. She was flown into a Southern African nation on the brink of a kimberlite diamond mining boom, leading a multi-disciplinary team tasked with spearheading the design of a ZAR 29 billion social impact investment framework (an SLP), to hold mines accountable whilst ensuring communities benefit from the extractive industry's successes.

Overall, the Code 247 Foundation is a promising organization that is making significant efforts to promote technology education and entrepreneurship in Africa. If you are interested in learning more about our programs or getting involved with our initiatives, you can visit our website or reach out to us directly.
femostic4j
1,429,870
133. Leetcode Solution in cpp
/* // Definition for a Node. class Node { public: int val; vector<Node*> neighbors; ...
0
2023-04-08T03:04:03
https://dev.to/chiki1601/133-leetcode-solution-in-cpp-2p2h
cpp
```cpp
/*
// Definition for a Node.
class Node {
public:
    int val;
    vector<Node*> neighbors;
    Node() {
        val = 0;
        neighbors = vector<Node*>();
    }
    Node(int _val) {
        val = _val;
        neighbors = vector<Node*>();
    }
    Node(int _val, vector<Node*> _neighbors) {
        val = _val;
        neighbors = _neighbors;
    }
};
*/

class Solution {
 public:
  // BFS clone: `map` remembers each original node's copy, so every
  // node is cloned exactly once and cycles terminate.
  Node* cloneGraph(Node* node) {
    if (!node)
      return nullptr;

    queue<Node*> q{{node}};
    unordered_map<Node*, Node*> map{{node, new Node(node->val)}};

    while (!q.empty()) {
      Node* u = q.front();
      q.pop();
      for (Node* v : u->neighbors) {
        // First time we see v: create its clone and enqueue it.
        if (!map.count(v)) {
          map[v] = new Node(v->val);
          q.push(v);
        }
        // Wire the cloned edge u' -> v'.
        map[u]->neighbors.push_back(map[v]);
      }
    }

    return map[node];
  }
};
```

#leetcode #challenge

Here is the link for the problem: https://leetcode.com/problems/clone-graph/
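For comparison, the same memoisation idea can be written recursively. The sketch below is self-contained, with its own minimal `Node` struct standing in for the LeetCode definition:

```cpp
#include <unordered_map>
#include <vector>

struct Node {
    int val;
    std::vector<Node*> neighbors;
    explicit Node(int v) : val(v) {}
};

// Recursive DFS clone: `copies` maps each original node to its clone,
// so shared neighbors are cloned once and cycles terminate.
Node* cloneDfs(Node* node, std::unordered_map<Node*, Node*>& copies) {
    if (!node) return nullptr;
    auto it = copies.find(node);
    if (it != copies.end()) return it->second;

    Node* copy = new Node(node->val);
    copies[node] = copy;  // record before recursing, or cycles would loop forever
    for (Node* nb : node->neighbors)
        copy->neighbors.push_back(cloneDfs(nb, copies));
    return copy;
}
```

Both versions are O(V + E); the BFS form avoids deep recursion on long chains, which is why it is often preferred in practice.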
chiki1601
1,429,969
Solidity test internal function
To test an internal solidity function, create a child contract that inherits from the contract being...
0
2023-04-08T05:30:18
https://www.rareskills.io/post/solidity-test-internal-function
To test an internal Solidity function, create a child contract that inherits from the contract being tested, wrap the parent contract's internal function with an external one, then test the external function in the child. Foundry calls this inheriting contract a "harness" though others call it a "fixture." Don't change the Solidity function to become virtual or public to make it easier to extend - you want to test the contract you will actually deploy.

Here is an example.

```
contract InternalFunction {

    uint256 internal constant REWARD_RATE_PER_SECOND = 1; // example rate, chosen for illustration

    function calculateReward(uint256 depositTime) internal view returns (uint256 reward) {
        reward = (block.timestamp - depositTime) * REWARD_RATE_PER_SECOND;
    }
}
```

The function above gives a linear reward rate for each unit of time that passes by. The fixture (or harness) would look like this:

```
contract InternalFunctionHarness is InternalFunction {
    function calculateReward(uint256 depositTime) external view returns (uint256 reward) {
        reward = super.calculateReward(depositTime);
    }
}
```

When you call a parent function that has the same name as the child's, you must use the super keyword or the function will call itself and go into infinite recursion.

Alternatively, you can explicitly label your test function as a harness or fixture as follows

```
contract InternalFunctionHarness is InternalFunction {
    function calculateReward_HARNESS(uint256 depositTime) external view returns (uint256 reward) {
        reward = calculateReward(depositTime);
    }
}
```

## Don't change the function to be public

Changing the function to become public isn't a good solution because this will increase the contract size. If a function doesn't need to be public, then don't make it public. It will increase the gas cost both for deployment and for the execution of the other functions. When a contract receives a transaction, it must compare the function selector to all the public ones in a linear or binary search. In either case, it has more selectors to search through.
Furthermore, the added selector is added bytecode, which increases the deployment cost.

## Don't override virtual Solidity functions

Suppose we had the following contract:

```
contract InternalFunction {
    function calculateReward(uint256 depositTime) internal view virtual returns (uint256 reward) {
        reward = (block.timestamp - depositTime) * REWARD_RATE_PER_SECOND;
    }
}
```

It could be tempting to simply override it in the fixture for convenience, but this is not advisable since you end up duplicating code, and if your implementation in the harness diverges from the parent contract, you won't actually be testing your business logic anymore. Note that this method forces us to copy and paste the original code:

```
contract InternalFunctionHarness is InternalFunction {
    function calculateReward(uint256 depositTime) external view override returns (uint256 reward) {
        reward = (block.timestamp - depositTime) * REWARD_RATE_PER_SECOND;
    }
}
```

## What about testing private Solidity functions?

There is no way to test private functions in Solidity as they are not visible to the child contract. The distinction between an internal function and a private function doesn't exist after the contract is compiled. Therefore, you can change private functions to be internal with no negative effect on the gas cost. As an exercise for the reader, benchmark the following code to see that changing `foo` to be private does not affect the gas cost:

```
contract A {
    // change this to be private
    function foo() internal pure returns (uint256 f) {
        f = 2;
    }

    function bar() internal pure returns (uint256 b) {
        b = foo();
    }
}

contract B is A {
    // 146 gas: 0.8.7 no optimizer
    function baz() external pure returns (uint256 b) {
        b = bar();
    }
}
```

## Learn more

See our [advanced solidity bootcamp](https://www.rareskills.io/solidity-bootcamp) to learn more advanced testing methodologies.
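To show how the `_HARNESS` fixture is actually exercised, here is a minimal Foundry test sketch. The test contract name, the use of a 100-second time warp, and the assumption that `REWARD_RATE_PER_SECOND` is a file-level constant visible to the test are all illustrative assumptions, not part of the original code:

```
import "forge-std/Test.sol";

// Hypothetical Foundry test; names and the reward-rate constant are
// assumptions for illustration.
contract InternalFunctionTest is Test {
    InternalFunctionHarness harness;

    function setUp() public {
        harness = new InternalFunctionHarness();
    }

    function testCalculateReward() public {
        uint256 depositTime = block.timestamp;

        // advance the block timestamp by 100 seconds
        vm.warp(block.timestamp + 100);

        uint256 reward = harness.calculateReward_HARNESS(depositTime);

        // 100 seconds elapsed, so the linear reward should be
        // 100 * REWARD_RATE_PER_SECOND
        assertEq(reward, 100 * REWARD_RATE_PER_SECOND);
    }
}
```

Because the harness only wraps the internal function, the test exercises the exact bytecode path the deployed contract would use, which is the point of the harness pattern.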
rareskills
1,430,070
Starter Code - Object Detector Codelab
This demo shows how we can use a pre made machine learning solution to recognize objects (yes, more...
0
2023-04-08T08:54:28
https://dev.to/mohamedsdz/starter-code-object-detector-codelab-4j8b
codepen, coding, practice, javascript
<p>This demo shows how we can use a pre-made machine learning solution to recognize objects (yes, more than one at a time!) on any image you wish to present to it. Even better, not only do we know that the image contains an object, but we can also get the coordinates of the bounding box for each object it finds, which allows you to highlight the found object in the image. </p> <p>For this demo we are loading a model using the ImageNet-SSD architecture, to recognize 90 common objects it has already been taught to find from the COCO dataset.</p> <p>If what you want to recognize is in that list of things it knows about (for example a cat, dog, etc.), this may be useful to you as-is in your own projects, or just to experiment with machine learning in the browser and get familiar with the possibilities of machine learning. </p> <p>If you are feeling particularly confident, you can check out our GitHub documentation (<a href="https://github.com/tensorflow/tfjs-models/tree/master/coco-ssd" target="_blank">https://github.com/tensorflow/tfjs-models/tree/master/coco-ssd</a>), which goes into much more detail on customizing various parameters to tailor performance to your needs.</p> {% codepen https://codepen.io/MohnySid/pen/WNabpeR %}
mohamedsdz
1,430,123
AI Color Palette Generator
I just launched a simple color palette generator using AI. Simply input what kind of color scheme...
0
2023-04-08T10:35:10
https://dev.to/rarestoma/ai-color-palette-generator-1fn5
webdev, css, html, color
I just launched a simple color palette generator using AI. Simply input what kind of color scheme you're looking for, and AI will do the work for you. Here are some fun inputs to try — a color palette:

- for a rainbow explosion
- for a 1980s party
- for a cotton candy shop

Try it: https://www.loopple.com/color-palette-generator
rarestoma