| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 5 | 1.93M |
| title | string (lengths) | 0 | 128 |
| description | string (lengths) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (lengths) | 14 | 581 |
| tag_list | string (lengths) | 0 | 120 |
| body_markdown | string (lengths) | 0 | 716k |
| user_username | string (lengths) | 2 | 30 |
1,887,939
Today I learned about constant variables
<?php $a = 100; $b = 550; echo "$a + $b"; ?>
0
2024-06-14T04:40:34
https://dev.to/ahtshamajus/today-i-learned-about-constant-variables-4110
webdev, php, beginners, backend
```php <?php $a = 100; $b = 550; echo "$a + $b"; ?> ```
ahtshamajus
1,887,938
Test Data Generator: A Vital Tool in Software Development
In the realm of software development and quality assurance, one critical aspect that often...
0
2024-06-14T04:39:17
https://dev.to/keploy/test-data-generator-a-vital-tool-in-software-development-205k
data, webdev, javascript, ai
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uaq5mev2y52recsng6cj.png) In the realm of software development and quality assurance, one critical aspect that often determines the success or failure of a project is the quality of test data. The accuracy and robustness of testing directly influence the reliability and performance of the software. This is where a Test Data Generator (TDG) comes into play. A Test Data Generator is a tool or a set of tools designed to create data that can be used for testing purposes. It helps developers and testers simulate real-world data scenarios without the need for live production data, thus ensuring comprehensive testing while maintaining data privacy and integrity. **What is a Test Data Generator?** A [Test Data Generator](https://keploy.io/test-data-generator) automates the creation of data sets required for testing applications. This data can be varied in structure and format, ranging from simple numerical values to complex hierarchical data. The primary objective is to mimic real-world scenarios to test the application under conditions that closely resemble its operational environment. Importance of Test Data Generation 1. Data Privacy and Security: Utilizing production data for testing can lead to privacy breaches and security risks. A TDG mitigates this by generating synthetic data, which eliminates the need for sensitive real-world data. 2. Comprehensive Testing: By providing a wide range of data sets, a TDG ensures that all possible scenarios, including edge cases, are tested. This helps in identifying and fixing bugs that might not be apparent with a limited data set. 3. Time and Cost Efficiency: Manually creating test data is time-consuming and error-prone. A TDG automates this process, saving valuable time and resources, and allowing developers to focus on more critical tasks. 4. Consistency and Repeatability: Automated test data generation ensures consistency in the data used for testing across different test cycles. This repeatability is crucial for regression testing and for verifying that fixes work correctly without introducing new issues. **Types of Test Data Generators** Test Data Generators can be classified based on the type of data they generate and the methodology they use: 1. Static Data Generators: These tools create a fixed set of data that remains unchanged. They are useful for scenarios where the data does not need to vary, such as testing with a predefined set of inputs. 2. Dynamic Data Generators: These tools generate data dynamically based on certain rules or parameters. They are ideal for testing applications where the data inputs need to vary to simulate different conditions. 3. Pattern-Based Generators: These generate data based on specified patterns or templates. They are particularly useful for creating data that follows specific formats, such as email addresses, phone numbers, or structured file formats like JSON or XML. 4. Rule-Based Generators: These use predefined rules and constraints to create data. They are useful for generating complex data sets that must adhere to specific business rules or logic. **Key Features of an Effective Test Data Generator** An effective TDG should possess the following features: 1. Data Variety: The ability to generate different types of data, including numerical, text, date, and complex structures like nested arrays or objects. 2. Scalability: It should handle large volumes of data to test the application under stress or load conditions. 3. 
Customization: Users should be able to define custom rules and constraints to generate data that meets their specific testing requirements. 4. Ease of Integration: The TDG should easily integrate with various testing frameworks, databases, and CI/CD pipelines to streamline the testing process. 5. Data Masking: For scenarios where production data needs to be used, the TDG should support data masking to protect sensitive information. **Popular Test Data Generators** Several tools in the market can cater to various test data generation needs. Some of the popular ones include: 1. Mockaroo: A web-based tool that allows users to create mock data for testing purposes. It supports a wide variety of data types and formats. 2. Tonic.ai: An advanced tool that generates realistic and privacy-compliant synthetic data. It focuses on maintaining data integrity and supporting complex data relationships. 3. Redgate SQL Data Generator: This tool is specifically designed for generating SQL database test data. It provides extensive customization options and supports a variety of data types. 4. Jailer: An open-source tool that helps in generating test data by extracting data from existing databases while maintaining referential integrity. **Challenges in Test Data Generation** While Test Data Generators offer numerous benefits, they also come with their own set of challenges: 1. Data Realism: Generating data that accurately mimics real-world scenarios can be difficult. Unrealistic test data can lead to tests that do not adequately reflect actual usage conditions. 2. Complex Data Relationships: In complex applications, data entities are often interrelated. Ensuring that generated data maintains these relationships and adheres to business rules can be challenging. 3. Performance: Generating large volumes of data quickly and efficiently without affecting system performance is another significant challenge. 4. Maintenance: Keeping the test data generation rules and scripts up-to-date with changes in the application or business logic requires ongoing effort. Future Trends in Test Data Generation The field of test data generation is continuously evolving, with several emerging trends set to shape its future: 1. AI and Machine Learning: Leveraging AI and machine learning to create more realistic and complex test data sets that adapt to evolving testing needs. 2. Self-Service Tools: Developing more user-friendly, self-service tools that allow non-technical users to generate test data without deep technical knowledge. 3. Integration with DevOps: Enhancing integration capabilities with DevOps pipelines to facilitate continuous testing and seamless data generation across different stages of the development lifecycle. 4. Improved Data Masking Techniques: Advancing data masking techniques to better protect sensitive information while maintaining the usability and relevance of the test data. **Conclusion** In conclusion, Test Data Generators play a crucial role in modern software development and testing. They provide a means to create realistic, diverse, and secure data sets that enable comprehensive testing, improve software quality, and enhance data privacy. As technology continues to advance, the capabilities and sophistication of these tools will only grow, further solidifying their place as indispensable assets in the software development toolkit.
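To make the pattern-based and rule-based approaches described above concrete, here is a minimal Python sketch of a test data generator; the field names, value ranges, and the `generate_users` helper are illustrative assumptions, not the API of any tool mentioned above.

```python
import random
import string

def random_email(domain="example.com"):
    # Pattern-based field: an 8-letter lowercase local part plus a fixed domain
    local = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{local}@{domain}"

def generate_users(n, min_age=18, max_age=90):
    # Rule-based constraint: ages must fall inside a configured range
    return [
        {
            "id": i + 1,
            "email": random_email(),
            "age": random.randint(min_age, max_age),
            "signup_year": random.randint(2015, 2024),
        }
        for i in range(n)
    ]

if __name__ == "__main__":
    for user in generate_users(3):
        print(user)
```

Real tools layer referential integrity, data masking, and large-scale generation on top of this basic idea.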
keploy
1,887,937
Tien Bao Dong Phuc
Công ty may đồng phục Tiến Bảo (Tien Bao uniform company) - specializes in sewing uniform t-shirts, uniform aprons, protective workwear, and...
0
2024-06-14T04:36:06
https://dev.to/rtdongphuctienbao/tien-bao-dong-phuc-3dbg
Công ty may đồng phục Tiến Bảo (Tien Bao uniform company) - specializes in sewing uniform t-shirts, uniform aprons, protective workwear, school uniforms, office uniforms... at factory prices in Ho Chi Minh City, with free delivery and free sample sewing. Hotline 0902335112. Website: https://dongphuctienbao.com/ Phone: 0902335112 Address: 276A Trần Thị Cờ, Tân Thới An, District 12, Ho Chi Minh City https://www.scoop.it/u/tien-baodong-phuc-3 https://www.instapaper.com/p/14467155 https://tinhte.vn/members/updongphuctienbao.3026554/
rtdongphuctienbao
1,887,936
Exploring the New Features in TypeScript 5.5 Beta
TypeScript 5.5 Beta introduces several exciting features and improvements aimed at enhancing the...
0
2024-06-14T04:34:53
https://www.jenuel.dev/blog/Exploring-the-New-Features-in-TypeScript-5-5-Beta
typescript, javascript
TypeScript 5.5 Beta introduces several exciting features and improvements aimed at enhancing the developer experience and increasing the language's flexibility and performance. Below, we delve into these new additions, providing detailed explanations and examples to illustrate their practical applications. #### 1. **RegExp** `v` Flag Support TypeScript 5.5 Beta now supports the `v` flag in regular expressions, introduced in ECMAScript 2024. This flag allows developers to leverage more powerful and expressive regex patterns, enhancing text processing capabilities. **Example:** ```ts const regex = new RegExp('^[\\p{L}]+$', 'v'); const result = regex.test('HelloWorld'); // true ``` #### 2. **Template Literal Type Improvements** Template literal types see significant enhancements, including better support for nested template literals and improved type inference. This update allows for more precise and flexible type definitions, especially useful in complex string manipulations. **Example:** ```ts type Color = 'red' | 'green' | 'blue'; type ColorMessage<T extends Color> = `The color is ${T}`; function showMessage<T extends Color>(message: ColorMessage<T>) { console.log(message); } showMessage('The color is red'); // valid // showMessage('The color is yellow'); // invalid ``` #### 3. **Enhanced Tuple Types** Tuple types in TypeScript 5.5 Beta are more powerful, with improved control over the types and lengths of tuples. This feature helps in scenarios where tuples represent fixed-size arrays with varying types. **Example:** ```ts type Point = [number, number]; type Triangle = [Point, Point, Point]; const triangle: Triangle = [ [0, 0], [1, 0], [0, 1], ]; ``` #### 4. **Performance Improvements** TypeScript 5.5 Beta includes numerous performance enhancements, particularly in type-checking and incremental builds. These improvements aim to reduce compilation times and increase overall efficiency, making development smoother and more responsive. **Example:** While this feature does not have a direct code example, developers will notice faster feedback in their development environments, particularly in large projects. #### 5. **Smarter Inference for JSX Intrinsic Elements** JSX intrinsic elements (like `div`, `span`, etc.) now benefit from smarter type inference. This means TypeScript can more accurately infer the types of props for these elements, reducing the need for explicit type annotations and making code more concise. **Example:** ```ts interface CustomButtonProps { label: string; onClick: () => void; } const CustomButton: React.FC<CustomButtonProps> = ({ label, onClick }) => ( <button onClick={onClick}>{label}</button> ); // TypeScript infers the types correctly without explicit annotations <CustomButton label="Click me" onClick={() => alert('Clicked!')} />; ``` #### 6. **Improved Type Narrowing in Control Flow** TypeScript 5.5 Beta introduces better type narrowing within control flow statements, such as `if`, `else`, and `switch`. This improvement allows the compiler to more precisely determine variable types based on the context, reducing type errors and enhancing code safety. 
**Example:** ```ts function process(value: string | number) { if (typeof value === 'string') { console.log(value.toUpperCase()); // TypeScript knows value is a string here } else { console.log(value.toFixed(2)); // TypeScript knows value is a number here } } ``` TypeScript 5.5 Beta brings several powerful features and enhancements that continue to solidify its position as a leading language for building robust and scalable applications. From regex enhancements and template literal improvements to smarter JSX inference and control flow type narrowing, these updates offer developers more tools and capabilities to write efficient, maintainable, and type-safe code. To explore these features further and see them in action, you can read the official [TypeScript 5.5 Beta announcement](https://devblogs.microsoft.com/typescript/announcing-typescript-5-5-beta/) from Microsoft. --- If you enjoy this article and would like to show your support, you can easily do so by buying me a coffee. Your contribution is greatly appreciated! [![Jenuel Ganawed Buy me Coffee](https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee&emoji=%E2%98%95&slug=jenuel.dev&button_colour=FFDD00&font_colour=000000&font_family=Inter&outline_colour=000000&coffee_colour=ffffff)](https://www.buymeacoffee.com/jenuel.dev)
jenueldev
1,887,934
Futuristic Worms Micelio
Check out this Pen I made!
0
2024-06-14T04:32:47
https://dev.to/kevgutierrez09_/futuristic-worms-micelio-112i
codepen, javascript, css
Check out this Pen I made! {% codepen https://codepen.io/Kevin_Gutier09/pen/ExzKXbN %}
kevgutierrez09_
1,887,915
Repositorio Institucional RIUCA
Check out this Pen I made!
0
2024-06-14T04:19:52
https://dev.to/kevgutierrez09_/repositorio-institucional-riuca-30fc
codepen, javascript
Check out this Pen I made! {% codepen https://codepen.io/Kevin_Gutier09/pen/zYQEJwp %}
kevgutierrez09_
1,887,933
A New WhatsApp Bot
Hey guys, I am Turbo and I am new to this platform, but I wanted to show you all a WhatsApp bot I...
0
2024-06-14T04:23:57
https://dev.to/toxic_turbo777/a-new-whatsapp-bot-4ip3
showdev, node, docker, git
Hey guys, I am Turbo and I am new to this platform, but I wanted to show you all a WhatsApp bot I made. It's a chatbot and userbot built using @whiskeysockets/baileys on Node.js. Please check out my GitHub and give it a star; it will help me continue. GitHub repository: https://github.com/TURBOHYPER/Toxic-Alexa_V4
toxic_turbo777
1,887,932
What is online lô đề? How to play lô đề effectively at 123B
Playing lô đề has always attracted many participants thanks to its varied and appealing...
0
2024-06-14T04:23:47
https://dev.to/123bonl/lo-de-online-la-gi-cach-choi-lo-de-123b-hieu-qua-2ce
8us, lodeonline, lode
Playing lô đề (number lottery) has always attracted many participants thanks to its varied and appealing gameplay. The game not only provides players with a source of income, it is also a numbers game that demands strong logic and reasoning. In addition, as information technology develops rapidly, most traditional games have moved online, and lô đề is no exception. Today, let's look at effective ways to play lô đề online at 123B! Link: [(https://123b.onl/123b-lo-de/)]
123bonl
1,887,931
LinkNetwork丨Scalability: The Ultimate Challenge for Public Chains
In the extensive field of blockchain technology applications, public chains, serving as the...
0
2024-06-14T04:23:07
https://dev.to/linknetwork/linknetworkgun-scalability-the-ultimate-challenge-for-public-chains-pjb
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g0xxcajfsv3w7sldpkso.jpg) In the extensive field of blockchain technology applications, public chains, serving as the foundational platforms for decentralized applications (DApps), have constantly faced a significant challenge in scalability. The scalability issue of public chains not only concerns transaction processing speed but also directly impacts transaction costs, user experience, and ultimately, the practicality of applications. With the increasing popularity of blockchain technology and the surge in decentralized applications, effectively addressing the scalability issue of public chains has become a critical topic in the blockchain domain. **The Core Issue of Public Chain Scalability** The scalability issue of public chains fundamentally stems from the need to balance decentralization, security, and efficiency in their design. These three factors are often mutually restrictive: enhancing security often requires more nodes to participate in verification, thereby reducing the overall efficiency of the system; and improving efficiency, especially transaction processing speed, often reduces the degree of transaction verification, which may compromise the security of the system. Additionally, with each additional node, the communication volume between nodes grows exponentially, further limiting the scalability of public chains. Currently, the most mainstream solutions include improving consensus mechanisms and adopting sharding technology. These technologies aim to enhance the transaction processing capabilities of public chains without sacrificing security and decentralization. **Optimization of Consensus Mechanisms** Consensus mechanisms entail the process through which all nodes in a public chain agree on the state of the ledger. The most famous consensus mechanisms include Proof of Work (PoW) and Proof of Stake (PoS). **Proof of Work (PoW)** Advantages: Provides extremely high network security, as attackers would need to control over 50% of the computational power to tamper with any information. Disadvantages: Consumes excessive energy, long transaction confirmation times, difficult to scale on a large scale. For example, the Bitcoin network is limited to processing approximately 7 transactions per second. **Proof of Stake (PoS)** Advantages: Compared to PoW, PoS significantly reduces energy consumption since it does not rely on miners’ computational power competition but rather on the amount and duration of coins held by participants to select block creators. Disadvantages: May face the “rich-get-richer” problem, where nodes holding more coins have a greater opportunity to control the network state. **Practical Implementation of Sharding Technology** Sharding technology divides the network into multiple shards, with each shard processing a portion of transactions and data, thereby enhancing the overall processing capacity of the entire network. Advantages: Significantly improves transaction processing speed as multiple shards can process transactions simultaneously. Disadvantages: The implementation of sharding technology increases network complexity, especially handling cross-shard transactions, which may lead to new security issues. In this technological context, the emergence of LinkNetwork provides a new perspective. LinkNetwork aims to offer a more efficient, secure, and scalable public chain solution by combining an improved PoSA consensus mechanism with innovative on-chain structural design. 
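To illustrate the stake-weighting idea behind PoS (and the PoSA variant discussed below), here is a minimal Python sketch; the validator names and stake amounts are invented for illustration and do not describe LinkNetwork's actual implementation, which adds validator rotation, reputation, and slashing on top.

```python
import random

# Invented stakes: a larger stake gives a proportionally higher chance of
# being chosen to propose the next block (the "rich-get-richer" effect
# mentioned above).
stakes = {
    "validator_a": 1_000,
    "validator_b": 5_000,
    "validator_c": 14_000,
}

def pick_block_proposer(stakes):
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    # random.choices draws a stake-weighted sample
    return random.choices(validators, weights=weights, k=1)[0]

if __name__ == "__main__":
    picks = [pick_block_proposer(stakes) for _ in range(10_000)]
    for v in stakes:
        share = picks.count(v) / len(picks)
        print(f"{v}: selected {share:.1%} of the time")
```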
**Scalability Solutions Implemented in LinkNetwork Environment** In the field of blockchain technology, the scalability of public chains has always been a challenge, especially against the backdrop of increasing user base and transaction volume. The core of this problem lies in how to enhance network processing capacity while maintaining decentralization and security. Existing solutions, such as changing consensus mechanisms or attempting sharding technology, each have their advantages and limitations. In this scenario, the approach of LinkNetwork provides a unique solution aimed at addressing this industry pain point through the combination of innovative network architecture and consensus mechanism. LinkNetwork adopts the Proof of Staked Authority (PoSA) consensus mechanism, which combines the features of traditional PoS and PoA, aiming to provide higher transaction throughput and faster confirmation times. PoSA not only reduces energy requirements, making it more environmentally friendly, but also enhances network security and stability by distributing validation power to reputable validators. The advantages of PoSA can be categorized as follows: **Enhancing network security:** PoSA enhances network security by combining the advantages of PoS and PoA. The staking mechanism ensures that validators have sufficient economic incentives to maintain network security and stability. Furthermore, by introducing a reputation system, the likelihood of malicious attackers manipulating the network is reduced. **Improving processing speed and efficiency:** Compared to PoW, PoSA does not require significant computational resources to solve complex mathematical problems, thus significantly reducing energy consumption and processing time. The rotation mechanism of validators enables quick transaction confirmations, greatly enhancing network throughput. **Promoting decentralization:** Although the PoA component tends to centralize, PoSA balances the demands of centralization and decentralization by combining the proof of stake mechanism. Network participants can enhance their role in the network by increasing their stake, which promotes wider user participation and governance. **Environmentally friendly:** Due to the absence of high-energy-consuming hardware competition for block creation, PoSA has a relatively minor environmental impact. This makes PoSA a more sustainable and environmentally friendly consensus choice. **Adaptability and flexibility:** The PoSA mechanism allows for the adjustment of parameters such as staking requirements and validator election rules to adapt to different network conditions and security requirements. This flexibility is crucial for addressing constantly changing network states and security challenges. In terms of scalability, LinkNetwork adopts a unique inter-chain architecture design, allowing for more efficient transaction processing by optimizing the roles and functions of network nodes. Unlike traditional sharding technology, LinkNetwork achieves efficient data processing capabilities by optimizing data processing flows and enhancing collaboration among nodes, without the need to partition the network into multiple processing segments, thus avoiding the complexity and security issues associated with cross-shard transactions common in sharding technology. 
The implementation of these technologies and strategies enables LinkNetwork to significantly reduce congestion and costs when processing large-scale transactions while maintaining the decentralization and security of the system. For example, through the PoSA mechanism, LinkNetwork can rapidly process transactions, with each transaction requiring only a few validation steps, thereby improving efficiency and significantly reducing the costs of participating in network operations. Furthermore, this architectural design of LinkNetwork provides developers with more flexibility, enabling them to develop and deploy complex applications without worrying about the performance limitations of the underlying public chain. This provides strong support for various applications, including decentralized finance (DeFi), non-fungible tokens (NFTs), and decentralized applications (DApps), driving innovation and growth within the entire ecosystem. In these ways, LinkNetwork offers a unique consensus mechanism and network architecture, providing a scalable on-chain solution that is both efficient and secure, making LinkNetwork an ideal platform to support future blockchain applications. With further technological maturity and the continuous expansion of applications, LinkNetwork is poised to take a leading position in the competition among blockchain public chains, providing stable underlying infrastructure support for the Web3 world at an earlier stage. **About Link Network** Link Network is a blockchain infrastructure that supports Web3 applications that provide efficient, secure, and scalable blockchain services.It will thereby promote the development of decentralized finance (DeFi), non-fungible tokens (NFT), decentralized gaming (GameFi), and other Web3 domains. **📤Twitter**:https://twitter.com/LinkNet_Global **🎬YouTube:** https://www.youtube.com/channel/UCugVKiIiDwTnFMDFDV6ayIA **☘️Linktree:** https://linktr.ee/linknetwork_ **📱Telegram Community:** https://t.me/+sfH4grw-ASc5M2I1
linknetwork
1,887,930
Animated Slider | Punishing Gray Graven #Team
Inspired by the pen about 'Stranger Things' by Ricardo Oliva Alonso (original design). Adaptation:...
0
2024-06-14T04:22:35
https://dev.to/kevgutierrez09_/animated-slider-punishing-gray-graven-team-41ni
codepen, javascript, css, animation
Inspired by the pen about 'Stranger Things' by Ricardo Oliva Alonso (original design). Adaptation: Kevin Gutiérrez {% codepen https://codepen.io/Kevin_Gutier09/pen/WNBpWGM %}
kevgutierrez09_
1,887,905
Best Practices for Migrating from Heroku to AWS
Migrating from Heroku to Amazon Web Services (AWS): Essential Considerations and Best...
0
2024-06-14T03:50:22
https://dev.to/emma_in_tech/best-practices-for-migrating-from-heroku-to-aws-11aa
##Migrating from Heroku to Amazon Web Services (AWS): Essential Considerations and Best Practices In today's cloud-centric era, businesses frequently face critical decisions when selecting the appropriate platform for hosting their applications. This article delves into the essential considerations, challenges, and best practices for migrating from Heroku to Amazon Web Services (AWS). We compare Heroku and AWS in terms of scalability, ease of use, and cost to illustrate why enterprises might favor the enhanced flexibility and control provided by AWS over Heroku's simplicity. Additionally, the article covers specific migration steps such as configuring networking, databases, caches, and automation pipelines in AWS, along with common pitfalls associated with manual migration. ## Understand UI Differences ### Heroku UI: ![Image of Heroku UI dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/77m8up69z2myjox6xt49.png) - **Simplicity:** Heroku's user interface is known for its simplicity and user-friendliness, making it easy for developers to manage applications without a steep learning curve. - **Dashboard:** The Heroku dashboard provides an intuitive and clean layout, allowing users to easily navigate between different applications, resources, and add-ons. - **Deployment:** Deploying applications on Heroku is streamlined, with options to deploy via Git, GitHub, or using Heroku’s own CLI. - **Add-ons Marketplace:** Heroku offers an integrated marketplace for add-ons, where users can quickly find and install third-party services such as databases, monitoring tools, and more. ### AWS UI: ![Image of AWS UI dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ha4b9sql9c92rovk53sn.png) - **Complexity and Flexibility:** AWS's user interface is more complex compared to Heroku, reflecting its extensive range of services and configurations available to users. - **Management Console:** The AWS Management Console is feature-rich, offering detailed control over a vast array of services. However, this can be overwhelming for new users. - **Service Navigation:** Navigating through AWS services requires familiarity with the platform, as the interface includes numerous services and settings that may not be immediately intuitive. - **Customization:** AWS allows for a high degree of customization and automation, which can significantly benefit advanced users looking to tailor their environment to specific needs. ## Best Practices for Getting Accustomed to the AWS UI ![Image of Aws Network Implementation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ahkjf16li7nxdk8cm1kp.png) - **Leverage AWS Training Courses:** Enroll in AWS training courses to gain a comprehensive understanding of the capabilities and functionalities of various AWS services. - **Start Small:** Begin with a few essential services and gradually expand your usage. This approach helps manage complexity and prevents feeling overwhelmed. - **Refer to Documentation:** When exploring new services, rely on AWS documentation instead of prior knowledge. AWS documentation is thorough and provides up-to-date information on service features and configurations. - **Get Certified:** Consider obtaining certifications in key services like EC2, S3, and VPC. These certifications validate your knowledge and provide a structured learning path to mastering AWS services. While the intricate AWS interface may initially seem daunting, dedicating time to learn best practices can unlock the full potential of AWS. 
## Migrate Networks Effectively Replicating the network isolation on Heroku to your AWS VPC architecture is crucial for the security of your application. ### Best Practices for Setting Up VPC Architecture in AWS: 1. **Define Subnets, Route Tables, and Security Groups:** - Mirror or enhance the isolation provided by Heroku. - Segregate resources like databases, ECS instances, and ElastiCache Redis instances into private subnets to prevent direct external access. - Allocate public subnets for resources requiring external connectivity. 2. **Leverage Redundancy for Fault Tolerance:** - Use multiple availability zones to ensure high availability and fault tolerance. 3. **Regulate Traffic Flow:** - Use network access control lists (NACLs) and security groups to control inbound and outbound traffic within the VPC. 4. **Monitor and Safeguard Network Traffic:** - Utilize VPC Flow Logs and AWS Network Firewall to monitor and secure your network traffic. ### Key Steps for Setting Up a VPC: 1. **Design a VPC Diagram:** - Map out public, private, database, ElastiCache, and other subnets. 2. **Configure Route Tables:** - Manage inter-subnet and internet traffic flows using well-defined route tables. 3. **Set Up NACLs and Security Groups:** - Align them to the VPC diagram to control traffic flow and enhance security. 4. **Launch EC2 Instances:** - Place instances in subnets based on public vs private segmentation requirements. 5. **Enable VPC Flow Logs:** - Monitor traffic to and from your VPC for enhanced security and troubleshooting. Properly configuring VPC infrastructure is complex but essential for securing AWS-hosted applications. Referencing AWS best practices and documentation can ease the transition from Heroku’s simplified networking. ## Migrate the Database ![Image of aws data migration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8nkcmuj2adnv39zpnom6.png) ### Steps to Migrate from Heroku Database to Amazon RDS: 1. **Verify Version Compatibility:** - Ensure your existing Heroku database engine version is compatible with Amazon RDS. 2. **Evaluate Database Requirements:** - Assess your database needs, including storage, memory, and compute requirements, and select the appropriate RDS instance type. 3. **Create Database Instance:** - Follow the AWS tutorial to create a database instance using the RDS management console or APIs. 4. **Leverage AWS Database Migration Service (DMS):** - Use DMS to minimize downtime by replicating data changes from the Heroku database to RDS in real-time. 5. **Test and Optimize:** - Thoroughly test and optimize the sizes and configurations of your RDS instances to meet your workload demands. 6. **Enable Automated Backup and Snapshots:** - Set up automated backups and database snapshots for disaster recovery. ## Conclusion Migrating from Heroku to AWS is a significant undertaking that requires meticulous planning and execution across various domains such as networks, databases, automation, monitoring, and more. While Heroku offers simplicity, AWS provides the scalability, flexibility, and infrastructure control that growing enterprises need. ### Key Takeaways: 1. **Leverage AWS Training and Documentation:** - Utilize AWS training courses and documentation to fully understand and harness the platform's extensive capabilities. 2. **Build VPC Diagrams:** - Create detailed VPC diagrams that align with your isolation requirements before implementation to ensure a robust network architecture. 3. 
**Choose DMS for Real-Time Replication:** - Use AWS Database Migration Service (DMS) for real-time data replication to prevent downtime during database migration. 4. **Implement CI/CD with CodePipeline and CodeDeploy:** - Set up AWS CodePipeline and CodeDeploy to facilitate rapid and efficient application updates. 5. **Monitor and Audit:** - Utilize AWS CloudWatch for monitoring and AWS CloudTrail for auditing activities across regions to maintain oversight and security. While migrating from Heroku to AWS presents challenges, companies that dedicate the necessary time and resources can achieve significant benefits in terms of scale, cost savings, and innovation velocity over the long term.
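As a rough sketch of the VPC setup steps described earlier in this article, the following Python snippet uses boto3 to create a VPC with one public and one private subnet and a restrictive security group; the region, CIDR ranges, availability zone, and names are assumptions for illustration only.

```python
import boto3

# Assumed region and CIDR layout; adjust to mirror your Heroku app's isolation needs.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the VPC that will host the migrated workloads.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# 2. Public subnet for internet-facing resources (load balancers, NAT).
public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# 3. Private subnet for RDS, ECS tasks, and ElastiCache (no direct external access).
private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# 4. Security group that only allows Postgres traffic from inside the VPC.
sg_id = ec2.create_security_group(
    GroupName="app-private-tier",
    Description="Allow Postgres only from inside the VPC",
    VpcId=vpc_id,
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
    }],
)

print(vpc_id, public_subnet, private_subnet, sg_id)
```

In practice you would capture this layout in CloudFormation or Terraform so the network stays reproducible across environments.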
emma_in_tech
1,887,913
The difference between argc and argv
int argc: argc = argument count. argc is the number of arguments passed when the operating system runs the program, i.e., the number of data items passed to the...
0
2024-06-14T04:16:17
https://dev.to/sunj/argc-argvyi-cai-3aho
server
1. int argc - argc = argument count - argc is the number of arguments passed when the operating system runs this program.  - In other words, it is the number of data items passed to the main() function.   2. char *argv[] - argv = argument variable - char *argv[]: an array of pointers that stores the addresses of strings - argv[0] is the program's execution path.  - argv[1], argv[2], ... store the arguments entered by the user, in order.    For example, if int main(int argc, char *argv[]) is given the input ./tiny 8000 aaa, argc will be 3, argv[0] will hold the execution path ./tiny, argv[1] will hold 8000, and argv[2] will hold aaa.  => The arguments in argv are separated by whitespace.  _Reference: https://bo5mi.tistory.com/165 [대범하게: Tistory]_
sunj
1,887,912
Mastering NextJS: Exploration of Its 12 Key Concepts
Introduction: In the realm of web development, NextJS has emerged as a powerful tool that simplifies...
0
2024-06-14T04:13:37
https://dev.to/vyan/mastering-nextjs-exploration-of-its-12-key-concepts-14e1
webdev, beginners, nextjs, react
**Introduction:** In the realm of web development, NextJS has emerged as a powerful tool that simplifies the process of building full-stack applications with just React. In this blog, we will delve into the 12 key concepts of NextJS, as outlined in a tutorial, to provide a comprehensive understanding of its capabilities and features. **1. Overview of NextJS:** NextJS is introduced as a game-changer in web development, offering exciting features like the latest app router that revolutionizes the way applications are built. With NextJS, developers can create full-stack applications without the need for a separate backend, making it a versatile and efficient framework. ```javascript // Example of app router usage in NextJS import Link from 'next/link'; const HomePage = () => { return ( <div> <h1>Welcome to NextJS!</h1> <Link href="/about"> <a>About</a> </Link> </div> ); }; export default HomePage; ``` **2.Simplifying Boilerplate Code:** One of the first steps in creating a NextJS application is removing unnecessary boilerplate code. By streamlining the code structure, developers can focus on building the core functionality of their applications without being weighed down by redundant code. ```javascript // Example of simplified NextJS page component const AboutPage = () => { return <h1>About Us</h1>; }; export default AboutPage; ``` **3.Layout Component**: The layout component serves as the root component in NextJS applications, where all pages are rendered as children. Understanding the role of the layout component is crucial for organizing the structure of the application and ensuring a cohesive user experience. ```javascript // Example of a layout component in NextJS import Header from '../components/Header'; import Footer from '../components/Footer'; const Layout = ({ children }) => { return ( <div> <Header /> {children} <Footer /> </div> ); }; export default Layout; ``` **4.Routing and Navigation:** NextJS simplifies routing and navigation by managing all client-side routing internally. The use of the NextJS link component enables seamless navigation between pages without the need for server-side requests, enhancing the overall performance of the application. ```javascript // Example of navigation using NextJS Link component import Link from 'next/link'; const Navigation = () => { return ( <nav> <Link href="/">Home</Link> <Link href="/about">About</Link> </nav> ); }; export default Navigation; ``` **5.Dynamic Routing:** NextJS supports dynamic routing, allowing developers to create dynamic routes based on parameters such as post IDs. This feature enables the creation of dynamic and interactive content within the application, enhancing user engagement. ```javascript // Example of dynamic routing in NextJS import { useRouter } from 'next/router'; const PostPage = () => { const router = useRouter(); const { id } = router.query; return <h1>Post: {id}</h1>; }; export default PostPage; ``` **6.Server Actions:** Server actions in NextJS enable developers to fetch data from a server and perform server-side operations seamlessly. By abstracting away the complexities of server-side rendering, NextJS simplifies the process of interacting with server data and integrating it into the application. 
```javascript // Example of server-side data fetching in NextJS export async function getServerSideProps(context) { // Fetch data from external API const res = await fetch(`https://api.example.com/posts`); const data = await res.json(); // Pass data to the page component as props return { props: { data }, }; } ``` **7.Server Components:** Server components play a crucial role in optimizing the performance of NextJS applications by efficiently fetching and rendering data on the server side. By leveraging server components, developers can enhance the speed and responsiveness of their applications, providing a seamless user experience. ```javascript // Example of a server component in NextJS import { serverFetchData } from '../api/posts'; const ServerComponent = ({ data }) => { return <div>{data}</div>; }; export async function getServerSideProps() { const data = await serverFetchData(); return { props: { data } }; } export default ServerComponent; ``` **8.Suspense:** NextJS incorporates suspense to manage data fetching and rendering, ensuring that components are loaded efficiently and asynchronously. By implementing suspense, developers can improve the performance of their applications and create smoother transitions between different states. ```javascript // Example of suspense usage in NextJS import { Suspense } from 'react'; const LazyComponent = React.lazy(() => import('./LazyComponent')); const MyPage = () => ( <Suspense fallback={<div>Loading...</div>}> <LazyComponent /> </Suspense> ); export default MyPage; ``` **9.Caching Mechanism:** NextJS employs a caching mechanism to optimize the performance of applications by storing and reusing data efficiently. Understanding how caching works in NextJS is essential for managing application state and improving overall performance. ```javascript // Example of data caching in NextJS import { SWRConfig } from 'swr'; import fetcher from '../utils/fetcher'; const MyApp = ({ Component, pageProps }) => ( <SWRConfig value={{ fetcher }}> <Component {...pageProps} /> </SWRConfig> ); export default MyApp; ``` **10.Deployment Options:** NextJS offers various deployment options, allowing developers to deploy their applications to different environments such as VPS. By exploring deployment options, developers can ensure that their applications are accessible and functional across different platforms. ```javascript // Example of deployment configuration in NextJS module.exports = { // Deployment settings }; ``` **11.Folder Structure:** A typical folder structure in NextJS is discussed, highlighting the importance of organizing files and components effectively. By following a structured folder hierarchy, developers can streamline the development process and maintain code consistency throughout the project. ```plaintext nextjs-project/ │ ├── components/ │ ├── Navbar.js │ ├── Header.js │ └── Footer.js │ ├── app/ │ ├── layout.js │ ├── page.js │ │ ├── public/ │ ├── images/ │ │ └── logo.png │ └── styles/ │ └── main.css │ ├── api/ │ └── posts.js │ ├── utils/ │ └── api.js │ └── next.config.js ``` **12.Conclusion and Further Learning:** In conclusion, the 12 key concepts of NextJS provide a comprehensive overview of its capabilities and features, empowering developers to build robust and efficient applications. To delve deeper into NextJS and enhance your skills, consider exploring advanced topics and enrolling in courses that offer in-depth insights into NextJS development. 
In conclusion, NextJS stands out as a versatile and powerful framework for building modern web applications. By mastering the 12 key concepts outlined in this blog, developers can unlock the full potential of NextJS and create dynamic, high-performance applications that meet the demands of today's digital landscape.
vyan
1,887,911
Documenting my pin collection with Segment Anything: Part 3
In my last post, I showed how to use the Segment Anything Model with prompts to improve the...
27,656
2024-06-14T04:11:08
https://blog.feregri.no/blog/documenting-my-pin-collection-with-segment-anything-part-3/
fastapi, segmentanything, imagesegmentation
In [my last post](https://dev.to/feregri_no/documenting-my-pin-collection-with-segment-anything-part-2-4pjc), I showed how to use the Segment Anything Model with prompts to improve the segmentation output; in it, I decided that using bounding boxes to prompt the model yielded the best results for my purposes. In this post I will try to describe a tiny, but slightly complex, app I made with the help of GitHub Copilot. This app is made with vanilla JavaScript and HTML and uses SAM in the backend to extract the cutouts along with the bounding polygons for further use in my ultimate collection display. Before we dive into a mess of code, have a look at the app I created: {% youtube ALIFCBvGnFg %} (if you just want the code, go to the end of this post) ## Requirements The app I created needed to: - Allow me to draw boxes on an image, - perform image segmentation using the drawn box as a prompt. - Once the image segmentation is done, show the candidate cutouts and allow me to select the best and, - give each one of them a unique identifier and a name. ## Solution In the end, I created a client-server app: ![App architecture](https://ik.imagekit.io/thatcsharpguy/posts/documenting-my-pin-collection/app-architecture.png?updatedAt=1718227239286) For the backend, the obvious decision was Python, since the Segment Anything Model is readily accessible in that language, and it is the language I know the most. The client app is done with vanilla JavaScript, CSS and HTML; using the *canvas* API it is effortless to draw bounding boxes over an image, and all the mouse events help us send the necessary data to extract a cutout. ## Implementation ### **Project Structure Overview** The project consists of several interconnected components, including a FastAPI backend, HTML5 and JavaScript for the frontend, and CSS for styling. Here’s a breakdown of the key files and their roles: - **`web/labeller.py`**: The core backend file built with FastAPI. It handles route definitions, image manipulations, and interactions with the image segmentation model. - **`web/static/app.css`**: Contains CSS styles to enhance the appearance of the application. - **`web/static/app.js`**: Manages the frontend logic, particularly the interactions on the HTML5 Canvas where users draw annotations. - **`web/templates/index.html.jinja`**: The Jinja2 template for the HTML structure, dynamically integrating backend data. - **`web/resources.py`**: Manages downloading necessary resources like images and model files. - **`web/sam.py`**: Integrates the machine learning model for image segmentation. Out of these files, perhaps the most important ones are the one that manages the frontend logic and the core of the app; I will try my best to describe them below: ## **`web/static/app.js`** The script starts by setting up an environment where users can draw rectangular boxes on an image loaded into a canvas element. This functionality is an essential part of the app, since these boxes will be the prompts to the segment anything model in the backend. ### 1. Initialization on Window Load: The script begins execution once the window has fully loaded, ensuring all HTML elements, especially the `<canvas>` and `<img>`, are available to manipulate. ```jsx window.addEventListener('load', function() { const canvas = document.getElementById('canvas'); const ctx = canvas.getContext('2d'); const img = document.getElementById('image'); const results = document.getElementById('results'); const contours = []; ``` ### 2. 
Canvas and Context Setup: Here, the canvas dimensions are set to match the image dimensions, and the image is then drawn onto the canvas. This forms the base on which users will draw the bounding boxes. ```jsx canvas.width = img.width; canvas.height = img.height; ctx.drawImage(img, 0, 0); ``` ### 3. Drawing Interactions: Listeners for `mousedown`, `mousemove`, and `mouseup` events are added to the canvas to handle drawing: - Create variables to hold the mouse position and the drawing state: ```jsx let startingMousePosition = { x: 0, y: 0 }; let isDrawing = false; ``` - **Start Drawing:** On `mousedown`, it captures the starting point where the user begins the draw interaction. ```jsx canvas.addEventListener('mousedown', function(e) { startingMousePosition = { x: e.offsetX, y: e.offsetY }; isDrawing = true; }); ``` - **Drawing in Progress:** The `mousemove` event updates the drawing in real-time, showing visual feedback of the rectangle being drawn on the canvas via the `redrawCanvas` and the `drawBox` functions. ```jsx canvas.addEventListener('mousemove', function(e) { if (isDrawing) { const currentX = e.offsetX; const currentY = e.offsetY; redrawCanvas(); drawBox(startingMousePosition.x, startingMousePosition.y, currentX - startingMousePosition.x, currentY - startingMousePosition.y); } }); ``` - **End Drawing:** The `mouseup` event finalises the drawing and optionally sends the drawn box data to the server using the `sendBoxData` function. ```jsx canvas.addEventListener('mouseup', function(e) { if (isDrawing) { const endX = e.offsetX; const endY = e.offsetY; const box = { x1: Math.min(startingMousePosition.x, endX), y1: Math.min(startingMousePosition.y, endY), x2: Math.max(startingMousePosition.x, endX), y2: Math.max(startingMousePosition.y, endY) }; sendBoxData(box); redrawCanvas(); isDrawing = false; } }); ``` ### 4. Server Interaction: Upon completing a drawing, the box data is sent to the server using a `fetch` call. This allows the application to process the box. This processing involves using the segment anything model to extract the candidate cutouts and returning them to be presented to the user using the `createForm` function: ```jsx function sendBoxData(box) { fetch('/cut', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(box) }) .then(response => response.json()) .then(data => { results.innerHTML = ''; data.results.forEach(result => { const t = createForm(result); results.appendChild(t); }); }) .catch(error => { console.error('Error:', error); }); } ``` ### 5. Dynamic Form Generation: Responses from the server include an image and an identifier, which are used to create and populate forms dynamically. Using Mustache.js for templating, the script generates HTML forms based on this data, which are then inserted into the DOM, allowing further user interaction. ```jsx const template = ` <div class="form-container"> <div class="image-container"> <img src="{{image}}"> </div> <form action="/select_cutout" method="POST"> <input type="text" name="name" placeholder="Name"> <input type="hidden" name="id" value="{{id}}"> <button type="submit">Select</button> </form> </div> `; function createForm(result) { const rendered = Mustache.render(template, { id: result.id, image: result.image }); const div = document.createElement('div'); div.innerHTML = rendered.trim(); return div.firstChild; } ``` ### 6. 
Utility Functions: Several utility functions handle repetitive tasks: - **`redrawCanvas`**: Clears and redraws the canvas, useful for updating the view when needed. ```jsx function redrawCanvas() { ctx.clearRect(0, 0, canvas.width, canvas.height); ctx.drawImage(img, 0, 0); contours.forEach(points => { drawPolygon(points); }); } ``` - **`drawBox`**: Draws rectangles based on coordinates, for example around the pins that have already been cut. ```jsx function drawBox(x, y, width, height, fill = false) { ctx.beginPath(); ctx.rect(x, y, width, height); ctx.strokeStyle = 'red'; if (fill) { ctx.fillStyle = '#ff000033'; ctx.fill(); } ctx.stroke(); } ``` - **`drawPolygon`**: A more complex drawing function that can render polygons, used here to illustrate the capability to handle various shapes. ```jsx function drawPolygon(points) { ctx.beginPath(); ctx.moveTo(points[0][0], points[0][1]); points.forEach(point => { ctx.lineTo(point[0], point[1]); }); ctx.fillStyle = '#ff0000FF'; ctx.closePath(); ctx.fill(); ctx.stroke(); } ``` These utility functions are essential for managing the visual elements on the canvas, allowing for efficient updates and complex graphical operations like drawing polygons and boxes. ## `web/labeller.py` ### 1. Environment Setup and Initialisations: The application begins with setting up the FastAPI environment and configuring static file paths and template directories. This setup is crucial for serving static content like images and CSS, and for rendering HTML templates. ```python app = FastAPI() app.mount("/static", StaticFiles(directory="web/static"), name="static") templates = Jinja2Templates(directory="web/templates") ``` The image to annotate and the model to use are downloaded or loaded into the application, ensuring that all necessary components are available for image processing and analysis. ```python resources = download_resources() ``` ### 2. Image Loading and Preprocessing: The image to annotate is loaded and preprocessed. This involves reading the image from a path, converting it to an appropriate colour format, and resizing it to a manageable size. This resizing is particularly important to ensure that the processing is efficient. ```python original_image = cv2.cvtColor(cv2.imread(str(resources["image_path"])), cv2.COLOR_BGR2RGB) image_to_show = Image.fromarray(original_image) image_to_show = image_to_show.resize((desired_image_width, int(image_to_show.height * ratio))) ``` ### 3. Model Loading for Image Segmentation: The segmentation model is loaded, configured, and prepared to predict masks based on user-defined annotations. ```python mask_predictor = get_mask_predictor(resources["model_path"]) mask_predictor.set_image(original_image) ``` ### 4. Web Routing and Request Handling: FastAPI routes handle different types of web requests. The main route serves the annotated image along with tools for the user to interact with. This is done through both POST and GET requests, which render an HTML template with the image and existing annotations. ```python @app.get("/") @app.post("/") def get_index(request: Request): img = turns_image_to_base64(image_to_show) existing_cutouts = [] for file in os.listdir(selected_folder): if file.endswith(".json"): with open(f"{selected_folder}/{file}") as f: metadata = json.load(f) existing_cutouts.append(metadata) data = { "request": request, "image": img, "width": image_to_show.width, "height": image_to_show.height, "existing_cutouts": existing_cutouts, "ratio": ratio, } return templates.TemplateResponse("index.html.jinja", data) ``` ### 5. 
Image Annotation and Segmentation: When a user submits a bounding box annotation, the coordinates are scaled back to the original, unresized image and processed to segment the image. The application uses the model to predict the mask and then extracts the relevant part of the image based on these masks. Apart from the extracted image cutouts, metadata is saved to a temporary folder, so that when a user selects a given cutout, they can be recovered. ```python @app.post("/cut/") def post_cut(request: Request, box: BoundingBox): box = np.array([box.x1, box.y1, box.x2, box.y2]) original_box = box / ratio masks, _, _ = mask_predictor.predict(box=original_box, multimask_output=True) results = [] for mask in masks: uuid = str(uuid4()) cutout, bbox = extract_from_mask(original_image, mask) base64_cutout = turns_image_to_base64(cutout, format="PNG") results.append({ "id": uuid, "image": base64_cutout, }) metadata = { "uuid": uuid, "bbox": {"x1": bbox[0], "y1": bbox[1], "x2": bbox[2], "y2": bbox[3]}, "original_bbox": { "x1": original_box[0], "y1": original_box[1], "x2": original_box[2], "y2": original_box[3], }, "polygons": [poly.tolist() for poly in sv.mask_to_polygons(mask)], } with open(f"{temp_folder}/{uuid}.png", "wb") as f: cutout.save(f, format="PNG") with open(f"{temp_folder}/{uuid}.json", "w") as f: f.write(json.dumps(metadata)) return {"results": results} ``` ### 6. Handling user selection of cutouts After users mark and submit their desired cutout, this endpoint manages the user's selection, moving the annotated image data from temporary storage to a selected folder and updating its associated metadata with new user-provided information (like a name for the annotation): ```python @app.post("/select_cutout/") def post_select_cutout(request: Request, id: Annotated[str, Form()], name: Annotated[str, Form()]): import shutil # Move the PNG image from temporary to selected folder shutil.move(f"{temp_folder}/{id}.png", f"{selected_folder}/{id}.png") # Load the existing metadata for the selected annotation with open(f"{temp_folder}/{id}.json") as f: metadata = json.load(f) metadata["name"] = name # Update the name field with user-provided name # Write the updated metadata back to the selected folder with open(f"{selected_folder}/{id}.json", "w") as f: f.write(json.dumps(metadata)) # Redirect back to the main page after processing is complete return RedirectResponse("/") ``` ### 7. Utility Functions for Image Manipulation: Several utility functions facilitate image manipulation tasks like cropping the image based on the mask: ```python def extract_from_mask(image, mask, crop_box=None, margin=10): image_rgba = np.zeros((image.shape[0], image.shape[1], 4), dtype=np.uint8) alpha = (mask * 255).astype(np.uint8) for i in range(3): image_rgba[:, :, i] = image[:, :, i] image_rgba[:, :, 3] = alpha image_pil = Image.fromarray(image_rgba) if crop_box is None: bbox = Image.fromarray(alpha).getbbox() crop_box = ( max(0, bbox[0] - margin), max(0, bbox[1] - margin), min(image_pil.width, bbox[2] + margin), min(image_pil.height, bbox[3] + margin), ) cropped_image = image_pil.crop(crop_box) return cropped_image, crop_box ``` And converting images to a web-friendly format to be sent as responses to the front end. 
```python def turns_image_to_base64(image, format="JPEG"): buffered = BytesIO() image.save(buffered, format=format) img_str = base64.b64encode(buffered.getvalue()).decode("utf-8") return "data:image/jpeg;base64," + img_str ``` These functions ensure that the application can handle image data efficiently and render it appropriately on the web interface. ## Libraries I used 1. [**FastAPI**](https://fastapi.tiangolo.com/): A web framework for building APIs (and web pages). It is used as the backbone of the application to handle web requests, routing, and server logic, and orchestrates the overall API structure. Although not used here, FastAPI provides robust features such as data validation, serialisation, and asynchronous request handling. 2. [**OpenCV (cv2)**](https://docs.opencv.org/): OpenCV is a powerful library used for image processing operations. It is utilised to read and transform images, such as converting colour spaces and resizing images, which are essential pre-processing steps before any segmentation tasks. 3. [**NumPy**](https://numpy.org/): This library is fundamental for handling arrays and matrices, such as for operations that involve image data. NumPy is used to manipulate image data and perform calculations for image transformations and mask operations. 4. [**PIL (Pillow)**](https://pillow.readthedocs.io/en/stable/): The Python Imaging Library (Pillow) is used for opening, manipulating, and saving many different image file formats. Here it is specifically used to convert images to different formats, handle image cropping, and integrate alpha channels to extract the cutouts. 5. [**Supervision**](https://supervision.roboflow.com/latest/): Although not yet a widely known library, this powerful library provides a seamless process for annotating predictions generated by various object detection and segmentation models; in this case, I used it to evaluate the results of SAM, and to convert its predictions to Polygon masks. 6. [**Mustache.js**](https://github.com/janl/mustache.js): This is a templating engine used for rendering templates on the web. In your application, Mustache.js is used to dynamically create HTML forms based on the data received from the server, such as image cutouts and identifiers. ## Closing thoughts I hope I did not bore you to death with some of these deep dives into my (and some of my friendly coding assistant's) code – I tried my best to be thorough. But if you still have doubts do not hesitate to reach out to me. [**Here is the code by the way**](https://github.com/fferegrino/pin-detection-with-sam/tree/part3). Believe it or not, this app is not complete yet, there is some other functionality yet to be implemented: - A way to easily recover the selected cutouts - A way to match already existing cutouts so that when the user selects the same cutout we don't duplicate entries - A way to handle updated canvas pictures, because what is going to happen when I inevitably expand my collection? I will explore these details in the next blog post in the series.
feregri_no
1,887,910
PostgreSQL Full-Text Search in a Nutshell
If you ask me to choose a database for microservices, I would probably say PostgreSQL. On one hand,...
0
2024-06-14T04:02:48
https://dev.to/lazypro/postgresql-full-text-search-2dio
database, elasticsearch, tutorial, programming
If you ask me to choose a database for microservices, I would probably say PostgreSQL. On one hand, PostgreSQL is a popular open source database with many mature practices, both on the server side and the client side. On the other hand, PostgreSQL is very "scalable". Scalability of course includes non-functional requirements such as traffic and data volume, as well as functional requirements such as full-text search.

In the early stages of planning a microservice, we may not be sure what features it needs to have. I have to say that the tutorials which keep telling us microservices need clear boundaries early in the design are nonsense. Most of the time, as a microservice goes live, there are more requirements and more iterations, and those requirements don't care about the boundaries you define at the beginning.

Full-text search is a good example. In the early days of a service, we might be able to work just fine with an exact match or prefix match of text. Until one day, you get a request.

"Hey, let's add a search feature!"

If the database you chose in the beginning doesn't have a search function, then that's a big problem. I'm sure you wouldn't introduce a database like Elasticsearch, which specializes in full-text search, for this sudden need.

So what to do? Fortunately, PostgreSQL has a solution.

## PostgreSQL Built-In Features

In the [official documentation](https://www.postgresql.org/docs/current/textsearch.html) there is a large chapter on how to do full-text searching with PostgreSQL. Let's skip all the technical details and go straight to the how-to.

```SQL
SELECT id, title, body, author, created_at, updated_at, published,
       ts_rank(tsv, to_tsquery('english', 'excited | trends')) AS rank
FROM blog_posts
WHERE tsv @@ to_tsquery('english', 'excited | trends')
ORDER BY rank DESC;
```

`blog_posts` is a table of stored blog posts, where `tsv` is a special column. It is not metadata that a blog post needs; it is a column we create purely for searching purposes.

```sql
ALTER TABLE blog_posts ADD COLUMN tsv tsvector;
UPDATE blog_posts SET tsv = to_tsvector('english', title || ' ' || body);
```

As we can see, `tsv` is the result of concatenating `title` and `body` and applying English stemming. The methods `to_tsvector` and `to_tsquery` are the core of this query. It is through these two methods that the text is tokenized and normalized against the built-in dictionary. If you're familiar with Elasticsearch, then what these two methods are doing corresponds to the `analyzer`.

Let's explain this with a flowchart.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybyrjsp541bolcuicqea.png)

The `tsvector` is the result that we store in advance through tokenizer and token filter operations. The same applies to `tsquery`, but it is lighter. Then we compare `tsquery` and `tsvector` using the matching operator `@@`, which produces a dataset containing the query results. Finally, the score is calculated and sorted by `ts_rank`.

In fact, the whole process is the same as what Elasticsearch does, except that PostgreSQL offers far less room for customization and relies on its built-in dictionaries.

It's worth mentioning that the `tsv` column needs to be indexed with a `GIN` type index, otherwise the performance will be poor.

```sql
CREATE INDEX idx_fts ON blog_posts USING gin(tsv);
```

Elasticsearch is efficient not because it has a powerful `analyzer`, but because it uses inverted indexes at the backend. In PostgreSQL's case, that index is GIN, the Generalized Inverted Index.
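To tie these pieces together, here is a minimal sketch of the same setup driven from application code. It assumes `psycopg2` as the client library and a placeholder connection string (the article does not prescribe either), plus the `blog_posts` table from above. The generated column, available since PostgreSQL 12, is an alternative to the manual `ALTER`/`UPDATE` pair shown earlier: it keeps `tsv` in sync automatically whenever `title` or `body` changes, so use one approach or the other, not both.

```python
# Minimal sketch, assuming psycopg2 and the blog_posts table from the article.
import psycopg2

conn = psycopg2.connect("dbname=blog user=postgres")  # placeholder DSN
cur = conn.cursor()

# A generated column (PostgreSQL 12+) recomputes tsv on every insert/update,
# replacing the one-off UPDATE shown above.
cur.execute("""
    ALTER TABLE blog_posts
    ADD COLUMN tsv tsvector
    GENERATED ALWAYS AS (to_tsvector('english', title || ' ' || body)) STORED
""")
# Same GIN index as in the article, created from the client side.
cur.execute("CREATE INDEX idx_fts ON blog_posts USING gin(tsv)")
conn.commit()

# Ranked full-text query, parameterised rather than built by string concatenation.
cur.execute("""
    SELECT id, title, ts_rank(tsv, query) AS rank
    FROM blog_posts, to_tsquery('english', %s) AS query
    WHERE tsv @@ query
    ORDER BY rank DESC
    LIMIT 10
""", ("excited | trends",))
for post_id, title, rank in cur.fetchall():
    print(post_id, title, rank)
```

Parameterising the query also means user input never gets interpolated straight into `to_tsquery`, which matters once the search box is exposed to real users.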
Again, the technical detail of how GIN works internally is not the focus of this article, so I'll skip it.

## Non-built-in PostgreSQL catalogs

It is not difficult to list all the languages PostgreSQL currently supports.

```sql
SELECT cfgname FROM pg_ts_config;
```

As of today (PostgreSQL 16), there are only 29 in total.

```
postgres=# SELECT cfgname FROM pg_ts_config;
  cfgname
------------
 simple
 arabic
 armenian
 basque
 catalan
 danish
 dutch
 english
 finnish
 french
 german
 greek
 hindi
 hungarian
 indonesian
 irish
 italian
 lithuanian
 nepali
 norwegian
 portuguese
 romanian
 russian
 serbian
 spanish
 swedish
 tamil
 turkish
 yiddish
(29 rows)
```

As we can see, there is no CJK language at all, i.e. there is no built-in ability to handle double-byte characters.

Languages without built-in support can be handled by extensions. Take Chinese as an example: [`pg_jieba`](https://github.com/jaiminpan/pg_jieba) is a widely used extension.

After installing the extension, we just need to use its catalog in PostgreSQL.

```sql
CREATE EXTENSION pg_jieba;

UPDATE blog_posts SET tsv = to_tsvector('jieba', title || ' ' || body);
```

The above is an example with `to_tsvector`, and of course, `to_tsquery` works the same way.

So for languages without built-in support, we can find language extensions to enhance PostgreSQL's capabilities. This is one of the great things about PostgreSQL: it has a rich ecosystem of additional features.

## What about AWS RDS?

In the previous section we mentioned that we can install additional extensions to support more languages; however, AWS RDS does not let us install custom extensions.

In this case, we have to move the server-side effort to the client side. In other words, we need to do the language-specific word segmentation on the client side and write `tsv` from the client side.

Let's continue with [`jieba`](https://github.com/fxsjy/jieba) as an example. jieba is the segmentation engine behind `pg_jieba`, and it is also available as a Python package, so let's use Python for the example.

```python
import jieba
import psycopg2

# Connection parameters are placeholders; adjust them for your environment
conn = psycopg2.connect("dbname=blog user=postgres")
cur = conn.cursor()

def tokenize(text):
    # Segment the text into words separated by whitespace before it reaches PostgreSQL
    return ' '.join(jieba.cut(text))

# title, body, author come from the post being inserted
cur.execute("""
    INSERT INTO blog_posts (title, body, author, tsv)
    VALUES (%s, %s, %s, to_tsvector('english', %s))
""", (title, body, author, tokenize(title + ' ' + body)))
conn.commit()
```

Similarly, the query is also done by `jieba.cut` first and then `to_tsquery`.

One interesting thing is that we still use `english` as the catalog in this example, but it doesn't really matter which one we use; we just need the ability to split text on whitespace.

## Conclusion

In this article, we can see the power of PostgreSQL. It has many useful features and a rich ecosystem. Even when the ready-made implementations are not enough, we can still roll our own on top of its stable and flexible design.

That's why I prefer PostgreSQL. Even if we don't think something through at the beginning, there is a good chance we can fix it later without getting stuck in a mess.

This article provided an example of full-text searching, and here is one more interesting example. At one time, I was using PostgreSQL for a microservice because of its ACID capabilities. However, the requirements changed so drastically that ACID was no longer a priority; instead, we needed to flexibly support a variety of data types. Suddenly, PostgreSQL could be turned into a document database, storing all kinds of nested JSON through `JSONB`.

The flexibility of PostgreSQL has saved my life many times, and I hope this can serve as a reference for you.
lazypro
1,887,908
Alien: Defining New Freedoms and Ultimate Efficiency in Global Communication
In the current era of rapid globalization, communication technology plays an essential role....
0
2024-06-14T04:00:26
https://dev.to/alien_web3/alien-defining-new-freedoms-and-ultimate-efficiency-in-global-communication-2h9d
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qu0xvjbgoaoutjccd3fn.jpg) In the current era of rapid globalization, communication technology plays an essential role. However, despite the continuous advancements in technology, the global communication industry still faces numerous challenges. The most notable are application fragmentation and regulatory restrictions. Even current communication applications cannot support an online collaboration system that suits the needs of modern society. These issues have become the focus of a potential revolution in human communication. Firstly, each application tends to cater to a specific language or cultural region. For instance, WhatsApp, despite having a vast global user base, is almost unusable in countries like China and Russia due to policy restrictions. Similarly, WeChat is extremely popular in China but not as widespread in Western countries. One reason is that the app’s interface and feature design are deeply influenced by local culture, making it difficult to overcome cultural differences and be accepted by users from other countries. Moreover, language differences are an issue that cannot be overlooked. Many communication applications support multiple language interfaces, but language remains a barrier in actual communication. This is especially true when the app’s automatic translation features fail to effectively address differences in context and regional expressions. **Regulatory and Legal Restrictions** Beyond the ecosystem fragmentation and language differences between applications, the legal regulations implemented in various regions and countries also impose strict limits on the operation of communication applications due to the policies and laws of the respective nations. Different countries have varying regulations regarding data privacy and cybersecurity, posing a significant challenge for global communication application providers. For instance, the European Union’s GDPR requires that data processing complies with strict privacy standards, which are not entirely applicable in the United States or certain countries in Asia. This inconsistency in regulations means that communication application providers need to develop different data processing policies for different markets, increasing operational costs and complexity. These issues not only affect the communication experience of users around the world, limiting the global proliferation of communication technology but also hinder the free flow of information and culture. In the face of such circumstances, a global communication solution that can transcend national and cultural borders becomes particularly important. **The Upper Limits and Limitations of Mainstream Communication Software** The current mainstream communication software has obvious limitations in supporting large-scale online meetings, which are reflected not only in technical capacity but also in the reliability of functions and the consistency of user experience. We will illustrate the limitations of current communication software with a few simple examples. - Zoom: As one of the most popular video conferencing software during the COVID-19 pandemic, Zoom can support up to 1000 video participants and display 49 screens on one screen. Although this is sufficient for most businesses, for events that require very large numbers of participants (such as global conferences or massive online open courses), this number is still very limited. 
- Microsoft Teams: Another widely used platform, Teams limits the number of people in a meeting to 300. Although it recently launched a feature that supports interactive meetings for up to 1,000 people and can expand to 10,000 people in observer mode, participants in this mode cannot interact, which limits the interactivity and engagement of the meeting. - Google Meet: This service from Google allows a maximum of 250 participants to join a meeting, which is enough for small and medium-sized enterprises, but it is insufficient for larger-scale application scenarios. These limitations indicate that while existing tools can meet regular business needs, they are not yet fully capable of meeting the market’s demands for larger-scale real-time interaction and global coverage. **Reliability of Features and Inconsistency in User Experience** In addition to the limitations of technical capacity, mainstream communication software often faces issues with feature stability and consistency in user experience during large-scale meetings. For example, participants may encounter technical problems such as audio-video desynchronization, screen freezing, and connection interruptions, which are especially pronounced when the number of participants increases significantly. Moreover, variations in network environments and device configurations across different regions can also affect the quality and efficiency of meetings, particularly in international conferences. **How can this be resolved?** Alien may have a unique solution to these issues. As an innovative instant encrypted communication application, Alien not only ensures a high level of privacy in communication security but also provides a communication experience superior to existing solutions through its unique technology and features. These include, but are not limited to, unrestricted global use, the capability to support large-scale online meetings, and the provision of 100% free services. These are key factors that make it stand out in the market. **Global Accessibility** The Alien app employs advanced network optimization technologies, utilizing distributed data centers and local server caching techniques, to ensure high-speed and stable communication services worldwide. Regardless of whether users are in the United States, Europe, Asia, or Africa, Alien can automatically connect to the nearest server based on the user’s geographical location, optimize data transmission paths, thereby reducing latency and improving communication quality. Furthermore, Alien uses multi-layer encryption protocols to ensure data security during communication, while also complying with internet regulations and privacy protection standards around the world. This means that users do not have to worry about legal risks or privacy breaches due to regional differences. Alien’s user interface supports multiple languages, including but not limited to English, Chinese, Japanese, Korean, and Vietnamese, and plans to continuously update its language offerings in the future. This allows users of any language to easily get started and enjoy a convenient communication experience. **High-Quality Audio and Video Calls, and Solutions for Large-Scale Communication Demands** Alien’s audio and video call service supports high-definition transmission, a feature made possible by its advanced data compression algorithms and efficient transmission protocols. 
By dynamically adjusting the data compression ratio and bandwidth usage, Alien is capable of providing the best possible call quality according to the user’s current network conditions. Additionally, Alien utilizes end-to-end encryption for transmissions, not only ensuring the security of the calls but also effectively reducing latency during data transmission. For scenarios requiring support for large-scale online meetings with up to 100,000 participants, Alien employs a distributed server architecture and an intelligent traffic management system. Alien can dynamically balance a large number of data requests, ensuring that each participant experiences a stable and smooth call, even with a high volume of users. This capability is particularly suited for business users and large online events, offering a reliable communication platform for organizations of all sizes. **100% Free to Use** Unlike many communication applications, Alien offers 100% free service for both domestic and international calls. This policy has greatly promoted its global adoption rate, allowing more users to experience Alien’s high-quality communication services without any barriers. The implementation of this free policy is backed by Alien’s robust technical support and meticulous operational management. By optimizing server resource allocation and data traffic management, costs are reduced while maintaining service quality. With the aforementioned high-standard communication services, Alien is able to meet the growing communication needs of global users, setting new industry standards and potentially becoming the future unified online communication platform worldwide. Of course, Alien also positions itself as a leading global communication solution provider, bringing to the world a freer, more advanced, and more secure one-stop communication solution as a representative in the Web3 world.
alien_web3
1,886,848
Mastering Version Control with Git: Beyond the Basics
_Welcome Aboard Week 2 of DevSecOps in 5: Your Ticket to Secure Development Superpowers! Hey there,...
27,560
2024-06-14T03:48:00
https://dev.to/gauri1504/mastering-version-control-with-git-beyond-the-basics-44ib
devops, devsecops, cloud, security
_Welcome Aboard Week 2 of DevSecOps in 5: Your Ticket to Secure Development Superpowers! Hey there, security champions and coding warriors! Are you itching to level up your DevSecOps game and become an architect of rock-solid software? Well, you've landed in the right place! This 5-week blog series is your fast track to mastering secure development and deployment. Get ready to ditch the development drama and build unshakeable confidence in your security practices. We're in this together, so buckle up, and let's embark on this epic journey!_ --- Welcome to the world of Git, the ubiquitous version control system powering countless software development projects. While you might have grasped the fundamental commands for initializing repositories, committing changes, and pushing code, this blog delves deeper, exploring advanced strategies and workflows to supercharge your Git mastery. ## Branching Strategies: Beyond GitFlow Branching, a core concept in Git, allows developers to work on independent lines of code without affecting the main codebase. However, effective branching strategies are crucial for maintaining a clean and collaborative development environment. Here, we'll explore popular branching strategies and their nuances: #### GitFlow vs. GitHub Flow: These two prevalent branching strategies offer distinct approaches: #### GitFlow: Favored by larger teams, GitFlow employs a dedicated set of branches: #### Master: The sacrosanct production branch, holding only the most stable and thoroughly tested code. #### Develop: The central development branch where ongoing features and bug fixes are integrated. #### Feature Branches: Short-lived branches branched from develop for specific features, merged back after completion. #### Hotfix Branches: Short-lived branches branched directly from master for urgent bug fixes, later merged back to develop and master. Release Branches: Short-lived branches branched from develop to prepare releases for different environments. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rd07388its3zftj37eau.png) #### GitHub Flow: More lightweight and suitable for smaller teams, GitHub Flow utilizes: #### Master: Similar to GitFlow, holding only production-ready code. #### Feature Branches: Branched directly from master, these branches encompass features and bug fixes, merged directly into master after review and testing. #### Hotfix Branches: Similar to GitFlow, used for critical bug fixes, merged directly into master and deleted afterward. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5a1osj6j2n0re1myppe7.png) #### Strengths and Suitability: GitFlow offers structured control for larger teams, ensuring code stability before reaching production. However, it requires stricter enforcement of branch naming conventions and workflows. GitHub Flow is simpler and faster for smaller teams, focusing on continuous integration and rapid iteration. Choose the strategy that best suits your project's size, complexity, and team structure. #### Bonus Tip: Consider using a branching model visualization tool like "git branch" to gain a clear graphical view of your branches and their relationships. ## Feature Branch Workflows: Best Practices Feature branches are the workhorses of Git development. Here's how to optimize your workflow with them: #### Create Clear and Descriptive Branch Names: Use a consistent naming convention (e.g., feature/new-login-system) to improve project clarity and discoverability. 
#### Regular Code Reviews: Before merging back to the main branch, have another developer review your code for quality, efficiency, and adherence to coding standards. Utilize platforms like GitHub or GitLab's built-in review features for streamlined communication. #### Merging Strategies: Employ either "merge" or "rebase" strategies to integrate your feature branch: #### Merge: Creates a merge commit, recording the integration point between your branch and the main branch. This is simpler but can lead to a more complex Git history. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/efj71to6b9ahiwtxbson.png) #### Rebase: Re-writes your feature branch's commits on top of the latest main branch commits, resulting in a cleaner Git history. However, rebasing requires caution, as it can rewrite history seen by other collaborators. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/no72l54ss657ygf3t4dy.png) #### Conflict Resolution Techniques: Merging conflicts can arise when changes made on separate branches affect the same lines of code. Learn to identify and resolve conflicts using Git's built-in merge tools or manual editing. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3jkzk7vi9iy1in80qvic.png) ## Branching for Hotfixes and Releases Dedicated branches serve specific purposes beyond feature development: #### Hotfix Branches: For critical bug fixes that need immediate deployment, create hotfix branches directly from the master. Fix the issue, thoroughly test in a staging environment, and merge the hotfix back to master (and develop if applicable) for a quick resolution. Delete the hotfix branch once merged. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9wc9hbgwq8ellvdgv94x.png) #### Release Branches: Prepare releases with dedicated branches branched from develop. Integrate bug fixes, final feature polish, and documentation updates. Once rigorous testing is complete, merge the release branch to master to deploy. Consider tagging the commit in master for version control purposes. ## Collaborative Workflows with Git #### Forking and Pull Requests: Platforms like GitHub and GitLab allow developers to "fork" a repository, creating a personal copy. On their forks, they can create feature branches, implement changes, and then submit "pull requests" to the original repository. This triggers a code review process where maintainers can review the changes, suggest modifications, and approve the pull request to merge the code into the main branch. #### Resolving Merge Conflicts: When multiple developers work on the same files in separate branches, merge conflicts occur. Git will typically highlight these conflicts, and it's your responsibility to manually edit the files to resolve them. Tools like Git's merge tool or visual merge editors in Git clients can streamline this process. #### Working with a Remote Repository: Centralize your version control using a remote repository service like GitHub or GitLab. This offers numerous benefits: #### Collaboration: Team members can easily fork, clone, and push code to the remote repository, facilitating collaborative development. Version Control History: The remote repository maintains a complete Git history, allowing you to revert to previous versions or track code evolution. #### Backup and Disaster Recovery: In case of local machine failures, the remote repository ensures a safe backup of your codebase. 
## Git Hooks for Automated Tasks

Git hooks are scripts that run automatically at specific points in your Git workflow, adding automation and enforcing best practices.

#### Types of Git Hooks: There are several predefined hook types:

#### Pre-commit: Runs before a commit is made, allowing you to enforce coding standards or run linting checks.

#### Post-commit: Runs after a commit is made, useful for updating build versions or sending notifications.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqmapedl10bu9hub5ozt.png)

#### Pre-push: Runs before code is pushed to a remote repository, often used for final checks or tests.

#### Post-receive (server-side): Runs on the remote repository after code is pushed, potentially triggering deployments or integrations. Note that Git has no client-side "post-push" hook, so push-triggered automation lives on the server or in your CI system.

#### Common Git Hook Use Cases: Git hooks can automate various tasks:

#### Code Formatting: Enforce consistent code style using hooks that run code formatters like autopep8 or clang-format before commits.

#### Unit Tests: Run automated unit tests with hooks like pytest or Jest before pushing code, ensuring basic functionality before integration.

#### Static Code Analysis: Integrate static code analysis tools like Pylint or ESLint into your workflow via pre-commit hooks to identify potential errors or vulnerabilities.

#### Creating Custom Git Hooks: While predefined hooks cover common needs, you can create custom hooks using scripting languages like Bash or Python (a minimal Python sketch appears a little further below). Refer to Git's documentation for detailed instructions on creating and configuring custom hooks.

#### Git for Non-Programmers: Git isn't just for programmers! It's valuable for anyone working on collaborative projects with text-based files. Use it for managing documents, configuration files, or even creative writing projects with version control.

#### Advanced Git Topics:

#### Stashing: Temporarily save uncommitted changes for later use.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x6ztfitlypw84z9evpgz.png)

#### Submodules: Manage dependencies between different Git repositories within a larger project.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2aclbe66vn30rv3ipfx7.png)

#### Rebasing: Reorganize your Git history for a cleaner linear progression (use with caution!).

#### Using Git with Different Tools and IDEs: Popular development tools and IDEs like Visual Studio Code, IntelliJ IDEA, and Eclipse integrate seamlessly with Git, providing a smooth workflow for committing, branching, and merging code directly within your development environment.

## Deep Dive into Git: Advanced Techniques and Power User Tips

Now that you've grasped the fundamentals, let's delve into advanced Git concepts for seasoned users:

#### Advanced Branching Strategies: Feature Flags and Branch Toggling: Manage the rollout of new features to specific environments or user groups using feature flags. Couple this with Git branching to create feature branches with feature flags enabled, allowing for staged rollouts and controlled deployments.

#### Git Mirroring: Create a synchronized copy of a remote repository for disaster recovery or redundancy purposes using Git mirroring. This establishes a complete replica of the repository on another server, ensuring data safety in case of outages or accidental deletions, as illustrated below.
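As promised in the Git hooks discussion above, here is a minimal sketch of a custom `pre-commit` hook written in Python. It would be saved as `.git/hooks/pre-commit` and marked executable; the check it performs (a plain syntax compile of staged Python files) is only an illustrative stand-in for whatever formatter, linter, or test runner your team actually uses.

```python
#!/usr/bin/env python3
"""Illustrative pre-commit hook: save as .git/hooks/pre-commit and make it executable."""
import py_compile
import subprocess
import sys

# Ask Git which files are staged for this commit (added, copied, or modified).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

failures = []
for path in staged:
    if path.endswith(".py"):
        try:
            # Stand-in check: does the file at least compile? Swap in your linter here.
            py_compile.compile(path, doraise=True)
        except py_compile.PyCompileError as err:
            failures.append(f"{path}: {err.msg}")

if failures:
    print("pre-commit: blocking the commit, please fix:")
    for line in failures:
        print("  " + line)
    sys.exit(1)  # a non-zero exit status makes Git abort the commit

sys.exit(0)
```

Because the hook exits non-zero on failure, Git aborts the commit, which is exactly the enforcement point described in the use cases above.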
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfgyskg46rxgo4gvsn87.png) #### Cherry-Picking and Rebasing for Advanced Version Control: These techniques offer granular control over your Git history: #### Cherry-Picking: Select and apply specific commits from one branch to another, useful for incorporating bug fixes from a hotfix branch without merging the entire branch. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mi9w8wsnu5n99acsa9a.png) Rebasing (Interactive): Rewrite Git history by rearranging, editing, or squashing commits. Interactive rebasing allows for more fine-grained control over the rewriting process. Utilize these techniques cautiously, as they can alter history seen by collaborators and require careful coordination. ## Git Porcelain Commands and Refactoring #### Detachable HEAD and Rebasing Workflows: The HEAD in Git refers to the currently checked-out commit. A detachable HEAD allows you to detach it from the working directory, enabling advanced workflows like complex rebases. This is a powerful but conceptually challenging feature. #### Interactive Rebasing: As mentioned earlier, interactive rebasing allows for editing existing commits and restructuring your Git history interactively. You can: Split a large commit into smaller, more focused commits. Combine multiple commits into a single commit. Edit the commit message of an existing commit. Reorder commits to reflect the logical flow of development. Git Porcelain Commands for Everyday Tasks: Git offers a suite of powerful "porcelain" commands for various use cases: `git add -p (patch): ` Stage specific changes within a file instead of the entire file. `git stash:` Temporarily stash uncommitted changes for later retrieval, useful for switching contexts or testing branches. `git lfs (Large File Storage):` Manage large files (videos, images) efficiently within your repository using Git LFS, which stores them separately without bloating the repository size ## Git with Large Codebases #### Git Large File Storage (LFS): As mentioned earlier, Git LFS is crucial for managing large files within a Git repository. It tracks these files in the repository but stores them in a separate location, keeping the main repository lean and efficient. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pyamfu90d4mlhnksb4e2.png) #### Submodules for Modular Development: Break down large projects into smaller, modular components managed by separate Git repositories. You can integrate these submodules into a larger project (monorepo) while maintaining independent version control for each module. ## Git for Distributed Teams and Continuous Integration (CI): Leveraging Git for Distributed Teams: Git excels in geographically dispersed teams. Here's how: #### Remote Repositories: Centralize version control on platforms like GitHub or GitLab, enabling everyone to clone, push, and pull code seamlessly. #### Branching Strategies: Employ clear branching strategies like GitFlow or GitHub Flow to manage concurrent development and avoid conflicts. #### Communication and Coordination: Maintain clear communication channels and utilize tools like pull request reviews and issue trackers for effective collaboration. #### Git Integration with CI/CD Pipelines: Continuous Integration and Continuous Delivery (CI/CD) pipelines automate builds, testing, and deployments. 
Integrate Git with your CI/CD pipeline to trigger these processes automatically upon code changes: #### CI Triggers: Configure your CI system to trigger builds and tests whenever code is pushed to a specific branch. Deployment Automation: Automate deployments to different environments (staging, production) based on successful builds and tests. #### Git Hooks for CI Pipelines: Custom Git hooks can trigger specific actions within your CI pipeline: #### Pre-push Hooks: Run code quality checks or unit tests before pushing code, preventing regressions before they reach the remote repository. #### Post-push Hooks: Trigger deployments or automated notifications upon successful pushes. #### Git for Version Control of Non-Code Assets: Git isn't limited to code. Use it for managing version control of non-code assets like: #### Documentation: Track changes to documentation files over time. Configuration Files: Maintain different configurations for development, staging, and production environments. #### Design Mockups: Version control design assets like mockups and prototypes for easy collaboration and iteration. #### Visualizing Git History: Tools like "git log --graph" or graphical clients like GitKraken can visualize your Git history in a user-friendly format, helping you understand branching and merging activity at a glance. ## Conclusion This comprehensive guide has equipped you with the knowledge and techniques to navigate Git beyond the basics. Remember, mastering Git is a continuous journey. Keep practicing, experiment with these concepts, and leverage the vast online Git community for further exploration. Here are some additional resources to fuel your Git mastery: Official Git Documentation: https://git-scm.com/ - The definitive source for all things Git, with in-depth explanations, commands, and tutorials. Interactive Git Training: https://learngitbranching.js.org/ - A hands-on platform to learn Git fundamentals and experiment with branching and merging in a simulated environment. Git SCM Blog: https://git-scm.com/ - Stay updated on the latest Git developments, news, and best practices from the Git team. Online Git Communities: Platforms like Stack Overflow, GitHub Discussions, and Git forums offer a wealth of knowledge and assistance from experienced Git users. By actively engaging with these resources and putting your newfound knowledge into practice, you'll transform yourself into a Git power user, ready to tackle any version control challenge your projects throw your way. Happy branching! --- I'm grateful for the opportunity to delve into Mastering Version Control with Git: Beyond the Basics with you today. It's a fascinating area with so much potential to improve the security landscape. Thanks for joining me on this exploration of Mastering Version Control with Git: Beyond the Basics. Your continued interest and engagement fuel this journey! If you found this discussion on Mastering Version Control with Git: Beyond the Basics helpful, consider sharing it with your network! Knowledge is power, especially when it comes to security. Let's keep the conversation going! Share your thoughts, questions, or experiences Mastering Version Control with Git: Beyond the Basics in the comments below. Eager to learn more about DevSecOps best practices? Stay tuned for the next post! By working together and adopting secure development practices, we can build a more resilient and trustworthy software ecosystem. Remember, the journey to secure development is a continuous learning process. 
Here's to continuous improvement!🥂
gauri1504
1,887,904
How Data Integration Is Evolving Beyond ETL
Forward-looking technologies are generally cutting-edge and used by early adopters, offering some...
0
2024-06-14T03:47:11
https://dev.to/seatunnel/how-data-integration-is-evolving-beyond-etl-4gn1
datascience, dataintegration, opensource, etl
> Forward-looking technologies are generally cutting-edge and used by early adopters, offering some business value. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v7pyh26yspntnczhuly1.png) Image by GrumpyBeere from Pixabay. When it comes to data integration, some people may wonder what there is to discuss— isn’t it just ETL? That is, extracting from various databases, transforming, and ultimately loading into different data warehouses. However, with the rise of big data, data lakes, real-time data warehouses, and large-scale models, the architecture of data integration has evolved from the ETL of the data warehouse era to the ELT of the big data era, and now to the current stage of EtLT. In the global tech landscape, emerging EtLT companies like FiveTran, Airbyte, and Matillion have emerged, while giants like IBM have invested $2.3 billion in acquiring StreamSets and webMethods to upgrade their product lines from ETL to webMethods (DataOps). Whether you’re a manager in an enterprise or a professional in the data field, it’s essential to re-examine the changes in data integration in recent times and future trends. ## Chapter 1: ETL to EtLT ### ETL Architecture Most experts in the data field are familiar with the term ETL. During the heyday of data warehousing, ETL tools like IBM DataStage, Informatica, Talend, and Kettle were popular. Some companies still use these tools to extract data from various databases, transform it, and load it into different data warehouses for reporting and analysis. The pros and cons of the ETL architecture are as follows: **Advantages of ETL Architecture**: - Data Consistency and Quality - Integration of Complex Data Sources - Clear Technical Architecture - Implementation of Business Rules **Disadvantages of ETL Architecture:** - Lack of Real-time Processing - High Hardware Costs - Limited Flexibility - Maintenance Costs - Limited Handling of Unstructured Data ### ELT Architecture With the advent of the big data era, facing the challenges of ETL’s inability to load complex data sources and its poor real-time performance, a variant of ETL architecture, ELT, emerged. Companies started using ELT tools provided by various data warehousing vendors, such as Teradata’s BETQ/Fastload/TPT and Hadoop Hive’s Apache Sqoop. The characteristics of ELT architecture include directly loading data into data warehouses or big data platforms without complex transformations and then using SQL or H-SQL to process the data. The pros and cons of the ELT architecture are as follows: Advantages of ELT Architecture: - Handling Large Data Volumes - Improved Development and Operational Efficiency - Cost-effectiveness - Flexibility and Scalability - Integration with New Technologies Disadvantages of ELT Architecture: - Limited Real-time Support - High Data Storage Costs - Data Quality Issues - Dependence on Target System Capabilities ### EtLT Architecture The weaknesses of ELT architecture in real-time processing and handling unstructured data are highlighted by the popularity of data lakes and real-time data warehouses. Thus, a new architecture, EtLT, emerged. EtLT architecture enhances ELT by adding real-time data extraction from sources like SaaS, Binlog, and cloud components, as well as incorporating small-scale transformations before loading the data into the target storage. This trend has led to the emergence of several specialized companies worldwide, such as StreamSets, Attunity (acquired by Qlik), Fivetran, and SeaTunnel by the Apache Foundation. 
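To make the shape of that pipeline easier to picture, here is a purely conceptual sketch in Python. It is not the API of SeaTunnel, Fivetran, or any other tool named above; the source, sink, and field names are invented for illustration. The point is the division of labour: extract change events in real time, apply only small, stateless transformations in flight (the lowercase "t"), load into the warehouse, and leave the heavy business transformations (the uppercase "T") to SQL inside the warehouse.

```python
# Conceptual EtLT sketch; names like cdc_stream, warehouse and write_batch are
# invented for illustration and do not belong to any real tool's API.
from datetime import datetime, timezone
from typing import Dict, Iterator


def extract(cdc_stream: Iterator[Dict]) -> Iterator[Dict]:
    """E: consume change events (e.g. from a Binlog/CDC feed) as they arrive."""
    for event in cdc_stream:
        yield event


def lightly_transform(event: Dict) -> Dict:
    """t: small, stateless clean-ups only (renames, type fixes, masking).
    Joins and aggregations are deferred to SQL in the warehouse (the big T)."""
    row = dict(event["payload"])
    row["op"] = event["op"]                      # insert / update / delete marker
    row["_loaded_at"] = datetime.now(timezone.utc).isoformat()
    row.pop("credit_card_number", None)          # drop a sensitive field in flight
    return row


def load(rows: Iterator[Dict], warehouse) -> None:
    """L: micro-batch the cleaned rows into a raw table in the target warehouse."""
    warehouse.write_batch("orders_raw", list(rows))


def run_pipeline(cdc_stream: Iterator[Dict], warehouse) -> None:
    load((lightly_transform(e) for e in extract(cdc_stream)), warehouse)
```

Keeping the in-flight step this thin is what lets the pipeline stay streaming-friendly, while the warehouse, lake, or large model downstream does the expensive work.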
The pros and cons of the EtLT architecture are as follows: Advantages of EtLT Architecture: - Real-time Data Processing - Support for Complex Data Sources - Cost Reduction - Flexibility and Scalability - Performance Optimization - Support for Large Models - Data Quality and Governance Disadvantages of EtLT Architecture: - Technical Complexity - Dependence on Target System Capabilities - Management and Monitoring Challenges - Increased Data Change Management Complexity - Dependency on Tools and Platforms Overall, in recent years, with the rise of data, real-time data warehouses, and large models, the EtLT architecture has gradually become mainstream worldwide in the field of data integration. For specific historical details, you can refer to the relevant content in my article “[ELT is dead, and EtLT will be the end of modern data processing architecture“](https://blog.devgenius.io/elt-is-dead-and-etlt-will-be-the-end-of-modern-data-processing-architecture-154b87c1cce0?gi=e70a383e1b1a). Under this overarching trend, let’s interpret the maturity model of the entire data integration track. Overall, there are four clear trends: 1. In the trend of ETL evolving into EtLT, the focus of data integration has shifted from traditional batch processing to real-time data collection and batch-stream integrated data integration. The hottest scenarios have also shifted from past single-database batch integration scenarios to hybrid cloud, SaaS, and multiple data sources integrated in a batch-stream manner. 2. Data complexity transformation has gradually shifted from traditional ETL tools to processing complex transformations in data warehouses. At the same time, support for automatic schema changes (Schema Evolution) in the case of DDL (field definition) changes during real-time data integration has also begun. Even adapting to DDL changes in lightweight transformations has become a trend. 3. Support for data source types has expanded from files and traditional databases to include emerging data sources, open-source big data ecosystems, unstructured data systems, cloud databases, and support for large models. These are also the most common scenarios encountered in every enterprise, and in the future, real-time data warehouses, lakes, clouds, and large models will be used in different scenarios within each enterprise. 4. In terms of core capabilities and performance, diversity of data sources, high accuracy, and ease of troubleshooting are the top priorities for most enterprises. Conversely, there are not many examination points for capabilities such as high throughput and high real-time performance. ## Chapter 2: Data Integration Maturity Model Interpretation ### Data Production The data production segment refers to how data is obtained, distributed, transformed, and stored within the context of data integration. This part poses the greatest workload and challenges in integrating data. When users in the industry use data integration tools, their primary consideration is whether the tools support integration with their databases, cloud services, and SaaS systems. If these tools do not support the user’s proprietary systems, then additional costs are incurred for customizing interfaces or exporting data into compatible files, which can pose challenges to the timeliness and accuracy of data. - Data Collection: Most data integration tools now support batch collection, rate limiting, and HTTP collection. 
However, real-time data acquisition (CDC) and DDL change detection are still in their growth and popularity stages. Particularly, the ability to handle DDL changes in source systems is crucial. Real-time data processing is often interrupted by changes in source system structures. Effectively addressing the technical complexity of DDL changes remains a challenge, and various industry vendors are still exploring solutions. - Data Transformation: With the gradual decline of ETL architectures, complex business processing (e.g., Join, Group By) within integration tools has gradually faded into history. Especially in real-time scenarios, there is limited memory available for operations like stream window Join and aggregation. Therefore, most ETL tools are migrating towards ELT and EtLT architectures. Lightweight data transformation using SQL-like languages has become mainstream, allowing developers to perform data cleaning without having to learn various data integration tools. Additionally, the integration of data content monitoring and DDL change transformation processing, combined with notification, alerts, and automation, is making data transformation a more intelligent process. - Data Distribution: Traditional JDBC loading, HTTP, and bulk loading have become essential features of every mainstream data integration tool, with competition focusing on the breadth of data source support. Automated DDL changes reduce developers’ workload and ensure the smooth execution of data integration tasks. Various vendors employ their methods to handle complex scenarios where data table definitions change. Integration with large models is emerging as a new trend, allowing internal enterprise data to interface with large models, though it is currently the domain of enthusiasts in some open-source communities. - Data Storage: Next-generation data integration tools come with caching capabilities. Previously, this caching existed locally, but now distributed storage and distributed checkpoint/snapshot technologies are used. Effective utilization of cloud storage is also becoming a new direction, especially in scenarios involving large data caches requiring data replay and recording. - Data Structure Migration: This part deals with whether automatic table creation and inspection can be performed during the data integration process. Automatic table creation involves automatically creating tables/data structures in the target system that are compatible with those in the source system. This significantly reduces the workload of data development engineers. Automatic schema inference is a more complex scenario. In the EtLT architecture, in the event of real-time data DDL changes or changes in data fields, automatic inference of their rationality allows users to identify issues with data integration tasks before they run. The industry is still in the experimentation phase regarding this aspect. ### Computational Model The computational model evolves with the changing landscape of ETL, ELT, and EtLT. It has transitioned from emphasizing computation in the early stages to focusing on transmission in the middle stages, and now emphasizes lightweight computation during real-time transmission: - Offline Data Synchronization: This has become the most basic data integration requirement for every enterprise. However, the performance varies under different architectures. Overall, ETL architecture tools have much lower performance than ELT and EtLT tools under conditions of large-scale data. 
- Real-time Data Synchronization: With the popularity of real-time data warehouses and data lakes, real-time data synchronization has become an essential factor for every enterprise to consider when integrating data. More and more companies are beginning to use real-time synchronization. - Batch-Streaming Integration: New-generation data integration engines are designed from the outset to consider batch-stream integration, providing more effective synchronization methods for different enterprise scenarios. In contrast, most traditional engines were designed to focus on either real-time or offline scenarios, resulting in poor performance for batch data synchronization. Unified use of batch and streaming can perform better in data initialization and hybrid batch-stream environments. - Cloud Native: Overseas data integration tools are more aggressive in this aspect because they are billed on a pay-as-you-go basis. Therefore, the ability to quickly obtain/release responsive computing resources for each task is the core competitiveness and profit source for every company. In contrast, progress in big data cloud-native integration in China is still relatively slow, so it remains a subject of exploration for only a few companies domestically. ### Data Types and Typical Scenarios - File Collection: This is a basic feature of every integration tool. However, unlike in the past, apart from standard text files, the collection of data in formats like Parquet and ORC has become standard. - Big Data Collection: With the popularity of emerging data sources such as Snowflake, Redshift, Hudi, Iceberg, ClickHouse, Doris, and StarRocks, traditional data integration tools are significantly lagging in this regard. Users in China and the United States are generally at the same level in terms of big data usage, hence requiring vendors to adapt to these emerging data sources. - Binlog Collection: This is a burgeoning industry in China, as it has replaced traditional tools like DataStage and Informatica during the process of informatization. However, the replacement of databases like Oracle and DB2 has not been as rapid, resulting in a large number of specialized Binlog data collection companies emerging to solve CDC problems overseas. - Informatization Data Collection: This is a scenario unique to China. With the process of informatization, numerous domestic databases have emerged. Whether these databases’ batch and real-time collection can be adapted, presents a higher challenge for Chinese vendors. - Sharding: In most large enterprises, sharding is commonly used to reduce the pressure on databases. Therefore, whether data integration tools support sharding has become a standard feature of professional data integration tools. - Message Queues: Driven by data lakes and real-time data warehouses, everything related to real-time is booming. Message queues, as the representatives of enterprise real-time data exchange centers, have become indispensable options for advanced enterprises. Whether data integration tools support a sufficient number of memory/disk message queue types has become one of the hottest features. - Unstructured Data: Non-structural data sources such as MongoDB and Elasticsearch have become essential for enterprises. Data integration also supports such data sources correspondingly. 
- Big Model Data: Numerous startups worldwide are working on quickly interacting with enterprise data and large - SaaS integration: This is a very popular feature overseas but has yet to generate significant demand in China. - Data unified scheduling: Integrating data integration with scheduling systems, especially coordinating real-time data through scheduling systems and subsequent data warehouse tasks, is essential for building real-time data warehouses. - Real-time data warehouse/data lake: These are currently the most popular scenarios for enterprises. Real-time data entry into warehouses/lakes enables the advantages of next-generation data warehouses/lakes to be realized. - Data disaster recovery backup: With the enhancement of data integration real-time capabilities and CDC support, integration in the traditional disaster recovery field has emerged. Some data integration and disaster recovery vendors have begun to work in each other’s areas. However, due to significant differences in detail between disaster recovery and integration scenarios, vendors penetrating each other’s domains may lack functionality and require iterative improvements over time. ### Operation and Monitoring In data integration, operation and monitoring are essential functionalities. Effective operation and monitoring significantly reduce the workload of system operation and development personnel in case of data issues. - Flow control: Modern data integration tools control traffic from multiple aspects such as task parallelism, single-task JDBC parallelism, and single JDBC reading volume, ensuring minimal impact on source systems. - Task/table-level statistics: Task-level and table-level synchronization statistics are crucial for managing operations and maintenance personnel during data integration processes. - Step-by-step trial run: Due to support for real-time data, SaaS, and lightweight transformation, running a complex data flow directly becomes more complicated. Therefore, some advanced companies have introduced step-by-step trial run functionality for efficient development and operation. - Table change event capture: This is an emerging feature in real-time data processing, allowing users to make changes or alerts in a predefined manner when table changes occur in the source system, thereby maximizing the stability of real-time data. - Batch-stream integrated scheduling: After real-time CDC and stream processing, integration with traditional batch data warehouse tasks is inevitable. However, ensuring accurate startup of batch data without affecting data stream operation remains a challenge. This is why integration and batch-stream integrated scheduling are related. - Intelligent diagnosis/tuning/resource optimization: In cluster and cloud-native scenarios, effectively utilizing existing resources and recommending correct solutions in case of problems are hot topics among the most advanced data integration companies. However, achieving production-level intelligent applications may take some time. ### Core Capabilities There are many important functionalities in data integration, but the following points are the most critical. The lack of these capabilities may have a significant impact during enterprise usage. - Full/incremental synchronization: Separate full/incremental synchronization has become a necessary feature of every data integration tool. 
However, the automatic switch from full to incremental mode has not yet become widespread among small and medium-sized vendors, requiring manual switching by users. - CDC capture: As enterprise demands for real-time data increase, CDC capture has become a core competitive advantage of data integration. The support for the CDC from multiple data sources, the requirements, and the impact of the CDC on source databases, often become the core competitiveness of data integration tools. - Data diversity: Supporting multiple data sources has become a “red ocean competition” in data integration tools. Better support for users’ existing system data sources often leads to a more advantageous position in business competition. - Checkpoint resumption: Whether real-time and batch data integration supports checkpoint resumption is helpful in quickly recovering from error data scenes in many scenarios or assisting in recovery in some exceptional cases. However, only a few tools currently support this feature. - Concurrency/limiting speed: Data integration tools need to be highly concurrent when speed is required and effectively reduce the impact on source systems when slow. This has become a necessary feature of integration tools. - Multitable synchronization/whole-database migration: This refers not only to convenient selection in the interface but also to whether JDBC or existing integration tasks can be reused at the engine level, thereby making better use of existing resources and completing data integration quickly. ### Performance Optimization In addition to core capabilities, performance often represents whether users need more resources or whether the hardware and cloud costs of data integration tools are low enough. However, extreme performance is currently unnecessary, and it is often considered the third factor after interface support and core capabilities. - Timeliness: Minute-level integration has gradually exited the stage of history, and supporting second-level data integration has become a very popular feature. However, millisecond-level data integration scenarios are still relatively rare, mostly appearing in disaster recovery special scenarios. - Data scale: Most scenarios currently involve Tb-level data integration, while Pb-level data integration is implemented by open-source tools used by Internet giants. Eb-level data integration will not appear in the short term. - High throughput: High throughput mainly depends on whether integration tools can effectively utilize network and CPU resources to achieve the maximum value of theoretical data integration. In this regard, tools based on ELT and EtLT have obvious advantages over ETL tools. - Distributed integration: Dynamic fault tolerance is more important than dynamic scaling and cloud native. The ability of a large data integration task to automatically tolerate errors in hardware and network failure situations is a basic function when doing large-scale data integration. Scalability and cloud native are derived requirements in this scenario. - Accuracy: How data integration ensures consistency is a complex task. In addition to using multiple technologies to ensure “Exactly Once,” CRC verification is done. Third-party data quality inspection tools are also needed rather than just “self-certification.” Therefore, data integration tools often cooperate with data scheduling tools to verify data accuracy. - Stability: This is the result of multiple functions. 
Ensuring the stability of individual tasks is important in terms of availability, task isolation, data isolation, permissions, and encryption control. When problems occur in a single task or department, they should not affect other tasks and departments. - Ecology: Excellent data integration tools have a large ecosystem that supports synchronization with multiple data sources and integration with upstream and downstream scheduling and monitoring systems. Moreover, tool usability is also an important indicator involving enterprise personnel costs. ## Chapter 3: Trends In the coming years, with the proliferation of the EtLT architecture, many new scenarios will emerge in data integration, while data virtualization and DataFabric will also have significant impacts on future data integration: - Multicloud Integration: This is already widespread globally, with most data integrations having cross-cloud integration capabilities. In China, due to the limited prevalence of clouds, this aspect is still in the early incubation stage. - ETL Integration: As the ETL cycle declines, most enterprises will gradually migrate from tools like Kettle, Informatica, Talend, etc., to emerging EtLT architectures, thereby supporting batch-stream integrated data integration and more emerging data sources. - ELT: Currently, most mainstream big data architectures are based on ELT. With the rise of real-time data warehouses and data lakes, ELT-related tools will gradually upgrade to EtLT tools, or add real-time EtLT tools to compensate for the lack of real-time data support in ELT architectures. - EtLT: Globally, companies like JPMorgan, Shein, Shoppe, etc., are embedding themselves in the EtLT architecture. More companies will integrate their internal data integration tools into the EtLT architecture, combined with batch-stream integrated scheduling systems to meet enterprise DataOps-related requirements. - Automated Governance: With the increase in data sources and real-time data, traditional governance processes cannot meet the timeliness requirements for real-time analysis. Automated governance will gradually rise within enterprises in the next few years. - Big Model Support: As large models penetrate enterprise applications, providing data to large models becomes a necessary skill for data integration. Traditional ETL and ELT architectures are relatively difficult to adapt to real-time, large batch data scenarios, so the EtLT architecture will deepen its penetration into most enterprises along with the popularization of large models. - ZeroETL: This is a concept proposed by Amazon, suggesting that data stored on S3 can be accessed directly by various engines without the need for ETL between different engines. In a sense, if the data scenario is not complex, and the data volume is small, a small number of engines can meet the OLAP and OLTP requirements. However, due to limited scenario support and poor performance, it will take some time for more companies to recognize this approach. - DataFabric: Currently, many companies propose using DataFabric metadata to manage all data, eliminating the need for ETL/ELT during queries and directly accessing underlying data. This technology is still in the experimental stage, with significant challenges in query response and scenario adaptation. It can meet the needs of simple scenarios with small data queries, but for complex big data scenarios, the EtLT architecture will still be necessary for the foreseeable future. 
- Data Virtualization: The basic idea is similar to the execution layer of DataFabric. Data does not need to be moved; instead, it is queried directly through ad-hoc query interfaces and compute engines (e.g., Presto, TrinoDB) to translate data stored in underlying data storage or data engines. However, in the case of large amounts of data, engine query efficiency and memory consumption often fail to meet expectations, so it is only used in scenarios with small amounts of data. From an overall trend perspective, with the explosive growth of global data, the emergence of large models, and the proliferation of data engines for various scenarios, the rise of real-time data has brought data integration back to the forefront of the data field. If data is considered a new energy source, then data integration is like the pipeline of this new energy. The more data engines there are, the higher the efficiency, data source compatibility, and usability requirements of the pipeline will be. Although data integration will eventually face challenges from Zero ETL, data virtualization, and DataFabric, in the visible future, the performance, accuracy, and ROI of these technologies have always failed to reach the level of popularity of data integration. Otherwise, the most popular data engines in the United States should not be SnowFlake or DeltaLake but TrinoDB. Of course, I believe that in the next 10 years, under the circumstances of DataFabric x large models, virtualization + EtLT + data routing may be the ultimate solution for data integration. In short, as long as data volume grows, the pipelines between data will always exist. ## How to Use the Data Integration Maturity Model Firstly, the maturity model provides a comprehensive view of current and potential future technologies that may be utilized in data integration over the next 10 years. It offers individuals insight into personal skill development and assists enterprises in designing and selecting appropriate technological architectures. Additionally, it guides key development areas within the data integration industry. For enterprises, technology maturity aids in assessing the level of investment in a particular technology. For a mature technology, it is likely to have been in use for many years, supporting business operations effectively. However, as technological advancements reach a plateau, consideration can be given to adopting newer, more promising technologies to achieve higher business value. Technologies in decline are likely to face increasing limitations and issues in supporting business operations, gradually being replaced by newer technologies within 3-5 years. When introducing such technologies, it’s essential to consider their business value and the current state of the enterprise. Popular technologies, on the other hand, are prioritized by enterprises due to their widespread validation among early adopters, with the majority of businesses and technology companies endorsing them. Their business value has been verified, and they are expected to dominate the market in the next 1-2 years. Growing technologies require consideration based on their business value, having passed the early adoption phase, and having their technological and business values validated by early adopters. They have not yet been fully embraced in the market due to reasons such as branding and promotion but are likely to become popular technologies and future industry standards. 
Forward-looking technologies are generally cutting-edge and used by early adopters, offering some business value. However, their general applicability and ROI have not been fully validated. Enterprises can consider limited adoption in areas where they provide significant business value. For individuals, mature and declining technologies offer limited learning and research value, as they are already widely adopted. Focusing on popular technologies can be advantageous for employment prospects, as they are highly sought after in the industry. However, competition in this area is fierce, requiring a certain depth of understanding to stand out. Growing technologies are worth delving into as they are likely to become popular in the future, and early experience can lead to expertise when they reach their peak popularity. Forward-looking technologies, while potentially leading to groundbreaking innovations, may also fail. Individuals may choose to invest time and effort based on personal interests. While these technologies may be far from job requirements and practical application, forward-thinking companies may inquire about them during interviews to assess the candidate’s foresight.
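To make the EtLT pattern discussed above more concrete, here is a minimal, tool-agnostic sketch in plain Python (not any vendor's actual API; all names and data are illustrative). Extraction and a lightweight in-flight transformation ("t") happen in the pipeline, while heavy business transformations ("T") are deferred to the target engine.

```
# Sketch of the EtLT idea: Extract, lightweight transform in flight ("t"),
# Load, and leave heavy Transformations ("T") to run inside the target engine.
from datetime import datetime, timezone


def extract(rows):
    # Pretend these rows came from a CDC stream or a JDBC batch read.
    yield from rows


def light_transform(row):
    # Lightweight, schema-level work only: rename, type-cast, add metadata.
    return {
        "order_id": int(row["id"]),
        "amount": float(row["amt"]),
        "_ingested_at": datetime.now(timezone.utc).isoformat(),
    }


def load(rows, sink):
    # In a real tool this would write micro-batches to the lake or warehouse.
    sink.extend(rows)


warehouse = []
source = [{"id": "1", "amt": "9.90"}, {"id": "2", "amt": "12.50"}]
load((light_transform(r) for r in extract(source)), warehouse)
print(warehouse)  # heavy aggregation/joins ("T") happen later, in SQL, inside the engine
```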
seatunnel
1,887,902
Introduction to Temporary Environments( Ephemeral Environments): A Beginner's Guide
The article examines the distinctions between conventional persistent staging environments and...
0
2024-06-14T03:43:12
https://dev.to/emma_in_tech/introduction-to-temporary-environments-ephemeral-environments-a-beginners-guide-2ecm
aws, devops, cloudcomputing, development
The article examines the distinctions between conventional persistent staging environments and contemporary ephemeral environments for software testing. It highlights the issues associated with shared persistent environments, such as infrastructure overhead, queueing delays, and the risk of significant changes. On the other hand, ephemeral environments offer automated setup, isolation, and effortless creation and deletion. The article also provides guidelines for setting up ephemeral environments independently or utilizing an environment-as-a-service solution to streamline the process. ## The Drawbacks of Traditional Environments Ideally, code changes should be tested in a production-like environment before going live. However, using traditional persistent staging environments poses several practical challenges. ### Infrastructure Overhead The staging environment must replicate all production infrastructure components, such as frontends, backends, and databases. This requires extra effort to maintain and synchronize infrastructure changes across both environments. Staging can easily diverge from production if changes are forgotten or not perfectly mirrored. ### Queueing Delays With only one staging environment, developers must wait their turn to deploy changes. This reduces release velocity and productivity. Some developers may resort to risky workarounds to release faster, leading to problems from untested changes. ### Risk of "Big Bang" Changes If changes are not regularly deployed from staging to production, staging can get significantly ahead. Deploying to production then involves multiple commits at once, increasing the risk of breaking something. These challenges highlight why traditional environments often fail to ensure safe testing as intended. Modern ephemeral environments offer a better solution. ## The Benefits of Ephemeral Environments Ephemeral environments provide several significant advantages over traditional persistent staging environments. ### Automated Infrastructure Ephemeral environments are created on-demand, automatically setting up the necessary infrastructure to match the current production setup. This ensures consistency without requiring manual intervention from engineers. Any broken environments can be swiftly replaced. ### Complete Isolation Each pull request receives its own newly created environment running in parallel. This eliminates queueing delays and allows testing without interference from other changes. There are no risky "big bang" deployments to production. ### Short Life Span Ephemeral environments exist only as long as needed. They can be configured to be created when a pull request opens and destroyed when it merges. This eliminates the cost of maintaining unused environments, leading to substantial cost savings. These benefits enable developers to test safely and release quickly, addressing the common issues of traditional setups. ## Implementing Ephemeral Environments Setting up ephemeral environments requires some initial effort, but the benefits are substantial. ### Prerequisites Some essential infrastructure components should already be in place: - Containerized service instances (e.g., Docker, Kubernetes) for easy setup and teardown - A CI/CD pipeline for managing deployment and code integration ### Configuration Steps The main steps to implement ephemeral environments include: 1. **Set Up Production Infrastructure Declaratively** - Define your production infrastructure using a declarative approach to ensure consistency. 2. 
**Create a Test Database with Sample Data** - Set up a test database that includes sample data to facilitate accurate testing. 3. **Add Declarative Infrastructure with Dynamic Naming** - Implement infrastructure that dynamically names resources based on branches or commits. 4. **Trigger Deployment in the CI/CD Pipeline** - Ensure your CI/CD pipeline can deploy the full stack automatically. 5. **Generate Secure URLs for Access** - Create secure URLs to access the deployed instances for testing purposes. 6. **Replace Old Environments with New Ones** - Automatically replace outdated environments with new ones when code updates are made. 7. **Configure Auto-Removal After Inactivity** - Set up auto-removal of environments after periods of inactivity to manage resources efficiently. 8. **Prevent Direct Deployment to Production** - Ensure the pipeline does not deploy directly to production. Implement a manual trigger for production deployment. These steps streamline the workflow, but fully automating ephemeral environments does require a significant initial effort. ## Conclusion In summary, ephemeral environments offer modern solutions to the persistent challenges associated with traditional staging environments. By automating the provisioning and teardown of isolated environments on demand, they facilitate rapid and safe iteration without the delays and overhead typical of traditional setups. Implementing ephemeral environments requires an upfront investment in adopting declarative infrastructure, CI/CD pipelines, and containerization. However, the long-term productivity and stability benefits make this investment worthwhile for most development teams.
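As a rough illustration of the configuration steps above (dynamic naming per pull request, full-stack deployment, and teardown), here is a minimal script that a CI job might call. The Kubernetes-namespace approach, manifest path, and preview URL scheme are assumptions made for the sketch, not prescriptions.

```
# Sketch: per-PR ephemeral environment naming and lifecycle.
# Assumptions (not from the article): Kubernetes namespaces per PR,
# manifests in ./manifests, and a PR number passed in by the CI system.
import subprocess
import sys


def env_name(pr_number: int) -> str:
    # Dynamic naming keyed to the pull request keeps environments isolated.
    return f"pr-{pr_number}"


def create(pr_number: int) -> None:
    name = env_name(pr_number)
    # Create an isolated namespace and deploy the full stack into it.
    subprocess.run(["kubectl", "create", "namespace", name], check=True)
    subprocess.run(["kubectl", "apply", "-n", name, "-f", "manifests/"], check=True)
    print(f"https://{name}.preview.example.com")  # hypothetical secure preview URL


def destroy(pr_number: int) -> None:
    # Tear everything down when the PR merges or goes inactive.
    subprocess.run(["kubectl", "delete", "namespace", env_name(pr_number)], check=True)


if __name__ == "__main__":
    action, pr = sys.argv[1], int(sys.argv[2])
    create(pr) if action == "create" else destroy(pr)
```

The CI pipeline would invoke this with `create` when a pull request opens and `destroy` when it merges or times out, matching the lifecycle described above.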
emma_in_tech
1,887,901
What is inventory mapping?
Inventory mapping Effective inventory mapping involves a set of rules to ensure...
0
2024-06-14T03:38:06
https://dev.to/marlonjerold/o-que-e-um-mapeamento-de-estoque-1kmn
softwaredevelopment, projectideas
**Inventory mapping**

Effective inventory mapping involves a set of rules to ensure that inventory and stock management processes are accurate and efficient, with the goal of meeting the company's needs.

Storage and location!
- Definition of storage locations (shelves, sections, warehouses)
- Storage capacity and limits
- An addressing system for quickly locating items

**What is a WMS?**

Warehouse Management System = warehouse management. Think about your business system: you have inventory control, with features such as stock in and stock out and everything else that belongs to inventory control. So far so good; it helps a lot with registering products and knowing what you have. But where are things actually stored? How do you identify them? How do you improve the customer relationship by delivering faster? The bigger the warehouse gets, the more complex it becomes to understand what is going on inside it. Each person may come up with their own logic for storing items, with no way to tell which logic is best; just imagine the mess. But don't worry: how about creating standards?

**What do we have inside the warehouse?**
- Raw materials
- Work in process
- Finished goods

It is worth mapping what is raw material, what is work in process, and above all what is a finished product.

**Benefits:**

By implementing a storage system you get an excellent reduction in inventory losses, not to mention the productivity gains. Thinking about people, you would no longer depend on one specific person to know where the items are; there will be a sensible standard based on rules, sizes, and other variables. Data collectors, barcodes, item addressing, and no more paper!

I hope you enjoyed today's post. See you next time!
marlonjerold
1,887,900
How to connect MySQL using SQLynx
Query your MySQL data to safely and easily write, update, and delete without accidentally dropping...
0
2024-06-14T03:37:25
https://dev.to/concerate/how-to-connect-mysql-using-sqlynx-50g3
Query your MySQL data to safely and asily write, update, and delete without accidentally dropping tables. You can also inspect your schema, save and share queries, and run ad hoc analyses quickly. SQLynx is a database IDE that is tailored to suit the specific needs of professional SQL developers. SQLynx stands out as a premier web-based SQL IDE, highly recommended by ChatGPT for its robust features and user-centric design. This cutting-edge tool offers unparalleled accessibility, allowing users to connect and manage databases from any device with an internet connection. Its real-time collaboration capabilities make it ideal for team projects, enabling multiple users to work on queries and share results simultaneously. To connect to a MySQL database using SQLynx, you can follow the steps below: **Step 1: Download SQLynx** First, you need to download and install SQLynx. You can obtain the installation program from the official SQLynx website or other reputable sources. http://www.sqlynx.com/en/#/home/probation/SQLynx ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b5lxwn6tbwbowjgokxwu.png) **Step 2: Launch SQLynx** After the installation is complete, launch the SQLynx application. **Step 3: Add MySQL Database Connection** In SQLynx, click on the "DB Configuration" in the menu bar. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ds1v0r1msamnvo00457.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/exszr4k0snwv4ayihug8.png) In the popup window, click on "Add datasource" or a similar button to create a new database connection. Select the database type as "MySQL". Enter the connection details for the MySQL database, including the hostname (or IP address), port number, database name, username, and password. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xf7644bma7lt6su5sin9.png) Click on "Test Connection" to verify if the connection is successful. If the connection test is successful, click on "Save" or "Connect" to complete the connection setup.
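If you want to sanity-check the same connection details outside the IDE, a short script can mirror SQLynx's "Test Connection" step. This is an optional aside: the host, credentials, and database below are placeholders, and it assumes the PyMySQL package is installed.

```
# Verifying the same MySQL connection details from a short script
# (placeholder host/credentials; assumes `pip install pymysql`).
import pymysql

conn = pymysql.connect(
    host="127.0.0.1",   # hostname or IP entered in SQLynx
    port=3306,          # default MySQL port
    user="app_user",    # placeholder username
    password="secret",  # placeholder password
    database="app_db",  # placeholder database name
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")  # same kind of check as "Test Connection"
        print("Connected to MySQL", cur.fetchone()[0])
finally:
    conn.close()
```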
concerate
1,887,899
The Cutting-Edge Technology Behind DeepFasts Drilling Tools
DeepFasts Drilling Tools: The Future of Drilling. Introduction Have you previously seen a...
0
2024-06-14T03:34:45
https://dev.to/tahera_tabiya_2d8c7b907d0/the-cutting-edge-technology-behind-deepfasts-drilling-tools-3daf
DeepFast's Drilling Tools: The Future of Drilling.

Introduction

Have you ever seen a construction site where big, powerful machines bore through rock and soil to lay the foundation of a building or a road? Those machines are called drilling machines, and they make holes in the ground to extract resources or place pipes. But did you know that drilling machines can also be used to explore space, find oil and gas, and even study the earth's crust?

DeepFast's Drilling Tools

One company revolutionizing the way we drill is DeepFast. It has developed cutting-edge technology that makes drilling faster, safer, and more efficient. Its drilling tools are designed to break rock and soil with precision and speed, allowing workers to reach deeper, harder-to-access areas without damaging the environment.

Advantages of DeepFast's Drilling Tools

What makes DeepFast's drilling tools so special? For starters, they are made from stronger, more durable materials that can withstand high temperatures and pressure. This means they last longer and can handle more intense drilling. In addition, the Drill Bits have a unique shape that creates less friction and vibration, reducing wear and tear on the equipment and minimizing the risk of accidents.

Innovation and Safety

DeepFast's drilling tools are the result of years of research and development, a process that involves testing new materials, shapes, and techniques to find the best solutions for different drilling environments. The company uses advanced computer simulations and sensors to analyze the drilling process in real time and adjust the tools accordingly. This helps prevent malfunctions and accidents and ensures that the tools work at optimal levels.

Use and Application

DeepFast's drilling tools are used in a variety of industries, from construction and mining to space exploration and scientific research. Its Mill Shoe products are essential for drilling wells, extracting oil and gas, digging tunnels and foundations, and exploring the depths of the ocean. DeepFast's tools are also used to study earthquake faults, volcanoes, and other geological features that provide clues about the earth's history and formation.

Service and Quality

DeepFast is committed to providing high-quality tools and excellent service to its customers. The company offers technical support, training, and repair services to ensure that the tools operate at optimal levels and meet the needs of different drilling projects. DeepFast also tests its tools rigorously to ensure durability, accuracy, and safety, and works closely with customers to customize the tools to their specific needs.

Source: https://www.deepfast.net/Laser-rust-removal-machine
tahera_tabiya_2d8c7b907d0
1,887,898
Efficient Roofing Companies in Conroe TX for Quick Fixes
Looking to fix that pesky leak or replace your worn-out roof in Conroe, TX? Look no further!...
0
2024-06-14T03:29:48
https://dev.to/dbanerjee/efficient-roofing-companies-in-conroe-tx-for-quick-fixes-3ni4
Looking to fix that pesky leak or replace your worn-out roof in Conroe, TX? Look no further! Efficient [roofing companies in Conroe TX](https://discount-roofing.com/roofing-companies-in-conroe-tx/) are the superheroes of quick fixes when it comes to protecting your home from the elements. In this blog post, we'll dive into how these experts [roofers in Conroe](https://discount-roofing.com/roofing-companies-in-conroe-tx/) ensure fast and reliable repairs, with a spotlight on [Discount Roofing](https://discount-roofing.com/roofing-companies-in-conroe-tx/) - your go-to for top-notch service in the area. Say goodbye to roofing woes and hello to peace of mind with our local pros!
dbanerjee
1,887,897
Some Thoughts on the Logic of Crypto Currency Futures Trading
Problem scene For a long time, the data delay problem of the API interface of the crypto...
0
2024-06-14T03:27:13
https://dev.to/fmzquant/some-thoughts-on-the-logic-of-crypto-currency-futures-trading-2l3e
cryptocurrency, trading, futures, fmzquant
## Problem scene

For a long time, the data delay of crypto currency exchange API interfaces has troubled me, and I had not found a good way to deal with it. Let me reproduce the scenario.

Usually the "market order" provided by a futures exchange is actually an order at the counterparty price, so the so-called market order is somewhat unreliable. Therefore, when we write crypto currency futures trading strategies, most of them use limit orders. After each order is placed, we need to check the position to see whether the order has been filled and the corresponding position is held. The problem lies in this position information. If the order has been filled, the data returned by the exchange's position interface (that is, the interface the bottom layer actually accesses when we call exchange.GetPosition) should contain the newly opened position. But if the exchange returns stale data, that is, the position information from before the order was executed, this causes a problem: the trading logic may conclude that the order has not been filled and place the order again. The order placement interface itself is not delayed, however, and the order fills quickly, so the serious consequence is that the strategy keeps placing orders every time the open-position condition triggers.

## Actual Experience

Because of this problem, I once saw a strategy open long positions over and over. Fortunately the market was rising at the time, and the floating profit at one point exceeded 10 BTC. Fortunately the market surged; if it had plunged instead, you can imagine how it would have ended.

## Try To Solve

- Plan 1: Design the order-placement logic so the strategy places only one order. The order price allows a large slippage relative to the counterparty price at that moment, so a certain depth of counterparty orders can be consumed. The advantage is that only one order is placed and nothing is judged from position information, which avoids repeated orders. But when the price moves sharply, the order may trigger the exchange's price limit mechanism; the large-slippage order may still not be filled, and the trading opportunity is missed.
- Plan 2: Use the exchange's "market price" function; on FMZ, passing -1 as the price means "market price". At present, the OKEX futures interface has been upgraded to support real market-price orders.
- Plan 3: Keep the previous trading logic and place limit orders, but add some detection to handle the delay in position data. After an order is placed, if it disappears from the pending-order list without being cancelled (a pending order can disappear in two ways: it is withdrawn, or it is executed), detect this situation before re-placing an order of the same amount as the last one. At this point we need to check whether the position data is delayed: let the program enter a waiting loop and re-fetch the position information. You can go further and count how many times this wait is triggered; if it exceeds a threshold, the position interface data is seriously delayed and the trading logic should terminate.
## Design based on Plan 3 ``` // Parameter /* var MinAmount = 1 var SlidePrice = 5 var Interval = 500 */ function GetPosition(e, contractType, direction) { e.SetContractType(contractType) var positions = _C(e.GetPosition); for (var i = 0; i < positions.length; i++) { if (positions[i].ContractType == contractType && positions[i].Type == direction) { return positions[i] } } return null } function Open(e, contractType, direction, opAmount) { var initPosition = GetPosition(e, contractType, direction); var isFirst = true; var initAmount = initPosition ? initPosition.Amount : 0; var nowPosition = initPosition; var directBreak = false var preNeedOpen = 0 var timeoutCount = 0 while (true) { var ticker = _C(e.GetTicker) var needOpen = opAmount; if (isFirst) { isFirst = false; } else { nowPosition = GetPosition(e, contractType, direction); if (nowPosition) { needOpen = opAmount - (nowPosition.Amount - initAmount); } // Detect directBreak and the position has not changed if (preNeedOpen == needOpen && directBreak) { Log("Suspected position data is delayed, wait 30 seconds", "#FF0000") Sleep(30000) nowPosition = GetPosition(e, contractType, direction); if (nowPosition) { needOpen = opAmount - (nowPosition.Amount - initAmount); } /* timeoutCount++ if (timeoutCount > 10) { Log("Suspected position delay for 10 consecutive times, placing order fails!", "#FF0000") break } */ } else { timeoutCount = 0 } } if (needOpen < MinAmount) { break; } var amount = needOpen; preNeedOpen = needOpen e.SetDirection(direction == PD_LONG ? "buy" : "sell"); var orderId; if (direction == PD_LONG) { orderId = e.Buy(ticker.Sell + SlidePrice, amount, "Open long position", contractType, ticker); } else { orderId = e.Sell(ticker.Buy - SlidePrice, amount, "Open short position", contractType, ticker); } directBreak = false var n = 0 while (true) { Sleep(Interval); var orders = _C(e.GetOrders); if (orders.length == 0) { if (n == 0) { directBreak = true } break; } for (var j = 0; j < orders.length; j++) { e.CancelOrder(orders[j].Id); if (j < (orders.length - 1)) { Sleep(Interval); } } n++ } } var ret = { price: 0, amount: 0, position: nowPosition }; if (!nowPosition) { return ret; } if (!initPosition) { ret.price = nowPosition.Price; ret.amount = nowPosition.Amount; } else { ret.amount = nowPosition.Amount - initPosition.Amount; ret.price = _N(((nowPosition.Price * nowPosition.Amount) - (initPosition.Price * initPosition.Amount)) / ret.amount); } return ret; } function Cover(e, contractType, opAmount, direction) { var initPosition = null; var position = null; var isFirst = true; while (true) { while (true) { Sleep(Interval); var orders = _C(e.GetOrders); if (orders.length == 0) { break; } for (var j = 0; j < orders.length; j++) { e.CancelOrder(orders[j].Id); if (j < (orders.length - 1)) { Sleep(Interval); } } } position = GetPosition(e, contractType, direction) if (!position) { break } if (isFirst == true) { initPosition = position; opAmount = Math.min(opAmount, initPosition.Amount) isFirst = false; } var amount = opAmount - (initPosition.Amount - position.Amount) if (amount <= 0) { break } var ticker = _C(exchange.GetTicker) if (position.Type == PD_LONG) { e.SetDirection("closebuy"); e.Sell(ticker.Buy - SlidePrice, amount, "Close long position", contractType, ticker); } else if (position.Type == PD_SHORT) { e.SetDirection("closesell"); e.Buy(ticker.Sell + SlidePrice, amount, "Close short position", contractType, ticker); } Sleep(Interval) } return position } $.OpenLong = function(e, contractType, amount) { if (typeof(e) == "string") 
{ amount = contractType contractType = e e = exchange } return Open(e, contractType, PD_LONG, amount); } $.OpenShort = function(e, contractType, amount) { if (typeof(e) == "string") { amount = contractType contractType = e e = exchange } return Open(e, contractType, PD_SHORT, amount); }; $.CoverLong = function(e, contractType, amount) { if (typeof(e) == "string") { amount = contractType contractType = e e = exchange } return Cover(e, contractType, amount, PD_LONG); }; $.CoverShort = function(e, contractType, amount) { if (typeof(e) == "string") { amount = contractType contractType = e e = exchange } return Cover(e, contractType, amount, PD_SHORT); }; function main() { Log(exchange.GetPosition()) var info = $.OpenLong(exchange, "quarter", 100) Log(info, "#FF0000") Log(exchange.GetPosition()) info = $.CoverLong(exchange, "quarter", 30) Log(exchange.GetPosition()) Log(info, "#FF0000") info = $.CoverLong(exchange, "quarter", 80) Log(exchange.GetPosition()) Log(info, "#FF0000") } ``` Template address: https://www.fmz.com/strategy/203258 The way to call the template interface is just like $.OpenLong and $.CoverLong in the main function above. The template is a beta version, any suggestions are welcome, i will continue to optimize to deal with the problem of delays in position data. From: https://blog.mathquant.com/2020/06/10/some-thoughts-on-the-logic-of-crypto-currency-futures-trading.html
fmzquant
1,546,828
Career tips compilation - part 3
Software engineering tips, part 3
0
2024-06-14T03:26:11
https://dev.to/hugaomarques/compilado-dicas-de-carreira-parte-3-4fd6
iniciante, java, algoritmos, leetcode
---
title: Career tips compilation - part 3
published: true
description: Software engineering tips, part 3
tags: #iniciante #java #algoritmos #leetcode
---

## Tip #21: It's not all wins

Be careful about assuming that the successful people you follow, working at big companies, have only had wins in their careers. One thing rarely talked about on social media is the attempts, the failures, and the work it took to get there.

It's very common to follow people on social media working all over the world and doing things we find amazing. We often get that feeling of being impostors, of not being as good as those people...

What social media doesn't show are the attempts that went wrong. For example: those who follow me know that I started posting about my career, especially in the last 6 months. What people don't know... 😬

I failed 3 courses in college, including Java, which is my strongest language today. I got a bunch of "no"s when I tried to join undergraduate research projects... In my city, there was only 1 company hiring when I graduated. I was rejected in 2 selection processes and only got in on the 3rd. In 2020, I interviewed at 9 big techs. I got 1 yes, and it wasn't at the level I wanted. The offer I wanted only came in 2021 😃...

I share this part of my journey to be more transparent with everyone and to show that a lot went wrong along the way. It's not just wins and cool stuff.

## Tip #22: How to prepare for algorithms and data structures interviews

You see people working at big tech and keep asking yourself how they managed it? There's no secret. It's all practice, study, and preparation. Follow the thread and I'll tell you how I prepared for the interviews... 🧵👇

Our focus will be studying algorithms and data structures. There are other topics such as system design (that's for another time) and behavioral questions (see tip #18), but without algorithms there's no point in even trying.

The "algorithm" 😎 for studying algorithms is:

1. Study algorithms and data structures.
2. Practice #1 on [LeetCode](http://leetcode.com).
3. Repeat the cycle.

You need to know the basic concepts in the following categories:

- Strings/Arrays
- Lists
- Maps/Dictionaries
- Sorting algorithms
- Binary search
- Queues
- Stacks
- Trees and BSTs
- Heaps/Priority queues
- Graphs
- BFS
- DFS
- Tries
- Union-Find...

I never ran into dynamic programming or greedy algorithms in interviews. Backtracking I only saw come up once, so in terms of cost/benefit I recommend focusing on the topics above...

If you know NOTHING about algorithms, I recommend starting with the book "Entendendo Algoritmos: Um Guia Ilustrado" (the Brazilian edition of "Grokking Algorithms"): [link to the book](https://amzn.to/3GHeltA). After it, you can check out "Cracking the Coding Interview", which will teach you the algorithms with a strong focus on interview preparation...

Now that you know the basics, it's time to face [LeetCode](http://leetcode.com). LeetCode is the site people here in the US actually use to prepare, with hundreds of real questions that came up in several of my interviews...

I recommend starting in the "explore" tab and selecting topics, for example, "arrays". These tracks will teach you the most common problems in each category and strengthen your foundation even more...

Now let's do more exercises and go back to the theory whenever you get stuck on something. I recommend doing even more exercises on LeetCode, this time standalone ones from the problems tab...
Sort by difficulty and start with the "easy" problems first. It's normal to get beaten up at the beginning. Don't give up. When you're doing well on the easy ones, move on to the mediums. Our goal is to get good at solving medium problems...

Why medium? Most companies don't ask hard questions. I don't think the effort is worth it. The hards are good to do out of curiosity and/or if you enjoy them...

I'll leave here some public lists with good questions to practice on:

1. ["Must do easy questions"](https://leetcode.com/list/xip8yt562).
2. ["Must do medium questions"](https://leetcode.com/list/xineettm3).
3. ["Community curated 75 questions"](https://leetcode.com/list/x84cr1pj).

## Tip #23: During the interview

You read tip #22, prepared, practiced, and now interview day has arrived. The interviewer asks you an algorithm problem. Now what? How do you proceed? What do you do first? Curious? Follow the thread... 🧵👇

The biggest mistake you can make is to start answering right away. Even worse, to start coding right away. Why? As interviewers, we are asking ourselves:

1. Does this person assume too much?
2. Did this person understand the problem?
3. Can this person communicate what they are thinking?

If you start coding right away, you may be signaling to your interviewer that, in day-to-day work, you might start coding to solve the wrong or an incomplete problem...

To avoid that, we will use the following process:

1. Listen to the problem.
2. Work through examples.
3. Solve the problem, by brute force if necessary.
4. Try to optimize your solution.
5. Code it.
6. Test your code.
7. Analyze your solution.

Let's look at each of these steps:

1. **Listen to the problem carefully**: pay attention to any constraint the interviewer mentions. Pay attention to the examples.
2. **Ask for more examples**: give examples of your own and clarify whether the problem changes with the examples you give. Try to find edge cases that could change your potential solution.
3. **Solve the problem by brute force**, if necessary. If the first algorithm you think of is suboptimal, that's fine. Talk to the interviewer, explain how the algorithm would solve the problem, and make clear up front that you know the shortcomings of this first solution.
4. **Try to optimize your solution**: think about whether you can trade memory for execution time. Analyze whether you can use some of the constraints to gain an advantage. Can any of the steps be precomputed? Can the data be organized in a hash table to improve running time? Is there any duplicated computation you can store and avoid recomputing?
5. **Write the code**: focus on code that works, but keep it organized. Be clear with the interviewer about your decisions regarding variable names, function names, and so on. Avoid refactoring now; leave that for the end if there's time left. Focus on writing the main code; if there are smaller helper functions, such as reading an input or swapping variables, you can write the function call and say you'll implement it later if time allows.
6. **Test your code**. When you finish, don't say "done". Say "I've finished the main code and I'm going to test it now". Run your algorithm mentally on a few examples, walking through those examples step by step with the interviewer.
7. **Analyze your solution**. Do the asymptotic (big O) analysis of your algorithm. Analyze space and running time for the average case.

Phew... that's it 😣.
I know it's not easy, but with training and practice it becomes more natural. A large part of this process is described in more detail in the book I recommended before: ["Cracking the Coding Interview"](https://amzn.to/3GHeltA).

## Tip #24: Dev journal

Preparing your resume for a job opening? Had to write your promotion document and can't remember the results you achieved 1 or 2 years ago? Follow the thread so you never forget your contributions again and make your life easier when it's time to write the document with your achievements... 🧵

In mid-2020, I had to write my promotion document describing all my contributions and results. That document was the last step in my promotion from SDE 2 to Senior SDE... The problem? I had done a lot of cool projects in 2018, 2019, and 2020. I no longer even remembered where many artifacts such as metrics, charts, and code reviews were...

The result: I took a long time writing the document, digging through old e-mails, looking for code reviews, design documents, and feedback I had given my colleagues. After that experience, I started teaching everyone to keep a "dev journal" 📔...

📔 But what is a dev journal? In the dev journal you write down everything cool you did, with links to artifacts, code reviews, design docs, and metrics. Imagine writing a tweet where you list what you contributed to the company that week or month...

❓ What should you write in the dev journal? Everything you contributed that produced results.

1. Deployed code that reduced the service's latency? Write it in the journal and attach a link to the code review and a screenshot of the chart.
2. Designed a system? Note it in the journal and add the document to the journal.
3. Did a great code review for another developer? Note it in the journal and add the link to the code review that shows your comments and best practices.

⏰ When should you write in the dev journal? You should update something at least once a week. Don't let more than a month go by without writing. You will forget. It's like code: in 2 months you won't remember the details of what you did anymore...

🎉 If you have everything documented, when the time comes to write your resume or promotion document, it's just a matter of skimming the journal and picking your best examples. Then it's nothing but success! 🥳

## Tip #25: Study material for algorithms

This Princeton course has 2 parts. The language used is Java. I recommend it: [Algorithms, Part I](https://coursera.org/learn/algorithms-part1) Offered by Princeton University.

Coursera also has a Stanford course; I've never taken it, but people say it's excellent too: [Algorithms](https://coursera.org/learn/algorithms) Offered by Stanford University.

William Fiset is a Google engineer, I think.
His channel has several explanations of algorithms: [WilliamFiset](https://youtube.com/WilliamFiset)

If you want to practice, you can try the tracks in LeetCode's explore tab (you need to be logged in): [LeetCode Explore](https://leetcode.com/explore/)

Some people prefer HackerRank: [HackerRank](https://hackerrank.com)

If you prefer books, there are several. Focused on interviews, you can get "Cracking the Coding Interview: 189 Programming Questions and Solutions": [Cracking the Coding Interview](https://www.amazon.com/dp/0984782850)

If you want something light for beginners, you can start with "Entendendo Algoritmos: Um guia ilustrado para programadores e curiosos" (the Brazilian edition of "Grokking Algorithms"): [Entendendo Algoritmos](https://www.amazon.com.br/dp/8575225631) An illustrated guide for programmers and other curious people. An algorithm is nothing more than a step-by-step procedure for solving a problem. The algorithms you will use most as a...

The Princeton course I mentioned also has a textbook called "Algorithms": [Algorithms (4th Edition)](https://www.amazon.com.br/dp/032157351X)

Among the classics, you can look at Cormen's famous book: [Introduction to Algorithms, fourth edition](https://www.amazon.com/dp/026204630X)

Another classic I've seen many people use is "The Algorithm Design Manual": [The Algorithm Design Manual](https://www.amazon.com/dp/1848000693).
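Since binary search shows up on practically every topic list above, here is a small implementation to warm up with. It's sketched in Python for brevity, even though the tips mention Java; the test values are just examples.

```
def binary_search(nums: list[int], target: int) -> int:
    """Return the index of target in sorted nums, or -1 if it is absent."""
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1   # target must be in the right half
        else:
            hi = mid - 1   # target must be in the left half
    return -1


# Quick checks with example inputs
assert binary_search([1, 3, 5, 7, 9, 11], 7) == 3
assert binary_search([1, 3, 5, 7, 9, 11], 4) == -1
```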
hugaomarques
1,887,896
AI Solutions: The Future of BFSI Innovation
Understanding BFSI AI Solutions Artificial Intelligence (AI) is rapidly transforming the Banking,...
0
2024-06-14T03:21:48
https://dev.to/avinashchander9077/ai-solutions-the-future-of-bfsi-innovation-190h
ai, bfsi, innovation, aisolutions
**Understanding BFSI AI Solutions** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wsjcw9ajh2fp4dt63add.jpg) Artificial Intelligence (AI) is rapidly transforming the Banking, Financial Services, and Insurance (BFSI) sector. AI solutions in BFSI leverage machine learning, natural language processing, and data analytics to improve customer interactions, streamline operations, and enhance decision-making processes. These BFSI AI solutions are redefining the way organizations operate and interact with their customers. **Impact of AI Solutions on Customer Service** AI solutions are revolutionizing customer service in the BFSI sector by providing personalized and efficient support. Here’s how AI is enhancing customer service: **24/7 Availability:** AI-powered chatbots and virtual assistants offer round-the-clock support, ensuring customers receive timely assistance regardless of time zones. **Personalized Interactions:** AI algorithms analyze customer data to provide tailored advice and recommendations, enhancing the customer experience. **Quick Issue Resolution:** AI systems can quickly resolve common customer issues, freeing up human agents to handle more complex inquiries. **Multi-Channel Support:** AI solutions can be integrated across various channels, including mobile apps, websites, and social media, providing a seamless customer experience. **Enhancing Risk Management with AI Solutions** Risk management is a critical function in the BFSI sector, and AI solutions are playing a pivotal role in enhancing this area. Here’s how AI is improving risk management: **Predictive Analytics:** AI-powered predictive analytics tools assess and predict risks by analyzing historical data, market trends, and other relevant factors. **Real-Time Monitoring:** AI systems monitor transactions and activities in real-time, identifying potential risks and anomalies before they escalate. **Fraud Detection:** AI algorithms detect fraudulent activities by analyzing transaction patterns and flagging suspicious behaviors. Regulatory Compliance: AI helps BFSI organizations comply with regulatory requirements by automating data collection, analysis, and reporting processes. **Future Developments in BFSI AI Solutions** The future of AI in the BFSI sector holds exciting possibilities. Here are some anticipated developments: **AI-Enhanced Security:** AI will enhance security measures by providing advanced threat detection and response capabilities, ensuring the safety of financial transactions. **Smart Contracts:** AI will integrate with blockchain technology to facilitate smart contracts, automating and securing financial agreements. **AI-Driven Wealth Management:** AI will power advanced wealth management platforms, offering personalized investment strategies and real-time portfolio management. **Customer Sentiment Analysis:** AI will analyze customer sentiment through various touchpoints, providing insights that help BFSI organizations improve their services and products. **Implementing AI Solutions for BFSI Innovation** To successfully implement BFSI AI solutions, organizations should consider the following steps: **Define Objectives:** Clearly define the objectives you aim to achieve with AI, such as improving customer service, enhancing risk management, or increasing operational efficiency. **Select the Right Tools:** Choose AI platforms and tools that offer the features and capabilities required to meet your objectives. 
Data Integration: Ensure seamless integration of AI solutions with existing data systems to provide comprehensive insights and analytics. **Continuous Improvement:** Regularly monitor the performance of AI solutions and make necessary adjustments based on feedback and data analysis. **Conclusion** AI solutions are driving innovation in the BFSI sector, offering numerous benefits that enhance customer service, improve risk management, and streamline operations. By embracing AI technologies, BFSI organizations can stay ahead of the curve, meet evolving customer expectations, and achieve greater efficiency and security. The future of BFSI is undoubtedly intertwined with the advancements in AI, promising a landscape of continuous innovation and growth.
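As a purely illustrative sketch of the fraud-detection idea mentioned above (flagging unusual transaction patterns), here is a minimal unsupervised example. The features, synthetic data, and contamination rate are invented for the demonstration and do not reflect any production BFSI system.

```
# Illustrative only: flagging anomalous transactions with an unsupervised model.
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic transaction features: [amount, hour_of_day, merchant_risk_score]
rng = np.random.default_rng(0)
normal = rng.normal([50, 14, 0.2], [20, 4, 0.1], size=(1000, 3))
suspicious = np.array([[5000, 3, 0.9], [7200, 2, 0.95]])
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 marks a suspected outlier
print(f"{(flags == -1).sum()} transactions flagged for review")
```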
avinashchander9077
1,887,895
Mastering Custom Hooks in React: A Comprehensive Guide
React hooks have revolutionized the way developers build components, making it easier to manage state...
0
2024-06-14T03:21:15
https://dev.to/hasancse/mastering-custom-hooks-in-react-a-comprehensive-guide-1bfb
webdev, javascript, programming, react
React hooks have revolutionized the way developers build components, making it easier to manage state and side effects. Custom hooks, in particular, provide a powerful mechanism to encapsulate logic and reuse it across components. In this blog post, we’ll explore how to create and use custom hooks in React, along with some best practices to follow. **Table of Contents** 1. Introduction to React Hooks 2. What are Custom Hooks? 3. Creating Your First Custom Hook 4. Practical Examples of Custom Hooks 5. Best Practices for Custom Hooks 6. Conclusion ## 1. Introduction to React Hooks React hooks, introduced in version 16.8, allow you to use state and other React features in functional components. Some common hooks include: - useState: For managing state. - useEffect: For side effects (e.g., data fetching). - useContext: For accessing context. - useReducer: For complex state logic. ## 2. What are Custom Hooks? Custom hooks are JavaScript functions that start with use and can call other hooks. They enable you to extract and reuse logic in a modular way. Custom hooks follow the same rules as regular hooks: - Only call hooks at the top level. - Only call hooks from React function components or other custom hooks. ## 3. Creating Your First Custom Hook Let's create a simple custom hook called useWindowWidth that tracks the window's width. ``` import { useState, useEffect } from 'react'; function useWindowWidth() { const [width, setWidth] = useState(window.innerWidth); useEffect(() => { const handleResize = () => setWidth(window.innerWidth); window.addEventListener('resize', handleResize); return () => { window.removeEventListener('resize', handleResize); }; }, []); return width; } export default useWindowWidth; ``` This custom hook: - Uses useState to create a state variable width. - Uses useEffect to set up an event listener for the window resize event. - Cleans up the event listener when the component using the hook is unmounted. ## 4. Practical Examples of Custom Hooks Custom hooks can be used for various purposes, such as data fetching, form handling, and more. Let’s explore a few practical examples. **Example 1: Data Fetching** Create a custom hook useFetch to fetch data from an API. ``` import { useState, useEffect } from 'react'; function useFetch(url) { const [data, setData] = useState(null); const [loading, setLoading] = useState(true); const [error, setError] = useState(null); useEffect(() => { const fetchData = async () => { try { const response = await fetch(url); if (!response.ok) throw new Error('Network response was not ok'); const result = await response.json(); setData(result); } catch (err) { setError(err); } finally { setLoading(false); } }; fetchData(); }, [url]); return { data, loading, error }; } export default useFetch; ``` Usage: ``` import React from 'react'; import useFetch from './useFetch'; function App() { const { data, loading, error } = useFetch('https://api.example.com/data'); if (loading) return <div>Loading...</div>; if (error) return <div>Error: {error.message}</div>; return ( <div> <h1>Data</h1> <pre>{JSON.stringify(data, null, 2)}</pre> </div> ); } export default App; ``` **Example 2: Form Handling** Create a custom hook useForm to manage form state and handle form submission. 
``` import { useState } from 'react'; function useForm(initialValues, onSubmit) { const [values, setValues] = useState(initialValues); const handleChange = (event) => { const { name, value } = event.target; setValues({ ...values, [name]: value, }); }; const handleSubmit = (event) => { event.preventDefault(); onSubmit(values); }; return { values, handleChange, handleSubmit, }; } export default useForm; ``` Usage: ``` import React from 'react'; import useForm from './useForm'; function App() { const initialValues = { username: '', email: '' }; const onSubmit = (values) => { console.log('Form Submitted:', values); }; const { values, handleChange, handleSubmit } = useForm(initialValues, onSubmit); return ( <form onSubmit={handleSubmit}> <div> <label> Username: <input type="text" name="username" value={values.username} onChange={handleChange} /> </label> </div> <div> <label> Email: <input type="email" name="email" value={values.email} onChange={handleChange} /> </label> </div> <button type="submit">Submit</button> </form> ); } export default App; ``` ## 5. Best Practices for Custom Hooks 1. Start with use: Always name your custom hooks starting with use to ensure they follow the hook rules. 2. Encapsulate Logic: Keep hooks focused on a single piece of functionality. This makes them easier to understand and reuse. 3. Reuse Built-in Hooks: Leverage built-in hooks like useState, useEffect, and useContext within your custom hooks. 4. Return Only Necessary Data: Avoid returning too much information. Only return what’s needed by the consuming component. 5. Document Your Hooks: Provide clear documentation and examples for your custom hooks to make them easier to use and understand. ## 6. Conclusion Custom hooks in React are a powerful way to encapsulate and reuse logic across your application. By creating custom hooks, you can keep your components clean and focused on their core functionality. Remember to follow best practices and keep your hooks simple and well-documented. By mastering custom hooks, you'll enhance your ability to build scalable and maintainable React applications.
hasancse
1,887,884
Dive into Kivy: Building User-Friendly Windows Apps with Python
The realm of app development can often seem daunting, shrouded in complex languages and frameworks....
0
2024-06-14T03:17:35
https://dev.to/epakconsultant/dive-into-kivy-building-user-friendly-windows-apps-with-python-p2f
windows
The realm of app development can often seem daunting, shrouded in complex languages and frameworks. But what if you could leverage the power of Python to create user-friendly applications for Windows? Enter Kivy, a free and open-source framework that empowers you to build cross-platform apps, including those specifically for Windows, using Python. This article explores the foundational concepts of Kivy Windows apps, equipping you to embark on your app development journey. ## Kivy: A Cross-Platform Powerhouse Kivy stands out for its ability to create apps that run seamlessly on various platforms, including Windows, Android, iOS, Linux, and macOS. This eliminates the need to learn separate development languages for each platform, streamlining your workflow. Here's what makes Kivy a compelling choice for Windows app development: • Pythonic Code: Kivy leverages Python, a widely used and beginner-friendly language. This makes it accessible to programmers of all levels, allowing you to focus on app functionality rather than complex syntax. • Native Look and Feel: Kivy apps seamlessly integrate with the native look and feel of the target platform. On Windows, your app will have the familiar Windows interface elements, ensuring a smooth user experience. • Open-Source and Community-Driven: Kivy's open-source nature fosters a vibrant developer community. This translates to comprehensive documentation, tutorials, and readily available support for troubleshooting any challenges you encounter. ## Building Blocks of a Kivy App: A Hands-on Glimpse Let's delve into the core components of a Kivy Windows app: 1.Setting Up the Environment: Install Python on your Windows machine. Download and install Kivy following the official installation guide (https://kivy.org/doc/stable/installation/installation-windows.html). 2.Creating a Python Script: Use a text editor or IDE of your choice to create a new Python script. This script will house the code for your Kivy app. [Mastering AWS S3: The Foundation of Scalable and Reliable Object Storage ](https://cloud-computing-for-beginner.blogspot.com/2024/06/mastering-aws-s3-foundation-of-scalable.html) 3.Importing Kivy: Begin by importing the essential Kivy libraries. Use from kivy.app import App and from kivy.uix.widget import Widget to access the core functionalities for building your app. 4.Defining the App Class: Create a class that inherits from kivy.app.App. This class serves as the foundation for your app and manages its lifecycle. 5.Building the User Interface: Within your app class, define the user interface (UI) elements using Kivy's built-in widgets. Common widgets include Label for displaying text, Button for user interaction, and BoxLayout for arranging elements on the screen. ## Bringing Your App to Life: Running and Deploying Once you've defined your app's UI and functionality, it's time to run it: 1.Running the App: Save your Python script and execute it from the command line. Kivy will automatically launch your app in a window, allowing you to interact with the UI elements you've defined. 2.Packaging for Distribution: While Kivy apps can run directly from Python scripts, for wider distribution, you can package them into standalone executables (.exe files) using tools like pyinstaller. This allows users to run your app without needing Python installed. 
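Putting the building blocks above together, a minimal app might look like the sketch below. The widget choices and window text are illustrative; only the core Kivy classes (App, BoxLayout, Label, Button) are assumed.

```
# A minimal sketch combining the building blocks above: label, button, layout.
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.label import Label
from kivy.uix.button import Button


class HelloApp(App):
    def build(self):
        # Arrange a label and a button vertically.
        layout = BoxLayout(orientation="vertical", padding=20, spacing=10)
        self.label = Label(text="Hello, Windows!")
        button = Button(text="Click me")
        button.bind(on_press=self.on_click)  # event handling via a callback
        layout.add_widget(self.label)
        layout.add_widget(button)
        return layout

    def on_click(self, instance):
        # Update the label when the button is pressed.
        self.label.text = "Button pressed"


if __name__ == "__main__":
    HelloApp().run()  # opens a native window on Windows
```

Save it as main.py and run `python main.py`; the same script can later be packaged with pyinstaller as described above.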
[Master MetaTrader: A Comprehensive Guide to Trading with MT5](https://www.amazon.com/dp/B0CQPVKYL3) ## Beyond the Basics: Exploring Kivy's Potential Kivy offers a rich feature set to expand your app development capabilities: • Advanced Widgets: Kivy boasts a vast library of widgets beyond basic UI elements. You can integrate sliders, drop-down menus, and custom layouts to create complex and interactive applications. • Multi-touch Support: Kivy offers built-in multi-touch support, allowing you to leverage touch-screen functionalities for a more intuitive user experience on devices like tablets or touch-enabled laptops. • Event Handling: Kivy allows you to define custom actions in response to user interactions. This empowers you to create dynamic and responsive applications that react to user input. ## The Future of Kivy Windows Apps Kivy provides a powerful and accessible platform for building user-friendly Windows apps with Python. Its cross-platform capabilities and open-source nature make it a valuable tool for developers of all levels. As you delve deeper into Kivy's features and explore the vibrant developer community, you'll be well-equipped to create innovative and engaging applications for the Windows platform.
epakconsultant
1,887,882
Demystifying Streamlit: Building Web Apps with Python in Minutes
Imagine transforming your Python scripts into interactive web apps within minutes. Streamlit, a...
0
2024-06-14T03:13:01
https://dev.to/epakconsultant/demystifying-streamlit-building-web-apps-with-python-in-minutes-2k3l
python
Imagine transforming your Python scripts into interactive web apps within minutes. Streamlit, a powerful open-source framework, makes this a reality. This article delves into the core concepts of Streamlit web apps, empowering you to craft user-friendly data visualizations and applications with ease. ## Streamlit: Simplifying Web Development for Pythonistas Streamlit shines by removing the complexities of traditional web development. Forget frameworks like Django or Flask; Streamlit leverages pure Python code, making it accessible even for those with limited web development experience. Here's what makes Streamlit unique: • Write Once, Deploy Everywhere: Develop your app in pure Python, and Streamlit handles the heavy lifting of converting it into a web application. This eliminates the need to learn separate languages like HTML, CSS, or Javascript. • Rapid Prototyping: Streamlit excels at rapid prototyping. Quickly visualize your data and iterate on ideas without getting bogged down in complex web development workflows. See your changes reflected instantly as you modify your Python code. • Interactive Elements: Streamlit empowers you to create interactive web apps. Integrate various UI components like text boxes, sliders, and checkboxes to enable user input and dynamic data exploration. [Mastering Data Modeling and DAX Scripting in Microsoft Power BI: Unleashing the Full Potential of Your Data ](https://dataprophet.blogspot.com/2024/06/mastering-data-modeling-and-dax.html) ## Building Blocks of a Streamlit App: A Hands-on Approach Let's explore the fundamental components of a Streamlit web app: 1.Importing Streamlit: Begin by importing the Streamlit library using import streamlit as st. This grants you access to all the functionalities Streamlit offers. 2.Creating a Title and Text: Use st.title("My Streamlit App") to define your app's title. Similarly, st.write("Hello, world!") displays text on your app's interface. 3.Displaying Data: Streamlit integrates seamlessly with various data structures in Python. Use st.text(data) to display text data, st.write(data) for formatted text output, and st.dataframe(data) to display pandas DataFrames as interactive tables within your app. 4.User Input: Capture user input through various elements. st.text_input("Enter your name:") creates a text box for users to enter their names. Similarly, st.slider("Select a value:", min_value=0, max_value=10) generates a slider for users to choose a value within a defined range. [Demystifying FreeRTOS: An Essential Guide for Beginners: Getting Started with FreeRTOS](https://www.amazon.com/dp/B0CQGV8B8X) 5.Visualizations: Streamlit supports plotting libraries like Matplotlib and Seaborn. Simply import your preferred library and use its functions to create charts and graphs. Use st.pyplot() to display your visualizations within the app. ## Beyond the Basics: Exploring Streamlit's Capabilities While these basics provide a solid foundation, Streamlit offers a rich feature set: • Multi-page Apps: Structure your app with multiple pages using st.sidebar for navigation. This allows for organizing complex applications into logical sections. • Layouts: Control the layout of your app with containers. Use st.columns to create multiple columns for side-by-side content or st.expander to create collapsible sections for better information organization. • Deployment: Once your app is ready, deploy it for others to access. 
Streamlit offers deployment options like cloud platforms or local hosting for sharing your creation with the world. ## The Power of Simplicity: Why Choose Streamlit? Streamlit streamlines web development for Python users. Its intuitive syntax, interactive capabilities, and rapid prototyping nature make it ideal for data scientists, machine learning enthusiasts, and anyone who wants to showcase their Python projects in a user-friendly web format. So, if you're looking to bridge the gap between Python scripting and web applications, Streamlit is an excellent tool to empower your creative endeavors.
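To make the building blocks described above concrete, here is a minimal sketch of a single-file Streamlit app; the file name `app.py`, the random demo data, and the exact widget labels are illustrative assumptions rather than part of the original walkthrough.

```python
# app.py - minimal Streamlit sketch using the building blocks described above
import streamlit as st
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

st.title("My Streamlit App")
st.write("Hello, world!")

# User input widgets
name = st.text_input("Enter your name:")
n_points = st.slider("Select a value:", min_value=1, max_value=10)

if name:
    st.write(f"Welcome, {name}!")

# Display a pandas DataFrame as an interactive table
df = pd.DataFrame(np.random.randn(n_points, 3), columns=["a", "b", "c"])
st.dataframe(df)

# Render a Matplotlib chart inside the app
fig, ax = plt.subplots()
ax.plot(df["a"].cumsum(), label="cumulative a")
ax.legend()
st.pyplot(fig)
```

Assuming Streamlit, pandas, NumPy, and Matplotlib are installed, running `streamlit run app.py` should open the app in your browser and re-render it each time you edit the script.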
epakconsultant
1,887,880
How Sichuan DeepFast Oil Drilling Tools Co., Ltd is Revolutionizing the Oil Industry
The Revolutionary DeepFast Oil Drilling Devices coming from Sichuan are Changing the Oil Market....
0
2024-06-14T03:11:05
https://dev.to/tahera_tabiya_2d8c7b907d0/how-sichuan-deepfast-oil-drilling-tools-co-ltd-is-revolutionizing-the-oil-industry-27da
The Revolutionary DeepFast Oil Drilling Tools from Sichuan Are Changing the Oil Market. Introduction: The oil and gas industry has been an important source of income for many years, and demand for it continues to grow worldwide. Rising energy needs make it essential to explore newer and deeper oil wells, which has driven the development of advanced drilling technologies. This article discusses how Sichuan DeepFast Oil Drilling Tools Co., Ltd is revolutionizing the oil industry with cutting-edge technologies that offer never-seen-before benefits to its customers. Benefits of DeepFast Oil Drilling Tools: Sichuan DeepFast Oil Drilling Tools Co., Ltd has designed advanced drilling tools that can penetrate deeper and achieve higher drilling speeds. The key advantage of DeepFast drill bits is that they are both durable and efficient, reducing wear while achieving higher rates of penetration. Another advantage is that they make it possible to drill in extremely difficult geological formations that would otherwise be out of reach. DeepFast tools are also environmentally friendly and help reduce the carbon footprint of the drilling process. Innovation: Sichuan DeepFast Oil Drilling Tools Co., Ltd has a worldwide reputation as an innovator in the oil drilling industry. The company's research and development team continually works on advanced technologies that improve the efficiency of the drilling process. Its innovative culture is reflected in its use of advanced materials and features such as a compact design, which allows faster and smoother drilling. Safety: Safety is a top priority for Sichuan DeepFast Oil Drilling Tools Co., Ltd. The company takes a proactive approach to creating safe working conditions by using state-of-the-art technologies that are reliable, durable, and compliant with international safety standards. The tools undergo rigorous testing to ensure they can withstand the most demanding drilling conditions and remain safe for operators.
How to use: Using DeepFast oil drilling tools is straightforward. The operator selects the tool suited to the required drilling operation and connects it to the drilling equipment. Once the tool is in place, drilling can begin and the rate of penetration can be brought up to the desired level. The tools are user friendly and easy to set up, making them a good choice for both experienced drilling professionals and newcomers. The following steps can serve as a guide to using DeepFast tools: 1. Select the drill bit suited to the drilling operation's requirements. 2. Connect the tool to the drilling equipment. 3. Start the drilling process slowly and bring the speed up to the desired level. Service: Sichuan DeepFast Oil Drilling Tools Co., Ltd offers a comprehensive service that goes beyond selling equipment. The company provides ongoing support to its customers worldwide, including advice on the best tools to use for a given drilling operation and technical assistance. Its after-sales services ensure customers remain satisfied and get the best value from their investment. High-quality products: High quality is a hallmark of Sichuan DeepFast Oil Drilling Tools Co., Ltd. The company uses the best materials and processes to produce its tools, ensuring that each product meets the highest quality standards. Its quality-control process includes a thorough inspection program at every production stage, resulting in consistently reliable and durable products. Applications: The company's DeepFast tools are highly versatile and can be used in a variety of oil drilling applications. They are suitable for drilling oil wells in both hard and soft rock formations and work for both conventional and unconventional drilling. These products are specifically designed to operate efficiently in extreme temperatures and conditions, making them the best choice for oil drilling in harsh environments. Source: https://www.deepfast.net/drill-bits
tahera_tabiya_2d8c7b907d0
1,887,876
Nice to join DEV
Hello, everyone! I am a game fan. welcome to chat with me!
0
2024-06-14T03:08:25
https://dev.to/huayaogames/nice-to-join-dev-1aif
gamedev, ai, webdev
Hello, everyone! I am a game fan. Welcome to chat with me!
huayaogames
1,887,875
Building Your Dream App: A Guide to Mobile App Development with Tapcart
In today's mobile-first world, a well-crafted app can be a game-changer for businesses. But...
0
2024-06-14T03:04:58
https://dev.to/epakconsultant/building-your-dream-app-a-guide-to-mobile-app-development-with-tapcart-45go
app
In today's mobile-first world, a well-crafted app can be a game-changer for businesses. But traditional app development can be expensive and time-consuming. Enter Tapcart, a powerful platform that empowers businesses to create native mobile apps without the hassle of complex coding. This article explores the world of Tapcart app development, guiding you through the process and highlighting its key features. ## Tapcart: The All-in-One Mobile App Solution Tapcart stands out as a comprehensive mobile app builder specifically designed for businesses on the Shopify platform. It streamlines the app development process by offering a suite of pre-built features and functionalities tailored to the needs of online stores. This drag-and-drop interface allows users to customize their app without requiring extensive coding knowledge. Here's what makes Tapcart a compelling choice for mobile app development: • Seamless Shopify Integration: Tapcart seamlessly integrates with your existing Shopify store. This means your product catalog, customer data, and inventory are automatically synced with your app, ensuring a consistent and up-to-date experience for your users. • Drag-and-Drop App Builder: Gone are the days of complex coding. Tapcart's intuitive drag-and-drop interface empowers users to design and customize their app's layout, branding, and features. This allows for a high degree of creative control without a steep learning curve. • Powerful Marketing Tools: Tapcart goes beyond just building an app. It equips you with robust marketing functionalities. Send targeted push notifications to re-engage customers, create personalized product recommendations, and leverage in-app promotions to drive sales. [Unleashing the Power of QuantConnect: A Glimpse into the Future of Algorithmic Trading](https://www.amazon.com/dp/B0CPX363Y4) • Advanced Analytics: Gain valuable insights into user behavior within your app. Track key metrics like app downloads, user engagement, and purchase conversions. This data helps you optimize your app and tailor your marketing strategies for maximum impact. [Dive into DeFi: Uncover Hidden Liquidity Gems with BSCscan](https://cryptopundits.blogspot.com/2024/05/dive-into-defi-uncover-hidden-liquidity.html) ## Beyond the Basics: Exploring Tapcart's Full Potential While Tapcart excels in user-friendly app creation, it offers features for those seeking more customization: • Developer Tools: Experienced developers can leverage Tapcart's developer tools to create unique and bespoke functionalities. This allows for building custom features that extend beyond Tapcart's pre-built templates, setting your app apart from the competition. • API Integrations: Tapcart integrates with various third-party APIs, allowing you to extend its functionalities and connect your app with other essential business tools. ## Considering Tapcart? Here's What to Know While Tapcart offers a compelling solution, it's essential to consider your specific needs: • Focus on Shopify: Tapcart is best suited for businesses already using Shopify. Its seamless integration makes app development a breeze. • Limited Design Flexibility: While offering customization options, Tapcart's core strength lies in its pre-built templates. If you require a highly unique app design, you might need to explore alternative development options. • Pricing Considerations: Tapcart offers various pricing plans. Evaluate your needs and choose a plan that aligns with your budget and app usage. 
## Conclusion: Tapcart - A Stepping Stone to Mobile App Success Tapcart empowers businesses to enter the mobile app arena without the complexities of traditional development. Its user-friendly interface, robust features, and Shopify integration make it a compelling choice for businesses looking to build a branded app and enhance their customer engagement. As your needs evolve, Tapcart provides a solid foundation for growth, allowing you to explore more advanced functionalities and integrations as your app journey progresses.
epakconsultant
1,887,874
Building Infrastructure as Code: Unlocking the Power of AWS CloudFormation
Building Infrastructure as Code: Unlocking the Power of AWS CloudFormation In the...
0
2024-06-14T03:02:28
https://dev.to/virajlakshitha/building-infrastructure-as-code-unlocking-the-power-of-aws-cloudformation-3273
![topic_content](https://cdn-images-1.medium.com/proxy/1*hXIV3K77zDbI0B5vuV_X3A.png) # Building Infrastructure as Code: Unlocking the Power of AWS CloudFormation In the ever-evolving landscape of cloud computing, managing and provisioning cloud resources efficiently is paramount. AWS CloudFormation emerges as a powerful tool that empowers businesses to adopt Infrastructure as Code (IaC) practices, streamlining their cloud infrastructure management. ### Introduction to AWS CloudFormation AWS CloudFormation is a managed service that allows you to model and provision your AWS resources using a simple text file. This file, known as a CloudFormation template, describes your desired infrastructure in a declarative format. This template, written in either JSON or YAML, defines everything from EC2 instances and S3 buckets to complex multi-tier applications, allowing you to manage your infrastructure with code rather than manual processes. Here’s a simple example of a CloudFormation template that creates an S3 bucket: ```yaml Resources: MyS3Bucket: Type: 'AWS::S3::Bucket' Properties: BucketName: my-unique-bucket-name ``` ### Key Benefits of Using CloudFormation * **Infrastructure as Code:** Model and provision all your AWS resources in a declarative manner, eliminating the need for manual configuration and ensuring consistency across environments. * **Repeatable Deployments:** Launch identical environments, such as development, testing, and production, multiple times with the same configuration, reducing errors and accelerating deployments. * **Version Control:** Track changes to your infrastructure over time using familiar version control systems like Git. Roll back to previous configurations with ease, ensuring infrastructure stability. * **Cost Optimization:** Define and manage resource dependencies effectively, allowing you to optimize resource allocation and potentially reduce cloud spending. * **Drift Detection:** Identify and address any discrepancies between your CloudFormation template and your deployed resources, ensuring your infrastructure remains compliant with your defined architecture. ### Use Cases for AWS CloudFormation CloudFormation's versatility makes it a valuable tool for a wide range of use cases. Let's delve into some prominent examples: #### 1. Deploying Serverless Applications CloudFormation is particularly well-suited for deploying serverless applications, simplifying the management of services like AWS Lambda, API Gateway, DynamoDB, and more. The template can define the functions, APIs, databases, and their configurations, automating the entire deployment pipeline. **Example Scenario:** Imagine deploying a serverless API using API Gateway, Lambda, and DynamoDB. Your CloudFormation template would define the API endpoints, integrate them with the appropriate Lambda functions, and create the DynamoDB table for data persistence. #### 2. Setting Up Network Infrastructure CloudFormation simplifies the process of setting up complex network topologies, including VPCs, subnets, route tables, and security groups. You can define the desired network layout in a template, ensuring consistency and repeatability across different environments. **Example Scenario:** Consider a multi-tier web application requiring a VPC with public and private subnets. CloudFormation allows you to define the VPC, subnets, routing rules, internet gateways, and security group configurations, automating the entire network setup. #### 3. 
Automating CI/CD Pipelines CloudFormation seamlessly integrates with popular CI/CD tools, enabling you to automate your entire software delivery pipeline. You can trigger CloudFormation deployments as part of your CI/CD workflow, ensuring your infrastructure is always in sync with your codebase. **Example Scenario:** In a CI/CD pipeline, you can define a stage where, upon successful code commits and testing, your CloudFormation template is automatically executed, updating or creating resources in your desired AWS environment. #### 4. Launching Standardized Environments CloudFormation allows you to define reusable infrastructure templates, enabling you to launch standardized environments for specific purposes like development, testing, or production. This ensures consistency and reduces configuration drift across different environments. **Example Scenario:** Imagine provisioning a standardized development environment that includes an EC2 instance, a relational database, and security configurations. With CloudFormation, you can define this setup once and easily replicate it for new developers, saving time and effort. #### 5. Managing Disaster Recovery CloudFormation facilitates disaster recovery by enabling you to define templates for your backup and recovery procedures. In the event of a failure, you can use these templates to quickly restore your infrastructure to a working state in a different region or availability zone. **Example Scenario:** You can define a CloudFormation template that creates a duplicate of your production environment in a separate Availability Zone or Region. If your primary environment experiences issues, you can quickly deploy this template to restore operations with minimal downtime. ### CloudFormation Alternatives While CloudFormation is a powerful IaC tool on AWS, several alternatives offer similar functionalities: * **Terraform:** HashiCorp's Terraform is a popular open-source IaC tool known for its platform-agnostic approach, allowing you to manage infrastructure across multiple cloud providers. * **Pulumi:** Pulumi allows you to define infrastructure as code using familiar programming languages like Python, JavaScript, and Go, providing greater flexibility for developers. * **AWS CDK:** AWS CDK (Cloud Development Kit) allows developers to define infrastructure using their preferred programming languages, leveraging the familiarity of existing codebases. ### Conclusion AWS CloudFormation empowers organizations to embrace Infrastructure as Code, enabling them to automate their infrastructure provisioning, management, and deployment processes. Its ability to handle simple to complex deployments, combined with features like version control, drift detection, and seamless integration with other AWS services, makes it an invaluable tool for businesses looking to enhance their cloud operations. ### Advanced Use Case: Blue/Green Deployment with CloudFormation and AWS CodeDeploy For this advanced use case, let's explore how we can leverage CloudFormation in conjunction with AWS CodeDeploy to achieve a Blue/Green deployment strategy, minimizing downtime and mitigating risks during application updates. #### Scenario Imagine you have a web application deployed on AWS, running on EC2 instances behind an Elastic Load Balancer (ELB). We want to update this application to a new version with minimal disruption to users. 
#### Architecture * **CloudFormation:** We'll use CloudFormation to manage the entire infrastructure for our application, including the EC2 instances, ELB, security groups, and other required resources. The CloudFormation template will define two identical environments, 'Blue' and 'Green', each with its own set of resources. * **AWS CodeDeploy:** This service will handle the application deployment to the EC2 instances. We'll configure CodeDeploy to update the 'Green' environment first while keeping the 'Blue' environment live. * **Elastic Load Balancer (ELB):** The ELB will route traffic between the 'Blue' and 'Green' environments. Initially, all traffic goes to 'Blue.' After the update and testing in the 'Green' environment, the ELB will shift traffic to 'Green,' making it the new live environment. #### Steps 1. **Define Infrastructure:** A CloudFormation template will define the entire infrastructure for both the 'Blue' and 'Green' environments, including the EC2 instances, ELB, Auto Scaling group, and other relevant resources. 2. **Initial Deployment:** The CloudFormation template is executed to create the 'Blue' environment. The ELB is configured to route traffic to the 'Blue' instances. 3. **Update and Deploy:** Developers update the application code. This new version is packaged and deployed to the 'Green' environment using AWS CodeDeploy. 4. **Testing and Validation:** Rigorous testing is performed on the 'Green' environment to ensure the new application version is working as expected. 5. **Traffic Shift:** Once the new version is validated in the 'Green' environment, the ELB configuration is updated to gradually shift traffic from 'Blue' to 'Green'. 6. **Monitoring:** Throughout the process, the application and infrastructure are continuously monitored for any errors or performance issues. 7. **Rollback (If Necessary):** In case of issues with the new version, the ELB can quickly revert traffic back to the 'Blue' environment, minimizing user impact. 8. **Cleanup:** After the successful deployment and traffic shift, the 'Blue' environment is decommissioned and its resources are released, optimizing costs. #### Benefits * **Zero Downtime:** By deploying updates to a separate environment and gradually shifting traffic, downtime during deployments is minimized or eliminated. * **Reduced Risk:** Testing the new version in a production-like environment ('Green') before going live significantly reduces the risk of issues impacting users. * **Rollback Capability:** The ability to quickly revert back to the previous version provides an extra layer of safety in case of unexpected problems. This advanced use case exemplifies how CloudFormation, combined with other AWS services like CodeDeploy and ELB, can orchestrate complex deployment strategies like Blue/Green deployments, ensuring application updates are deployed smoothly, minimizing user impact, and maintaining high availability.
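As a companion to the CI/CD automation use case above, here is a minimal boto3 sketch of a pipeline step that creates or updates a stack from a template file; the stack name, template path, and IAM capability flag are illustrative assumptions, not details from the article.

```python
# deploy_stack.py - minimal sketch of a CI/CD step that deploys a CloudFormation template
import boto3
from botocore.exceptions import ClientError

def deploy_stack(stack_name: str, template_path: str) -> None:
    cfn = boto3.client("cloudformation")
    with open(template_path) as f:
        template_body = f.read()

    # If the stack already exists, update it; otherwise create it
    try:
        cfn.describe_stacks(StackName=stack_name)
        action, waiter_name = "update_stack", "stack_update_complete"
    except ClientError:
        action, waiter_name = "create_stack", "stack_create_complete"

    try:
        getattr(cfn, action)(
            StackName=stack_name,
            TemplateBody=template_body,
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
        cfn.get_waiter(waiter_name).wait(StackName=stack_name)
        print(f"{action} finished for {stack_name}")
    except ClientError as err:
        # update_stack raises an error when there is nothing to change
        if "No updates are to be performed" in str(err):
            print("Nothing to deploy - stack is already up to date")
        else:
            raise

if __name__ == "__main__":
    deploy_stack("my-app-stack", "template.yaml")
```

A real pipeline would typically add parameters, tags, and change sets, but the create-or-update-and-wait pattern shown here is the core of driving CloudFormation from code.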
virajlakshitha
1,887,873
The Big Three of UI/UX Design: Figma, Sketch, and Adobe XD
In today's digital landscape, crafting user-centric interfaces and experiences (UI/UX) is paramount....
0
2024-06-14T03:00:20
https://dev.to/epakconsultant/the-big-three-of-uiux-design-figma-sketch-and-adobe-xd-2nno
uiux
In today's digital landscape, crafting user-centric interfaces and experiences (UI/UX) is paramount. Thankfully, a trio of powerful design tools has emerged, empowering designers to bring their visions to life: Figma, Sketch, and Adobe XD. Each boasts unique strengths, making the choice between them a thoughtful consideration. Let's delve into what makes these three titans of UI/UX design stand out. ## Figma: The Collaborative Powerhouse Figma takes center stage for its robust collaboration features. This cloud-based platform allows designers to work seamlessly in real-time, fostering communication and streamlining workflows. Design handoffs become a breeze, with version control and commenting tools ensuring everyone's on the same page. [Unlock the Power of Mobile Money: Mastering the Fundamentals of M-Pesa and MTN Mobile Money](https://nocodeappdeveloper.blogspot.com/2024/06/unlock-power-of-mobile-money-mastering.html) Figma shines in prototyping as well. Designers can craft interactive prototypes with user flows, making it easy to test and iterate on ideas before development begins. Its extensive plugin library further expands its functionality, catering to diverse design needs. However, Figma has limitations. Its vector editing capabilities, while decent, aren't as advanced as some dedicated illustration tools. ## Sketch: The Design-Centric Darling Sketch has long been the darling of UI/UX designers, particularly for Mac users. Its intuitive interface and focus on design workflows make it a joy to use. Sketch excels in creating pixel-perfect mockups and incorporates powerful features like symbol libraries for maintaining design consistency across projects. However, Sketch's biggest drawback is its platform limitation. It's currently exclusive to macOS, leaving Windows users out in the cold. Additionally, collaboration features, while present, are not as robust as Figma's. For teams working across platforms, Sketch might not be the ideal choice. [AWS CloudWatch: Revolutionizing Cloud Monitoring with Logs, Metrics, Alarms, and Dashboards](https://www.amazon.com/dp/B0CPX2BXQ9) ## Adobe XD: The Creative Cloud Champion As part of the Adobe Creative Cloud suite, Adobe XD integrates seamlessly with other Adobe products like Photoshop and Illustrator. This makes it a natural fit for designers already invested in the Adobe ecosystem. XD offers a comprehensive set of UI/UX design tools, including prototyping, animation, and asset management. However, XD's learning curve can be steeper than Figma's or Sketch's. Additionally, the pricing structure within the Creative Cloud suite might not be ideal for individual designers or smaller teams with limited budgets. ## Making the Choice: Consider Your Needs The best tool ultimately depends on your specific needs. Here's a breakdown to help you decide: • For Collaborative Workflows: Figma is the clear winner. • For Design-Centric Work on Mac: Sketch excels in this area. • For Integration with Adobe Products: Adobe XD reigns supreme. ## Beyond the Big Three While Figma, Sketch, and Adobe XD dominate the UI/UX design landscape, other notable options exist. Affinity Designer offers a powerful and affordable alternative to both Illustrator and Sketch. For those new to design, Canva provides a user-friendly platform to create basic graphics and presentations. No matter your choice, remember that the ideal design tool empowers your creativity and streamlines your workflow. Experiment, explore, and find the one that best suits your design journey.
epakconsultant
1,887,870
Friction Liner/Gasket Manufacturers: Supporting Industrial Operations Worldwide
Friction Liner/Gasket Manufacturers: Supporting Industry Worldwide Introduction: Friction...
0
2024-06-14T02:52:16
https://dev.to/tahera_tabiya_2d8c7b907d0/friction-linergasket-manufacturers-supporting-industrial-operations-worldwide-poh
Friction Liner/Gasket Manufacturers: Supporting Industry Worldwide. Introduction: Friction liner and gasket manufacturers have become essential to industrial operations worldwide. They supply components that are vital to keeping production equipment running smoothly. This article discusses the benefits of using friction liners/gaskets, the innovations behind them, and the safety precautions taken in their use. It also covers their uses, application, quality, and servicing requirements. Characteristics of Friction Liners/Gaskets: Friction liners/gaskets protect equipment from damage. They are typically made from high-quality materials designed to withstand extreme pressure and heat. Friction liners offer several advantages: they are easy to install, reduce maintenance costs, and extend equipment lifespan. They can also improve efficiency and performance, ensuring that equipment runs smoothly and production continues without interruption. Innovations in Friction Liners/Gaskets: Manufacturers of friction liners/gaskets are constantly innovating to stay ahead of the competition and meet customer demands. Recent innovations include better materials that offer longer service life, improved heat resistance, and improved chemical resistance. These liners/gaskets are also built to tighter tolerances, providing a closer fit and better sealing. They offer better insulation and can withstand higher pressures, ensuring that equipment operates smoothly and precisely. Safety Precautions: Friction liners/gaskets are essential to operating equipment safely. They prevent leakage, reduce vibration, and provide a non-slip contact surface. Manufacturers take care to ensure the liners/gaskets meet safety standards and provide clear directions on how to use and install them correctly. They offer advice on which type of gasket to use for different applications and guidance on proper replacement and maintenance. Uses: Friction liners/gaskets are used across many industries, including aerospace, automotive, marine, and oil and gas. They are used to seal joints, preventing the leakage of fluids, or to reduce noise and vibration. Friction linings provide a cushion between two surfaces that come into contact with each other, reducing damage to the equipment. They are also used to manage equipment temperature by dissipating heat, improving efficiency and extending the overall life of the equipment. Application: Choosing the correct friction liner for a given application is vital to making sure equipment runs properly and efficiently. Manufacturers provide guidelines on which materials work well in which applications and offer standard sizes as well as custom-made products built to the customer's requirements. Some friction liners/gaskets have adhesive backing, making them simple to install; others must be bolted into place, so manufacturers supply the tools needed for installation. Quality and Servicing Needs: Manufacturers of friction liners/gaskets take pride in producing high-quality items, knowing that their customers' equipment depends on them. They typically use the best materials available and apply strict quality-control measures to ensure a consistent, dependable product. Maintenance is important and requires regular inspection of the liners/gaskets to identify any wear or tear. Manufacturers offer servicing options to help customers replace worn parts or upgrade to newer, more advanced products. Source: https://www.lyweika.com/Friction-liner
tahera_tabiya_2d8c7b907d0
1,887,869
A good book for learning and teaching programming
Hello. I've recently been shaping educational content to teach programming and give...
0
2024-06-14T02:51:23
https://dev.to/chema/un-buen-libro-para-aprender-y-educar-en-programacion-20li
books, programming, learning, spanish
Hello. I've recently been shaping educational content to teach programming and give some mentoring sessions. And along the way, to review and relearn the fundamentals. At some point I found myself remembering how I actually started learning, and what my process was like 🤔. It was definitely not something we could call "straightforward" 😅. I was very confused and, to be honest, complex topics terrified me. I avoided many of those complicated topics for years, hoping to understand them someday. In general, I didn't know what to do to learn to program 🤷🏻. ## ✨ A stroke of luck Among the many things I picked up from the internet and deciphered in my systems-engineering classes, one day I was lucky enough to go with some friends to a book fair in Orizaba (FILO) and found a book called **Programación Orientada a Objetos con C++ y Java** 😀. ![Book cover](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ba0myon6wzm9cmmcwzog.png) It was completely sealed and cost about 17 dollars (at today's 2024 prices). That was a bit more than I had available each week, but I hoped my doubts would be cleared up there, or that I would at least get some direction. ## 🌱 An eventual and pleasant surprise The thing is, a few days later I managed to put together enough money and bought it. I got home and was surprised when I opened it. It came with a CD, what a luxury! haha. The real surprise was that the book did not teach programming directly, as other resources of the time (2016) tried to. Instead, this book set out to bring in the essential disciplines of programming and software engineering, to prepare us as software engineers and not just coders. ## 🌳 How it influenced my education The book's full title is **Programación Orientada a Objetos con C++ y Java, un acercamiento interdisciplinario**, by the authors **Jose Luis Lopez Goytia and Angel Gutierrez Gonzalez**, professors at the Instituto Politécnico Nacional (one of the most important schools in Mexico). Now that I've remembered it and reread it on The Internet Archive (because I lent it out and it never came back to me), I realize that chapters 1-3 "trained" me in their methodology for approaching problems that need to be solved with software. ![Screenshot on The Internet Archive](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1fl15rbybkt7pj9xxv79.png) I hadn't noticed that, to this day, I have internalized several of the principles the book presents. Without realizing it, the book works as a professional course in which we learn to build software methodically and strategically, applying good practices when analyzing, designing, coding, and verifying. ## 🧑🏻‍🏫 👩🏻‍💻 Why and to whom do I recommend it? This is a book I personally recommend to people starting a software or systems engineering degree, and to anyone who wants to learn to program professionally. To build the quality software we all long for and love, being a good coder is not enough: you need methods, guides, and discipline to fall back on when you start doubting whether you're doing something right, or when "motivation" or "creativity" go missing for a moment. Even more, I highly recommend this book to teachers who want to train software professionals. In fact, that is one of the goals of the work. It is almost a proposed framework for educating software professionals.
## 📚 🏛️ Read it for free You can read it for free on The Internet Archive by borrowing it on a renewable one-hour loan if you create an account {% embed https://archive.org/details/programacionorie0000lope/mode/2up %} Don't forget to share your comments, opinions, or questions in the comments :) If you know of or recommend other books, you can share them too. I'll gladly read them.
chema
1,887,868
Cube Therapy Billing - Stability and permanence
Cube Therapy Billing provides tailored RCM billing and credentialing solutions for ABA therapy...
0
2024-06-14T02:45:35
https://dev.to/cube-therapy-billing/cube-therapy-billing-stability-and-permanence-2730
consulting, b2b, billing
Cube Therapy Billing provides tailored RCM billing and credentialing solutions for ABA therapy providers, helping you focus on your practice and accelerate your revenue by taking your business to higher ground. Our services include pre-service support, claims management, and insurance credentialing. [https://www.cubetherapybilling.com](https://www.cubetherapybilling.com)
cube-therapy-billing
1,887,867
Guide Wheel Liners for Mine Hoists: Precision Engineering for Reliability
photo_6284972798862540183_y.jpg Title: Guide Wheel Liners for Mine Hoists: Precision Engineering for...
0
2024-06-14T02:42:26
https://dev.to/tahera_tabiya_2d8c7b907d0/guide-wheel-liners-for-mine-hoists-precision-engineering-for-reliability-26id
Guide Wheel Liners for Mine Hoists: Precision Engineering for Reliability. Are you curious about guide wheel liners for mine hoists? Look no further. We'll explore the benefits of guide wheel liners, the innovation and safety they provide, their uses and applications, and how to use and service them. Advantages of Guide Wheel Liners: Guide wheel liners are an essential element of any mine hoist system, as they help guide and support the hoist rope. They offer several benefits, including improved reliability and reduced maintenance. Guide wheel liners help minimize wear on the hoist rope, preventing damage and lowering the risk of rope failure. By reducing the need for rope replacement and maintenance, guide wheel liners can save mine operators valuable time and resources. Innovation and Safety: Guide wheel liners are a product of precision engineering, an approach that uses advanced technology to design and produce complex components with extreme accuracy. This innovation has enabled liner blocks for head sheaves and guide wheels that are both stronger and more durable than their predecessors. The use of high-quality materials and advanced manufacturing methods means guide wheel liners can withstand the severe conditions of mining operations, ensuring they will not fail in the middle of a hoist cycle. In addition to their durability, guide wheel liners also improve safety for mine workers. By reducing wear on the hoist rope, they lower the risk of rope failure, which can be dangerous and potentially deadly. This safety benefit gives mine operators peace of mind, knowing they are providing a safer workplace for their employees. Use and Applications: Guide wheel liners are used in a variety of mine hoist systems, including both friction hoists and drum hoists. They are generally installed on the sheaves or drums that support the hoist rope. The guide wheel liner acts as a protective layer between the rope and the sheave, reducing friction and wear on the rope. This helps extend the life of the hoist rope while also reducing the risk of rope failure. Beyond mine hoists, friction linings can also be used in other applications, such as cable cars and ski lifts. Any system that depends on a hoist rope can benefit from the added protection and reliability provided by guide wheel liners. How to Use and Service Guide Wheel Liners: Guide wheel liners are relatively easy to use and install. They can be fitted by trained professionals, or with the help of the manufacturer's manual. It is important to ensure the guide wheel liners are installed correctly, as improper installation can lead to increased wear on the hoist rope. Proper maintenance is also important for ensuring the longevity of guide wheel liners. Routine inspections can help identify any signs of damage or wear, allowing for timely repairs or replacement. Cleaning the imported liner can also help remove any debris or contaminants that may cause additional wear on the rope. Source: https://www.lyweika.com/Liner-block-for-head-sheave-and-guide-wheel
tahera_tabiya_2d8c7b907d0
1,887,866
Benefits of Having a High Google Page Ranking
Hello, everyone! Achieving a high ranking on Google's search results pages is more than just a badge...
0
2024-06-14T02:39:28
https://dev.to/juddiy/benefits-of-having-a-high-google-page-ranking-1g1j
seo, google, discuss
Hello, everyone! Achieving a high ranking on Google's search results pages is more than just a badge of honor—it can significantly impact your online presence and business success. Here are several key benefits of having a high Google page ranking: 1. **Increased Visibility and Traffic**: Ranking higher on Google means your website is more likely to be seen by users searching for relevant keywords. This increased visibility translates into more organic traffic to your site, as users tend to click on top-ranked results. 2. **Enhanced Credibility and Trust**: High-ranking websites are perceived as more credible and trustworthy by users. Users often associate top-ranking positions with authority and expertise in their respective industries or niches. This credibility can lead to higher conversion rates and improved customer trust. 3. **Competitive Advantage**: Outranking competitors on Google can give you a competitive edge. Users are more likely to choose businesses and websites that appear at the top of search results, especially if they consistently provide valuable content and a positive user experience. 4. **Cost-Effective Marketing**: SEO (Search Engine Optimization) strategies that improve your Google ranking can provide long-term benefits without the ongoing costs associated with paid advertising. Once you establish a strong organic presence, you can continue to attract traffic and leads without significant additional investment. 5. **Better User Experience (UX)**: Google rewards websites that prioritize user experience. Websites that load quickly, are mobile-friendly, and provide relevant, easy-to-navigate content tend to rank higher. This focus on UX not only improves your ranking but also enhances user satisfaction and engagement. 6. **Increased Brand Awareness**: High visibility on Google means more users are exposed to your brand name, products, and services. Even if users don’t click through immediately, repeated exposure can lead to brand recall and recognition over time. 7. **Long-Term Growth and Sustainability**: A high Google page ranking is not just about short-term gains. By consistently optimizing your website for SEO and maintaining a top-ranking position, you can achieve sustainable growth. This ongoing visibility helps you attract new customers, build relationships, and establish a strong online presence. To achieve and maintain a high Google page ranking, consider using advanced tools like [SEO AI](https://seoai.run/). These tools can help you monitor your website's performance, analyze keyword trends, and refine your SEO strategies for optimal results. In conclusion, having a high Google page ranking offers numerous advantages that contribute to the overall success and growth of your business or website. By investing in SEO strategies, you can maximize the benefits of increased visibility, credibility, and customer engagement. --- Have you used SEO AI or similar tools to enhance your website’s performance? Share your experiences in the comments below!
juddiy
1,887,865
Punk Rave Introduction
DESIGN CONCEPT Decorated with costume, inspired with behavior to meet those people with same”No...
0
2024-06-14T02:38:27
https://dev.to/ninawind/punk-rave-introduction-2h21
DESIGN CONCEPT Decorated with costume, inspired by attitude, made to reach the people who share the same "No depressed, never slavish" punk feeling as PUNK RAVE. PUNK RAVE spells out Gothic romantic classicism with the strong, rebellious personality of punk, combines it with contemporary imagination, and designs fashion clothing with a distinct personality. We also hope that through fashion and art we can inspire behavior and meet the people who share that same "No depressed, never slavish" punk feeling as PUNK RAVE. Gothic - "I like black because I love life more than anyone else." I was obsessed with all the dark elements, but that doesn't mean I chose them ... I just use them to think about what the real meaning of life is. At some point it reminds me to treasure and enjoy the life I have now. #punkrave #punkraveofficial #punkraveclothing #goth #gothgoth #gothic #gothstyle #gothicstyle #gothfashion #gothicfashion #gothaesthetic #gothicaesthetic #victoriangoth #victoriangothic #victoriangothicstyle #darkaesthetic #darkfashion #darkstyle #darkaesthetic #gothicmodel #gothmodel Punk - "I don't care about other people's eyes; I know who I am." No depressed, never slavish: have the courage to express ourselves and to be wholly whatever we are. Listen to the roar of our state of life from inside the mind, even if it sounds a little aggressive. #punkrave #punkraveofficial #punkraveclothing #goth #gothgoth #gothic #gothstyle #gothicstyle #gothfashion #gothicfashion #gothaesthetic #gothicaesthetic #victoriangoth #victoriangothic #victoriangothicstyle #darkaesthetic #darkfashion #darkstyle #darkaesthetic #gothicmodel #gothmodel Lolita - "The fairy tale was a story made up by adults." If learning to face reality is the cruelty we must accept as we grow into adults, we still want to treasure the fairy-tale colors that reality breaks in a cruel world. The reason we created a separate new brand, "PYON PYON", for the LOLITA style out of the PUNK RAVE Ashes series is that, no matter whether the starting point is kind or destructive, lively or quiet, she never changed her original intention of hoping for something better. This is the obvious difference compared with the other two styles. From a design point of view, the forms are fixed, but the emotion of color and lace is used to emphasize a girl's yearning for beauty and to convey that almost morbid, stubborn longing to the world. "PYON PYON" takes its name from the Japanese onomatopoeia describing the sound of a lovely rabbit hopping. ABOUT DAILY SERIES Daily Fashion Series #punkrave #punkraveofficial #punkraveclothing #goth #gothgoth #gothic #gothstyle #gothicstyle #gothfashion #gothicfashion #gothaesthetic #gothicaesthetic #victoriangoth #victoriangothic #victoriangothicstyle #darkaesthetic #darkfashion #darkstyle #darkaesthetic #gothicmodel #gothmodel Aimed mainly at young people aged 16-35. They always knew they wanted to be out of the ordinary; their intelligent and independent character makes them different without isolating them from other people. It is the ultimate embodiment of fitting perfectly into life while being themselves at the same time. J&PUNK RAVE Series To be your true self in life at the right time and place. The inspiration for the fashion (daily) series comes from life; your state is affected at every moment, and it is a mental activity - either accept it or counter it. The series interprets all the emotions of right now in the simplest, most acceptable way.
Daily Vintage Fashion Series #punkrave #punkraveofficial #punkraveclothing #goth #gothgoth #gothic #gothstyle #gothicstyle #gothfashion #gothicfashion #gothaesthetic #gothicaesthetic #victoriangoth #victoriangothic #victoriangothicstyle #darkaesthetic #darkfashion #darkstyle #darkaesthetic #gothicmodel #gothmodel The fashion (daily) series is divided into a retro feeling and a punk feeling, also known as "vintage" and "Rock". The retro style features are combined with the punk style features: the punk idea, an unwillingness to be mediocre and an eagerness to vindicate the mind, expressed through exaggerated silhouettes, changing details, and the unique character of amplified fashion. Whoever wears it projects a cool fashion attitude and signals their current position or state. COMPANY INTRODUCTION Founded in 2006, Punkrave Co., Limited (Guangzhou Rui'er Clothing Co., Ltd) is a firm whose roots are deeply embedded in the Punk and Gothic fashion clothing area; we develop, produce, and sell clothes in Punk, Lolita, and Gothic styles. The products include T-shirts, blouses, jackets, coats, sweaters, dresses, pants, skirts, and accessories. DESIGN TEAM INTRODUCTION Our own designer teams guarantee the originality of our styles; our own garment factory ensures the quality of our clothes and can take on garment processing for different types of clothing. PUNK RAVE and PYON PYON are our brands, and they have been appreciated by customers and have won much popularity. We release more than 500 new designs each year, and our best-selling designs are widely welcomed in the market. You can find whatever you like. FACTORY INTRODUCTION High quality, complete customer satisfaction, good commercial credit, and constant innovation are what we pursue. We have been cooperating with partners from America, Russia, France, England, Germany, Poland, Spain, Switzerland, Finland, Norway, Italy, and Japan, and our customers are all over the world. We sincerely welcome friends at home and abroad to cooperate with us and create a bright future on the basis of equality and mutual profit. We enjoy very convenient transport: 30 minutes by car to the airport and 5 minutes on foot to the metro. You are welcome to visit our factory and our group. punk-rave.com
ninawind
1,887,824
Steps to create Azure Virtual Machine
First things first, you can create an Azure Virtual Machine through the Azure portal. A simple...
0
2024-06-14T02:36:56
https://dev.to/ikay/steps-to-create-azure-virtual-machine-5a74
azure, virtualmachine, networking, datacenter
First things first, you can create an Azure Virtual Machine through the Azure portal. A simple browser-based user interface helps you create virtual machines along with all the additional resources they need. With this quick step-by-step guide, you can deploy your own Azure Virtual Machine. Before getting started, make sure that you have an active Azure subscription. If not, don't worry! You can create a free account and then follow the steps below. Moving on from the basics, we will now focus on the actual creation of the Azure Virtual Machine. The following steps cover configuring the basic settings for your Azure Virtual Machine. **Step 1: Starting the Process of Setting up Azure Virtual Machine** Now that you have an active Azure subscription, open the Azure management portal at www.portal.azure.com and log in. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5eltvaim5rchpjxohvo2.png) **Step 2: Creating the Azure Virtual Machine** On the Azure portal, select Virtual machines. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wl36xusg8xvt0qs3univ.png) Select + Add to open the Create a virtual machine screen. In the Create a virtual machine panel, you will see distinct tabs: Basics, Disks, Networking, Advanced, Tags, and Review + create. Enter all your project details, with information about the subscription, resource group, virtual machine name, region, availability options, image, and Azure Spot instance. If you have not chosen an image before, you can pick the right image from the dropdown list. You can also browse all available images for creating virtual machines, including both public and private images. A good choice is Windows 11 Pro, version 22H2 - x64 Gen2 (free services eligible). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dgw31os504w6x4newa4v.png) Carefully select the size of the virtual machine to be deployed. Here you will find different virtual machine size options depending on your basic configuration. Note that not all virtual machine sizes are available in every Azure region. A recommended choice is Standard_D2s_v3 - 2 vcpus, 8 GiB memory ($70.08/month). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfa5bghcmk0drxk3jzxa.png) Fill in your username and password under the Administrator account. Make sure that your password is at least 12 characters long and also meets the specified complexity requirements. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ds9wfrwyi02jk1mn7qws.png) In the Inbound port rules section, choose Allow selected ports and then choose RDP (3389) from the drop-down menu. Under Licensing, check "I confirm I have an eligible Windows 10/11 license with multi-tenant hosting rights" and move on to Next: Disks. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nt376xdhj2ja6dajzmxw.png) Since this is a simple virtual machine, leave all other tabs on their default settings. Finally, select the Review + create button at the bottom of the page. Here you get a summary of the virtual machine that you are about to create. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9q66iyzprtl5kp8czqt.png) **Step 3: Final Deployment** After validation runs, the Create button at the bottom of the page becomes active; select it. Once the deployment is complete, you can Go to resources.
Keep in mind that creating an Azure Virtual Machine takes several minutes, so stay patient throughout the process; once it finishes, your virtual machine will be fully provisioned and ready to use. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/neb11087yhbknph8ssi9.png) Download the RDP file, locate it, and click Connect to reach the Windows 11 computer you just acquired. #SMILE# ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l75rpgt0hzs2mmsfi6n9.png) **SUMMING UP** You now understand the process of creating and maintaining Azure virtual machines. This comprehensive step-by-step guide walks through the various phases of establishing an Azure virtual machine, and once the basics are clear, the overall process becomes much easier to grasp. If you want to create another Azure Virtual Machine, you can get started right away.
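If you would rather check on the new VM without opening the portal, the sketch below reads its power state with the Azure SDK for Python; the subscription ID, resource group, and VM name are placeholders, and it assumes recent versions of the azure-identity and azure-mgmt-compute packages plus an authenticated environment (for example, being signed in with the Azure CLI).

```python
# check_vm.py - minimal sketch: read an Azure VM's power state with the Python SDK
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"   # placeholder
resource_group = "my-resource-group"          # placeholder
vm_name = "my-windows11-vm"                   # placeholder

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, subscription_id)

# expand="instanceView" includes runtime status alongside the VM model
vm = compute.virtual_machines.get(resource_group, vm_name, expand="instanceView")

for status in vm.instance_view.statuses:
    # statuses include entries such as "ProvisioningState/succeeded" and "PowerState/running"
    print(status.code, "-", status.display_status)
```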
ikay
1,887,864
Exploring Material Innovations in Friction Liner/Gasket Manufacturing
photo_6284972798862540181_y.jpg Exploring the Wonders of the New Friction Liner/Gasket Material...
0
2024-06-14T02:31:07
https://dev.to/tahera_tabiya_2d8c7b907d0/exploring-material-innovations-in-friction-linergasket-manufacturing-3p2c
Exploring the Wonders of New Friction Liner/Gasket Material Innovations. Friction liners/gaskets are essential components in machines and equipment that help prevent leakage and ensure a tight seal. For many years, friction liners/gaskets were made of traditional materials like rubber, asbestos, or cork. With technological advances, however, new material innovations have emerged. These new materials are characterized by high strength, a wide temperature range, low compression set, and excellent chemical resistance. Let's explore the advantages and applications of these new materials. Advantages of New Material Innovations: The new materials used in manufacturing friction liners/gaskets offer several advantages. First, they have a longer service life, which means they require less maintenance. Second, they are more resistant to wear and tear, corrosion, and chemicals, which improves the safety and efficiency of equipment. Lastly, they have improved sealing properties that reduce leakage and increase performance. Innovations in Friction Liner/Gasket Materials: One of the newest innovations in friction liner materials is PTFE (polytetrafluoroethylene). PTFE is a synthetic resin that is resistant to chemicals, water, and most oils. It also has excellent resistance to high temperatures and a low coefficient of friction. Besides PTFE, other materials like graphite, aramid fibers, and expanded polytetrafluoroethylene (ePTFE) are now used in friction liner/gasket manufacturing. Safety: Safety is crucial in industrial equipment, and the new materials used in friction liner/gasket manufacturing have made machines safer. These materials offer improved sealing properties that prevent the leakage of chemicals, acids, and gases. They are also resistant to high temperatures and wear and tear, reducing the machine malfunctions that can lead to accidents. Use and How to Use: Friction liners/gaskets are used in machines and equipment that require sealing and protection from leakage. Applications of friction linings include engines, pumps, compressors, and hydraulic systems. When using friction liners/gaskets, it is crucial to choose the proper material for the specific application. The material used must be compatible with the fluid or gas being sealed, to avoid chemical reactions that can damage the gasket. Service and Quality: Service and quality are among the primary factors to consider when selecting materials for friction liner/gasket manufacturing. The new materials offer improved service and quality, which translates into better performance and a longer service life. When selecting a friction liner, it is essential to consider the reputation and quality of the manufacturer. Applications: New material innovations in high-performance friction liner manufacturing have expanded the range of applications. The improved properties of these materials make them suitable for high-temperature, high-pressure, and chemically reactive environments. Applications now include the aerospace, automotive, chemical, and oil and gas industries. Source: https://www.lyweika.com/Friction-liner
tahera_tabiya_2d8c7b907d0
1,887,863
Enhanced analysis tool based on Alpha101 grammar development
Summary The FMZ platform launched a trading factor analysis tool based on "WorldQuant...
0
2024-06-14T02:30:27
https://dev.to/fmzquant/enhanced-analysis-tool-based-on-alpha101-grammar-development-3ekp
analysis, trading, fmzquant, cryptocurrency
## Summary The FMZ platform launched a trading factor analysis tool based on "WorldQuant Alpha101", which provides a new weapon for developers of quantitative trading strategies. Through analysis factors, it helps everyone better understand the market and gain insight into the opportunities behind the financial market. ## What is Alpha101 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kqm34eu9ohzsp4ztcp89.png) Before introducing Alpha101, first understand what is Alpha? Alpha refers to excess returns. For example: buy 1 million index fund and keep it all the time. This is a Beta strategy to earn passive returns in the market. But if you use 10 million to buy 10 stocks, and earn 10% more by buying an index fund, then this 10% is Alpha excess returns. Don't underestimate this Alpha excess return. In fact, most traders in the market, including fund managers, can't beat the index, so many people rack their brains to improve Alpha's return. Of course, there are some excellent traders and fund companies. - Trading strategy excess return = passive (Beta) return + trading (Alpha) return In 2015, the "WorldQuant LLC" quantitative trading hedge fund, which is good at data mining, released the "WorldQuant Formulaic 101 Alphas" research report, which disclosed the 101 Alpha expressions they are or have used, whose purpose is to give trading strategy developers Provide more inspiration and ideas. Many people questioned the factors disclosed by WorldQuant, because after all, the Chinese stock market is different from foreign stock markets. But it turns out that most of these factors are still effective in the Chinese market. FMZ platform reduplicated and corrected of these factor formulas, and showed it to all traders. ## What are the factors in Alpha101 In the research report, Alpha is divided into three categories: price factor, volume factor, and dichotomy factor. - Price factor: The calculation formula only uses the price, including: opening price, highest price, lowest price, closing price, etc. The output is a specific value. - Volume and price factor: The calculation formula uses volume and price. The design idea is to determine the relationship between price changes and trading volume changes, and the output is a specific value. - Dichotomy factor: The calculation formula uses trading volume and price. It is the same as the volume and price factor, except that the output is 0 or 1. 
**Price factor** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3j1lmge3492aacpqig8u.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n8f9em9v0dm6x4k02nxr.png) **Volume-price Factor** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0kft9kappik5qk31kh1g.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdgcsolcsrxvc4o96le8.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3um2vu5elnmlf0x6e43t.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k57fj3quh6xmojwdc0xz.png) **Dichotomy Factor** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vickghwukg9satw5evjm.png) ## Implemented on FMZ platform Open the FMZ official website (FMZ.COM), register and log in, click "Dashboard" on the upper left, and select "Analysis Tool" in the list on the left, as shown in the following figure: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfcn9kdhj5n2d2ckd81l.png) On the analysis tool page, the top is the settings bar, where you can set, from left to right: the symbol, the start and end time, the period, and the chart type. Below the settings bar is the formula editing area. If you can't write formulas yourself, you can click the drop-down menu below and select a ready-made formula; many formula examples are provided. In addition, the FMZ platform analysis tool already supports most of the official Alpha101 formulas, so you can just click and use them. Click "Calculate" to display the results at the bottom; multiple export formats are supported: pictures, tables (CSV), JSON, etc. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apndwei59wszkf4zmx8c.png) ## Need to pay attention to The parameters in the factor formulas are defaults, not optimal parameters. Traders can choose more appropriate parameters according to the symbol, the period, and their own experience. The factors are independent of each other, and stacking multiple factors on top of each other will not necessarily produce better results. When designing quantitative trading strategies, you should at least have your own logic, not a mechanical patchwork. The possible factors are unlimited; Alpha101 is just one approach, and I believe everyone can draw inspiration from it and create more and better factors and quantitative trading strategies. ## To sum up Many trading factor formulas seem unreasonable on the surface, but there are certain ideas and reasons behind them. The only constant in the market, however, is that it is constantly changing, so the effectiveness of these factors is non-linear in practical applications. In other words, there is no factor that is effective always and everywhere, and there is no universal trading method. As a quantitative trader, you should keep an open mind, be good at summarizing, and keep experimenting and innovating to make profits in a constantly changing market. From: https://blog.mathquant.com/2020/06/09/enhanced-analysis-tool-based-on-alpha101-grammar-development.html
fmzquant
1,887,862
Test
Test
0
2024-06-14T02:30:00
https://dev.to/testingwebfree/test-3a9j
webdev, beginners
#Test
testingwebfree
1,887,861
Synthesis of Peptide Nucleic Acids (PNA)
Peptide nucleic acid (PNA) synthesis is the process of creating PNA oligomers in the laboratory. The...
0
2024-06-14T02:26:14
https://dev.to/creativepeptides/synthesis-of-peptide-nucleic-acids-pna-3o5e
Peptide nucleic acid (PNA) synthesis is the process of creating PNA oligomers in the laboratory. The synthesis of peptide nucleic acid is a synthetic nucleic acid analogue that has a pseudopeptide backbone, making it resistant to enzymatic degradation and stable under a wide range of conditions. The [synthesis of PNA](https://www.creative-peptides.com/services/peptide-nucleic-acid-synthesis-service.html) involves solid-phase synthesis techniques, where each nucleotide monomer is added one at a time to the growing PNA chain. The process typically involves the use of protected nucleotide building blocks that are activated for coupling and deprotected after each coupling step. The PNA chain is elongated by repeating the coupling and deprotection steps until the desired PNA oligomer is obtained. Peptide Nucleic Acid Synthesis and Its Applications Peptide nucleic acids (PNAs) are synthetic mimics of DNA or RNA, where the sugar-phosphate backbone in DNA or RNA is replaced by a peptide-like backbone. PNA synthesis refers to the process of creating these synthetic molecules, typically in the laboratory setting. Here are some common applications of peptide nucleic acid synthesis: Molecular diagnostics: PNAs have shown potential as tools for molecular diagnostics due to their high binding affinity to complementary DNA or RNA sequences. They can be used in PCR assays, FISH (fluorescence in situ hybridization), and other diagnostic techniques. Antisense therapy: PNAs can be designed to bind specific RNA sequences and inhibit gene expression. This has potential applications in targeted therapy for various diseases, including cancer and genetic disorders. Biotechnology research: PNAs are utilized in various research applications, such as gene editing, gene expression regulation, and the development of molecular probes for studying DNA and RNA interactions. Drug development: PNAs can be modified to improve their stability, cellular uptake, and target specificity, making them attractive candidates for drug development. They can be used as drug carriers, anticancer agents, or in targeted therapy. Nanotechnology: PNAs can be incorporated into nanomaterials for various biomedical applications, including drug delivery systems, biosensors, and imaging agents. Gene editing: PNAs can be used in combination with nucleases (such as CRISPR/Cas9) to induce site-specific gene modifications by introducing targeted double-strand breaks in the DNA. Gene Silencing: PNAs can be designed to target specific sequences of messenger RNA (mRNA) to inhibit gene expression, a technique known as antisense technology. This is useful for studying gene function and potential therapeutic applications. DNA/RNA Detection: PNAs are used as probes in diagnostic assays to detect and identify DNA or RNA sequences of interest with high specificity and sensitivity. This is valuable in diagnostics for infectious diseases, genetic disorders, and molecular research. Genetic Engineering: PNAs can be used in gene editing techniques like CRISPR-Cas9 to improve the specificity and efficiency of targeted genome modifications. Custom PNAs can aid in precise gene editing for research or therapeutic purposes. Pharmacokinetics and Drug Development: PNAs are being explored as potential therapeutic agents due to their stability and ability to target specific gene sequences. Custom PNAs can be synthesized to investigate their efficacy for various diseases and conditions. 
Biophysical Studies: PNAs are used in structural and functional studies of nucleic acids, protein-nucleic acid interactions, and RNA interference mechanisms. Custom PNAs can be designed for advancing our understanding of molecular processes and interactions. Custom Peptide Nucleic Acid (PNA) Synthesis Services Custom peptide nucleic acid (PNA) synthesis refers to the process of designing and manufacturing PNAs with specific sequences as per the requirements of researchers or companies. This service provides tailored PNAs for various applications in molecular biology, diagnostics, therapeutics, and nanotechnology. We can provide high-quality custom PNA synthesis services to meet the specific needs of your clients in the research, pharmaceutical, and biotechnology industries.
creativepeptides
1,887,860
The relationship between 5G and LED display
Displays and 5G seem to have no direct relationship, but in fact, the two are closely related. LED...
0
2024-06-14T02:24:34
https://dev.to/sostrondylan/the-relationship-between-5g-and-led-display-50gl
5g, led, display
Displays and 5G seem to have no direct relationship, but in fact, the two are closely related. [LED display](https://sostron.com/products/) industry relies on the advancement of network technology to achieve today's development. With the advent of the 5G era, the full opening of the Internet of Everything will bring new opportunities and challenges to the LED display industry. This article will explore in detail the relationship between 5G and LED displays, as well as the huge potential brought by their combination. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3iuslz1gl2ibzfim49cw.png) 1. The Internet of Everything and the demand for display terminals The advent of the 5G era marks the full opening of the era of the Internet of Everything. The Internet of Everything requires display terminals to present various information and data, which brings huge market demand to the LED display industry. As an important part of the display terminal, LED displays will become an indispensable part of the Internet of Things environment and will be widely used in smart cities, smart transportation, smart homes and other fields. [Introducing outdoor traffic LED displays: market, cases and advantages. ](https://sostron.com/outdoor-traffic-led-display-market-cases-and-advantages/) 2. Improvement of display effect The high-speed data transmission capability of 5G technology will bring earth-shaking changes to the display effect of LED display screens. With the development and speed increase of network technology, various emerging technologies such as naked-eye 3D, VR, and AR will be more and more used in the field of stage design. The integration of these technologies will bring a more unique, high-precision and immersive visual experience to the audience, and also tap into more advantages of LED display screens. [What is an immersive exhibition? What LED display screens will be used? ](https://sostron.com/what-is-an-immersive-exhibition-what-led-displays-will-be-used/ ) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1z4qopkysvzn67hctlz0.png) 3. Interaction and resource sharing between screens The development of 5G technology will enable LED displays scattered across the country to be organically connected through the network, realizing resource sharing and interaction between screens. Through the high-speed and stable 5G network, LED displays can update and synchronize display content in real time, improving the efficiency and accuracy of information transmission. This is of great significance in the fields of advertising, information release, etc. [Take you to understand the working principle of LED interactive floor tile screen in 8 minutes. ](https://sostron.com/8-minutes-to-understand-the-working-principle-of-led-interactive-tile-screen/) 4. Smart city and smart transportation With the advancement of smart city construction, the application of LED display screens in smart transportation will also be equipped with 5G technology. The outdoor LED light pole screen is equipped with a large number of sensors, which can automatically adjust the brightness, display temperature, humidity, camera images, pedestrian flow, vehicle flow and other information in real time, and collect and transmit these data to the cloud, becoming one of the early entrances for smart city big data collection. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m5uh72mcxlz6lye7v4t9.png) 5. 
Advantages of 5G technology 5G, the fifth generation of mobile communication technology, has ultra-fast data transmission speed and higher capacity. Compared with 4G, 5G network can provide richer business functions, including higher data rate, lower end-to-end latency, lower overhead, large-scale device connection and consistent user experience quality. These technical advantages will bring more possibilities and innovation space for the application of LED display screens. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n8qxi2mjnyexm2fapuud.png) 6. Prospects of the integration of 5G and LED display screens At the 2020 Spring Festival Gala, CCTV realized the 5G network transmission of 4K ultra-high-definition content for the first time, which marked an important step in the practical application of 5G technology. For the LED display industry, although the application of 5G technology is still in its early stages, with the continuous maturity of technology and cross-border integration, the combination of 5G and LED display screens will bring revolutionary changes. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5qab8iq96jc0td5bcozb.png) 7. Response of LED display enterprises If LED display enterprises want to seize the opportunities of the 5G era, they must be proactive, conduct in-depth research on 5G technology, and explore its potential market. Enterprises need to recognize the importance of 5G technology, and continue to carry out technological innovation and product development to promote the deep integration of 5G and LED display. Only in this way can we occupy a favorable position in future competition and create more valuable creative displays. [6 reasons for enterprises to use LED commercial signs. ](https://sostron.com/6-reasons-for-businesses-to-use-led-commercial-signs/ ) Conclusion The arrival of the 5G era is the inevitable result of the continuous advancement of technology. LED display enterprises need to meet this challenge, seize opportunities, and actively explore innovative applications combining 5G and LED display. Through in-depth research and continuous innovation, the LED display industry will usher in a broader development prospect and achieve a leap from technological advancement to market expansion. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwflldll4euewpqciinj.png) Thank you for watching. I hope we can solve your problems. Sostron is a professional [LED display manufacturer](https://sostron.com/about-us/). We provide all kinds of displays, display leasing and display solutions around the world. If you want to know: [How to judge the quality of LED display screens.](https://dev.to/sostrondylan/how-to-judge-the-quality-of-led-display-screens-397g) Please click read. Follow me! Take you to know more about led display knowledge. Contact us on WhatsApp:https://api.whatsapp.com/send?phone=+8613570218702&text=Hello
sostrondylan
1,887,859
Guide Wheel Liners for Mine Hoists: Ensuring Smooth Movement and Control
photo_6284972798862540183_y.jpg Title: Making Mine Hoists Safer and More Efficient with Guide Wheel...
0
2024-06-14T02:22:55
https://dev.to/tahera_tabiya_2d8c7b907d0/guide-wheel-liners-for-mine-hoists-ensuring-smooth-movement-and-control-3pdg
Title: Making Mine Hoists Safer and More Efficient with Guide Wheel Liners Introduction Guide wheel liners are an innovative development in mine hoisting technology. They provide a variety of benefits, such as ensuring smooth movement and control, improving safety, and prolonging the life of your equipment. We'll discuss the benefits of guide wheel liners and how they can improve procedures in the mining industry. Advantages of Guide Wheel Liners With guide wheel liners, mine operators have better control over their hoisting equipment, as the liners give the material being transferred up and down the mine shaft a smoother trip. This results in fewer interruptions to operations and makes the equipment easier to manage. Enhancing Safety Guide wheel liners cover the wheels that guide the hoists up and down the shaft, which reduces the risk of entanglement and accidents. This results in a much safer and more efficient environment for employees to work in. Prolonging Equipment Life Guide wheel friction liners protect the wheels from deterioration, reducing the need for expensive repairs and extending the equipment's life expectancy. Innovations in Guide Wheel Liners Guide wheel liners are made of top-quality materials that ensure resilience and long-lasting performance even in severe mining environments. Easy to Use Guide wheel liners are easy to install and use, significantly decreasing the time and cost associated with repair and maintenance. How to Use Guide Wheel Liners The installation process for liner blocks for head sheaves and guide wheels can differ depending on the make and model of the hoist. Still, it is generally simple and can be done by trained workers. Maintenance Guide wheel liners require minimal maintenance once installed. Still, it is recommended to inspect them regularly to ensure they remain in good condition and to replace them when necessary. Quality and Service Guide wheel liners are made to high-quality specifications and designed to endure the wear and tear of mining operations. They are carefully evaluated to ensure they meet the necessary requirements. Service Guide wheel liners are supported by a team of experts who are well-informed about the application of these liners and can provide assistance with installation, maintenance, and safety. Application of Guide Wheel Liners Guide wheel liners are designed specifically for mine hoists. High-performance friction liners are a highly effective solution for ensuring smooth movement and control, improving safety, and prolonging the life of your hoisting equipment. While guide wheel liners are mainly used for mine hoists, they can also be used in other industries that involve the use of wheels, such as aviation, transportation, and manufacturing. Source: https://www.lyweika.com/Friction-liner
tahera_tabiya_2d8c7b907d0
1,887,853
Unveiling URI, URL, and URN
This guide provides an overview of URI, URL, and URN, explaining their differences and use...
0
2024-06-14T02:12:55
https://blog.logto.io/unveiling-uri-url-and-urn/
webdev, coding, programming, opensource
This guide provides an overview of URI, URL, and URN, explaining their differences and use cases. --- When developing web apps, we often need to call different web services. When configuring the communication and connection of different web services, we frequently encounter the concepts of URI, URL, and URN. Users often find it difficult to distinguish between them, leading to mixed or incorrect usage. In this article, we will provide examples and explain the differences between them to help everyone better understand these concepts and correctly interpret and use them when reading technical blogs or documentation, or when communicating with other engineers. # What is a URL? A URL (Uniform Resource Locator) provides the web address or location of resources on the internet. It is typically used to specify the location of web pages, files, or services. A URL provides a standardized format for accessing resources on the web. It is a key component of web browsing, linking, and internet communication. A URL consists of several parts that together define the address of the resource and the protocol used to access it. The typical structure of a URL is as follows. We parse the URL `https://example.logto.io:8080/blogs/index.html?param1=value1&param2=value2#introduction` as an example and explain the function of each part one by one. 1. Scheme This specifies the protocol or scheme used to access resources, such as HTTP (Hypertext Transfer Protocol), HTTPS (HTTP Secure), FTP (File Transfer Protocol), or [others](https://en.wikipedia.org/wiki/List_of_URI_schemes). The scheme in `https://example.logto.io:8080/blogs/index.html?param1=value1&param2=value2#introduction` is `https`. 2. Host Host specifies the domain name or IP address of the server that hosts the resources. The host in `https://example.logto.io:8080/blogs/index.html?param1=value1&param2=value2#introduction` is `example.logto.io`. 3. Port (Optional) Port represents a specific port number on the host used to access the resource. If no port is specified, it defaults to the standard port for the given scheme. The default port for HTTP is 80, while the default port for HTTPS is 443. The port in `https://example.logto.io:8080/blogs/index.html?param1=value1&param2=value2#introduction` is `8080`. 4. Path (Optional) Path indicates the specific location or directory on the server where the resource is located, which can include directories and file names. The path in `https://example.logto.io:8080/blogs/index.html?param1=value1&param2=value2#introduction` is `/blogs/index.html`. 5. Query Parameters (Optional) Query parameters are additional parameters passed to a resource, typically used in dynamic web applications. They appear after the path and are separated from it by the `?` symbol. The query parameters in `https://example.logto.io:8080/blogs/index.html?param1=value1&param2=value2#introduction` are `param1=value1&param2=value2`, represented as key-value pairs with pairs separated by the `&` symbol. In real usage scenarios, encoding is often required to avoid characters such as spaces. 6. Fragment Identifier (Optional) It can also be called an anchor, used to locate a specific position in the resource. The anchor in `https://example.logto.io:8080/blogs/index.html?param1=value1&param2=value2#introduction` is `#introduction`.
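As a quick sanity check of the breakdown above, the built-in WHATWG `URL` class (available in browsers and Node.js) parses the same example URL into these parts. This is a small illustrative sketch, not code from the original article:

``` JavaScript
// Parse the example URL with the standard URL class (works in browsers and Node.js).
const u = new URL("https://example.logto.io:8080/blogs/index.html?param1=value1&param2=value2#introduction");

console.log(u.protocol);                   // "https:"            -> scheme
console.log(u.hostname);                   // "example.logto.io"  -> host
console.log(u.port);                       // "8080"              -> port
console.log(u.pathname);                   // "/blogs/index.html" -> path
console.log(u.searchParams.get("param1")); // "value1"            -> a query parameter
console.log(u.hash);                       // "#introduction"     -> fragment identifier
```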
Additionally, for example, using file services or many "Contact Us" buttons on web pages are linked to URLs, such as: - `ftp://documents.logto.io/files/legal/soc_ii.pdf` - `mailto:contact@logto.io?subject=Enterprise%20quota%20request` # What is a URI? URI stands for Uniform Resource Identifier. It is a string of characters that identifies a specific resource, such as a webpage, file, or service. URI provides a way to uniquely identify and locate resources using a standardized format. A URI mainly consists of two components: 1. Scheme Indicates the protocol or scheme used to access the resource. 2. Resource Identifier Identifies the specific resource being accessed or referenced. The format of the resource identifier depends on the scheme used. From a grammatical perspective, URIs mostly follow the same format as URLs, as specified in [RFC 3986](https://datatracker.ietf.org/doc/html/rfc3986). Although this URI format is similar to that of URLs, it does not guarantee access to any resource on the Web. Using this format can reduce namespace name conflicts. In the section above, we introduced URLs, which not only identify a resource but also help locate that resource. So, in fact, URLs are a proper subset of URIs. # What is a URN? URN may not be as common as URL and URI. It stands for Uniform Resource Name, and its scope is to identify resources in a persistent manner, even if such resources no longer exist. Unlike a URL, a URN does not provide any information on how to locate the resource; it merely identifies it, much like a pure URI. Specifically, a URN is a type of URI with the scheme "urn" and has the following structure, as described in [RFC 2141](https://datatracker.ietf.org/doc/html/rfc2141): `<URN>:<NID>:<NSS>` 1. URN Usually `urn`. 2. Namespace Identifier (NID) Represents a unique namespace or identifier system that defines and manages the URN. It provides context and ensures the uniqueness of the identifier. Examples of namespaces include ISBN (International Standard Book Number), etc. 3. Namespace Specific String (NSS) It is a string of characters that uniquely identifies a resource within the specified namespace. The identifier itself does not convey any information about the location or access method of the resource. For example, a very famous book introducing computer systems [CSAPP](https://www.isbns.net/isbn/9780134092669/) has its ISBN number represented as `URN urn:isbn:9780134092669`. URNs are often used in various standard protocols, such as the assertion in the SAML protocol, which corresponds to the URN `urn:oasis:names:tc:SAML:2.0:assertion`. In software engineering, we can also define URNs for specific purposes in our own systems according to the URN naming rules. For instance, in Logto, to enable Organization, you need to add the `urn:logto:scope:organizations` scope in the config when using the SDK. Each Organization also has its own dedicated URN `urn:logto:organization:{orgId}`. # Conclusion The relationship between URI, URL, and URN can be illustrated using the following Venn diagram: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c5voqz0uzfdsnsmmhypu.png) URI, URL, and URN can all be used to identify different resources, but only URL can precisely locate the position of the resource. URI and URL can support various schemes, such as HTTP, HTTPS, FTP, but URN can be considered to only support the `urn` scheme. All URLs or URNs are URIs, but not all URIs are URLs or URNs. 
{% cta https://logto.io/?ref=dev %} Try Logto Cloud for free {% endcta %}
palomino
1,887,850
Ready, Set, LAUNCH!! How to Launch and Connect to an AWS EC2 Instance.
AWS EC2 (Amazon Elastic Compute Cloud) is a popular service that provides scalable computing capacity...
0
2024-06-14T01:49:07
https://dev.to/techgirlkaydee/ready-set-launch-how-to-launch-and-connect-to-an-aws-ec2-instance-1jbj
aws, compute, ec2, cloudcomputing
[AWS EC2 (Amazon Elastic Compute Cloud)](https://aws.amazon.com/ec2/) is a popular service that provides scalable computing capacity in the cloud. Whether you're setting up a server for the first time or just need a refresher, this guide will walk you through launching an EC2 instance and connecting to it. --- ## **Prerequisites** - An AWS account. If you don't have one, you can [create a free account](https://portal.aws.amazon.com/billing/signup?nc2=h_ct&src=header_signup&redirect_url=https%3A%2F%2Faws.amazon.com%2Fregistration-confirmation#/start/email). - Basic understanding of cloud computing concepts. - A terminal application for SSH (Linux and macOS). --- ## **Step 1: Launch an EC2 Instance** 1.. **Log in to AWS Management Console** Go to the [AWS Management Console](https://aws.amazon.com/console/) and log in with your credentials. 2.. **Navigate to EC2 Dashboard** From the AWS Management Console, type "EC2" in the search bar and select EC2 from the list of services. 3.. **Launch an Instance** 3.1. In the EC2 Dashboard, click **Launch Instance**. 3.2. **Name and Tags**: Enter a name for your instance (e.g., "MyFirstInstance"). Tags are optional. 3.3. **Choose an Amazon Machine Image (AMI)**: Select an AMI. For beginners, the Amazon Linux 2 AMI (HVM), SSD Volume Type is a good choice as it is free tier eligible. 3.4. **Choose an Instance Type**: Select the instance type. t2.micro is free tier eligible and sufficient for most small applications or testing purposes. 3.5. **Key Pair (login)**: Select Create a new key pair. Give it a name, and download the '.pem' file. Keep this file secure as it is necessary for SSH access. 3.6. **Network Settings**: - Click on **Edit**. - Under **VPC** and **Subnets**, the default settings are usually fine. - Ensure Auto-assign public IP is enabled. - **Firewall (security groups)**: Create a new security group. - Add a rule to allow SSH access. Set the type to **SSH**, the protocol to **TCP**, the port range to **22**, and the source to **My IP** to restrict access to your IP address.' 3.7. **Configure Storage**: The default 8 GB is usually enough. **Click Next: Add Tags.** 3.8. **Advanced Details**: You can leave this section with default settings for now. 3.9. Review all the configurations and click **Launch Instance**. 4.. **View Your Instance** Click **View Instances** to go to the EC2 Dashboard where you can see your instance initializing. Wait for the instance state to turn to **running** and the status checks to show **2/2 checks passed.** --- ## **Step 2: Connect to Your EC2 Instance** 1.. **Get the Public DNS** In the EC2 Dashboard, click on your running instance and find the **Public DNS (IPv4)** field. Copy this address; you will need it to connect to your instance. 2.. **Connect Using SSH (Linux/Mac)** 2.1. Open a terminal. 2.2. Navigate to the directory where your '.pem' file is located. 2.3. Change the permissions of your .pem file to ensure it's not publicly viewable by using the following script: - _chmod 400 your-key-pair.pem_ 2.4. Connect to your instance by using the following script: _- ssh -i "your-key-pair.pem" ec2-user@your-public-dns_ Note: Replace 'your-key-pair.pem' with the name of your key pair file and 'your-public-dns' with the Public DNS you copied earlier --- ## **Outcome** Once connected, you’ll be in the terminal of your new EC2 instance. You can now use this instance as a remote server to deploy applications, run scripts, or set up a web server. AWS EC2 provides a flexible platform for all your cloud computing needs. 
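If you later want to script the same launch instead of clicking through the console, the AWS SDK can do it programmatically. Below is a rough sketch using the AWS SDK for JavaScript (v3); the AMI ID, key pair name, and security group ID are placeholder values you would replace with your own, and the choices mirror the free tier settings above.

``` JavaScript
// Sketch: launch a t2.micro instance with the AWS SDK for JavaScript v3.
// ImageId, KeyName and SecurityGroupIds below are placeholders, not real values.
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

const client = new EC2Client({ region: "us-east-1" }); // use your preferred region

const result = await client.send(new RunInstancesCommand({
  ImageId: "ami-0123456789abcdef0",           // an Amazon Linux 2 AMI ID for your region
  InstanceType: "t2.micro",                   // free tier eligible
  KeyName: "my-key-pair",                     // the key pair created in Step 1
  SecurityGroupIds: ["sg-0123456789abcdef0"], // a security group that allows SSH (port 22)
  MinCount: 1,
  MaxCount: 1,
}));

console.log(result.Instances[0].InstanceId);  // ID of the newly launched instance
```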
Feel free to share your thoughts and questions in the comments below. Happy cloud computing!
techgirlkaydee
1,887,849
Help! my husband is Spying on My iPhone from his Computer - What should I do?
Yesterday, after work, I made a shocking discovery that left me feeling completely violated and...
0
2024-06-14T01:49:07
https://dev.to/brittany_jones/help-my-husband-is-spying-on-my-iphone-from-his-computer-what-should-i-do-1m4a
news
Yesterday, after work, I made a shocking discovery that left me feeling completely violated and betrayed. I found out that my husband has been remotely spying on my iPhone from his Mac computer. I never imagined that my own spouse would be invading my privacy in such a blatant manner. It has left me feeling confused and unsure of how to handle the situation. I'm from Perth in Australia. Should I confront him about it? Seek counseling? Or perhaps even consider legal action? What are the implications? I stumbled upon an article (Torvalid Mag article here: https://torvalid.com/2024/06/09/are-you-safe-on-your-iphone/) that suggests this type of behavior is becoming a concerning trend in Australia. I was intrigued by the article because the description matches exactly the software I saw him using. Should I spy on him to gather evidence too? Or maybe I am too paranoid? I'm reaching out to the community for advice and support. Has anyone else experienced a similar breach of trust? How did you handle it? What do you think I should do in this situation? Any guidance or insight would be greatly appreciated.
brittany_jones
1,887,846
SQLynx-A web-based SQL IDE that can run centrally
A web-based SQL IDE that can run centrally provides the convenience of accessing database management...
0
2024-06-14T01:37:43
https://dev.to/concerate/sqlynx-a-web-based-sql-ide-that-can-run-centrally-3ni1
A web-based SQL IDE that can run centrally provides the convenience of accessing database management tools through a web browser, allowing for centralized administration and collaboration without the need for local installations. SQLynx is not an online SQL IDE (SaaS); you still need to deploy SQLynx in your own environment. Features: **• Intelligent Code Completion and Suggestions:** Utilizes AI technology to provide advanced code completion, intelligent suggestions, and automatic error detection, significantly enhancing the efficiency of writing and debugging SQL queries. **• Cross-Platform:** Supports access across multiple platforms, including Windows, macOS, and Linux, ensuring users can efficiently manage databases regardless of their location. **• Robust Security Measures:** Offers enhanced encryption, multi-factor authentication, and strict access control to protect sensitive data from unauthorized access and network threats. **• Enterprise-Level Collaboration:** Web-based support for large-scale team collaboration, offering detailed permission management, version control, and approval processes. **• Top-Tier Security Measures:** Provides enterprise-grade data encryption, multi-factor authentication, fine-grained access control, and security auditing to ensure comprehensive data protection. **Download Link:** http://www.sqlynx.com/en/#/home/probation/SQLynx
concerate
1,887,684
Laravel 11.x Sanctum SPA authentication with Postman!
For those who are unfamiliar with Laravel, it is a very popular monolithic PHP web framework similar...
0
2024-06-14T01:32:37
https://dev.to/prismlabsdev/laravel-11x-sanctum-spa-authentication-with-postman-3ji0
laravel, authentication, postman
For those who are unfamiliar with [Laravel](https://laravel.com/), it is a very popular monolithic PHP web framework similar to others like [Ruby on Rails](https://rubyonrails.org/). It is known for its ease of use, rapid development and making PHP development far more enjoyable haha! ![PHP being a bad language meme](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iy6ebh4kve352rt6kegy.jpg) [Sanctum](https://github.com/laravel/sanctum) is an official Laravel package designed to provide a featherweight authentication system for SPAs and simple APIs. Sanctum is by far the easiest way to add authentication to a Laravel application. Sanctum offers an API token approach designed for authenticating APIs and mobile applications, which is fairly easy to set up. On the other hand, the cookie-based SPA authentication method can be quite confusing to configure. This is unfortunate, as it is the method most people will be using. So here I am going to show you how to configure Sanctum SPA authentication in Laravel 11.x and test it with Postman. I will also include an Axios configuration to easily authenticate from your SPA! **NOTE:** The SPA authentication method only works if the request is coming from the same domain, but it can come from a different subdomain. For example `website.com` can authenticate to `api.website.com`. Follow along with the Sanctum documentation [here](https://laravel.com/docs/11.x/sanctum). ## Installation & Configuration of Sanctum By default Laravel does not include API authentication, but it provides artisan commands to install and scaffold Sanctum. This will generate a few files you will need, like a database migration and a config file specifically for Sanctum. ``` bash php artisan install:api ``` ### Configuring Your First-Party Domains In your `.env` file you will want to add the domain which you will be accepting requests from. This will be used to verify the `Referer` header, and if not found, the `Origin` header. ``` env SANCTUM_STATEFUL_DOMAINS=website.com - or - SANCTUM_STATEFUL_DOMAINS=spa.website.com ``` ### Sanctum Middleware Inside `bootstrap/app.php` you want to add the stateful API middleware to the entire application; this can be done with the following. Without this middleware any route protected by the `auth:sanctum` middleware will fail. ``` PHP ->withMiddleware(function (Middleware $middleware) { $middleware->statefulApi(); }) ``` ### CORS and Cookies For almost any application you will want CORS enabled, and Sanctum requires it to function properly. You can easily scaffold its configuration with the following: ``` bash php artisan config:publish cors ``` This will generate the `config/cors.php` file. In this file you simply need to set `supports_credentials` to `true`. This will prevent the cookies from being blocked by CORS. Lastly, for this section you will need to edit the `SESSION_DOMAIN` in your `.env` file. You will want to change it to the domain of your SPA, or you can prefix it with a `.` to make it accept all subdomains. For example `.website.com` will allow any subdomain. ### Logging In So now we have everything in place to verify our tokens, but we need to create the login route which will verify our email and password and associate that with our session.
You can easily just add the following to your `routes/web.php` ``` PHP use Illuminate\Http\Request; use Illuminate\Support\Facades\Auth; Route::post('/login', function (Request $request) { $credentials = $request->validate([ 'email' => ['required', 'email'], 'password' => ['required'], ]); if (Auth::attempt($credentials)) { $request->session()->regenerate(); return response([ 'message' => 'Successful login!' ], 200); } return response([ 'message' => 'Mismatched email and password!' ], 401); }); ``` You can also follow the [example provided by Laravel](https://laravel.com/docs/11.x/authentication#authenticating-users), however it does not return JSON like the above, but a redirect response. ## Configure Postman Collection Before we configure Postman I want to briefly discuss the operations required for logging in. First, Sanctum needs us to hit the `/sanctum/csrf-cookie` endpoint with a `GET` request. This will return a 204 response, meaning it was successful but has no content; it only sets our session cookies. Then we can make a `POST` request to our newly created `/login` route with an `email` and `password` in the body to authenticate and associate the authenticated user with the session matching the token issued in the previous step. Now we are fully authenticated and can use any Sanctum-protected route by passing the cookie along with the header `X-XSRF-TOKEN` containing the parsed value of the cookie in the request. This can be tested by making a request to the `/api/user` route that is included by default. Don't worry, Postman and Axios can easily manage the cookies and headers for us! ### The postman collection You can download this [Laravel 11.x Sanctum SPA Collection](https://github.com/PrismLabsDev/laravel_11.x_sanctum_spa_collection) to follow along. The key to the Postman collection lies in our collection Pre-request script: ``` JavaScript const jar = pm.cookies.jar(); jar.get("http://localhost:5174", "XSRF-TOKEN", (err, cookie) => { pm.request.addHeader({ key: "X-XSRF-TOKEN", value: cookie }); pm.request.addHeader({ key: "Origin", value: "http://localhost:5174" }); pm.request.addHeader({ key: "Referer", value: "http://localhost:5174" }); }); ``` This script will run before every request, parsing the cookies found for the given origin and adding them to the `X-XSRF-TOKEN` header. It also sets both the `Referer` and `Origin` headers, which Sanctum requires. You may notice that I have set the `Referer` and `Origin` headers to `localhost:5174`; this is simply because that is a common port for SPA applications built with Vite! NOTE: If this fails at any point it is probably because Postman requires you to give access to the cookies that belong to the origin. You can do that with the following process: ![How to view cookies in postman](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1h6y4ds97vwxxgqgxh9c.png) ![How to allow cookies in postman](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hy75835tdn6jvurg3wq7.png) ![How to add cookie origins in postman](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywtp1m9clnukp3nqn6ii.png) ## Axios configuration Axios is a promise-based HTTP client for JavaScript giving you a really easy way to work with HTTP requests. In our case it offers configuration points that make working with Sanctum extremely easy.
``` JavaScript import axios from 'axios'; let axiosInstance = axios.create({ baseURL: 'http://localhost', timeout: 1000, headers: { Accept: 'application/json' }, withCredentials: true, withXSRFToken: true, xsrfCookieName: "XSRF-TOKEN", xsrfHeaderName: "X-XSRF-TOKEN", }); export default axiosInstance; ``` By examining the above configuration, if you are familiar with Axios, you will see that it's a pretty simple Axios configuration except for 4 options: - withCredentials - withXSRFToken - xsrfCookieName - xsrfHeaderName Let's first discuss `withCredentials`. This option tells Axios to attach cookies found in the browser matching the origin to the request. Next is `withXSRFToken`. This option tells Axios to set the XSRF header only for requests of the same origin. Finally, `xsrfCookieName` and `xsrfHeaderName` tell Axios which cookie to parse for the header, and the name of the header to attach. Both options default to the values given in the above configuration, which is what Sanctum expects. Now any requests made with this Axios instance will be compatible with Laravel Sanctum SPA authentication!
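To tie it all together, here is a rough sketch of what the login flow looks like from the SPA using the instance above. The import path is illustrative, and error handling is omitted for brevity:

``` JavaScript
import api from './axiosInstance'; // the configured Axios instance exported above

async function login(email, password) {
  // 1. Ask Sanctum to issue the CSRF cookie (XSRF-TOKEN) for this session.
  await api.get('/sanctum/csrf-cookie');

  // 2. Authenticate; Axios sends the session cookie and the X-XSRF-TOKEN header automatically.
  await api.post('/login', { email, password });

  // 3. Any route protected by auth:sanctum now works with the same session cookie.
  const { data: user } = await api.get('/api/user');
  return user;
}
```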
jwoodrow99
1,887,837
Why Relying on Manual Tolerance or Thresholding in Visual Testing Tools is a Bad Idea
Ensuring that applications look and function as expected is crucial in today’s competitive market....
0
2024-06-14T01:28:30
https://dev.to/jackrover/why-relying-on-manual-tolerance-or-thresholding-in-visual-testing-tools-is-a-bad-idea-260i
visualtesting, ai
Ensuring that applications look and function as expected is crucial in today’s competitive market. Visual testing tools have become essential in this process, helping teams catch visual discrepancies that automated functional tests might miss. However, not all visual testing tools are created equal, and one significant drawback is the reliance on manual tolerance or thresholding settings. This approach can lead to numerous issues, ultimately increasing the effort required to maintain the application. ##The False Positives Nightmare Setting the tolerance or threshold too strictly in a visual testing tool often results in a high number of false positives. A false positive occurs when the tool identifies a change that isn't a defect. For example, a slight shift in the position of an element or a minor color difference due to browser rendering could be flagged as an issue. These unnecessary alerts can quickly become overwhelming for the development team. Every false positive requires manual inspection to determine whether it's a real issue or just an innocuous change. This process is time-consuming and frustrating, pulling developers away from more critical tasks and reducing overall productivity. Instead of streamlining the testing process, strict thresholding can make it more cumbersome and error-prone. ##The Dreaded False Negatives On the other hand, setting the tolerance or threshold too loosely can lead to false negatives, where real defects go unnoticed. This scenario is equally problematic, as it means that visual bugs can slip through the cracks and make it into production. Imagine deploying an application with a misaligned button or incorrect image, only to have end-users discover these issues. This not only affects the user experience but can also damage the reputation of the application and the company behind it. The relaxed threshold setting gives a false sense of security, as the tool appears to be working perfectly while critical issues remain undetected. ## The Human-in-the-Loop Problem One of the primary goals of automated testing is to reduce the need for human intervention, allowing teams to focus on more complex and value-adding tasks. However, when using a visual testing tool with manual tolerance or thresholding, human intervention becomes inevitable. Each time the tool flags a potential issue, someone must step in to verify whether it’s a false positive or a genuine defect. This requirement not only slows down the testing process but also introduces the potential for human error. Over time, the reliance on manual checks can lead to fatigue and inconsistency in issue resolution. ## Blinded by the Threshold Using manual tolerance or thresholding can create a scenario where the end-user gets blinded by the settings. Instead of providing clear, actionable insights, the visual testing tool becomes a source of confusion and inefficiency. The goal of visual testing is to provide confidence that the application is visually correct, but with manual settings, this confidence is often misplaced. ## The Solution: Intelligent Visual Testing To overcome these challenges, it's essential to adopt a visual testing tool that leverages intelligent algorithms to automatically detect and ignore minor, non-critical changes while highlighting true visual defects. Tools with smart auto-exclude features can significantly reduce the number of false positives and false negatives, ensuring that only relevant issues are flagged for review. 
These advanced tools use machine learning and AI to understand the context of changes, differentiating between acceptable variations and real defects. By doing so, they eliminate the need for manual tolerance or threshold adjustments, streamlining the testing process and increasing the accuracy of results. ## AI in Visual Testing: A Game Changer The application of AI in visual testing is one of the earliest and strongest examples of AI’s transformative power in the software testing domain. Tools like Imagium and Applitools have been at the forefront of this revolution, leveraging AI to automate and enhance visual testing processes. By using AI, these tools can intelligently identify and filter out minor, non-impactful changes, while accurately detecting real visual defects, thus significantly improving the efficiency and reliability of visual testing. ## Summary Relying on manual tolerance or thresholding in visual testing tools is a practice fraught with challenges. It can lead to an overwhelming number of false positives, unnoticed defects, and an increased need for human intervention. Instead of simplifying the testing process, it complicates it, reducing efficiency and effectiveness. Embracing intelligent visual testing tools that automatically handle minor variations and focus on true defects is the way forward. These tools not only save time and effort but also provide more reliable results, ensuring that your applications are visually flawless and ready for end-users. Investing in such technology can significantly enhance the quality and efficiency of your testing process, ultimately leading to better software and happier customers. You can use tools like Imagium at no cost, which sets you free from manual configuration and saves you a lot of effort. With Imagium, you can trust that your visual testing is both efficient and accurate, giving you peace of mind and freeing up your team to focus on delivering exceptional software.
jackrover
1,887,844
Understanding API Keys, JWT, and Secure Authentication Methods
In today's digital age, secure authentication methods are crucial for protecting user data and...
0
2024-06-14T01:24:55
https://dev.to/mochafreddo/understanding-api-keys-jwt-and-secure-authentication-methods-11af
authentication, security, jwt, apikeys
In today's digital age, secure authentication methods are crucial for protecting user data and maintaining the integrity of web applications. This post will explore API keys, JWT (JSON Web Token), and best practices for secure authentication. ### What is an API Key? An API key is a unique identifier used to authenticate a user, developer, or calling program to an API. It serves several purposes: 1. **Authentication**: Verifies the identity of the user or application making the request. 2. **Tracking and Limiting**: Monitors API usage and enforces usage limits. 3. **User-specific Settings**: Allows API providers to configure specific settings or permissions for each user. API keys are typically included in requests as either a URL parameter or a request header. For example: ```http GET https://api.example.com/data?apikey=YOUR_API_KEY ``` or ```http GET https://api.example.com/data Headers: Authorization: ApiKey YOUR_API_KEY ``` ### What is JWT? JWT is a compact, URL-safe means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JSON object and digitally signed. JWTs are commonly used in stateless authentication mechanisms. #### Structure of JWT: - **Header**: Contains the type of token and the signing algorithm. - **Payload**: Contains the claims, which are statements about an entity (typically, the user) and additional data. - **Signature**: Ensures that the token has not been altered. #### Example of a JWT: ``` eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImV4YW1wbGVfdXNlciIsImlhdCI6MTUxNjIzOTAyMn0.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c ``` ### API Key vs. JWT **API Keys**: - Simple to implement and use. - Typically used for basic authentication and rate limiting. - Does not contain user information or claims; it's just an identifier. **JWT**: - More complex and versatile. - Used for stateless authentication, meaning the server does not need to keep session information. - Contains user information and can be verified without storing any state on the server. ### Using JWT in Authentication JWTs can be included in the HTTP headers for secure transmission: ```http GET /data HTTP/1.1 Host: api.example.com Authorization: Bearer <JWT> ``` ### Best Practices for Secure Authentication 1. **Multi-Factor Authentication (MFA)**: Adds an additional layer of security by requiring multiple forms of verification. 2. **Strong Password Policies**: Enforce complex passwords and regular updates. 3. **Password Hashing and Salting**: Store hashed and salted versions of passwords to protect against breaches. 4. **OAuth and OpenID Connect**: Use these protocols for secure, scalable authentication and authorization. 5. **SSL/TLS**: Encrypt all communications between the client and server to prevent eavesdropping. 6. **Secure Session Management**: Implement measures such as session timeouts and secure cookies (HttpOnly and Secure flags). ### Combining JWT with Session Cookies JWT can be stored in session cookies to combine the benefits of both approaches: - **HttpOnly and Secure Cookies**: Prevent access to the cookie via JavaScript and ensure it is only sent over HTTPS. - **Session Management**: Allows for easy session invalidation and management on the server side. #### Example of storing JWT in a session cookie: 1. **Login**: User logs in and the server creates a JWT. 2. **Set Cookie**: The JWT is set in a cookie with HttpOnly and Secure flags. 3. **Subsequent Requests**: The cookie is sent with each request, and the server validates the JWT. 
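To make the JWT-in-a-cookie flow above more concrete, here is a minimal Node.js sketch using the popular `jsonwebtoken` package. The secret handling, payload fields, and cookie options are illustrative assumptions, not a production recipe:

``` JavaScript
const jwt = require('jsonwebtoken');

const SECRET = process.env.JWT_SECRET; // keep the signing secret out of source code

// 1. Login: create a signed, short-lived token for the authenticated user.
function issueToken(userId) {
  return jwt.sign({ sub: userId }, SECRET, { algorithm: 'HS256', expiresIn: '1h' });
}

// 2. Set Cookie: with Express, for example, store it in an HttpOnly + Secure cookie:
//    res.cookie('token', issueToken(user.id), { httpOnly: true, secure: true, sameSite: 'lax' });

// 3. Subsequent Requests: verify the token read from the cookie on each request.
function verifyToken(token) {
  try {
    return jwt.verify(token, SECRET); // decoded payload, e.g. { sub: '...', iat: ..., exp: ... }
  } catch (err) {
    return null; // token is missing, expired, or tampered with
  }
}
```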
### Conclusion Both API keys and JWTs are essential tools for securing web applications, each with its own strengths and use cases. By understanding their differences and implementing best practices, developers can create robust and secure authentication systems that protect user data and enhance the user experience.
mochafreddo
1,887,843
Teach you to upgrade the market collector backtest the custom data source
Previous article Teach you to implement a market quotes collector taught you how to implement a...
0
2024-06-14T01:24:51
https://dev.to/fmzquant/teach-you-to-upgrade-the-market-collector-backtest-the-custom-data-source-efm
market, data, backtest, fmzquant
Previous article [Teach you to implement a market quotes collector](https://www.fmz.com/digest-topic/5692) taught you how to implement a market collector. We have implemented a robot program that collects market quotes together. How to use market data after we collected it? it will be using for the backtest system. Relying on the custom data source function of FMZ platform backtest system, we can directly use the collected data as the data source of the backtest system, so that we can let the backtest system uses in any market where we want to backtest historical data. Therefore, we can give the "Market Quote Collector" an upgrade! let the market collector can also serve as a custom data source to provide data to the backtest system. ## Get Ready It is different from the preparation work in the last article. The last time was a docker program running on my local MAC computer, installing the mongodb database to start the database service. This time we changed the operating environment to VPS and used the Alibaba Cloud Linux server to run our set of programs. - Mongodb Database As in the previous article, we need install the mongodb database on the device where the market collector program is running and start the service. It is basically the same as installing mongodb on a MAC computer. There are a lot of tutorials on the Internet, you can google it, it is very simple. - Install Python 3 The program uses python3, pay attention to the use of some python libraries, if they are not installed, you need to install them first. pymongo http urllib - Docker A FMZ docker running will be enough. ## Transform the "Market Quotes Collector" The market quotes collector is this: https://www.fmz.com/strategy/199120 (RecordsCollecter) strategy. Let's make some modifications to it: Before the program enters the while loop for collecting data, a multi-threaded library is used, and concurrent execution starts a service to monitor the data request of the FMZ platform backtest system. (Other details can be ignored) [RecordsCollecter (upgrade to provide custom data source function)](https://www.fmz.com/strategy/205143) ``` import _thread import pymongo import json import math from http.server import HTTPServer, BaseHTTPRequestHandler from urllib.parse import parse_qs, urlparse def url2Dict(url): query = urlparse(url).query params = parse_qs(query) result = {key: params[key][0] for key in params} return result class Provider(BaseHTTPRequestHandler): def do_GET(self): try: self.send_response(200) self.send_header("Content-type", "application/json") self.end_headers() dictParam = url2Dict(self.path) Log("The custom data source service receives the request, self.path:", self.path, "query parameter:", dictParam) # At present, the backtesting system can only select the exchange name from the list. 
When adding a custom data source, set it to Binance, that is: Binance exName = exchange.GetName() # Note that period is the bottom K-line period tabName = "%s_%s" % ("records", int(int(dictParam["period"]) / 1000)) priceRatio = math.pow(10, int(dictParam["round"])) amountRatio = math.pow(10, int(dictParam["vround"])) fromTS = int(dictParam["from"]) * int(1000) toTS = int(dictParam["to"]) * int(1000) # Connect to the database Log("Connect to the database service to obtain data, the database:", exName, "table:", tabName) myDBClient = pymongo.MongoClient("mongodb://localhost:27017") ex_DB = myDBClient[exName] exRecords = ex_DB[tabName] # Request data data = { "schema" : ["time", "open", "high", "low", "close", "vol"], "data" : [] } # Construct query condition: greater than a certain value{'age': {'$gt': 20}} Less than a certain value{'age': {'$lt': 20}} dbQuery = {"$and":[{'Time': {'$gt': fromTS}}, {'Time': {'$lt': toTS}}]} Log("Query conditions:", dbQuery, "Number of inquiries:", exRecords.find(dbQuery).count(), "Total number of databases:", exRecords.find().count()) for x in exRecords.find(dbQuery).sort("Time"): # Need to process data accuracy according to request parameters round and vround bar = [x["Time"], int(x["Open"] * priceRatio), int(x["High"] * priceRatio), int(x["Low"] * priceRatio), int(x["Close"] * priceRatio), int(x["Volume"] * amountRatio)] data["data"].append(bar) Log("data:", data, "Respond to backtest system requests.") # Write data reply self.wfile.write(json.dumps(data).encode()) except BaseException as e: Log("Provider do_GET error, e:", e) def createServer(host): try: server = HTTPServer(host, Provider) Log("Starting server, listen at: %s:%s" % host) server.serve_forever() except BaseException as e: Log("createServer error, e:", e) raise Exception("stop") def main(): LogReset(1) exName = exchange.GetName() period = exchange.GetPeriod() Log("collect", exName, "Exchange K-line data,", "K line cycle:", period, "second") # Connect to the database service, service address mongodb://127.0.0.1:27017 See the settings of mongodb installed on the server Log("Connect to the mongodb service of the hosting device, mongodb://localhost:27017") myDBClient = pymongo.MongoClient("mongodb://localhost:27017") # Create a database ex_DB = myDBClient[exName] # Print the current database table collist = ex_DB.list_collection_names() Log("mongodb ", exName, " collist:", collist) # Check if the table is deleted arrDropNames = json.loads(dropNames) if isinstance(arrDropNames, list): for i in range(len(arrDropNames)): dropName = arrDropNames[i] if isinstance(dropName, str): if not dropName in collist: continue tab = ex_DB[dropName] Log("dropName:", dropName, "delete:", dropName) ret = tab.drop() collist = ex_DB.list_collection_names() if dropName in collist: Log(dropName, "failed to delete") else : Log(dropName, "successfully deleted") # Start a thread to provide a custom data source service try: # _thread.start_new_thread(createServer, (("localhost", 9090), )) # local computer test _thread.start_new_thread(createServer, (("0.0.0.0", 9090), )) # Test on VPS server Log("Open the custom data source service thread", "#FF0000") except BaseException as e: Log("Failed to start the custom data source service!") Log("Error message:", e) raise Exception("stop") # Create the records table ex_DB_Records = ex_DB["%s_%d" % ("records", period)] Log("Start collecting", exName, "K-line data", "cycle:", period, "Open (create) the database table:", "%s_%d" % ("records", period), "#FF0000") preBarTime = 0 index = 1 
while True: r = _C(exchange.GetRecords) if len(r) < 2: Sleep(1000) continue if preBarTime == 0: # Write all BAR data for the first time for i in range(len(r) - 1): bar = r[i] # Write line by line, you need to determine whether the data already exists in the current database table, based on timestamp detection, if there is the data, then skip, if not write in retQuery = ex_DB_Records.find({"Time": bar["Time"]}) if retQuery.count() > 0: continue # Write bar to the database table ex_DB_Records.insert_one({"High": bar["High"], "Low": bar["Low"], "Open": bar["Open"], "Close": bar["Close"], "Time": bar["Time"], "Volume": bar["Volume"]}) index += 1 preBarTime = r[-1]["Time"] elif preBarTime != r[-1]["Time"]: bar = r[-2] # Check before writing data, whether the data already exists, based on time stamp detection retQuery = ex_DB_Records.find({"Time": bar["Time"]}) if retQuery.count() > 0: continue ex_DB_Records.insert_one({"High": bar["High"], "Low": bar["Low"], "Open": bar["Open"], "Close": bar["Close"], "Time": bar["Time"], "Volume": bar["Volume"]}) index += 1 preBarTime = r[-1]["Time"] LogStatus(_D(), "preBarTime:", preBarTime, "_D(preBarTime):", _D(preBarTime/1000), "index:", index) # adding drawing display ext.PlotRecords(r, "%s_%d" % ("records", period)) Sleep(10000) ``` ## Test Configure the robot ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g7ps4i56b6ssa70eh4r6.png) Run the robot, run the market quotes collector. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2dd8eb8aqdkxcjl59szi.png) Open a test strategy for backtest. For example: ``` function main() { Log(exchange.GetRecords()) Log(exchange.GetRecords()) Log(exchange.GetRecords()) Log(exchange.GetRecords()) Log(exchange.GetRecords()) Log(exchange.GetRecords()) Log(exchange.GetRecords().length) } ``` Configure the backtest option, set the exchange to Binance because the temporary custom data source cannot yet formulate an exchange name by itself, you can only borrow one of the exchange configurations in the list, the backtest shows that Binance, the actual It is the data of the simulation market of WexApp. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ouiiuz53swqly7cv3spp.png) Compare whether the chart generated by the backtest system based on the market quotes collector as a custom data source is the same as the 1-hour K-line chart on the wexApp exchange page. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ypqh2p9m18qlb9debj7y.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzn4ibzo1n72xgul2gvg.png) In this way, the robot on the VPS can collect K-line data by itself, and we can obtain the collected data at any time and backtest directly in the backtest system. You can continue to expand, for example, try the real-level backtest custom data sources, and multi-variety, multi-market data collection and other functions. From: https://blog.mathquant.com/2020/06/06/teach-you-to-upgrade-the-market-collector-backtest-the-custom-data-source.html
fmzquant
1,885,209
Mastering PostgreSQL JSONB type in one article
Learn how to use PostgreSQL's JSONB type to efficiently store, query, and manipulate JSON...
0
2024-06-14T01:19:31
https://blog.logto.io/mastering-postgresql-jsonb/
webdev, programming, postgressql, database
Learn how to use PostgreSQL's JSONB type to efficiently store, query, and manipulate JSON data. --- In modern application development, handling and storing unstructured data is becoming increasingly common. PostgreSQL's JSONB type provides a powerful tool for developers to efficiently store and query JSON data. In this article, we will delve into the concept and usage of the JSONB type and demonstrate its powerful features through specific code examples. # What is JSONB? JSONB is a PostgreSQL data type that stores JSON data in a binary format. Unlike the regular JSON type, JSONB is more efficient in querying and manipulation. It supports a rich set of operators and functions that simplify JSON data handling. Additionally, JSONB supports multiple index types, including B-tree, Hash, GIN, and GiST indexes, further enhancing query performance. # Creating JSONB type ### Creating a table with JSONB type First, let's create a table with a JSONB column. Suppose we have a `products` table to store product information, with product details stored using the JSONB type. ``` CREATE TABLE products ( id SERIAL PRIMARY KEY, name TEXT NOT NULL, details JSONB ); ``` ### Inserting JSONB data We can insert JSON data into a JSONB field using a simple `INSERT` statement. ``` INSERT INTO products (name, details) VALUES ( 'Laptop', '{ "brand": "Dell", "model": "XPS 13", "specs": { "cpu": "i7", "ram": "16GB", "storage": "512GB SSD" } }' ); -- Query result -- id | name | details -- ---+----------+---------------------------------------------------------------- -- 1 | Laptop | {"brand": "Dell", "model": "XPS 13", "specs": {"cpu": "i7", "ram": "16GB", "storage": "512GB SSD"}} ``` # Querying JSONB data ### Extracting values from JSONB fields We can use the `->>` operator to extract text values from a JSONB field. The following example shows how to extract the brand information of a product. ``` SELECT name, details->>'brand' AS brand FROM products; -- Query result -- name | brand -- --------+------- -- Laptop | Dell ``` ### Extracting nested values with JSONB We can use a combination of `->` and `->>` operators to extract values nested within a JSON structure. The following example shows how to extract the CPU information of a product. ``` SELECT name, details->'specs'->>'cpu' AS cpu FROM products; -- Query result -- name | cpu -- --------+----- -- Laptop | i7 ``` ### Extracting values with JSONB paths Using the `#>>` operator, we can extract values from specific paths within the JSON data. The following example shows how to extract storage information. ``` SELECT name, details#>>'{specs,storage}' AS storage FROM products; -- Query result -- name | storage -- --------+------------ -- Laptop | 512GB SSD ``` ### Using `@>` operator for containment queries The `@>` operator checks if a JSONB object contains another JSONB object. The following example shows how to query products of a specific brand. ``` SELECT name, details FROM products WHERE details @> '{"brand": "Dell"}'; -- Query result -- name | details -- --------+---------------------------------------------------------------- -- Laptop | {"brand": "Dell", "model": "XPS 13", "specs": {"cpu": "i7", "ram": "16GB", "storage": "512GB SSD"}} ``` # Modifying JSONB data ### Updating JSONB data We can use the `jsonb_set` function to update JSONB data. This function updates the value at a specified path. The following example shows how to update the storage information of a product. 
``` UPDATE products SET details = jsonb_set(details, '{specs,storage}', '"1TB SSD"') WHERE name = 'Laptop'; -- Query updated data SELECT name, details FROM products; -- Query result -- name | details -- --------+---------------------------------------------------------------- -- Laptop | {"brand": "Dell", "model": "XPS 13", "specs": {"cpu": "i7", "ram": "16GB", "storage": "1TB SSD"}} ``` ### Deleting top-level fields in JSONB data We can use the `-` operator to remove top-level fields from JSONB data. This operator deletes the specified key from a JSONB object. The following example shows how to remove a top-level field from product details. ``` UPDATE products SET details = details - 'model' WHERE name = 'Laptop'; -- Query updated data SELECT name, details FROM products; -- Query result -- name | details -- --------+---------------------------------------------------------------- -- Laptop | {"brand": "Dell", "specs": {"cpu": "i7", "ram": "16GB", "storage": "1TB SSD"}} ``` ### Deleting nested fields in JSONB data We can use the `#-` operator to remove specific path elements from JSONB data. This operator deletes the key at the specified path. The following example shows how to delete the CPU information from product specifications. ``` UPDATE products SET details = details #- '{specs,cpu}' WHERE name = 'Laptop'; -- Query updated data SELECT name, details FROM products; -- Query result -- name | details -- --------+---------------------------------------------------------------- -- Laptop | {"brand": "Dell", "model": "XPS 13", "specs": {"ram": "16GB", "storage": "1TB SSD"}} ``` # Advanced JSONB queries ### Using JSONB arrays JSONB data can include arrays, and we can use array-related operators to handle these data. The following example shows how to store and query products with multiple tags. ``` INSERT INTO products (name, details) VALUES ( 'Smartphone', '{ "brand": "Apple", "model": "iPhone 12", "tags": ["electronics", "mobile", "new"] }' ); -- Query products with specific tags SELECT name, details FROM products WHERE details @> '{"tags": ["mobile"]}'; -- Query result -- name | details -- ------------+---------------------------------------------------------------------------- -- Smartphone | {"brand": "Apple", "model": "iPhone 12", "tags": ["electronics", "mobile", "new"]} ``` In the above example, you can also use the `?` operator to check if a JSONB array contains a specific element: ``` SELECT name, details FROM products WHERE details->'tags' ? 'mobile'; ``` ### Merging JSONB data We can use the `||` operator to merge two JSONB objects. This operator combines two JSONB objects into one. The following example shows how to merge new specifications into existing product details. ``` UPDATE products SET details = details || '{"warranty": "2 years"}' WHERE name = 'Laptop'; -- Query updated data SELECT name, details FROM products; -- Query result -- name | details -- --------+---------------------------------------------------------------------------- -- Laptop | {"brand": "Dell", "model": "XPS 13", "specs": {"ram": "16GB", "storage": "1TB SSD"}, "warranty": "2 years"} ``` ### Aggregating JSONB data We can use aggregation functions to process JSONB data, such as counting the number of products for each brand. 
``` SELECT details->>'brand' AS brand, COUNT(*) AS count FROM products GROUP BY details->>'brand'; -- Query result -- brand | count -- --------+------- -- Dell | 1 -- Apple | 1 ``` # JSONB indexing and performance Optimization To improve query performance with JSONB data, we can create indexes on specific keys within the JSONB field. Choosing the right type of index is crucial for different query needs. - **GIN Index:** Suitable for complex queries involving multivalued data, arrays, and full-text search. - **B-tree Index:** Suitable for simple key-value pair queries, range queries, and sorting operations. Properly selecting and creating indexes can significantly boost the performance of JSONB data queries. ### Creating GIN index GIN indexes are ideal for speeding up queries involving multivalued data and arrays. For example, we can create a GIN index for the `tags` array mentioned earlier to accelerate queries on array elements. ``` -- Create GIN index CREATE INDEX idx_products_details_features ON products USING GIN ((details->'tags')); -- Query products with specific features using the index SELECT name, details FROM products WHERE details->'tags' ? 'electronics'; -- Query result -- name | details -- ------------+---------------------------------------------------------------------------- -- Smartphone | {"brand": "Apple", "model": "iPhone 12", "tags": ["electronics", "mobile", "new"]} ``` ### Creating B-tree index For simple key-value pair queries, a B-tree index can also improve performance significantly. B-tree indexes are suitable for range queries and sorting operations. The following example shows how to create a B-tree index for the `model` key in the `details` field. ``` -- Create B-tree index CREATE INDEX idx_products_details_model ON products ((details->>'model')); -- Query using the index SELECT name, details FROM products WHERE details->>'model' = 'XPS 13'; -- Query result -- name | details -- --------+---------------------------------------------------------------------------- -- Laptop | {"brand": "Dell", "model": "XPS 13", "features": ["Touchscreen", "Backlit Keyboard"], "specs": {"ram": "16GB", "storage": "1TB SSD"}, "warranty": "2 years"} ``` # Conclusion PostgreSQL's JSONB type provides a powerful and flexible solution for handling JSON data. With the introduction and examples in this article, you should now have a comprehensive understanding of how to use the JSONB type. By mastering these techniques, you can efficiently handle and query unstructured data, enhancing the performance and flexibility of your applications. {% cta https://logto.io/?ref=dev %} Try Logto Cloud for free {% endcta %}
palomino
1,887,838
A byte: the elementary building block of the digital world
A byte, symbolized by "o" (for "octet") in French, is the basic unit of information in computing. It represents 8...
0
2024-06-14T01:15:37
https://dev.to/marc14444/un-octet-la-brique-elementaire-du-numerique-f9k
devchallenge, cschallenge, computerscience, beginners
A byte, symbolized by "o" (for "octet") in French, is the basic unit of information in computing. It represents 8 bits, that is, 8 binary digits (0 or 1). Combined, these 8 bits can encode 256 different values, which is enough to represent characters, numbers, instructions and much more. It is the fundamental unit for measuring the size of memory, storage and digital files. In short, the byte is the cornerstone of the digital language, making it possible to represent and store the information that powers our computers and our digital lives.
marc14444
1,887,836
I don't know if that's illegal or not (Sorry FBI)
const accountCheck = () =&gt; { if (mybalance &lt; moneyneeded) { const money =...
0
2024-06-14T01:14:18
https://dev.to/shafayeat/i-dont-know-if-thats-illegal-or-not-sorry-fbi-hfo
discuss, javascript
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4g2ma5au5l7d1dbti90.PNG) ```JavaScript const accountCheck = () => { if (mybalance < moneyneeded) { const money = getbankaccountof("Bill Gates"); transferMoneyToMyAccount(money); sendThankYouNoteTo("Bill Gates", "Thanks for the loan, Bill!"); } } ``` Apologies, FBI! Just having some harmless fun here. No need to take anything seriously! 😊😊
shafayeat
1,887,781
FastAPI for Data Applications: From Concept to Creation. Part I
In this blog post, we'll explore how to create an API using FastAPI, a modern Python framework...
27,835
2024-06-14T00:58:11
https://dev.to/felipe_de_godoy/fastapi-for-data-applications-from-concept-to-creation-part-i-15ia
fastapi, dataengineering, datascience, llmops
In this blog post, we'll explore how to create an API using FastAPI, a modern Python framework designed for building APIs with high performance. We will create a simple API that allows users to add, update, and query items stored temporarily in memory. Alongside this, we'll discuss how you can extend this example to expose machine learning models, perform online processing in decision engines, and ensure best practices for a robust, secure API.

### Pre-requisites: Installation of FastAPI and Uvicorn

Before diving into the code, we need to install `FastAPI` and `Uvicorn`, an ASGI server to run our application. Run the following command in your terminal:

```sh
pip install fastapi uvicorn
```

If you prefer, you can run in a virtual environment to isolate your execution.

### Project Structure

We'll start by creating a file named `main.py`, which will contain all the code for our application.

### 1. Setting Up FastAPI

The first step is to import the necessary modules and initialize the FastAPI app.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List

app = FastAPI()
```

- **FastAPI**: The core library for building the API.
- **HTTPException**: This handles HTTP exceptions.
- **BaseModel**: Part of Pydantic, used for data validation.
- **List**: Used for type hinting.

### 2. Defining the Data Model

Next, we'll define the data model using Pydantic's `BaseModel`. This model will describe the structure of the data we want to store and process.

```python
class Item(BaseModel):
    id: int
    name: str
    description: str = None
```

- **Item**: Our data model with three fields: `id`, `name`, and `description`. The `description` field is optional.

### 3. In-Memory Database

For simplicity, we'll use a list to simulate a database. Here you can insert the connectors for your specific use case.

```python
fake_db: List[Item] = []
```

- **fake_db**: This list will store instances of `Item` during the application's runtime.

### 4. Define the Root Endpoint

We'll start by defining a simple root endpoint to ensure our application is running.

```python
@app.get("/")
async def root():
    return {"message": "Index Endpoint"}
```

This endpoint returns a JSON message saying "Index Endpoint" when accessed.

### 5. Implementing API Endpoints

#### a) Retrieve All Items

To retrieve all items in our "database", we'll define a GET endpoint.

```python
@app.get("/items/", response_model=List[Item])
async def get_items():
    return fake_db
```

This endpoint returns the list of all items.

#### b) Retrieve a Specific Item

To retrieve a specific item by its ID:

```python
@app.get("/items/{item_id}", response_model=Item)
async def get_item(item_id: int):
    for item in fake_db:
        if item.id == item_id:
            return item
    raise HTTPException(status_code=404, detail="Item not found")
```

- `item_id`: The item's ID passed as a path parameter.
- This function searches the list for an item with the given ID and returns it if found. If not, it raises a 404 error.

#### c) Create a New Item

To add new items to our list:

```python
@app.post("/items/", response_model=Item)
async def create_item(item: Item):
    for existing_item in fake_db:
        if existing_item.id == item.id:
            raise HTTPException(status_code=400, detail="Item already exists")
    fake_db.append(item)
    return item
```

- **create_item**: This function checks if an item with the same ID already exists. If it does, it raises a 400 error. Otherwise, it adds the item to the list.
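Before moving on to updates, note that the same pattern extends naturally to deletion. A minimal sketch of a delete endpoint, following the same in-memory conventions as the code above (the route and response shape here are illustrative choices, not fixed by the tutorial), might look like this:

```python
@app.delete("/items/{item_id}")
async def delete_item(item_id: int):
    # Look for the item by ID and remove it from the in-memory list
    for idx, existing_item in enumerate(fake_db):
        if existing_item.id == item_id:
            fake_db.pop(idx)
            return {"deleted": item_id}
    # Mirror the other endpoints: unknown IDs yield a 404
    raise HTTPException(status_code=404, detail="Item not found")
```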
#### d) Update an Existing Item

To update an existing item by its ID:

```python
@app.put("/items/{item_id}", response_model=Item)
async def update_item(item_id: int, updated_item: Item):
    for idx, existing_item in enumerate(fake_db):
        if existing_item.id == item_id:
            fake_db[idx] = updated_item
            return updated_item
    raise HTTPException(status_code=404, detail="Item not found")
```

- **update_item**: The function updates an item if it exists, otherwise raises a 404 error.

### Running the Application

Save your file as `main.py`. To run the application, execute the following command:

```sh
uvicorn main:app --reload
```

The `--reload` flag enables auto-reloading, meaning the server will restart whenever you make changes to the code. This is useful during development.

### Testing Your API

You can test the API using tools like `curl`, Postman, bash terminal, or even a web browser. Here are some `curl` commands for testing:

#### a) Add New Items

```sh
curl -X POST "http://127.0.0.1:8000/items/" -H "Content-Type: application/json" -d '{"id": 1, "name": "Item 1", "description": "This is item 1"}'
```

```sh
curl -X POST "http://127.0.0.1:8000/items/" -H "Content-Type: application/json" -d '{"id": 2, "name": "Item 2", "description": "This is item 2"}'
```

#### b) Update an Item

```sh
curl -X PUT "http://127.0.0.1:8000/items/1" -H "Content-Type: application/json" -d '{"id": 1, "name": "Updated Item 1", "description": "This is the updated item 1"}'
```

#### c) Retrieve All Items

```sh
curl -X GET "http://127.0.0.1:8000/items/"
```

#### d) Retrieve a Specific Item

```sh
curl -X GET "http://127.0.0.1:8000/items/1"
```

### Advanced Use Cases

#### Exposing a Machine Learning Model

One of the most exciting applications of FastAPI is exposing machine learning models. Imagine you want to serve a trained model for real-time predictions. Here's an example of how to load a machine-learning model and expose it via an endpoint.

Firstly, ensure you have a trained machine-learning model stored in a pickled file. For this example, we'll assume you have a `model.pkl` file.

```python
import pickle
from sklearn.ensemble import RandomForestClassifier

# Load the model
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.post("/predict/")
async def predict(data: List[float]):
    prediction = model.predict([data])
    return {"prediction": prediction[0]}
```

- **Data Validation**: Validate input data properly to avoid unexpected errors. Pydantic models can help structure the input data for the prediction endpoint.
- **Performance**: Be mindful of the model's loading time and how often it's accessed. For high-frequency endpoints, consider model optimization and caching mechanisms.
- **Risks**: Make sure only validated and sanitized data is fed into your model to avoid security vulnerabilities like input injection.

#### Online Processing in a Decision Engine

Another practical scenario is implementing online data processing using an API. For example, in a financial application, you might need to decide based on streaming data, such as approving or declining a transaction.

```python
@app.post("/process/")
async def process_transaction(transaction: dict):
    # Perform some real-time computation/decision
    decision = "approved" if transaction["amount"] < 1000 else "declined"
    return {"transaction_id": transaction["transaction_id"], "decision": decision}
```

- **Efficiency**: Ensure your processing logic is efficient since it could potentially process numerous transactions in real-time.
- **Consistency**: Maintain the consistency and idempotence of your decision logic to handle scenarios where the same request might be processed multiple times.
- **Security**: Always validate incoming data to protect against malicious payloads.

### Conclusion

In this blog post, we've built a simple API using FastAPI that allows you to add, update, and fetch items stored in memory. Additionally, we discussed how you can extend this architecture to expose machine learning models and perform online processing in decision engines. This example serves as a foundational understanding of how to use FastAPI for basic CRUD operations and more advanced use cases. For more complex scenarios, consider connecting to a real database, implementing robust authentication, and ensuring high availability.

Feel free to explore and expand upon this example to suit your needs. Happy coding!

### Looking Ahead to Part 2: Dockerizing and Running in Kubernetes

In the next part of our series, we will take this FastAPI application to the next level by preparing it for production deployment. We will walk you through the process of Dockerizing the application, ensuring it is containerized for consistency and portability. Then, we'll delve into orchestrating the application using Kubernetes, providing the scalability and reliability required for production environments. This includes setting up Kubernetes manifests, deploying the application to a Kubernetes cluster, and managing the application lifecycle in a cloud-native environment. Stay tuned to learn how to transform your development setup into a robust, enterprise-ready deployment!

Keep an eye out for Part 2 of this series as we dive into Docker and Kubernetes.

Github repo: https://github.com/felipe-de-godoy/FastAPI-for-Data-Delivery

Credits: Image from "Real Python"
felipe_de_godoy
1,887,832
vipsosyalbayim smm panel ınstagram tiktok twitter
Buy Followers Buy Social Media Followers Buy Cheap Followers Buy Reliable Followers...
0
2024-06-14T00:53:08
https://dev.to/takipcisatinal34/e-4c66
Buy Followers, Buy Social Media Followers, Buy Cheap Followers, Buy Reliable Followers, Buy Organic Followers, Buy Instagram Followers, Buy Cheap Instagram Followers, Buy Organic Instagram Followers, Buy Active Instagram Followers, Buy Fast Instagram Followers
takipcisatinal34
1,887,774
Create an App with EXPO (1)
Hi,guys,after finishing my fullstack project with Nest.j/Typeorm/Mysql/Aws/Vue3, I am going to start...
0
2024-06-13T23:43:42
https://dev.to/alexander6/create-an-app-with-expo-1-28g9
Hi guys, after finishing my full-stack project with Nest.js/TypeORM/MySQL/AWS/Vue 3, I am going to start a new application with React Native. What is this app for? I don't know for now. The first function that comes to my mind is that it should support chatting and posting.

### So let's start it.

From my past experience, we'd better not build everything from zero unless it's for learning purposes. After searching through many starter templates, I decided to use [this starter boilerplate](https://starter.obytes.com/overview/).

As for the Node package manager, I highly recommend using `pnpm` instead of `npm` or `yarn`; for the reason, you can just google it. ***But*** pay attention here: sometimes the project may fail when running `pnpm install` or `pnpm dev`. In those cases you should fix the errors according to the console output, and you may have to switch back to `npm` to resolve the problem. It's confusing, but it works.

There are also many unexpected problems when using a boilerplate like [obytes](https://starter.obytes.com/overview/), because it combines so many libraries, so you may have to deal with dependency installation errors.

Here is one error I met when running `pnpm ios` (it executes `pod install`), where the MMKV library used by react-native-mmkv blocked my way.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjtmzao56gd2wveu2i6m.png)

`error: 11397 bytes of body are still expected fetch-pack: unexpected disconnect while reading sideband packet fatal: early EOF`

This error is common when installing a JavaScript dependency whose link follows this pattern: "https://github.com/xxxx/dependency.git". ChatGPT told me that I can change the protocol from `https` to `git`. But this MMKV library is pulled in by `react-native-mmkv`, so I can't just change its link.

Here is the solution:

- First, change the global git config by running this command: `git config --global url."git@github.com:".insteadOf "https://github.com/"`
- Check that the command ran successfully and the URL rewrite rule has been added correctly: `git config --global --get-regexp url`
- You will see an output similar to this: `url.git@github.com:.insteadof https://github.com/`

### After doing this, I tried `pnpm ios`, waiting for the installation

Fine, I was stuck here for 1 hour

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbqrnta7dop3513mc4fx.png)

It looked like a problem with my network; after switching to another WiFi, the installation finished and the app successfully ran in the iOS simulator.

Now, we have some basic features:

- Login Page

![login page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eacas7np4oor3ffrfxd1.png)

- post list & tabs

![post list & tabs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6805rq8qpnb2bdf6cyf.png)

- issue post

![issue post](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4956l9hbbpe3yd3wycin.png)

We can modify the interface and add new tabs according to our specific needs. I will summarize the main features of the application.
alexander6
1,887,680
Create Window 11 virtual machine in Azure portal
Table of contents Sign in to Azure Create virtual machine Connect to virtual machine Install...
0
2024-06-14T00:49:36
https://dev.to/emeka_moses_c752f2bdde061/create-window-11-virtual-machine-in-azure-portal-4p53
azure, virtualmachine, window11, cloud
Table of contents

- Sign in to Azure
- Create virtual machine
- Connect to virtual machine
- Install Windows 11
- View the Windows 11 welcome page
- Clean up resources
- Delete resources

Azure virtual machines (VMs) can be created through the Azure portal. This method provides a browser-based user interface to create VMs and their associated resources.

## Sign in to Azure

https://portal.azure.com/

## Create Virtual Machine

1. Enter virtual machines in the search.
2. Under Services, select Virtual machines.
3. In the Virtual machines page, select Create and then Azure virtual machine. The Create a virtual machine page opens.
4. Under Instance details, enter myVM for the Virtual machine name and choose Windows 11 Pro, Version 22H2 - x64 Gen 2 for the Image. Leave the other defaults.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/esgnzn3j6fki3ymyvswz.PNG)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/emzmdo0kmsee7st9vfae.PNG)

5. Under Administrator account, provide a username, such as azureuser, and a password. The password must be at least 12 characters long and meet the defined complexity requirements.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g0lwini7vxdidhy2xkv9.PNG)

6. Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP (80) from the drop-down.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zfbinenp0y0x2c4i0ft2.png)

7. Leave the remaining defaults and then select the Review + create button at the bottom of the page.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xkzhavsbmevgjhh85ksb.png)

8. After validation runs, select the Create button at the bottom of the page.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kgp4u7o4uvn1g27dsh9c.PNG)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r2j77cw4924z223tu3ir.PNG)

9. After deployment is complete, select Go to resource.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rrsunqq87cswqruz0j6l.PNG)

## Connect to Virtual Machine

Create a remote desktop connection to the virtual machine. These directions tell you how to connect to your VM from a Windows computer. On a Mac, you need an RDP client such as the Remote Desktop Client from the Mac App Store.

1. On the overview page for your virtual machine, select Connect > RDP.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ydyww2hmf01s4r9c6jye.PNG)

2. In the Connect with RDP tab, keep the default options to connect by IP address, over port 3389, and click Download RDP file.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gpktiro9qn9he2lkk0xq.PNG)

3. Open the downloaded RDP file and click Connect when prompted.
4. In the Windows Security window, select More choices and then Use a different account. Type the username as localhost\username, enter the password you created for the virtual machine, and then click OK.
5. You may receive a certificate warning during the sign-in process. Click Yes or Continue to create the connection.

## Install Windows 11

To see your VM in action, install Windows 11 Pro. Open a PowerShell prompt on the VM and run the following command:

When done, close the RDP connection to the VM.

## View the Windows 11 welcome page

In the portal, select the VM and, in the overview of the VM, hover over the IP address to show Copy to clipboard.

Copy the IP address and paste it into a browser tab. The default welcome page will open, and should look like this:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s98qa9xmeyu760uu4z0s.PNG)

## Clean up resources

## Delete resources

When no longer needed, you can delete the resource group, virtual machine, and all related resources.

1. On the Overview page for the VM, select the Resource group link.
2. At the top of the page for the resource group, select Delete resource group.
3. A page will open warning you that you are about to delete resources. Type the name of the resource group and select Delete to finish deleting the resources and the resource group.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lxm41vrrjp75t96mfw9v.PNG)
emeka_moses_c752f2bdde061
1,887,826
Turbocharge Your OpenDevin Development: A Deep Dive into a Time-Saving Bash Script
Introduction Embarking on your OpenDevin journey? This innovative platform opens doors to endless...
0
2024-06-14T00:42:04
https://dev.to/tosin2013/turbocharge-your-opendevin-development-a-deep-dive-into-a-time-saving-bash-script-3p31
opendevin, ai, opensource, development
**Introduction** Embarking on your [OpenDevin](https://opendevin.github.io/OpenDevin/) journey? This innovative platform opens doors to endless possibilities, but the initial setup can sometimes be a time sink. What if I told you there's a way to skip the tedious configuration and dive straight into coding? Meet our Bash script, your new best friend in the world of OpenDevin development. **What This Script Does (and Why You'll Love It)** Let's break down the magic behind this script and how it can supercharge your workflow: 1. **Effortless Dependency Management:** - **Say goodbye to manual installations:** The script does the heavy lifting by automatically checking for and installing essential tools like Docker, Node.js, Conda (a package and environment manager), and Poetry (a Python dependency manager). - **Why this matters:** No more scouring the web for installation instructions or troubleshooting compatibility issues. Your development environment is ready to roll in minutes. 2. **Seamless Environment Configuration:** - **Your OpenDevin sanctuary:** The script sets up a dedicated Conda environment exclusively for [OpenDevin](https://opendevin.github.io/OpenDevin/). This keeps your project dependencies organized and prevents conflicts with other projects. - **Git integration:** It effortlessly clones the OpenDevin repository (if you don't already have it), saving you a manual step. - **Ollama LLM setup (optional):** For those interested in working with Large Language Models (LLMs), the script can even help you get started with Ollama, a powerful LLM service. - **Why this matters:** You get a clean, isolated workspace where you can experiment and build without worrying about messing up your system. 3. **Configuration and Workspace Automation:** - **No more missing files:** The script ensures you have the essential configuration files (`config.toml`) and a designated workspace directory. - **Why this matters:** You won't waste time tracking down missing components or wondering where to store your project files. 4. **Dockerized Development Bliss:** - **Build and run with a single command:** The script leverages the power of Docker to build and launch your OpenDevin application within a container. - **Consistency is key:** Docker guarantees that your development environment mirrors the production environment, minimizing those frustrating "it works on my machine" bugs. - **Why this matters:** Docker streamlines testing and deployment, giving you the confidence that your application will behave as expected wherever it runs. **Unleash the Script's Potential** Getting started is a breeze: 1. **Grab the Script:** Head over to [https://raw.githubusercontent.com/tosin2013/OpenDevin/live/opendevin.sh](https://raw.githubusercontent.com/tosin2013/OpenDevin/live/opendevin.sh) and download it. 2. **Make it Executable:** Open your terminal and run `chmod +x opendevin.sh`. 3. **Command Your Environment:** Execute the script with different flags to perform specific actions: - `./opendevin.sh -i -b` (Install dependencies and build) - `./opendevin.sh -r` (Run the project) - `./opendevin.sh -h` (View all options) **Who Should Use This Script?** * **OpenDevin Newcomers:** Hit the ground running without getting bogged down in setup. * **Seasoned Developers:** Automate repetitive tasks and reclaim your precious time. * **Teams:** Ensure everyone on your team has a consistent development environment, leading to smoother collaboration. **Going Beyond the Basics** Ready to customize? 
This script is your starting point. Dive into the code, tweak it to match your preferences, and even contribute back to the OpenDevin community! **Let's Get Developing!** Don't let setup slow you down. Let this Bash script be your trusty sidekick as you explore the exciting world of OpenDevin development. **What are you waiting for? Share your OpenDevin creations in the comments below!**
tosin2013
1,887,827
Introduction to Ethical Hacking with Kali Linux
Introduction Ethical hacking is the practice of using advanced computer skills and...
0
2024-06-14T00:32:13
https://dev.to/kartikmehta8/introduction-to-ethical-hacking-with-kali-linux-1fl4
webdev, javascript, programming, beginners
## Introduction

Ethical hacking is the practice of using advanced computer skills and techniques to identify potential vulnerabilities in a computer system. It is carried out with the permission of the system owner in order to find and fix any security flaws. Kali Linux is a popular operating system used by ethical hackers due to its extensive built-in security testing tools.

## Advantages of Using Kali Linux

1. **Widely Used:** Kali Linux is one of the most widely used operating systems for ethical hacking. It is based on Linux and is constantly updated with new tools, making it a preferred choice for professionals.
2. **Extensive Tools:** Kali Linux comes with over 600 pre-installed tools for various types of testing, making it a comprehensive and efficient choice for ethical hacking.
3. **User-friendly:** Despite its complex nature, Kali Linux has a user-friendly interface, making it easy for even beginners to learn and use.

## Disadvantages of Using Kali Linux

1. **Steep Learning Curve:** Kali Linux is not a beginner-friendly operating system and requires a certain level of technical proficiency to use effectively.
2. **Potential for Misuse:** As with any tool, there is a potential for misuse of Kali Linux by unscrupulous individuals, which can lead to security breaches.

## Features of Kali Linux

Kali Linux offers advanced features such as wireless attacks, social engineering tools, and password cracking capabilities. It also provides a customizable desktop environment and supports multiple languages.

### Highlighting Key Features:

- **Wireless Attacks:** Tools like Aircrack-ng and Reaver are included to test the security of wireless networks.
- **Social Engineering:** Includes tools like Social Engineering Toolkit (SET) for crafting phishing attacks to test human-based security loopholes.
- **Password Cracking:** Tools such as John the Ripper and Hydra are available for testing the strength of passwords within a network.

```bash
# Example of using Aircrack-ng
aircrack-ng -a2 -b [router bssid] -w [path to wordlist] [input capture files]
```

## Conclusion

Kali Linux is a powerful and versatile operating system that has become an indispensable tool for ethical hacking. With its extensive tools and capabilities, it allows professionals to identify and fix security flaws in a system, making it an essential component for any ethical hacking toolkit. However, it should be used responsibly and with the permission of the system owner to avoid any potential misuse.
kartikmehta8
1,887,825
My Pen on CodePen
Check out this Pen I made!
0
2024-06-14T00:22:26
https://dev.to/aditya_singh2109/my-pen-on-codepen-27ln
codepen
Check out this Pen I made! {% codepen https://codepen.io/adjmcvgz-the-typescripter/pen/qBGVOpr %}
aditya_singh2109
1,887,783
BSides Knoxville 2024: A Community Celebrating A Decade of Cybersecurity
Knoxville is home to multiple famous institutions, such as the University of Tennessee, the Tennessee...
0
2024-06-14T00:03:49
https://dev.to/gitguardian/bsides-knoxville-2024-a-community-celebrating-a-decade-of-cybersecurity-2mpf
security, cybersecurity, ai, llms
Knoxville is home to multiple famous institutions, such as the [University of Tennessee](https://www.utk.edu/?ref=blog.gitguardian.com), the [Tennessee Valley Authority (TVA)](https://www.tva.com/?ref=blog.gitguardian.com), and [Dolly Parton](https://en.wikipedia.org/wiki/Dolly_Parton?ref=blog.gitguardian.com). Visually, you know you are in Knoxville when you see [The Sunsphere](https://www.visitknoxville.com/listing/the-sunsphere/567/?ref=blog.gitguardian.com), a 23-meter gold-colored glass sphere sitting atop an 81-meter tall tower of steel. The monument, created for the 1982 World's Fair and in the spirit of celebrating advancements in energy, watched over us as we gathered to celebrate advancements in cybersecurity at BSides Knoxville 2024. This year marked the 10th anniversary of this Tennessee-based BSides, and it was the biggest one yet. Over 400 attendees gathered to exchange knowledge, share experiences, and celebrate a decade of cybersecurity innovation. It would be hard to cover all the sessions across two speaking tracks, the lock pick pillage, the remote control robot hacking, and all the other little things that went into this event at the [Mill and Mine](https://themillandmine.com/?ref=blog.gitguardian.com). Here are just a few of the highlights. [![](https://lh7-us.googleusercontent.com/RBedLJxlgfjFRScN4ue8HWshQoDe1mpNnVk4uUuBTJHD9V8c2bxBu9vysR0z3_fJG6Fj-f2zso6M2wvZzjqQOLMKhPH_3gY25K_LHjcCZ2wB8WtpMx71be-TiZkznoqjbgujkqgbWKw1BT5xjUEYYA)](https://www.linkedin.com/posts/bjwithrow_bsides-community-makingsecuritygreatagain-activity-7199845292886683649-V8qE?ref=blog.gitguardian.com) BSides Knoxville 2024 Context is king when it comes to vulnerability management --------------------------------------------------------- One of the younger presenters at any BSides, [Meghna Vikram](https://www.linkedin.com/in/meghna-vikram-5572832aa/?ref=blog.gitguardian.com), a high school junior, presented her research findings in her session, "Who Makes the Rules?" She walked us through her efforts to leverage machine learning to fix vulnerabilities in JavaScript code in a more thorough way. Her first step was to find a vulnerability analysis tool with output in an easy-to-manipulate format. She settled on a static analysis tool that produces YAML, a format defined by indentation that is easy for machines and humans to read. Her hypothesis was that while most SAST tools did provide generally valuable findings, the version she was using stopped short of giving specifics about how the vulnerability could be introduced. For example, out of the box, the tools she ran showed a cross-site scripting (XSS) vulnerability present, but further research into the vulnerability database showed over 39000 ways that XSS could have been introduced. While it is helpful to know the vulnerability is present, she is working on a tool that shows exactly how each one was introduced into the code.  Meghna said she was surprised that after multiple passes at prompt training and experimentation, the AI she was relying on, ChatGPT, began to give deeper context around each issue and even examples of what the affected code would look like. While her research is in the early stages, it shows clear promise that AI-assisted security tools will play a role in training and giving developers a helping hand with their work.   
[![](https://lh7-us.googleusercontent.com/subGafiCIqTJh0vEqTkz--JglZre6nEoaHf8i_BQhgij67KTBV2djyWxF7AG3oPIJqvllqYb0zRR4lIuTfi6eTKWLHj6QYmndQbrbmmj4sXnbX-zxsRCyTOqRNAvqa2wMlArMyGxeh1Jmw1_wH4quQ)](https://www.linkedin.com/posts/dwaynemcdaniel_bsidesknoxville-activity-7199772233483837441-pPy-?ref=blog.gitguardian.com) Meghna Vikram shared her research findings at #BSidesKnoxville with her session "Who Makes the Rules?" Elementary, my dear cybersecurity professional! ----------------------------------------------- Very few times has a speaker fully leaned into the overall tone of a conference as [Joshua Jones,  Senior Compliance Consultant at Contextual Security Solutions](https://www.linkedin.com/in/joshua-jones-23073558/?ref=blog.gitguardian.com), did with his Sherlock Holmes-themed talk "The Compromise of the Baskervilles: Holistic Testing in the "Automagic" Era." Not only did Joshua work in many a great quote from Sir Arthur Conan Doyle's classic cannon of work, but he also explored AI's role in security testing and the importance of getting the basics right. Joshua started by walking us through history, from the Trojan Horse to the use of [the Enigma machine](https://www.youtube.com/watch?v=ybkkiGtJmkM&ref=blog.gitguardian.com) in World War 2, to illustrate how security has evolved over time. Any tool, whether it involves AI or not, ultimately requires human oversight and strategic deployment. Defenders need to understand the nuances of the systems they protect, and AI should be seen as a means to bolster defenses, not as a standalone solution. Drawing parallels to the classic detective story "The Hound of the Baskervilles," Joshua showed how Watson played a role similar to that of AI, performing the menial data collection tasks that allowed Holmes, or us, to step in at just the right moment and crack the case. AI, like Watson, can play a pivotal role but lacks the self-awareness to make the needed conclusions or take the needed actions. He wrapped up by saying that while AI changes the "how" of security, the "why" remains rooted in fundamental human values of protection and resilience. [![](https://lh7-us.googleusercontent.com/uez2dMAb9uUzGRE5XnQ9VH4riajPEtiuPZwwR1cmbqQor3PySGbj6GhsBb2gZ5kmjgZU0BAEJThXI4qB5MURHB-hRgftia3vbxfJdjOEIVofiqinFKL_0Y0XwkIYsUhptZKvJZvLctZbHCj_g3YkyA)](https://lh7-us.googleusercontent.com/uez2dMAb9uUzGRE5XnQ9VH4riajPEtiuPZwwR1cmbqQor3PySGbj6GhsBb2gZ5kmjgZU0BAEJThXI4qB5MURHB-hRgftia3vbxfJdjOEIVofiqinFKL_0Y0XwkIYsUhptZKvJZvLctZbHCj_g3YkyA) Joshua Jones of Contextual Security Solutions To understand the history of hacking, you must understand 'phreaking' --------------------------------------------------------------------- The always-colorful [Matt Scheurer, ThreatReel podcaster and VP of Security at a major US organization](https://www.linkedin.com/in/mattscheurer/?ref=blog.gitguardian.com), took attendees on a nostalgic journey through the history of telephony and its intersection with hacking in his session, "Lies, Telephony, and Hacking History." Bringing along a suitcase filled with various bits of tech from the last 60 years, he walked us through a history of telephone systems and how the subculture of phone phreaking evolved.  The talk started with how we went from switchboards to dial tones and how ingenious hackers explored and exploited these systems. 
He said in those early days, the scene was driven by a thirst for knowledge, mischief, and the desire to make free calls, especially because in the 1970s and 80s, long-distance calls could cost upwards of $1.00 per minute. Along the way, he showed off his collection of tools, including an [acoustic coupler](https://en.wikipedia.org/wiki/Acoustic_coupler?ref=blog.gitguardian.com) as seen in the movie War Games, a "[blue box](https://en.wikipedia.org/wiki/Blue_box?ref=blog.gitguardian.com)" that generated 2600Hz tones to let people make long-distance calls for free, and some other interesting gear that allows a person to tap directly into phone lines. Matt also discussed modern telephony abuse, including vishing (voice phishing), SMiShing (SMS phishing), and the dangers of mobile SIM swapping. The techniques attackers use today have been evolving for decades; while technology has changed, the principles of social engineering and exploitation remain relevant. We must stay vigilant in securing communication systems, as they are the backbone of our IT systems. [![](https://lh7-us.googleusercontent.com/HDs5W1K_HCHtVJE9JF0-pgvk73neUfggQzLJAyMBiEasnSuhtsryk9DdfAoboIlFF2-N1V8vvtLM0USg60MHsXnyU9Bmoqa8SXqMZOXiP_OnRUisK7yPy7go9lGWORpoJXytYq10jnXBUtoKku-SNw)](https://www.linkedin.com/posts/dwaynemcdaniel_bsidesknoxville-activity-7199863807081398274-43uJ?ref=blog.gitguardian.com) Lies, Telephony, and Hacking History from Matt Scheurer Community and collaboration in the era of AI -------------------------------------------- While many of the talks at BSides Knoxville spotlighted the role of AI, overall, the theme of holistic security, getting the basics right, shined through as the focus. Your author gave a talk about the importance of elevating security awareness across all teams throughout the organization and the role security champions can play in [keeping us all secure at every level.](https://sched.co/1bJO6?ref=blog.gitguardian.com) While the tools keep on changing, the mission and the core ideas remain constant. BSides Knoxville 2024, in its 10th year, celebrated a vibrant and dynamic cybersecurity community. It was a great day to look at the past, understand where we came from, and collectively discuss the challenges we see on the horizon. Fortunately, one of the things we can all look forward to is more BSides in the future. We hope to see you at [your local one soon](https://bsides.org/w/page/12194156/FrontPage?ref=blog.gitguardian.com).
dwayne_mcdaniel
1,888,781
Using FFmpeg & the Command Line To Repurpose Video Content into a Podcast — Rob Kleiman
The Adventure Begins Embark on a thrilling journey with me as we delve deep into the heart...
0
2024-06-14T15:43:37
https://medium.com/@rkconnections/using-ffmpeg-the-command-line-to-repurpose-video-content-into-a-podcast-rob-kleiman-c97592058e4f
---
title: Using FFmpeg & the Command Line To Repurpose Video Content into a Podcast — Rob Kleiman
published: true
date: 2024-06-14 00:00:58 UTC
tags:
canonical_url: https://medium.com/@rkconnections/using-ffmpeg-the-command-line-to-repurpose-video-content-into-a-podcast-rob-kleiman-c97592058e4f
---

### The Adventure Begins

Embark on a thrilling journey with me as we delve deep into the heart of podcast hacks, armed with a tool known as FFmpeg. In this post, I'll guide you through a quick way I pulled multiple MP3 files from a YT playlist and merged them to seamlessly string together cohesive episodes for my audio podcast, Megashift.

So here's a scenario: you recorded video interviews and published them into a playlist on YouTube, but now you want to turn those video snippets into a series of longer audio files so you can publish them as an audio-only podcast. Well, here's the solution of the day: using FFmpeg and the command line to do exactly that.

### The Journey Starts with Your First Step

### Step 1: Grab Your Audio

Did you know PullTube is a macOS application that allows users to download videos and audio from various websites? It provides a way to paste the URL of the video or audio they want to download and then choose the desired format and quality. PullTube supports various video and audio formats, including MP4, FLV, WebM, MP3, and M4A.

You can potentially use a tool like PullTube or another to grab your media, granted you have the usage and copyright rights to do so. Grab your files and save them in the order in which you want them to be compiled. I recommend adding incremental numbers to the filenames (me-1, me-2, me-3, etc.).

### Unveiling FFmpeg

Our adventure with FFmpeg now begins — it's a versatile and powerful multimedia processing tool revered by developers and audio enthusiasts alike. But before we can wield its awesome power, we must first summon it into our realm. Utilize the FFmpeg tool to merge multiple audio files into a single MP3 file for easier uploading. Here's a breakdown of how to do it in the terminal.

### Installation of FFmpeg

Install FFmpeg using a package manager such as apt-get (Linux) or brew (macOS). This is necessary to have the required tools for audio processing.

```bash
sudo apt-get install ffmpeg
brew install ffmpeg
```

With a few keystrokes, we install FFmpeg using the trusty apt-get and brew package managers, ensuring that we have the necessary arsenal at our disposal.

### Charting Our Course: Navigating the Audio Seas

**Navigating to directories**: We navigate, with the cd command, to the various directories where the audio files are stored. Armed with FFmpeg, we set sail down the river of audio files on our computer. With deft navigation skills, we can traverse directories in search of the audio treasures that await us.

```bash
cd /Users/myMachine/Downloads/enas
```

With each cd command, we draw closer to our destination, ready to unearth the raw materials that will soon be transformed into podcasting gold.

### Uniting the Fragments: Merging MP3 Files with FFmpeg

**Merging audio files**: We use the ffmpeg command with the -f concat option to specify the format as concatenated.

- The -safe 0 flag is used to disable safe mode, which allows the use of absolute paths in the concat demuxer.
- The -i [filename] specifies the input file; we provide a text file (e.g., enas.txt) that contains the list of audio files to be merged.
- The -c copy flag specifies that the codec should be copied from the input to the output without re-encoding, which helps to retain the original quality.
- We provide an output file name (e.g., enasaudio.mp3) to store the merged audio.

At long last, we stand at the precipice of greatness, armed with FFmpeg's legendary concatenation prowess. With a flick of the wrist and a command on the lips, we merge individual MP3 files into cohesive episodes that will captivate audiences far and wide.

```bash
ffmpeg -f concat -safe 0 -i enas.txt -c copy enasaudio.mp3
```

**Repeating the process**: We then repeat the merging process for multiple sets of audio files located in different directories.

```bash
ffmpeg -f concat -safe 0 -i enas.txt -c copy enasaudio2.mp3
```

With a symphony of commands, we orchestrate the merging of audio files, seamlessly weaving together fragments of sound into a tapestry of auditory delight.

### Ascension to the Podcasting Pantheon

Armed with our newly compiled episodes, we're ready to share them with the world. With a few final commands, we upload our files to the platform of choice. Who's ready to inspire some listeners?

### Conclusion: FFmpeg, the Hero of Our Story

In conclusion, FFmpeg emerged as the unsung hero of our podcasting journey, solving this specific technical issue. We were empowered to transcend the boundaries of audio production with its cool capabilities. With FFmpeg by our side, we addressed the weird challenge of merging MP3 files into one. The adventure awaits: are you ready to seize it?

Listen here!

[Acast Embed Player (377a84a64c1c2ad341d434d7b5fd740d87cc6381)](https://embed.acast.com/65ff13e1da1b9c0016000eec?theme=light&feed=true)

_Originally published at_ [_https://robkleiman.net_](https://robkleiman.net/writing/using-ffmpeg-amp-the-command-line-to-repurpose-video-content-into-a-podcast) _on June 14, 2024._
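P.S. One thing the walkthrough above doesn't show is what goes inside that enas.txt list file, or how to avoid typing it by hand. Here's a small, optional Python sketch of the same workflow; the folder path, the me-N.mp3 naming scheme, and the output name are assumptions carried over from the examples above, so adjust them to your own files:

```python
import subprocess
from pathlib import Path

# Folder holding the numbered clips (me-1.mp3, me-2.mp3, ...) - an assumption from the examples above
folder = Path("/Users/myMachine/Downloads/enas")
clips = sorted(folder.glob("me-*.mp3"), key=lambda p: int(p.stem.split("-")[1]))

# Write the concat list file that ffmpeg's -f concat mode expects: one "file '<path>'" line per clip
list_file = folder / "enas.txt"
list_file.write_text("".join(f"file '{clip}'\n" for clip in clips))

# Same command as in the post: concatenate the clips without re-encoding
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", str(list_file), "-c", "copy", str(folder / "enasaudio.mp3")],
    check=True,
)
```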
rkrevolution
1,888,964
Docker Dadbod Ui
In normal mode, invoke :DBUI, establish a new connection with A. When prompted to enter a connection...
0
2024-06-28T17:31:56
https://blog.waysoftware.dev/blog/docker-dadbod-ui/
--- title: Docker Dadbod Ui published: true date: 2024-06-14 00:00:00 UTC tags: canonical_url: https://blog.waysoftware.dev/blog/docker-dadbod-ui/ --- In normal mode, invoke `:DBUI`, establish a new connection with `A`. When prompted to enter a connection string, follow the[PG spec](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS). e.g. `postgresql://<db-user>:<db-password>@localhost:5432/<db-name>`
johnmcguin
1,888,965
Shell into Containerized PostgreSQL Database
Shell into the running container: docker exec -it &lt;container-id&gt; /bin/bash Enter...
0
2024-06-28T17:32:14
https://blog.waysoftware.dev/blog/docker-psql-shell/
--- title: Shell into Containerized PostgreSQL Database published: true date: 2024-06-14 00:00:00 UTC tags: canonical_url: https://blog.waysoftware.dev/blog/docker-psql-shell/ --- Shell into the running container: ``` docker exec -it <container-id> /bin/bash ``` Run psql to connect to the database: ``` psql -d <db-name> -U <user> ```
johnmcguin
1,888,687
When "The Best" isn't good enough
I am a beekeeper. This won't be of much surprise to anyone who's spent any time with me, or who...
0
2024-07-02T09:29:09
https://simonemms.com/blog/2024/06/14/when-the-best-isnt-good-enough
culture, development, questions
--- title: When "The Best" isn't good enough published: true date: 2024-06-14 00:00:00 UTC tags: culture,development,questions canonical_url: https://simonemms.com/blog/2024/06/14/when-the-best-isnt-good-enough --- ![When "The Best" isn't good enough](https://simonemms.com/img/blog/beekeeping.jpg) I am a beekeeper. This won't be of much surprise to anyone who's spent any time with me, or who follows [@TheShroppieBeek](https://twitter.com/theshroppiebeek) on the[Information Superhighway](https://en.wikipedia.org/wiki/Information_superhighway). I'll bore for England on the subject of beekeeping. One thing beekeepers often say to each other is: > Ask a beekeeper a question and get two answers A question that might seem simple, such as "what's the best way of raising a new queen?", will come with a multitude of opinions, folklore and experience. You might ask a fan of the [Miller Method](http://www.dave-cushman.net/bee/millermethod.html), or a lover of [Grafting](https://www.youtube.com/watch?v=PJ_79D1ASlg) or someone who likes using any of the other methods. The problem here is asking for "The Best". How do we know it's "The Best"? When I say "The Best", I'm looking for the easiest way of doing it. When you hear "The Best", you might think I'm looking for the most reliable way of doing it. These are subtly different things. My setup will be different to the person I'm asking: - I have six hives over two apiaries - I'm a hobbyist - it's not my main income, so I don't need to turn a profit - my bees are all fairly sheltered from the wind - my bees all have south or east-facing entrances - my bees are all around ~130 metres above sea-level - my bees are all around 52°N - I select my bees for calmness rather than productivity - I use wooden, National boxes The person I ask almost certainly won't keep bees in an identical fashion to me. And, even if they did, they will have a natural odour, style and demeanour which the bees will pick up on and react differently to. So if I ask for "The Best", I'm assuming that they will know all about my bees and that there's will be the same. Which is why you end up getting two answers from one beekeeper. ## "The Best" in software engineering We see a similar behaviour in software engineering. I have been regularly asked for "The Best" without giving any other answers. Let's examine the question "what's The Best cloud provider?", which is a question I'm asked with depressing regularity. Defining "The Best" is actually quite difficult: - do you need a truly global application (including Africa and South America), or do you just need Europe and North America? - are you using a VPN, or will everything be accessed over the public internet? - are you going to be having vast quantities of traffic/data, or is it only going to be a few gigabytes per month? - do you need to use Windows machines, or is everything on Linux? - do you need managed Kubernetes, or are you comfortable using K3s? - do you have any specialist requirements (like a [satellite](https://aws.amazon.com/ground-station)), or are you just deploying some containers and storing data? - is money no object, or do you need to watch the pennies? These are just some of the questions that need to be answered in order to give an answer to the question. In my mind, I divide up the cloud providers into "The Establishment" (AWS, GCP, Azure) and "The Challengers" (DigitalOcean, Civo, Hetzner and others). 
For (most of) the questions I asked, the first part is something that The Establishment caters for (and does well), but the second part is something that everyone does well. Tell me the things that matter to you and I'll be able to give an intelligent answer.

## Define your parameters

I've regularly run post-mortems in my capacity as a technical leader. I often end them by talking of better questions that we can ask in future to avoid the situation that's gone wrong.

Telling me how you judge "The Best" is useful in helping me understand the question you're actually asking. Instead of asking for "The Best", try asking a better question:

- my goal is _X_. What are the different ways I could achieve this?
- am I on the right tracks with this? Are there any things I need to watch out for?
- I want to avoid _X_ - how might you do it?
- I'm doing a proof of concept that I want to move into production - what should I watch out for?
- I want to put this application on the internet - how are we doing it for other things like this?

## The exception that proves the rule

Of course, there are always exceptions. If you ask an [eager-to-please comedian](https://www.joshwiddicombe.com/) to buy The Taskmaster "the best present", you might just get them to get a tattoo whilst filming an unbroadcast show on a channel known for just showing repeats. And comedy gold ensues.

**NB.** If you're [Little Alex Horne](https://en.wikipedia.org/wiki/Alex_Horne) reading this, please keep asking comedians/intelligent idiots for "The Best". The vagueness is actually what makes for varied, interesting and funny TV.

<iframe style="display: block; margin: 0 auto" width="560" height="315" src="https://www.youtube.com/embed/KLSXsxRQyhE?si=U-nupvYMnj6Saz8T&amp;start=426" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
mrsimonemms
1,888,886
FIX: Git Bash is Slow and has Strange Random Characters in VS Code
TLDR: If you're using git bash in VS Code and notice it slows down and/or starts printing random...
0
2024-06-19T17:58:04
https://www.davegray.codes/posts/git-bash-vs-code-slow-strange-random-characters
gitbash, vscode
--- title: FIX: Git Bash is Slow and has Strange Random Characters in VS Code published: true date: 2024-06-14 00:00:00 UTC tags: gitbash,vscode canonical_url: https://www.davegray.codes/posts/git-bash-vs-code-slow-strange-random-characters cover_image: https://raw.githubusercontent.com/gitdagray/my-blogposts/main/images/git-bash-vs-code-slow-strange-random-characters.png --- **TLDR:** If you're using git bash in VS Code and notice it slows down and/or starts printing random characters, here's how to fix it. ## The Problem After a recent VS Code update, I noticed git bash really slowed down. Sometimes it even output random weird characters like the ones you see in my terminal window below. ![git bash with strange characters](https://raw.githubusercontent.com/gitdagray/my-blogposts/main/images/git-bash-vscode-issue-1200x675.png) ## How to Fix It (for now) Press `Ctrl+,` or click the cog icon in the bottom left of VS Code to open Settings. Use the search bar and search for `terminal integrated shell`. The setting for `Terminal > Integrated > Shell Integration > Enabled` should show up. Uncheck the box for this setting. If you don't find that setting, you can open the JSON settings file and add this line: `"terminal.integrated.shellIntegration.enabled": false,` ## Check Back This seems to be a bug after a recent VS Code update. I don't think this setting provides anything I will miss, but if you do, you may want to check back and re-enable this in the future. <hr /> ## Let's Connect! Hi, I'm Dave. I work as a full-time developer, instructor and creator. If you enjoyed this article, you might enjoy my other content, too. **My Stuff:** [Courses, Cheat Sheets, Roadmaps](https://courses.davegray.codes/) **My Blog:** [davegray.codes](https://www.davegray.codes/) **YouTube:** [@davegrayteachescode](https://www.youtube.com/davegrayteachescode) **X:** [@yesdavidgray](https://x.com/yesdavidgray) **GitHub:** [gitdagray](https://github.com/gitdagray) **LinkedIn:** [/in/davidagray](https://www.linkedin.com/in/davidagray/) **Patreon:** [Join my Support Team!](patreon.com/davegray) **Buy Me A Coffee:** [You will have my sincere gratitude](https://www.buymeacoffee.com/davegray) Thank you for joining me on this journey. Dave
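For reference, here is roughly what the relevant part of `settings.json` looks like once that line is added (a minimal sketch; the surrounding comment stands in for whatever settings you already have):

```jsonc
{
  // ...your existing settings stay as they are...
  "terminal.integrated.shellIntegration.enabled": false
}
```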
gitdagray
1,888,286
Modules Status Update
Hello Again!! As another Friday draws to a close, I hope this message finds you well....
0
2024-06-14T17:36:43
https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-06-14-modules-status-update/
puppet, community
--- title: Modules Status Update published: true date: 2024-06-14 00:00:00 UTC tags: puppet,community canonical_url: https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-06-14-modules-status-update/ --- ## Hello Again!! As another Friday draws to a close, I hope this message finds you well. Reflecting on the past week, it has been relatively calm with no significant updates or notable events in our modules. Our focus has remained on ongoing tasks, and we have continued to make steady progress. May the sun’s warm rays bring positivity and joy to your days. Wishing you a pleasant and fulfilling time ahead. ## Community Contributions We want to express our gratitude to the community contributors for their valuable contributions, even though last week saw a relatively quiet period in terms of new contributions.
puppetdevx
1,888,487
Boost Your Git Productivity with Aliases: Start Saving Time Today!
Tired of spending precious time typing out those long, repetitive Git commands? You're not alone. Git...
26,070
2024-06-14T11:25:45
https://ionixjunior.dev/en/boost-your-git-productivity-with-aliases-start-saving-time-today/
git
--- title: Boost Your Git Productivity with Aliases: Start Saving Time Today! published: true date: 2024-06-14 00:00:00 UTC tags: git canonical_url: https://ionixjunior.dev/en/boost-your-git-productivity-with-aliases-start-saving-time-today/ cover_image: https://ionixjuniordevthumbnail.azurewebsites.net/api/Generate?title=Boost+Your+Git+Productivity+with+Aliases%3A+Start+Saving+Time+Today%21 series: mastering-git --- Tired of spending precious time typing out those long, repetitive Git commands? You're not alone. Git aliases offer a powerful solution to streamline your workflow and boost your productivity. Imagine effortlessly navigating your Git repository with shortcuts for common commands, saving time and reducing the risk of errors. This blog post will introduce you to the world of Git aliases, demystifying their functionality and demonstrating their immense power. We'll guide you through creating your own custom shortcuts, showcasing practical examples for common Git tasks like log, fetch, commit, and more. By the end, you'll be ready to embrace the efficiency and speed that Git aliases bring to your development process. Get ready to master Git and unleash your inner coding ninja! ## What are Git Aliases? Git aliases are essentially shortcuts for Git commands. They allow you to define custom names for frequently used Git commands, making your workflow faster and more efficient. Imagine you frequently use the command `git status` to check the state of your repository. With an alias, you could define `git st` to represent `git status`, saving you keystrokes every time you need to check your repository’s status. Think of Git aliases as personalized macros for Git. You create them to map a custom name (your alias) to a specific Git command or combination of commands. This means you can essentially create custom commands tailored to your specific workflows and needs. This is good, isn’t it? Let’s see in practice how to create it. ## How to Create Git Aliases Creating Git aliases is very simple. You can set them up globally, making them available across all your Git projects, or locally for a specific repository. Here’s a step-by-step guide to get you started. The core command for creating Git aliases is: ``` git config --global alias.ALIAS_NAME COMMAND ``` Replace `ALIAS_NAME` by the alias you want and the `COMMAND` by the command. Let’s create an alias named “st” that will represent `git status`: ``` git config --global alias.st status ``` Now, whenever you type `git st` in your terminal, Git will execute `git status` behind the scenes. This was very simple, but you can create more complex commands, with a lot of parameters. Let me show you an example about the “log” command. I like so much to see the log on graph and in one line, so I’ve created an alias for this: ``` git config --global alias.lg "log --oneline --graph" ``` This alias is definitely a time saver! If you prefer, you can change your alias manually editing the `~/.gitconfig` file. Also, you can type `git config --list | grep alias` to see all alias you already configure in your machine. ## Aliases That I Use Here’s a collection of my favorite Git aliases that help streamline my workflow. I’ve organized them by functionality to make it easier for you to see how they can be applied: ### Status and Basic Navigation - `alias.st=status`: This is a classic, saving you from typing `git status` every time you want to check the current state of your repository. 
- `alias.br=branch`: A quick way to list your current branches or create new ones, replacing `git branch`. - `alias.co=checkout`: A simple way to switch to a different branch, replacing `git checkout`. - `alias.sw=switch`: Another alias for “checkout”, providing a more concise alternative. ### Committing and Amending - `alias.ci=commit`: Short for “commit,” this alias streamlines the process of creating a new commit. - `alias.cia=commit --amend`: Use this for amending the last commit, adding changes or modifying the commit message. - `alias.cian=commit --amend --no-edit`: Similar to “cia” alias, but it skips the commit message editor, allowing you to quickly amend the commit without message changes. ### Diffing & Comparing - `alias.df=diff -w`: This alias creates a diff with whitespace ignored, simplifying the comparison of code changes. - `alias.dfword=diff -w --word-diff`: This alias shows word-by-word differences in the diff, making it easier to pinpoint specific changes. - `alias.dft=!f() { GIT_EXTERNAL_DIFF=difft git diff -w --ext-diff $@; }; f`: This alias uses the “difft” program (a custom program) to generate the diff, making the output more concise and informative. - `alias.sh=!f() { GIT_EXTERNAL_DIFF=difft git show -p --ext-diff $@; }; f`: This alias uses a custom diff tool (defined by difft) to generate the diff output using `git show` command. ### Cleaning Up the Workspace - `alias.cl=clean -dfX`: This alias removes untracked files and ignored files from your workspace, keeping things tidy. ### Enhanced Log Viewing - `alias.lg=log --oneline --graph`: This alias displays a concise and graphical log, making it easier to visualize commit history. - `alias.lga=log --oneline --graph --all`: Similar to “lg”, but it shows all branches in the log, providing a more complete picture. - `alias.lgd=log --pretty=format:'%h %ad | %s%d [%an]' --date=short`: This alias provides a detailed commit log, including the commit hash, date, subject, and author. ### Cherry-Picking and Fetching - `alias.cp=cherry-pick`: A shortcut for cherry-picking specific commits from other branches. - `alias.ft=fetch origin -p`: This fetches changes from the “origin” remote. ### Finding the Parent Branch - `alias.parent=!git show-branch | grep '*' | grep -v "$(git rev-parse --abbrev-ref HEAD)" | head -n1 | sed 's/.*\[\(.*\)\].*/\1/' | sed 's/[\^~].*//' #`: This complex alias finds the parent branch of the current branch, which can be helpful when working with feature branches. Believe me: this simple alias help me a lot every day to working with Git. Here is the alias part of my `.gitconfig` file: ``` [alias] st = status ci = commit br = branch cia = commit --amend cian = commit --amend --no-edit df = diff -w dfword = diff -w --word-diff dft = "!f() { GIT_EXTERNAL_DIFF=difft git diff -w --ext-diff $@; }; f" co = checkout sw = switch cl = clean -dfX lg = log --oneline --graph lga = log --oneline --graph --all cp = cherry-pick ft = fetch origin -p parent = "!git show-branch | grep '*' | grep -v \"$(git rev-parse --abbrev-ref HEAD)\" | head -n1 | sed 's/.*\\[\\(.*\\)\\].*/\\1/' | sed 's/[\\^~].*//' #" lgd = log --pretty=format:'%h %ad | %s%d [%an]' --date=short sh = "!f() { GIT_EXTERNAL_DIFF=difft git show -p --ext-diff $@; }; f" ``` ## Conclusion Mastering Git is essential for any developer, and Git aliases are your secret weapon for unlocking a more efficient and enjoyable workflow. 
By replacing lengthy commands with simple, personalized shortcuts, you can save countless hours, reduce errors, and gain a deeper understanding of your repository’s history. We’ve explored the basics of Git aliases, and demonstrated practical examples. Now, it’s time to put your newfound knowledge into practice. Start by creating a few aliases for your most frequently used commands, and experiment with more complex combinations as you become more comfortable. Remember, the power of Git aliases lies in their ability to adapt to your specific needs. Embrace the flexibility, experiment with different approaches, and personalize your Git experience to maximize your productivity. So, don’t waste another minute on tedious commands! Start using Git aliases today and experience the joy of a streamlined, efficient workflow that empowers you to achieve more.
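If you later want to audit or prune your shortcuts, the same `git config` plumbing works in reverse. A small sketch using standard options (the alias names are just examples):

```bash
# show only the alias entries currently configured
git config --get-regexp '^alias\.'

# remove an alias you no longer use
git config --global --unset alias.st

# or open the global config in your editor to adjust several at once
git config --global --edit
```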
ionixjunior
1,888,287
DevX Status Update
Belfast Summer Party The team had a great afternoon out with all of our colleages...
0
2024-06-14T17:37:39
https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-06-14-devx-status-update/
puppet, community
--- title: DevX Status Update published: true date: 2024-06-14 00:00:00 UTC tags: puppet,community canonical_url: https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-06-14-devx-status-update/ --- ## Belfast Summer Party The team had a great afternoon out with all of our colleages completing a tour around the city with some cocktails and food. It was a great day had by all! Can you spot any of the DevX team, Lukas is currently in Peru living his best life, therefore unfortunately he missed out this time. ![The Belfast Puppet crew in front of City Hall](https://puppetlabs.github.io/content-and-tooling-team/images/belfast_summer_party.jpg) ## dsc\_lite timeouts are now configurable! Jordan has been working hard on dsc\_lite and ruby-pwsh. We are delighted that the timeout can now be configured. ## Community Contributions We’d like to thank the following people in the Puppet Community for their contributions over this past week: - [`puppet-modulebuilder#77`](https://github.com/puppetlabs/puppet-modulebuilder/pull/77): “dependabot: check for github actions and bundler”, thanks to [bastelfreak](https://github.com/bastelfreak) - [`puppetlabs_spec_helper#459`](https://github.com/puppetlabs/puppetlabs_spec_helper/pull/459): “Add Ruby 3.3 to CI matrix”, thanks to [bastelfreak](https://github.com/bastelfreak)
puppetdevx
1,887,780
Exploring ssh
Most people who have worked on remote servers are probably familiar with ssh, which let's us securely...
0
2024-06-13T23:58:55
https://dev.to/georg4313/exploring-ssh-20oj
ssh, servers
Most people who have worked on remote servers are probably familiar with [ssh](https://medium.com/@aqeelabbas3972/introduction-to-ssh-secure-shell-0d07e18d3149), which lets us securely communicate with and control our remote environments. Beyond basic use, I think it is important to dive deeper and also use the special tricks to save ourselves a lot of time or improve our control over the system, as is shown in the [Linux blog for ssh](https://www.linux.com/training-tutorials/advanced-ssh-security-tips-and-tricks/) and the [tips blog with ssh tips and tricks](https://www.blunix.com/blog/interesting-things-to-do-with-ssh.html). Any thoughts? Did I miss any good ones?
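As a concrete taste of the kind of trick those posts cover, here is a small sketch: a host alias in `~/.ssh/config` (the hostname, user and key path below are made-up placeholders). With it in place, `ssh web1` replaces the long form, and something like `ssh -L 8080:localhost:80 web1` forwards a local port to the remote machine.

```
# ~/.ssh/config
Host web1
    HostName web1.example.com
    User deploy
    Port 22
    IdentityFile ~/.ssh/id_ed25519
```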
georg4313
1,887,740
Github - Organization(Demo)
Step by Step approach to create an organization in GitHub Step 1 : Click on the icon of...
27,667
2024-06-13T22:44:59
https://dev.to/learnwithsrini/github-organizationdemo-3p89
github, organization, teams
### Step by Step approach to create an organization in GitHub **Step 1** : Click on your profile icon, which is highlighted in the picture below ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/58c0p9f2e8olutibqsk9.png) **Step 2** : Select organization ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ygaxx5br41ly9irtamjf.png) **Step 3** : You will see the list of organizations that already exist, and if you want you can create a new organization by clicking on the New Organization button ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4gx3r5idn3jh9x89s80l.png) **Step 4** : Then you will get an option to pick a plan ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wqt614xyey5bbhqzsd19.png) **Step 5**: Select the free plan and proceed further. GitHub organization names are globally unique, so you can't use a name that has already been taken. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6v8xnbm1bwoeiq9fyvpb.png) **Step 6**: The following fields are mandatory for creating a new organization - Organization name - Contact Email - The organization belongs to My personal account / A business or institution Just accept the terms of service and click on the Next button ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yfxc9w9cvemw48gkpe4z.png) **Step 7** : If you want, you can add organization members ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4wgaslctwxy6akeu4dws.png) then click on complete setup. If you are using multi-factor authentication, you will be asked to enter a code from your MFA device ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ib4ijssiymmyiosxhhn9.png) Once the MFA code is verified, you will see the screen shown below ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4dvhrshi99d20ks5th3q.png) **Step 8** : Invite members by clicking on **Invite your first member** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vocfm457ou5nlvqjll8k.png) **Step 9** : Click on invite member ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/re7v4t6eh4st6vzsyx6h.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/chguzis22j4axyi5fiz3.png) **Step 10**: Type the user name in the text field shown below and it will search globally in GitHub; select the user. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hkaav6n08s9ggh3pab5i.png) **Step 11**: Give the appropriate permission and then click on invite ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u8tpejynrdnzankavqpo.png) **Step 12** : The invited user will get a confirmation email, and once confirmed they will be able to see the organization. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y3um4duele29dbz5bdyk.png) **Conclusion:** 💬 If you enjoyed reading this blog post about GitHub organizations and found it informative, please take a moment to share your thoughts by leaving a review and liking it 😀 and follow me on [dev.to](https://dev.to/srinivasuluparanduru) , [linkedin ](https://www.linkedin.com/in/srinivasuluparanduru)
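As an aside, the member-invitation step can also be scripted instead of clicked through. The sketch below uses the GitHub CLI to call what I understand to be the organization-invitations REST endpoint; treat the endpoint, field names and role value as assumptions to verify against the API docs, and `my-org` / `user@example.com` are placeholders:

```bash
# invite a user to the organization by email
# (requires 'gh auth login' with a token that has the admin:org scope)
gh api --method POST /orgs/my-org/invitations \
  -f email='user@example.com' \
  -f role='direct_member'
```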
srinivasuluparanduru
1,880,861
gRPC Quick start - Coding with streams and bidirectional streaming
Perhaps you've already heard about gRPC. A few years ago, just the mention of "gRPC" gave me...
0
2024-06-13T23:51:43
https://dev.to/andrefsilveira1/grpc-quick-start-coding-with-streams-and-bidirectional-streaming-4dkd
microservices, grpc, go, development
Perhaps you've already heard about gRPC. A few years ago, just the mention of "gRPC" gave me goosebumps. I used to think it was the most complex monster created by the world of microservices. But it's not as scary as I once thought (really far from it). Let's see a quick start coding guide using Go, Evans, and gRPC. Attention! This guide isn't for you if: - You do not know Go - You're a beginner programmer - You like Star Wars The Last Jedi If you do not fit into any of the previous cases, keep going. ## What is gRPC? In a few words, gRPC is a modern framework created by Google (now maintained by Cloud Native Computing) that implements Remote Procedure Calls (RPC) and can run in any environment with high performance. It uses protocol buffers (a compact binary serialization format, think of it as a smaller, faster alternative to XML or JSON) over HTTP/2. Thus, with gRPC we get binary payloads smaller than JSON, lower network consumption, and faster data transmission between services. ![This is a chart that explains the communication between microservices using gRPC](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ktl957bhpmkd8mfuuxsl.png) Notice that the main concept of gRPC is communication between microservices, not between browser and backend. I recommend that you read the official [gRPC website](https://grpc.io/docs/what-is-grpc/). There is a lot of good content and documentation there, so I won't be wordy because better references already exist. ## Let's Start In this article I'll use Go and Docker. So, you'll have to install the gRPC plugins. Follow up on this tutorial: [Tutorial](https://grpc.io/docs/languages/go/quickstart/) But... What will we create? Let's suppose there are antennas spread across the land, each running a service that calculates and returns the distance in kilometers between the antenna and the client. ![gif illustrating a circle connecting to antennas](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cctznw3np0ued4jbza32.gif) Then, an object, user, person, extraterrestrial, or anything else wants to contact an antenna to record a register of its moving position using latitude and longitude and receive a response from the antenna (remember that this is an example). Now that our domain is set up, in our workspace environment, create a folder called "proto". Here we're gonna define the contracts of our gRPC service using protocol buffers. Create a file called "register.proto" inside the proto folder. First, we're gonna define the proto settings (using proto3) and then create a message that will represent our Register object. ``` syntax = "proto3"; package pb; option go_package = "internal/pb"; message Register { string id = 1; string latitude = 2; string longitude = 3; string distance = 4; } ``` Those numbers are field tags: they uniquely identify each field in the binary encoding, rather than the order in which values are sent. With this message contract, we defined that Register must be created with id, latitude, longitude, and distance attributes. With the proto file defined, run: `protoc --go_out=. --go-grpc_out=. proto/register.proto`. This command is used to generate Go source files from a Protocol Buffers definition. Documentation [here](https://protobuf.dev/reference/go/go-generated/). Great! Now we have our base object. But supposing we are the extraterrestrial making contact, what should be inside our request to the antenna? Well, our latitude and longitude, right? Thus: ``` message CreateRequest { string latitude = 1; string longitude = 2; } ``` Here we defined our request. Quite different from REST. 
Finally, let's define the register service in the proto file ``` service RegisterService { rpc CreateRegister(CreateRequest) returns (Register){} } ``` Here we define our service **RegisterService** with the rpc method `CreateRegister` receiving a `CreateRequest` and returning `Register`. Calm down, don't worry, we're almost there! Create a folder called `service` and inside it, create a `registers.go` file. Here, we have to define the register service ``` type RegisterService struct { pb.UnimplementedRegisterServiceServer } func NewRegisterService() *RegisterService { return &RegisterService{} } var latitude = -5.8623992555733695 var longitude = -35.19111877919574 ``` As we don't have a database or anything else, let's create it just with `pb.UnimplementedRegisterServiceServer`. Notice that I defined global variables for latitude and longitude. They represent the location of a specific antenna. You're free to decide your latitude and longitude. I decided to set the coordinates of my favorite restaurant :) Finally, let's create our server. I'll create a folder `cmd` containing another folder called `grpc` that contains the `main.go` file. ``` package main import ( "andrefsilveira/grpc-quick-start/internal/pb" "andrefsilveira/grpc-quick-start/service" "fmt" "net" "google.golang.org/grpc" "google.golang.org/grpc/reflection" ) func main() { registerService := service.NewRegisterService() grpcServer := grpc.NewServer() pb.RegisterRegisterServiceServer(grpcServer, registerService) reflection.Register(grpcServer) var port = ":50051" listen, err := net.Listen("tcp", port) if err != nil { panic(err) } fmt.Printf("Server starting on port %s", port) if err := grpcServer.Serve(listen); err != nil { panic(err) } } ``` In `main.go` we are instantiating the register service and also creating a new gRPC server. Again, the details are in the documentation. Remember to import all of the necessary packages. But, are we forgetting something? Yes, we are. Look, we created the proto file, generated the files from the proto buffers, and created the connection and the constructors. But, we have to do something with this request. A few months ago I created a Go package that calculates the distance between two coordinates and returns the distance in kilometers. You can read about it [here](https://github.com/andrefsilveira1/go-haversine). So, let's use it to calculate our distance from an antenna. At `service/registers.go`, create: ``` func (c *RegisterService) CreateRegister(ctx context.Context, in *pb.CreateRequest) (*pb.Register, error) { lat, err := strconv.ParseFloat(in.Latitude, 64) if err != nil { return nil, err } lon, err := strconv.ParseFloat(in.Longitude, 64) if err != nil { return nil, err } result := haversine.Calculate(latitude, longitude, lat, lon) value := strconv.FormatFloat(result, 'f', -1, 64) register := &pb.Register{ Id: uuid.New().String(), Latitude: in.Latitude, Longitude: in.Longitude, Distance: value, } return register, nil } ``` Here, we are receiving the coordinates from the input, parsing them to float, and finally using them in the haversine package. Then, a `Register` is returned. Notice that the method name should match the method name declared in the proto file: ``` service RegisterService { rpc CreateRegister(CreateRequest) returns (Register){} } ``` So far so good, let's do it! Run: `go run cmd/grpc/main.go`. You should see the `Server starting on port :50051` message. 
To interact with the gRPC server, we will use Evans, which is an interactive command-line client for gRPC. You can read more about it [here](https://github.com/ktr0731/evans) and can install it using Docker: `docker run --rm --network host -it ghcr.io/ktr0731/evans --host localhost --port 50051 --reflection`. This method is not recommended, but in our case, it will be fine. If everything worked correctly, you should see something like this in your terminal: ![A print from terminal showing up the Evans start](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iswowqz8jb2ryehctxks.png) First, we have to select the package and the service. Remember what we defined in the proto file? ``` syntax = "proto3"; package pb; option go_package = "internal/pb"; [...] service RegisterService { rpc CreateRegister(CreateRequest) returns (Register){} } ``` The package is `pb` and the service is `RegisterService`. Select them using, respectively: - `package pb` - `service RegisterService` Now, we can call the create register method. Run `call CreateRegister` and type any latitude and longitude. If everything works fine, this should be your output: ``` pb.RegisterService@localhost:50051> call CreateRegister latitude (TYPE_STRING) => 150 longitude (TYPE_STRING) => 600 { "distance": "10831.805049673263", "id": "5421f456-9b77-495e-b358-1184fcc8dc3b" } ``` But let's suppose a scenario where we do not have to receive the response after each request. Take the same example, and let the "tracking" be non-vital information. Thus, we could receive the response only when our connection was closed. To do this, we're gonna create a `stream`. Streams allow the client or the server to send multiple messages over a single connection. But keep calm, it is very simple. Create another message called `Registers`. Notice that it is the same `Register` but with the tag `repeated`. This means that we are gonna receive a list of `Register` messages. ``` message Registers { repeated Register registers = 1; } ``` Then, add the method to `RegisterService`: ``` service RegisterService { rpc CreateRegister(CreateRequest) returns (Register){} rpc CreateRegisterStream(stream CreateRequest) returns (Registers) {} } ``` And finally, create the `CreateRegisterStream` method: ``` func (c *RegisterService) CreateRegisterStream(stream pb.RegisterService_CreateRegisterStreamServer) error { registers := &pb.Registers{} for { register, err := stream.Recv() if err == io.EOF { return stream.SendAndClose(registers) } if err != nil { return err } lat, err := strconv.ParseFloat(register.Latitude, 64) if err != nil { return err } lon, err := strconv.ParseFloat(register.Longitude, 64) if err != nil { return err } result := haversine.Calculate(latitude, longitude, lat, lon) value := strconv.FormatFloat(result, 'f', -1, 64) registers.Registers = append(registers.Registers, &pb.Register{ Id: uuid.New().String(), Latitude: register.Latitude, Longitude: register.Longitude, Distance: value, }) } } ``` Notice that this method is quite different from the previous one. It's opening a stream with `register, err := stream.Recv()` and receiving values inside a for loop. This loop will exit only if an error occurs or it reaches the end of the stream (`io.EOF`). Great. Save it, run the `protoc` command again to regenerate the Go files, and restart Evans and your gRPC server. - `package pb` - `service RegisterService` - `call CreateRegisterStream` Type some coordinates. 
When you finish, type `Ctrl + D`, and you should receive something like this: ``` pb.RegisterService@localhost:50051> call CreateRegisterStream latitude (TYPE_STRING) => 156 longitude (TYPE_STRING) => 616 latitude (TYPE_STRING) => 616 longitude (TYPE_STRING) => 99 latitude (TYPE_STRING) => 864 longitude (TYPE_STRING) => 151 latitude (TYPE_STRING) => { "registers": [ { "distance": "12422.520903261275", "id": "f083993e-2c73-438e-a906-d1b19b26476f", "latitude": "156", "longitude": "616" }, { "distance": "8286.547137287573", "id": "8ca6bc74-af0d-4c38-af54-b1f36bbdecf0", "latitude": "616", "longitude": "99" }, { "distance": "4699.522659470381", "id": "af354a5f-91df-4573-b5ad-5fb736187942", "latitude": "864", "longitude": "151" } ] } ``` Nice! Moreover, let's suppose another case. Assume that continuous communication between the user and the antenna is now necessary. Both must keep in touch with each other, and the information should be transmitted through a binary connection `client <-> server`. In this case, we can use bidirectional streams. This type of communication is like a two-way conversation where both parties can continuously send and receive messages. Similarly to the previous method, let's create the `CreateRegisterBidirectional` method: ``` service RegisterService { rpc CreateRegister(CreateRequest) returns (Register){} rpc CreateRegisterStream(stream CreateRequest) returns (Registers) {} rpc CreateRegisterBidirectional(stream CreateRequest) returns (stream Register) {} } ``` Notice that we now put the `stream` tag in the return statement too. When there is a `stream` on both the input and the output, it defines a bidirectional method. Thus, let's create the method: ``` func (c *RegisterService) CreateRegisterBidirectional(stream pb.RegisterService_CreateRegisterBidirectionalServer) error { for { register, err := stream.Recv() if err == io.EOF { return nil } if err != nil { return err } lat, err := strconv.ParseFloat(register.Latitude, 64) if err != nil { return err } lon, err := strconv.ParseFloat(register.Longitude, 64) if err != nil { return err } result := haversine.Calculate(latitude, longitude, lat, lon) value := strconv.FormatFloat(result, 'f', -1, 64) err = stream.Send(&pb.Register{ Id: uuid.New().String(), Latitude: register.Latitude, Longitude: register.Longitude, Distance: value, }) if err != nil { return err } } } ``` Notice that it looks similar to the previous method. But instead of sending and closing only when the connection closes, now we're gonna receive and send simultaneously. Regenerate the Go files with `protoc`, then restart Evans and the gRPC server again. - `package pb` - `service RegisterService` - `call CreateRegisterBidirectional` Type as many coordinates as you want. The output should look like: ``` pb.RegisterService@localhost:50051> call CreateRegisterBidirectional latitude (TYPE_STRING) => 161 longitude (TYPE_STRING) => 66 latitude (TYPE_STRING) => { "distance": "9052.814145827013", "id": "197ed428-3a18-4548-888a-9d2ca4930995", "latitude": "161", "longitude": "66" } latitude (TYPE_STRING) => 6489 longitude (TYPE_STRING) => 116 latitude (TYPE_STRING) => { "distance": "16820.47617639521", "id": "d356e97f-2a1d-491a-b2ef-37e88bfafb59", "latitude": "6489", "longitude": "116" } latitude (TYPE_STRING) => 616 longitude (TYPE_STRING) => 888 latitude (TYPE_STRING) => { "distance": "7930.192513515994", "id": "ba561afe-3a92-4f19-8a14-9b6c7ae3de26", "latitude": "616", "longitude": "888" } ``` Notice that now we are sending and receiving information at the same time. To stop it, type `Ctrl + D`. Uff, so this is it. 
If you have any doubts or recommendations do not hesitate to reach out at: freitasandre38@gmail.com Also, you can find the source code [here](https://github.com/andrefsilveira1/grpc-quick-start)
andrefsilveira1
1,887,778
Computer Science Concept: Hash Function
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-13T23:51:33
https://dev.to/sweta_kangurisonulkar_/computer-science-concept-hash-function-gm5
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer **Computer Science Concept: Hash Function** A hash function converts input data of any size into a fixed-size value, typically used in hash tables for fast data retrieval. It's crucial for efficient indexing and storage, ensuring quick access to data by assigning unique keys to inputs. ## Additional Context The explanation provides a concise overview of hash functions, highlighting their role in data structures and emphasizing their importance in ensuring efficient data retrieval and storage. Hash functions are foundational in many computer science applications, from databases to cryptography, making their understanding crucial for CS students and professionals. <!-- Thanks for participating! -->
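To make that concrete, here is a tiny Python sketch of my own (illustrative only, not part of the submission): the built-in `hash()` maps a key of any size to a fixed-size integer, which is how a `dict` (a hash table) finds values quickly.

```python
key = "alice"
# a fixed-size integer derived from the key (the value varies between runs for strings)
print(hash(key))

# A dict is a hash table: it uses the key's hash to locate the stored value,
# which is why lookups are fast on average.
ages = {"alice": 30, "bob": 25}
print(ages["alice"])  # 30
```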
sweta_kangurisonulkar_
1,887,776
Tree Loving Care, LLC
For expert tree care services in Viroqua, WI, turn to Tree Loving Care, LLC. With our passion for...
0
2024-06-13T23:44:11
https://dev.to/treelovingcarellc/tree-loving-care-llc-1ald
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3lp1ys0pvkelxzr3rog6.png) For expert tree care services in Viroqua, WI, turn to Tree Loving Care, LLC. With our passion for trees and dedication to excellence, we ensure your trees receive the utmost care. Contact us today for a complimentary consultation and let us nurture your trees with love and expertise! Tree Loving Care, LLC Address: [Viroqua, WI](https://www.google.com/maps?cid=8843803038585622623) Phone: (608) 615-7740 Website: [https://treelovingcarellc.com/](https://treelovingcarellc.com/) Contact email: info@treelovingcarellc.com Visit Us: [Tree Loving Care, LLC Facebook ](https://www.facebook.com/Treelovingcarellc) Our Services: Emergency Tree Service Stump Grinding Tree Cabling & Bracing Tree Removal Tree Trimming Tree Pruning Tree Care
treelovingcarellc
1,887,775
Tree Loving Care, LLC
For expert tree care services in Viroqua, WI, turn to Tree Loving Care, LLC. With our passion for...
0
2024-06-13T23:44:00
https://dev.to/treelovingcarellc/tree-loving-care-llc-163m
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3lp1ys0pvkelxzr3rog6.png) For expert tree care services in Viroqua, WI, turn to Tree Loving Care, LLC. With our passion for trees and dedication to excellence, we ensure your trees receive the utmost care. Contact us today for a complimentary consultation and let us nurture your trees with love and expertise! Tree Loving Care, LLC Address: [Viroqua, WI](https://www.google.com/maps?cid=8843803038585622623) Phone: (608) 615-7740 Website: [https://treelovingcarellc.com/](https://treelovingcarellc.com/) Contact email: info@treelovingcarellc.com Visit Us: [Tree Loving Care, LLC Facebook ](https://www.facebook.com/Treelovingcarellc) Our Services: Emergency Tree Service Stump Grinding Tree Cabling & Bracing Tree Removal Tree Trimming Tree Pruning Tree Care
treelovingcarellc
1,887,772
01 - to tech Html,Css,Javascript project
1- https://youtu.be/eUuDmIrhW1k?si=PeVva2ZQXIF45Ars 2- https://youtu.be/-FAjw3aLP80?si=FMiE0gICPzxf-...
0
2024-06-13T23:37:52
https://dev.to/hussein09/first-day-to-tech-htmlcssjavascript-project-2h89
html, css, javascript, webdev
1- https://youtu.be/eUuDmIrhW1k?si=PeVva2ZQXIF45Ars 2- https://youtu.be/-FAjw3aLP80?si=FMiE0gICPzxf-GLo **I will upload more than 150 projects** from the series of programming languages: html, css, js, and we will develop them at a later time.
hussein09
1,887,771
Gemika's Awesome Git Adventures: A Fun Guide to Coding Magic! 🧙‍♂️✨
Hey, Little Buddy! Meet Uncle Gerry and the Magic World of GitHub 🪄✨ Hello there, little...
0
2024-06-13T23:27:44
https://dev.to/gerryleonugroho/gemikas-awesome-git-adventures-a-fun-guide-to-coding-magic--25go
git, github, webdev, beginners
### Hey, Little Buddy! Meet Uncle Gerry and the Magic World of GitHub 🪄✨ Hello there, little one! Let me tell you a story. I'm Uncle Gerry, the father of the amazing Gemika Haziq Nugroho. I do some really cool stuff as a data-driven marketer and software engineer. That means I help people use numbers and magic computer codes to do awesome things on the internet. But you know what keeps my mind happy and clear? Writing software, like telling a story with my computer! And of course, I love my son Gemika so much and always want to share the magic with him. 🌟❤️ I also love teaching kids like you about the wonders of technology and programming, making it as easy and fun as possible! 🧑‍🏫✨ So let's dive into this magical journey together, with lots of smiles and excitement! 😄🚀 ### What is Git? 🧙‍♂️🧩 Before we dive into the magic tricks, let’s talk about what Git is. Git is like a magic notebook where developers (those are people who make cool computer stuff) can write down their work and share it with friends. Imagine a giant playground where everyone can build and play together, but instead of sand and swings, there are codes and computers. Git helps keep all the drawings and LEGO projects organized so nobody loses their cool ideas. We use something called a terminal to talk to Git, which is like using a magic wand to give it commands. Ready to learn some magic tricks? Let’s go! 🚀✨ ### 1. Git Init: Starting the Magic 🪄 Imagine you have a brand new coloring book, and you want to start a new picture. But first, you need to let everyone know you're beginning a masterpiece. `git init` is like telling everyone, "Hey, I'm starting a new project!" **Real-Life Example:** Think of it like standing in front of a big blank chalkboard and saying, "I'm going to draw something amazing here!" It’s the first step to creating something wonderful. ```bash git init ``` 🎨 **Uncle Gerry's Tip:** It’s like opening a brand new coloring book. Exciting, right? 🖍️ ### 2. Git Clone: Copying the Playground 📋 Sometimes, you see an amazing drawing your friend made, and you want a copy so you can color it yourself. `git clone` helps you do just that. It copies all the magic from your friend’s project to your own computer. **Real-Life Example:** Imagine your friend has a super cool LEGO castle, and you want to build one just like it. `git clone` gives you all the same pieces so you can start building your own castle! ```bash git clone https://github.com/friend/project ``` 🏰 **Uncle Gerry's Tip:** It’s like copying your friend’s awesome LEGO creation. Now you both have cool castles! 🏰✨ ### 3. Git Add: Choosing What to Save ✅ Imagine you've colored a part of your drawing and you want to show it to everyone. `git add` is like picking the parts you want to show off. **Real-Life Example:** Think of it like picking out the best parts of your LEGO castle to show your parents. "Look, I built the drawbridge and the towers!" When you use `git add`, you're saying, "This part is ready to be saved and shared!" ```bash git add my-drawing.png ``` 🎨 **Uncle Gerry's Tip:** It’s like saying, "Look at this cool part I just made!" 😎🖌️ ### 4. Git Commit: Saving Your Work 💾 After choosing what you want to show, you need to save it in your special drawing book. `git commit` does that. It saves your chosen parts with a little note about what you did. **Real-Life Example:** Imagine you take a photo of your LEGO castle and write a note, "Finished the drawbridge today!" `git commit` is like putting that photo and note in your scrapbook. 
This way, you’ll always remember what you worked on and can look back at it later. ```bash git commit -m "Finished coloring the sky" ``` 📸 **Uncle Gerry's Tip:** It’s like saving a picture of your LEGO castle in your scrapbook with a note about what you built! 📖✨ ### 5. Git Push: Sharing with Friends 🚀 Now that you've saved your drawing, you want to share it with all your friends. `git push` sends your saved work to the playground (GitHub) for everyone to see. **Real-Life Example:** It’s like putting your LEGO castle on display at the park so all your friends can see it and admire your hard work. When you use `git push`, you’re saying, "Hey everyone, look at what I made!" ```bash git push ``` 🌟 **Uncle Gerry's Tip:** It’s like showing off your awesome LEGO castle to all your friends! 🏞️🧑‍🤝‍🧑✨ ### 6. Git Pull: Getting Updates 📥 Sometimes, your friends might add new cool things to their drawings, and you want to get those updates. `git pull` brings those changes to your drawing so you always have the latest version. **Real-Life Example:** Imagine your friend adds a secret tunnel to their LEGO castle, and you want to add it to yours too. `git pull` helps you get that update and add it to your own castle. ```bash git pull ``` 🔄 **Uncle Gerry's Tip:** It’s like updating your LEGO castle with the new secret tunnel your friend built! 🚇✨ ### 7. Git Status: Checking Your Work 📝 If you're ever confused about what's happening with your drawing, `git status` helps you check if everything's in order and if you need to do anything next. **Real-Life Example:** It’s like taking a step back and looking at your LEGO castle to see what you’ve built so far and what pieces you might need to add next. When you use `git status`, you’re checking on your project to see what’s done and what’s not. ```bash git status ``` 🔍 **Uncle Gerry's Tip:** It’s like taking a quick look at your LEGO project to see what’s done and what’s next! 🧩👀 ### 8. Git Branch: Trying New Things 🌿 Imagine you want to try drawing something different without messing up your main drawing. `git branch` lets you create a new piece of paper where you can experiment. **Real-Life Example:** Think of it like starting a new page in your coloring book to try out a different color scheme for your castle. If you like it, you can add it to your main picture later. With `git branch`, you can try new things without messing up your original work. ```bash git branch new-idea ``` 🖌 **Uncle Gerry's Tip:** It’s like trying new colors on a separate piece of paper before adding them to your masterpiece. 🖍️✨ ### 9. Git Checkout: Switching Papers 📄 When you want to switch between your main drawing and your new experiment, `git checkout` helps you do that easily. **Real-Life Example:** It’s like flipping back and forth between pages in your coloring book to work on different drawings. You can switch back to your main drawing anytime you want and continue working on it. ```bash git checkout new-idea ``` 📖 **Uncle Gerry's Tip:** It’s like turning the pages in your coloring book to work on different pictures. 📚✨ ### 10. Git Merge: Combining Drawings 🔀 If you like your new experiment and want to add it to your main drawing, `git merge` combines them into one beautiful piece. **Real-Life Example:** Imagine you tried a new color scheme for your castle on a different page, and now you want to add those colors to your main drawing. `git merge` makes that happen, bringing all your ideas together. 
```bash git merge new-idea ``` 🌈 **Uncle Gerry's Tip:** It’s like combining your favorite parts from different drawings into one amazing masterpiece! 🎨✨ ### 11. Git Log: Storybook of Changes 📚 To remember everything you've done, `git log` shows you a storybook of all the changes you and your friends have made to your drawings. **Real-Life Example:** It’s like keeping a diary of your LEGO building adventures, with pictures and notes about every cool thing you added or changed. Every time you make a change, you can look back at your diary and see what you did. ```bash git log ``` 📖 **Uncle Gerry's Tip:** It’s like reading a diary that tells the story of your LEGO castle’s creation! 📓✨ ### 12. Git Revert: Fixing Mistakes 🧹 Sometimes, we make mistakes, and that’s okay! `git revert` lets you go back and fix those mistakes, just like erasing a part of your drawing to make it better. **Real-Life Example:** Imagine you accidentally knocked over part of your LEGO castle. `git revert` is like having a magic power to rebuild it just the way it was before! It helps you undo mistakes and keep everything looking great. ```bash git revert bad-change ``` 🧩 **Uncle Gerry's Tip:** It’s like having a magic eraser to fix any mistakes in your LEGO castle! 🧽✨ And there you have it, kiddo! These are the magical commands that help developers like Uncle Gerry make awesome things every day. Remember, practice makes perfect, so keep trying these out and one day, you might become a tech wizard too! 🌟🚀✨ ### Islamic Inspiration: Wisdom and Knowledge 🌙📖 In our journey of learning and creating, it's important to remember some beautiful Islamic teachings. The Prophet Muhammad (peace be upon him) said, "Seek knowledge from the cradle to the grave." 🌟📚 As we learn new things, we should always say, "Alhamdulillah" (الحمد لله) which means "Praise be to Allah" for giving us the ability to learn and create. Remember, every piece of knowledge is a gift from Allah. 🌟🙏 Love and Barakah (blessings), Uncle Gerry 💖 ### Let's Recap with Lots of Fun Emojis! 🎉 1. **Git Init** 🪄: Start your magic coloring book! 🎨🖍️ 2. **Git Clone** 📋: Copy your friend's awesome LEGO castle! 🏰✨ 3. **Git Add** ✅: Choose the best parts to show off! 😎🖌️ 4. **Git Commit** 💾: Save your cool creations with a note! 📸✨ 5. **Git Push** 🚀: Share your masterpiece with friends! 🌟🏞️ 6. **Git Pull** 📥: Get the latest updates for your project! 🔄✨ 7. **Git Status** 📝: Check what’s done and what’s next! 🔍👀 8. **Git Branch** 🌿: Try new things on a separate page! 🖍️✨ 9. **Git Checkout** 📄: Switch between different drawings! 📚✨ 10. **Git Merge** 🔀: Combine all your cool ideas! 🎨✨ 11. **Git Log** 📚: Read the story of your creation! 📓✨ 12. **Git Revert** 🧹: Fix mistakes with a magic eraser! 🧽✨ ### Embracing the Journey Always remember, little buddy, learning new things and creating awesome projects is like a wonderful adventure. Just like building a LEGO castle or drawing your favorite superheroes, coding is a fun way to use your imagination and make amazing things. 🌟🚀 Keep practicing, keep learning, and one day, you'll be an inspiring tech developer, just like Uncle Gerry! Remember to always seek knowledge, say Alhamdulillah, and have fun with every step. 🌟📚✨ ### Final Words Insha'Allah (God willing), you'll continue to explore the wonders of technology and coding. With each command you learn, you're opening a door to new possibilities and adventures. 
🌟🚪✨ JazakAllah Khair (جزاك الله خيراً) which means "May Allah reward you with goodness" for joining me on this magical journey. May your path be filled with knowledge, creativity, and lots of fun! 🌟📚✨ Love, Uncle Gerry 💖
gerryleonugroho
1,887,770
Movers and Packers in Dubai Marina
Relocating homes inside Dubai Marina is an example of accepting change in the middle of a busy city....
0
2024-06-13T23:23:39
https://dev.to/scarlett_6341cae31964617a/movers-and-packers-in-dubai-marina-1876
Relocating homes inside Dubai Marina is an example of accepting change in the middle of a busy city. But with the careful attention to detail provided by **[movers and packers Dubai Marina](https://madomovers.com/movers-in-dubai-marina/)**, the transfer can go as smoothly as the peaceful seas around this energetic location. **Creating an effortless move** The skill of movers in Dubai extends beyond simple transportation of possessions to include error-free transfer coordination. These professional movers guarantee that every step is as peaceful as the Marina's beautiful views. They do this through strict planning and perfect execution. **Trustworthy Partners** Movers are reliable partners in a city that values dependability highly. Dreams, memories, and goals are entrusted to them when you assign them the responsibility of relocation. Every client feels more confident as a result of their dedication to professionalism. **Embracing Technology for easy relocation** Experts use technology to improve the moving experience by providing clear communication channels and faster booking procedures. Real-time tracking technologies give clients a sense of control over the relocation process. **Environmentally aware Moving Services** **[Movers and packers in Dubai](https://madomovers.com/)** value sustainability above anything else. Reducing their carbon footprint during relocations by using eco-friendly procedures and sustainable materials is in line with the Marina's dedication to environmental consciousness. **Making Memorable Experiences at Dubai Marina** They leave a lasting impression; they are more than just movers. Their objective is to not just meet but exceed clients' expectations by bringing joy and contentment to each and every house they assist in building in Dubai Marina. **Conclusion** Movers and packers in Dubai seamlessly merge professionalism, empathy, and expertise. They preserve the essence of home in the middle of Dubai Marina's advancement, turning what could be a scary transfer into a memorable experience.
scarlett_6341cae31964617a
1,887,746
Building a Real-Time Streaming Chatbot with Kotlin and Ollama AI
In the ever-evolving landscape of artificial intelligence, creating a chatbot capable of handling...
0
2024-06-13T23:18:49
https://dev.to/josmel/building-a-real-time-streaming-chatbot-with-kotlin-and-ollama-ai-2m7h
ollama, kotlin, chatbot, streaming
In the ever-evolving landscape of artificial intelligence, creating a chatbot capable of handling real-time conversations with streaming responses is a fascinating challenge. In this blog post, we'll walk you through the process of building a Kotlin-based chatbot using Ollama AI. We'll cover everything from setting up the project to implementing and testing the streaming capabilities. Let's dive in! **Project Overview** Our goal is to create a chatbot that leverages Ollama AI to provide real-time streaming responses to user inputs. We'll use Kotlin, OkHttp, and MockWebServer for testing. The chatbot will handle streaming responses, displaying them as they are received, ensuring a smooth and interactive user experience. **1- Initialize the Kotlin Project:** - Start by creating a new Kotlin project in IntelliJ IDEA. - Add the necessary dependencies in your `build.gradle.kts` file: ``` dependencies { implementation("org.jetbrains.kotlin:kotlin-stdlib") implementation("com.squareup.okhttp3:okhttp:4.9.1") implementation("org.json:json:20210307") implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.2") // Dependencies to test testImplementation(kotlin("test")) testImplementation("org.junit.jupiter:junit-jupiter-api:5.9.0") testRuntimeOnly("org.junit.jupiter:junit-jupiter-engine:5.9.0") testImplementation("org.jetbrains.kotlin:kotlin-test:1.9.10") testImplementation("org.jetbrains.kotlin:kotlin-test-junit:1.9.10") testImplementation("com.squareup.okhttp3:mockwebserver:4.9.1") } ``` **2- Install Ollama and Download the Model:** - Before running the chatbot, you need to install Ollama on your machine and download the necessary model. Follow the Ollama installation guide to set up Ollama. - Once installed, download the model using the following command: ``` ollama pull llama2-uncensored ``` **3- Create the OllamaClient:** - This class will handle sending requests to Ollama AI and processing the streaming responses. ``` import okhttp3.* import okhttp3.MediaType.Companion.toMediaType import okhttp3.RequestBody.Companion.toRequestBody import okio.BufferedSource import org.json.JSONObject import java.io.IOException class OllamaClient { private val client = OkHttpClient() private val baseUrl = "http://localhost:11434/api/generate" fun streamResponse(prompt: String, onResponse: (String) -> Unit, onComplete: () -> Unit, onError: (Exception) -> Unit) { val requestBody = JSONObject() .put("model", "llama2-uncensored") .put("prompt", prompt) .put("stream", true) .toString() .toRequestBody("application/json".toMediaType()) val request = Request.Builder() .url(baseUrl) .post(requestBody) .build() client.newCall(request).enqueue(object : Callback { override fun onFailure(call: Call, e: IOException) { onError(e) } override fun onResponse(call: Call, response: Response) { if (!response.isSuccessful) { onError(IOException("Unexpected code $response")) return } response.body?.use { responseBody -> val source: BufferedSource = responseBody.source() while (!source.exhausted()) { val line = source.readUtf8Line() if (line != null) { val jsonResponse = JSONObject(line) if (jsonResponse.has("response")) { onResponse(jsonResponse.getString("response")) } } } onComplete() } } }) } } ``` **4- Create the ConversationHandler:** - This class will manage the conversation, ensuring that user inputs are processed and responses are displayed in real-time. 
``` import kotlinx.coroutines.* class ConversationHandler(private val ollamaClient: OllamaClient) { private val conversationHistory = mutableListOf<String>() fun start() = runBlocking { while (true) { print("You: ") val userInput = readLine() if (userInput.isNullOrEmpty()) break conversationHistory.add("You: $userInput") val context = conversationHistory.joinToString("\n") var completeResponse = "" ollamaClient.streamResponse( context, onResponse = { responseFragment -> completeResponse += responseFragment print("\rOllama: $completeResponse") }, onComplete = { println() // Move to the next line after completion conversationHistory.add("Ollama: $completeResponse") print("You: ") }, onError = { e -> println("\nOllama: Error - ${e.message}") print("You: ") } ) } } } ``` **5- Main Function:** - This will serve as the entry point for your application. ``` fun main() { val ollamaClient = OllamaClient() val conversationHandler = ConversationHandler(ollamaClient) conversationHandler.start() } ``` **Testing the Streaming Response** To ensure that our OllamaClient handles streaming responses correctly, we'll write unit tests using MockWebServer. 1. Setup the Test Class: ``` import okhttp3.mockwebserver.MockResponse import okhttp3.mockwebserver.MockWebServer import org.json.JSONObject import org.junit.After import org.junit.Before import org.junit.Test import kotlin.test.assertEquals import kotlin.test.assertNotNull class OllamaClientTest { private lateinit var mockWebServer: MockWebServer private lateinit var ollamaClient: OllamaClient @Before fun setUp() { mockWebServer = MockWebServer() mockWebServer.start() ollamaClient = OllamaClient().apply { val baseUrlField = this::class.java.getDeclaredField("baseUrl") baseUrlField.isAccessible = true baseUrlField.set(this, mockWebServer.url("/api/generate").toString()) } } @After fun tearDown() { mockWebServer.shutdown() } @Test fun `test streamResponse returns expected response`() { val responseChunks = listOf( JSONObject().put("response", "Hello").toString(), JSONObject().put("response", " there").toString(), JSONObject().put("response", ", how are you?").toString() ) responseChunks.forEach { chunk -> mockWebServer.enqueue(MockResponse().setBody(chunk).setResponseCode(200)) } val completeResponse = StringBuilder() val onCompleteCalled = arrayOf(false) ollamaClient.streamResponse( prompt = "hello", onResponse = { responseFragment -> completeResponse.append(responseFragment) }, onComplete = { onCompleteCalled[0] = true assertEquals("Hello there, how are you?", completeResponse.toString()) }, onError = { e -> throw AssertionError("Error in streaming response", e) } ) Thread.sleep(1000) assertEquals(true, onCompleteCalled[0]) } @Test fun `test streamResponse handles error`() { mockWebServer.enqueue(MockResponse().setResponseCode(500).setBody("Internal Server Error")) var errorCalled = false ollamaClient.streamResponse( prompt = "hello", onResponse = { _ -> throw AssertionError("This should not be called on error") }, onComplete = { throw AssertionError("This should not be called on error") }, onError = { e -> errorCalled = true assertNotNull(e) } ) Thread.sleep(1000) assertEquals(true, errorCalled) } } ``` ## Repository https://github.com/josmel/ChatbotKotlinOllama **Conclusion** Building a real-time streaming chatbot using Kotlin and Ollama AI is a rewarding challenge that showcases the power of modern AI and streaming capabilities. By following this guide, you can create a chatbot that not only responds quickly but also handles conversations smoothly. 
Remember to install Ollama and download the necessary model before running your project. Happy coding! Feel free to reach out with any questions or comments. If you found this guide helpful, please share it with others and follow me for more Kotlin and AI tutorials!
josmel
1,887,738
DEV Computer Science Challenge v24.06.12 : Recursion
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-13T22:44:29
https://dev.to/therbstar/dev-computer-science-challenge-v240612-recursion-28ik
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer **Recursion**: A process where a function calls itself as a subroutine. This enables it to perform repetitive tasks efficiently, with each step working on a smaller problem, until a base case is reached. Essential for elegant coding. ## Additional Context When crafting a One Byte Explainer, it’s crucial to distill complex concepts into their essence. The challenge lies in balancing simplicity with accuracy, ensuring the explanation is accessible to a broad audience without sacrificing the core message. This exercise not only tests one’s understanding of the subject but also their ability to communicate effectively. It’s a valuable skill in education and technology communication, where clarity is key. Judges may consider the precision of language, the choice of fundamental elements included, and the overall comprehensibility of the explanation.
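If you'd like to see the idea in action (well beyond the one-byte limit), here's a tiny illustration in Python. The base case stops the self-calls; every other call works on a smaller version of the problem:

```python
def factorial(n: int) -> int:
    if n <= 1:          # base case: nothing left to shrink
        return 1
    return n * factorial(n - 1)  # recursive case: smaller problem

print(factorial(5))  # 120
```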
therbstar
1,887,744
[Game of Purpose] Day 26
Today I made propellers animation start and end smoothly, instead of instantly. In my BP_Propeller...
27,434
2024-06-13T23:12:04
https://dev.to/humberd/game-of-purpose-day-26-3091
gamedev
Today I made the propeller animation start and end smoothly, instead of instantly. In my `BP_Propeller` Blueprint I used a Timeline node, where I specified a curve. In this case it ramps up non-linearly over 2 seconds. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zaad4uewtesz73fu43he.png) Then I used that in my Event Graph. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtdfx8uosxq7sqf11xzy.png) {% embed https://youtu.be/rYk4v3ne4Kg %} I also learned how to make a global function by creating a Blueprint Function. I moved my `RPM to Deg/s` converter function there, since it looks reusable. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/896kbr01cp99h6b63zzp.png)
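For reference, the math inside that converter is simple: one revolution is 360 degrees, so degrees per second = RPM × 360 / 60 = RPM × 6. For example, 300 RPM works out to 1,800 °/s.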
humberd
1,886,544
Build your AI Travel Agent with GPT-4o using Python
Imagine planning your dream vacation, but instead of spending hours searching online, an intelligent...
0
2024-06-13T23:10:00
https://dev.to/spiff/build-your-ai-travel-agent-with-gpt-4o-using-python-3mmg
ai, beginners, python, programming
Imagine planning your dream vacation, but instead of spending hours searching online, an intelligent assistant arranges everything for you. Sounds like a fantasy, right? What if I said you could build this AI travel agent with just a few lines of [Python](https://www.python.org/) code? Yes, it's that simple. In this guide, we'll walk you through the process step-by-step, so you can have your travel assistant ready to help with your next adventure. You'll use some amazing tools to make this happen. [Streamlit](https://streamlit.io/) will help you create a friendly and interactive interface; [Phidata](https://pypi.org/project/phidata/) will handle your data; [OpenAI's](https://pypi.org/project/openai/0.26.5/) GPT-4 will provide smart conversational abilities; and [Google Search-Results](https://pypi.org/project/google-search-results/) will fetch the best travel options. Together, these tools will allow you to build smart, responsive AI to plan your trips as if you had a personal travel guardian. So, are you ready to dive in? It doesn’t matter if you are starting with programming; this project is designed to be easy and fun. By the end, you'll understand how these technologies work together and have a functional AI assist with your travel plans. Let's start this exciting journey and see how quickly you can become your travel guardian. ## Setting up the Environment Think of it as laying out all the ingredients before baking a cake. All your tools and software should be in place to run your code smoothly. When working in [Visual Studio Code](https://code.visualstudio.com/) (VS Code), always create a new Python file for your project. It's helpful to have separate files for different parts of your project. To do this, you can start by opening your [VS Code](https://code.visualstudio.com/) and creating a new folder: ### Step 1 Open [VS Code](https://code.visualstudio.com/) ![Vs Code Home page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9h91xy65zqbuxozdayrs.png) ### Step 2 Create new folder ![New Folder on Vs Code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8gmdp57jqiu7pprbc0ug.png) ### Step 3 Name your new file `travel_agent.py` or stick with the default name provided. ![new file on vs code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzl3uirvda6j5qus2rf4.png) ## Install the Necessary Python Libraries First, you install some libraries for your project. Libraries are like extra tools that help your code do more things. To install them, you run these commands in your terminal: ```python pip install streamlit pip install phidata pip install openai pip install google-search-results ``` Here’s what each library does: - [Streamlit](https://streamlit.io/)-**helps you create interactive web apps without a backend server.** - [Phidata](https://pypi.org/project/phidata/)-**helps build AI agents and tools.** - [OpenAI](https://pypi.org/project/openai/0.26.5/)-**allows you to use GPT-4, a powerful AI language model.** - [SerpAPI](https://pypi.org/project/google-search-results/)-**helps you get search engine results from platforms like Google, Bing, or Yahoo! in a structured format.** ## Import Necessary Libraries These libraries are special tools that help you do specific tasks in your Python code. 
Here, you import the necessary libraries for your project: ```python from textwrap import dedent from phi.assistant import Assistant from phi.tools.serpapi_tools import SerpApiTools import streamlit as st from phi.llm.openai import OpenAIChat ``` ## Set Up the Streamlit Application Here, you lay the groundwork for your AI travel agent. You'll create a user-friendly interface where users can input their travel preferences and see the AI's suggestions in real time. ### Adding a Title to Our App Think of it like naming a book or a movie. The title is the first thing people see, it sets the stage for what they can expect. You can do this by using `st.title()`: ```python st.title("AI Travel Planner ✈️") ``` ### App Description Creating an app description helps users understand the value of your app quickly. It explains the main features and benefits and encourages people to use it. You do this by using `st.caption()`: ```python st.caption("Plan your next adventure with AI Travel Planner by researching and planning a personalized itinerary on autopilot using GPT-4o.") ``` ## Getting User Input for API Keys Before our AI can plan your dream vacation, it needs special keys to unlock its full potential. API keys act like secret passwords that allow your AI Travel Agent to access powerful services like `OpenAI` and `Google`. These keys are unique to each user and ensure the services are used securely and responsibly. You are creating two text input fields in the Streamlit app: the `OpenAI API key` and the `SerpAPI key`. ```python openai_api_key = st.text_input("Enter OpenAI API Key to access GPT-4o", type="password") serp_api_key = st.text_input("Enter Serp API Key for Search functionality", type="password") ``` The `type="password"` parameter ensures API keys are hidden when the user types them. ## Creating the Researcher and Planner Assistants Creating the Researcher and Planner assistants is like having two helpful friends for your AI Travel Agent. The Researcher loves to dig through travel websites to find the best flights, hotels, and attractions. The Planner organizes all that information into a neat, easy-to-follow travel plan. By dividing these tasks, our AI works more efficiently and accurately, just like how different people in a travel agency handle specific tasks. This way, you get well-researched options and a perfectly planned itinerary, making your trip-planning process smooth and enjoyable. Here, you check if the user has provided both the `OpenAI API` and the `SerpAPI key`. If so, it creates two instances of the Assistant class: one for the researcher and one for the planner. ```python if openai_api_key and serp_api_key: researcher = Assistant("gpt-4o", openai_api_key) planner = Assistant("gpt-4o", openai_api_key) ``` The Assistant class is part of the `phi.assistant module` and is used to create AI assistants with specific roles, instructions, and tools. The researcher assistant searches for travel-related information, while the planner assistant generates a draft itinerary based on the research results. ## Getting User Input for the Destination and Number of Days Before our AI Travel Agent can help plan your trip, it needs to know two important things: - Where you want to go. - How long will you be staying? Think of it like talking to a human travel agent: you need to tell them your destination and the length of your trip so they can find the best options for you. 
By asking for your destination and the number of days, our program gathers the necessary details to search for flights, hotels, and activities that fit your plans. This information is crucial because it helps tailor the travel recommendations to your needs. Here, you create two input fields in the Streamlit app: One for the user to enter the travel destination. One is for the user to enter the number of days they want to travel. ```python destination = st.text_input("Where do you want to go?") num_days = st.number_input("How many days do you want to travel for?", min_value=1, max_value=30, value=7) ``` The `st.number_input` function ensures that the user can only enter a number between 1 and 30, with a default value of 7 days. ## Generating the Itinerary An itinerary is just a detailed schedule of your trip. This means you'll create a clear plan that tells us exactly what to do and when during our trip. This way, you won't miss any important bookings or exciting experiences. Your AI Travel Agent handles this for you, making your travel planning as easy as chatting with a friend. You will create a button in the `Streamlit` app that, when clicked, generates the travel itinerary. When you click the button, the code enters a spinner state, indicating that the app is processing the request. ```python if st.button("Generate Itinerary"): with st.spinner("Processing..."): response = planner.run(f"{destination} for {num_days} days", stream=False) st.write(response) ``` The `planner.run()` method is then called, passing the user's destination and number of days as arguments. The `stream=False` parameter ensures that the entire response is returned immediately, rather than streaming. Finally, the response from the planner assistant is displayed on the `Streamlit` app using `st.write()`. ## Conclusion We've built something cool: a buddy to help you plan your trips using just a bit of code. We kept it simple, so you don't have to worry about complicated stuff. We used tools like `Streamlit`, `Phidata`, `OpenAI's` GPT-4, and `Google Search Results`. These tools work together to make your travel planning easy. We started by setting up our coding space and added the tools we need. With `Streamlit`, we made a simple place to type where you want to go and for how long. Our buddy then puts together a plan just for you. When you click a button, your personalized plan pops up. No more stressing over details; our buddy's got it covered. This code snippet creates a `Streamlit` app that allows users to input their travel preferences, which researchers and planner assistants use to generate a personalized travel itinerary.
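One note on the assistant setup: the simplified `Assistant("gpt-4o", openai_api_key)` call shown earlier doesn't yet use the `OpenAIChat` and `SerpApiTools` imports, or the `serp_api_key` you collected. In a fuller setup you would typically wire them in roughly like this; the exact parameter names can differ between phidata versions, so treat it as a sketch and check the phidata docs:

```python
researcher = Assistant(
    llm=OpenAIChat(model="gpt-4o", api_key=openai_api_key),
    tools=[SerpApiTools(api_key=serp_api_key)],  # lets the researcher run web searches
    description="Searches the web for flights, hotels, and attractions.",
)
planner = Assistant(
    llm=OpenAIChat(model="gpt-4o", api_key=openai_api_key),
    description="Turns the research results into a day-by-day itinerary.",
)
```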
spiff
1,863,797
Cyclic has shut down and I am migrating my data to another service
A few weeks ago, I wrote an article on deploying your full-stack application, which you can find...
23,487
2024-06-13T23:00:00
https://dev.to/devlawrence/cyclic-has-shut-down-and-i-am-migrating-my-data-to-another-service-4nj6
webdev, javascript, programming, learning
A few weeks ago, I wrote an article on deploying your full-stack application, which you can find [here](https://dev.to/devlawrence/how-to-deploy-your-fullstack-website-my-approach-3f6l). One of the hosting services I discussed was Cyclic. Unfortunately, Cyclic has since shut down. In this article, I'll explore why I believe they closed and share the best alternative I've found for migrating my data to a new platform. Let's dive in! 😉 ## What is Cyclic? Cyclic is a popular platform that provided hosting solutions for full-stack web applications specifically for JavaScript application as at the time I deployed my application to their services. ## Why did they shut down? [Here](https://www.cyclic.sh/posts/cyclic-is-shutting-down/) is an article I found on their website as to why they are shutting down but I honestly, I think the exact reasons were not disclosed, but here are some potential factors that could have led to Cyclic's decision to cease operations 👇🏽 ### 💸 Financial challenges Running a hosting platform requires significant infrastructure investments and operational costs. And I believe that they were not able to get more customers to offset their expenses and this was what they said on their blog 👉🏽 **“we were not able to sell enough to make it self sustaining”.** And why I am saying this from a good place. I felt they gave just too much for free without striking a proper balance in their pricing tier. ### 💻 Technical or operational challenges Maintaining a hosting platform can be technically complex, and issues related to scalability, security, or other operational challenges could have made it increasingly difficult for Cyclic to continue providing reliable services. ### 🔀 Strategic shift Companies sometimes pivot their focus or change their business strategy, which could involve discontinuing existing products or services. While there may be other reasons I haven't mentioned, I believe these three factors contributed to their shutdown. So this now leads us to 👇🏽 ## Where am I migrating my data to? So if you went through the first link of the article you might known already. But nevertheless, I will be migrating my data to the Sanity(sanity.io). So Sanity is a headless CMS (Content management System) which means the content like text, images, videos, etc. is stored separately from the way it's presented on the website. And the cool feature I like about them is that they offer a cloud-hosted option where you don't need to manage your own server infrastructure. So my full stack website [Nike webstore](https://nike-webstores.netlify.app/) (under maintenance though 😅) was initially built with the MERN stack so you can imagine moving my data from mongoDB to Sanity and I also have to use another method in querying my data. ## What I learned from all these? Besides learning new techniques for migrating data without breaking anything (yet 😅), I also gathered some valuable insights: always strike the right balance between free and paid offerings to incentivize upgrades. Continuously evaluate and adjust pricing models as your product and the market evolve. Focus on converting free users to paying customers through effective strategies. Diversify revenue streams beyond just a freemium model. Consistently improve the product and user experience to justify paid plans. (💡 Just a side note: avoid doing this from your main branch. 
😅) 👇🏽 💡 *Just to be clear, I really enjoyed Cyclic services because they really made deploying full stack applications very easy which was why I even wrote the article.* ## Conclusion Alright guys, that’s the end of the article 🎊🎊. I hope you found it informative, and I would love to hear about better alternatives for migrating your MERN stack app to free web services. Have an amazing weekend, and see you next week! 😃.
devlawrence
1,887,741
Docker + MariaDB
Database popular among developers. Get container pull. docker pull "mariadb" ...
0
2024-06-13T22:46:14
https://dev.to/thiagoeti/docker-mariadb-4n2c
docker, mariadb
A popular database among developers. #### Get the container image (**pull**). ```console docker pull "mariadb" ``` #### Create a **network** for the container. ```console docker network create "mariadb" ``` #### Create a **volume** for the data. ```console docker volume create "mariadb" ln -s "/var/lib/docker/volumes/mariadb" "/data/volume/" ``` #### Create and **run** the container. ```console docker run --name "mariadb" \ -p 3306:3306 \ --network "mariadb" \ -v "mariadb":"/var/lib/mysql" \ -e MARIADB_ROOT_PASSWORD="master" \ --restart=always \ -d "mariadb":"latest" ``` > The volume is mounted at `/var/lib/mysql`, which is where the MariaDB image stores its data, and the root password is set with "MARIADB_ROOT_PASSWORD". #### Start the container. ```console docker start "mariadb" ``` #### Access the container. ```console docker exec -it "mariadb" "/bin/bash" ``` --- [https://github.com/thiagoeti/docker-mariadb](https://github.com/thiagoeti/docker-mariadb)
thiagoeti
1,887,736
Understanding the differences between micro-service and monolithic architecture.
One of the initial decisions you'll face when building an application is choosing the right...
0
2024-06-13T22:38:59
https://dev.to/xcoder03/understanding-the-differences-between-micro-service-and-monolithic-architecture-14dc
webdev, softwareengineering, learning, discuss
One of the initial decisions you'll face when building an application is choosing the right architecture. This choice will significantly impact your application's scalability, maintainability, and overall success. Two popular approaches are monolithic and microservice architecture. This article will explain the differences between these two approaches, which is crucial for making an informed decision. To understand them better, we are going to use an illustration, so let's get started. ## Monolithic Architecture Meet "The Guardian," a lone superhero. He is extremely strong, swift, and agile. He is powerful. He defends the city from evildoers, but his might alone is the only thing that keeps the day from falling apart. Should he be hurt or unable to serve, the city is left exposed. This is comparable to monolithic architecture, in which an application is confined within a single, self-contained unit. The entire system is vulnerable if one component malfunctions or is corrupted. The word "monolith" itself suggests something massive and immovable. In a monolithic architecture, all components are tightly coupled and interconnected. ## Advantages - Easier to develop and deploy. - Simplified testing and debugging. - Faster communication between components. - Lower overhead in terms of resources and infrastructure. ## Disadvantages - Difficult to scale and maintain. - A single point of failure can bring down the entire system. - Limited flexibility and adaptability. - Can become bulky and hard to manage as the application grows. ## Micro-Service Architecture Now imagine a group of superheroes, each possessing unique abilities and powers. "The Spark" (energy projection), "The Cyclone" (speed), "The Guard" (protection), and others are among them. They cooperate to keep the city safe, but if one hero is hurt or unable to serve, the others can step in and keep it protected. This is comparable to microservice architecture, in which an application comprises several independent services. Every service has a distinct task, so if one fails, the other services can still function, reducing the damage. Each service is responsible for a specific task or functionality. Services communicate with each other through APIs. ## Advantages - Scalable and flexible. - Allows for independent development and deployment of services. - Improves fault tolerance and reduces the risk of a single point of failure. - Enables the use of different programming languages and frameworks. ## Disadvantages - More complex to develop and deploy. - Higher overhead in terms of resources and infrastructure. - Requires careful planning and coordination. - Can be challenging to manage and orchestrate services. **When to Apply Each:** **Monolithic Architecture:** - Well-suited for applications of small to medium scale. - Perfect for applications with straightforward, well-defined specifications. - Simpler to create and implement. **Micro-Service Architecture:** - Fit for complex, large-scale applications. - Perfect for applications with several separate components. - Provides increased flexibility and scalability. By understanding the differences between monolithic and microservice architecture, you can make an informed decision about which approach best fits your application's needs. To learn more, check out [Monoliths vs Microservices](https://blog.openreplay.com/monoliths-vs-microservices).
xcoder03
1,887,735
Vanilla CSS Animations Suck 👎 But I fixed them...
Creating animations using vanilla CSS can be a daunting task for developers. While animations add a...
0
2024-06-13T22:32:35
https://dev.to/max_prehoda_9cb09ea7c8d07/vanilla-css-animations-suck-but-i-fixed-them-4jhm
Creating animations using vanilla CSS can be a daunting task for developers. While animations add a touch of flair and interactivity to web projects, the process of crafting them from scratch can be time-consuming and frustrating. One of the primary hurdles is the syntax. CSS animations come with a plethora of properties, each serving a specific purpose. From animation-name and animation-duration to animation-timing-function and animation-fill-mode, the sheer number of options can be overwhelming. Mastering the intricacies of each property requires dedication and practice. Furthermore, creating smooth and complex animations often involves meticulously defining keyframes. Each keyframe represents a specific point in the animation timeline, and developers must specify the desired styles at each stage. This process can be tedious, especially for intricate animations that require numerous keyframes. Browser compatibility is another significant challenge. Despite following best practices and adhering to standards, animations may render differently across various browsers. Developers often find themselves grappling with vendor prefixes and fallback solutions to ensure consistent performance. This compatibility conundrum adds an extra layer of complexity to the already arduous task of crafting animations. While JavaScript libraries and CSS animation frameworks offer some relief, they come with their own set of considerations. Dependency management and additional overhead can be concerns for developers striving for lightweight and streamlined projects. However, there's a solution that simplifies the process of creating CSS animations. AI CSS Animations ([aicssanimations.com](aicssanimations.com)) leverages the power of artificial intelligence to generate flawless CSS animation code based on simple descriptions. By abstracting away the complexities of keyframes and vendor prefixes, AI CSS Animations lets developers focus on the creative aspects of animation design. So, if you find yourself struggling with vanilla CSS animations, give AI CSS Animations a try. It streamlines the animation creation process, saving you valuable time and effort. Happy animating!
max_prehoda_9cb09ea7c8d07
1,887,720
📚 How to Handle Multiple MSW Handlers in Storybook Stories
Handling multiple states in Storybook stories can often be problematic, especially when different...
0
2024-06-13T22:27:37
https://dev.to/enszrlu/how-to-handle-multiple-msw-handlers-in-storybook-stories-2mo2
storybook, webdev, frontend, programming
Handling multiple states in Storybook stories can often be problematic, especially when different handlers are required to simulate various states. This tutorial will guide you through the process of managing these handlers effectively to ensure each story showcases the correct state without persistent issues. ## 1. Problem Overview When using multiple handlers to showcase different states in a Storybook story, the first handler persists, leading to incorrect states being displayed. This often requires manually reloading the page to reset the handlers, which is not an ideal solution. We will solve this issue by implementing a custom decorator to force reload the story. ## 2. What is Storybook and MSW? Storybook is an open-source tool for developing UI components in isolation for frameworks like React, Vue, and Angular. It streamlines UI development, testing, and documentation by allowing developers to create and visualize components in a dedicated environment, independent of the main application. This enables more efficient debugging, testing, and showcasing of individual components, leading to a more robust and maintainable codebase. Aka a fancy component library tailored to your project or organisation. MSW (Mock Service Worker) is a powerful tool for mocking API requests in both client-side and server-side applications. It intercepts network requests at the network layer, allowing developers to simulate different responses and states such as loading, error, and success. By integrating MSW with Storybook, developers can create realistic scenarios for their components, ensuring comprehensive testing and consistent behavior across different states. ## 3. Creating a basic React Component Let's start by creating a simple CardList component that fetches data and handles loading, error, and empty data states. ```js import React, { useEffect, useState } from 'react'; const CardList = () => { const [cards, setCards] = useState([]); const [loading, setLoading] = useState(true); const [error, setError] = useState(null); useEffect(() => { fetch('/api/cards') .then((response) => { if (!response.ok) { throw new Error('Network response was not ok'); } return response.json(); }) .then((data) => { setCards(data.cards); setLoading(false); }) .catch((error) => { setError(error); setLoading(false); }); }, []); if (loading) { return <div>Loading...</div>; } if (error) { return <div>Error: {error.message}</div>; } if (cards.length === 0) { return <div>No cards available</div>; } return ( <div className="card-list"> {cards.map((card) => ( <div key={card.id} className="card"> <h2>{card.title}</h2> <p>{card.description}</p> </div> ))} </div> ); }; export default CardList; ``` ## 4. Creating a Basic Story Next, we will create a basic Storybook configuration for our CardList component. ```js import { StoryObj } from '@storybook/react'; import CardList from './CardList'; const meta = { title: 'Components/CardList', component: CardList, }; export default meta; type Story = StoryObj<typeof CardList>; export const Default: Story = {}; ``` ## 5. Implementing MSW Mock To simulate data fetching, we will implement msw mocks for our Default story. This will enable us to setup a mock server which will respond to fetch requests in our component, therefore we will see mock data in our story. 
**Note: **MSW Addon needs to be setup correctly for this to work, please refer to [Storybook MSW Addon](https://storybook.js.org/addons/msw-storybook-addon) ```js import { rest } from 'msw'; const handlers = { success: [ rest.get('/api/cards', (req, res, ctx) => { return res( ctx.json({ cards: [ { id: '1', title: 'Card 1', description: 'Description 1' }, { id: '2', title: 'Card 2', description: 'Description 2' }, ], }) ); }), ], }; export const Default: Story = { parameters: { msw: handlers.success, }, }; ``` ## 6. Adding Other Handlers and Stories Of course only adding default state is not enough, we want to see all of the possible states in Storybook. You might be wondering, how? Exactly same as we did before, more handlers more fun! We will now add additional handlers and corresponding stories to simulate various states. ```js const handlers = { success: [ rest.get('/api/cards', (req, res, ctx) => { return res( ctx.json({ cards: [ { id: '1', title: 'Card 1', description: 'Description 1' }, { id: '2', title: 'Card 2', description: 'Description 2' }, ], }) ); }), ], empty: [ rest.get('/api/cards', (req, res, ctx) => { return res(ctx.json({ cards: [] })); }), ], loading: [ rest.get('/api/cards', (req, res, ctx) => { return res(ctx.delay('infinite')); }), ], serverError: [ rest.get('/api/cards', (req, res, ctx) => { return res(ctx.status(500)); }), ], }; export const Empty: Story = { parameters: { msw: handlers.empty, }, }; export const Loading: Story = { parameters: { msw: handlers.loading, }, }; export const ServerError: Story = { parameters: { msw: handlers.serverError, }, }; ``` ## 7. Problem Explanation But wait, does this work? Partially... When using multiple handlers to showcase different states in a Storybook story, the first handler persists, leading to incorrect states being displayed. This often requires manually reloading the page to reset the handlers, which is not an ideal solution! ## 8. Fixing the Problem with forceReloadDecorator Of course, I did not write this article to talk about problems. We are here to solve problems baby! To solve the problem of handlers persisting across different stories, we will create a custom decorator that forces a reload. Feels bit rogue, bit hacky.. but you know what, it works better than any other over-engineered solution out there. Now when switching between the stories, story container will be reloaded very quickly. This will trigger re-initialisation of handlers and solve our problem. ```js const forceReloadDecorator = (storyFn, context) => { if (context.globals.shouldReload) { context.globals.shouldReload = false; window.location.reload(); } context.globals.shouldReload = true; return storyFn(); }; const meta = { title: 'Components/CardList', component: CardList, decorators: [forceReloadDecorator], }; export default meta; ``` To ensure the implementation works, run your Storybook and navigate through the different stories. Each story should now reflect the correct state without persisting the handlers from previous stories. In this tutorial, we addressed the issue of persistent handlers in Storybook stories and implemented a solution using a force reload decorator. This approach ensures that each story displays the correct state, improving the development and testing experience. ## Further Reading [Storybook Documentation](https://storybook.js.org/docs/get-started) [MSW Worker](https://storybook.js.org/addons/msw-storybook-addon)
enszrlu
1,887,733
Day 970 : Chosen
liner notes: Professional : So many meetings... again. Learned that my Visa application was denied...
0
2024-06-13T22:25:24
https://dev.to/dwane/day-970-chosen-1ek3
hiphop, code, coding, lifelongdev
_liner notes_: - Professional : So many meetings... again. Learned that my Visa application was denied because there was another option that I should have chosen that I guess is new. Refilled out the long form again only to find out I need some form that I have no clue how to get. Hopefully the organizer knows how to get it. Spent a few minutes refactoring a project. - Personal : Went through some tracks for the radio show. Did some refactoring on the highlight video creator. I abstracted some logic that I was using multiple times into one function that I can call with different options. Also got it to where I can set the text on the title and end screens and change the colors. Super basic, but it works for what I need. I picked the projects on Bandcamp that I'll be buying this week and put them in a document. ![The image shows a beautiful aerial view of the Rock Islands in Palau. The islands are covered in lush green vegetation and surrounded by crystal-clear blue waters. The islands are various shapes and sizes, and they are all surrounded by coral reefs. The Rock Islands are a popular tourist destination, and they are known for their beautiful scenery and rich marine life.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kl0u6ldow2polv4o0l0k.jpg) The AI dashboard that I've been using to generate the alt image tags for this post bugged out and gave a pretty jumbled response. It started off fine, but then started repeating itself and leaving words out of sentences. I guess we are still far from AI taking our jobs. haha Going to pick up the projects on Bandcamp and put together my social media posts for Friday. Can't believe it's already Friday. Going to work on putting together tracks for the radio show. Really need to get this logo done. haha Have a great night! peace piece Dwane / conshus https://dwane.io / https://HIPHOPandCODE.com {% youtube wAXipetexgk %}
dwane
1,887,723
Introducing jekyll-crypto-donations: Easily Add Crypto Donation Blocks to Your Jekyll Site
Hello Dev Community! I'm excited to share the release of my new Jekyll plugin,...
0
2024-06-13T22:18:11
https://dev.to/madmatvey/introducing-jekyll-crypto-donations-easily-add-crypto-donation-blocks-to-your-jekyll-site-3557
staticwebapps, jekyll, ruby, cryptocurrency
Hello Dev Community! I'm excited to share the release of my new Jekyll plugin, `jekyll-crypto-donations`. This gem allows you to seamlessly integrate cryptocurrency donation blocks into your Jekyll-generated websites. Whether you're a blogger, content creator, or developer, this plugin can help you receive support from your audience through crypto donations. ## Why jekyll-crypto-donations? Cryptocurrency donations offer a decentralized and borderless way to receive support from your audience. With the rise of digital currencies, it's essential to have a simple solution for integrating donation options into your site. That's where `jekyll-crypto-donations` comes in. This plugin provides a straightforward way to display donation addresses and track received funds in Bitcoin (BTC), Ethereum (ETH), and USDT (TRC-20). ## Key Features - **Easy Integration:** Add crypto donation blocks to your Jekyll site with minimal configuration. - **Support for Multiple Cryptocurrencies:** Display donation addresses and total received funds for Bitcoin, Ethereum, and USDT (TRC-20). - **Responsive Design:** The plugin includes CSS that ensures your donation block looks great on both light and dark themes. - **Copy and QR Code Buttons:** Users can easily copy your donation address or scan a QR code for convenient transfers. - **Customizable:** Configure wallet addresses in your Jekyll site's `_config.yml` file. ## Installation To get started, add the gem to your Jekyll site's `Gemfile`: ```ruby group :jekyll_plugins do gem 'jekyll-crypto-donations' end ``` Next, configure your wallet addresses in `_config.yml`: ```yaml crypto_donations: btc_address: "your-bitcoin-address" eth_address: "your-ethereum-address" usdt_address: "your-usdt-address" ``` ## Usage To include the donation block in your pages or posts, use the `{% crypto_donations %}` Liquid tag. For example: ```markdown ## Support Us {% crypto_donations Your support helps us keep creating awesome content! %} ``` ## Example You can see a live demo of the plugin in action on my website: [Demo](https://madmatvey.github.io/about/#donate-me). ## Get Involved I'm continuously working on improving this plugin, and I welcome contributions from the community. If you have suggestions, feature requests, or find any bugs, please open an issue or submit a pull request on [GitHub](https://github.com/madmatvey/jekyll-crypto-donations). ## Conclusion I'm thrilled to bring `jekyll-crypto-donations` to the Jekyll community. This plugin aims to make it easier for creators to receive support from their audience through cryptocurrency donations. I hope you find it useful and look forward to seeing it in action on your sites. Happy coding! [Eugene Leontev](https://madmatvey.github.io/) --- Feel free to reach out if you have any questions or need assistance with the setup. Let's continue building and supporting each other in the open-source community!
madmatvey
1,887,722
Secure your enterprise critical assets from secret sprawl
Understand the risks of secret sprawl, embracing shift-left and strategies to secure secret leaks in...
0
2024-06-13T22:02:23
https://dev.to/security-and-technology/secure-your-enterprise-critical-assets-from-secret-sprawl-4ajj
enterprise, cybersecurity, secrets, shiftleft
Understand the risks of secret sprawl, embracing shift-left and strategies to secure secret leaks in the modern software development lifecycle. --- ## Secret sprawl: Enterprises often need help with the uncontrolled proliferation of secrets across their IT infrastructure. The unchecked proliferation is called a secret sprawl, and usually, secrets get scattered across server systems, repositories, configuration files, applications, and other storage locations. The risk from secret sprawl can compromise security and enable unauthorized access, thus making it essential for organizations to address the issue. ### Risks: **High Blast radius:** When sensitive secrets get dispersed across multiple locations, the attack surface for possible data breaches multiplies. When attackers gain access to the secret, it can lead to unauthorized access and breaches. **Insider threat:** Even if the sprawled secret exists within the enterprise VPC, unauthorized employees can access sensitive assets violating least privilege access. This insider threat can lead to compromised security posture and the theft of sensitive data. **Application downtime:** When a sprawled secret expires or reaches the end of life due to a set TTL, determining the side effects is often complex and time-consuming. The effort is high to estimate the number of applications using the hardcoded secret and the importance of those applications to business operations. Hence, rotation or expiration of such secrets without proper usage analysis can lead to application downtime, affecting users and other applications. **Lack of visibility:** Since SecOps cannot monitor hardcoded secret usage, it isn't easy to track the entities accessing the secret - making audit and access control challenging. **Compliance:** Regulatory compliance policies such as GDPR, PCI DSS, HIPAA, NIST, etc… require enterprises to safeguard secrets that can access sensitive user information. Exposure of these secrets internally within the organization or externally in public repositories can lead to hefty fines and distrust among users. ## Prevent Secret Sprawl Proactively Enterprises often adopt the following best practices to prevent sprawl from happening and maintain governance: **Centralized secrets management:** As organizations adopt a multi-cloud approach, developers store secrets in native secret managers like AWS Secrets Manager, Azure Key Vault, GCP Secrets Manager, etc. While this is convenient for developers, it creates multiple secret store hotspots and complicates audit and visibility for security operations. Hence, a centralized secrets management experience is essential. **Dynamic secrets:** When an application requests a secret, creating a just-in-time secret with role-based access policies can remove secrets with a long lifetime and thus prevent any sprawl. It is essential to understand the required secret TTL based on application needs. **Secret Scanners:** Secret scanners can scan to detect sprawled secrets from popular hotspots such as repositories, container images, applications, server system files, etc. While scanning leaked secrets seems essential, the secret scanners can also help prevent future secret sprawls by preemptively guiding developers and security teams in the right direction. **Education and Training:** Enterprises should train and create security awareness programs to educate developers and security operations about the importance of collaboration and the risks associated with sprawl. 
Educated users are more likely to adhere to best practices and contribute to preventing sprawl. ## Shift-Left in Security To proactively reduce secret sprawl, enterprises are moving from the traditional model of siloed security and development teams to an integrated, developer-first security approach. The main reasons driving the integrated approach are the following: **Agility and faster time to market:** Enterprise products aim to stay relevant and competitive in a fast-paced software market. Constant changes in business requirements demand rapid development and release processes, which are often hindered by a siloed security team. An integrated security team can support rapid releases without compromising security. **Early Issue Identification and Remediation:** Teams can identify and address issues sooner by shifting tasks such as testing, security reviews, and code analysis to earlier stages of the SDLC. Doing so reduces the cost and effort required to fix issues discovered later in the development process and minimizes the impact on project timelines. **Risk Reduction:** Independent security and development teams may not address security vulnerabilities promptly. An integrated team can proactively identify and address security threats on time. Embracing the trend emphasizes that security is no longer only a dedicated security team's concern; instead, responsibility is shared with the development teams. The approach is often termed "shift-left"; enterprises that adopt this paradigm constantly seek to integrate security practices early in the software development lifecycle (SDLC) to address vulnerabilities. With a shift-left paradigm, developers refrain from embedding secrets such as API tokens, passwords, and encryption keys in their source code. While this is ideal in theory, the SDLC has multiple checkpoints; thus, developer education alone is not enough to maintain compliance. So, how can enterprises preserve developer productivity while ensuring security best practices in program files? This need has led to security collaboration products that can centrally manage and provide visibility into secrets embedded in source code. ## Attack Vectors As we discuss the importance of shift-left and security in source code, it is imperative to understand how modern development practices can introduce attack vectors that increase risk. Attack vectors refer to the various points within the development process where malicious actors can exploit vulnerabilities to gain unauthorized access. Here are some common attack vectors that attackers look for: **OSS/3rd party library usage:** To save time, developers often use OSS/3rd-party libraries to extend the application's functionality. When such a library has a security vulnerability, it is easier for attackers to target products that use it and exploit the vulnerability. **Configuration files:** For modularity and portability, developers often use configuration files to store API endpoints and secrets. IaC tools like Terraform and containerization tools like Docker also rely on configuration files. Secrets in configuration files can lead to unauthorized access, service impacts, and many other threats. **Log files:** While logging is helpful for debugging, developers may accidentally log sensitive information to logs, console outputs, or error messages, leading to unauthorized access.
**Comments:** Developers may inadvertently include secrets as comments during debugging or documentation. While comments are not executed, they are still visible in the source code. Developers manage their source code using version control software like GitHub, GitLab, and BitBucket. These repositories often become central repositories for sensitive information. Recognizing all potential attack vectors, evaluating each checkpoint in the Software Development Life Cycle (SDLC), and implementing strategies to ensure developers adhere to security best practices throughout the process are crucial.
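To make the point about checkpoints concrete: many teams wire an open-source secret scanner into the developer workflow so that leaked credentials are caught before code ever reaches a shared repository. With a tool such as gitleaks, for example, scanning the current repository locally is typically a single command (flags vary between versions, so check the tool's documentation):

```console
gitleaks detect
```

The same scan can run in a pre-commit hook or a CI pipeline so every change is checked automatically.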
security-and-technology
1,854,419
Dev: AI
An AI Developer, also known as an Artificial Intelligence Developer, is a professional specializing...
27,373
2024-06-13T22:00:00
https://dev.to/r4nd3l/dev-ai-29ll
ai, developer
An **AI Developer**, also known as an Artificial Intelligence Developer, is a professional specializing in the design, development, and implementation of artificial intelligence (AI) systems and algorithms. Here's a detailed description of the role: 1. **Understanding of Artificial Intelligence:** - AI Developers possess a deep understanding of various AI techniques, algorithms, and methodologies, including machine learning, deep learning, natural language processing (NLP), computer vision, and reinforcement learning. - They stay updated with the latest advancements in AI research, frameworks, libraries, and tools to leverage cutting-edge technologies and best practices in their projects. 2. **Problem Analysis and Solution Design:** - AI Developers analyze complex problems, business challenges, or opportunities where AI can provide value-added solutions or automation. - They collaborate with stakeholders, domain experts, and data scientists to define project requirements, objectives, success criteria, and key performance indicators (KPIs) for AI applications. 3. **Data Acquisition and Preprocessing:** - AI Developers collect, preprocess, and prepare large volumes of structured and unstructured data from diverse sources, such as databases, APIs, files, sensors, and web scraping. - They perform data cleaning, transformation, normalization, feature engineering, and dimensionality reduction to improve data quality, reduce noise, and extract relevant features for AI modeling. 4. **Machine Learning Model Development:** - AI Developers design, train, and evaluate machine learning models using supervised, unsupervised, semi-supervised, and reinforcement learning techniques. - They select appropriate algorithms, loss functions, optimization methods, and hyperparameters to build predictive models, classifiers, regressors, clustering algorithms, or recommender systems. 5. **Deep Learning Model Development:** - AI Developers develop neural network architectures, layers, and configurations for deep learning models using frameworks like TensorFlow, PyTorch, Keras, or MXNet. - They implement convolutional neural networks (CNNs) for image recognition, recurrent neural networks (RNNs) for sequential data analysis, and transformer models for natural language processing (NLP) tasks. 6. **Natural Language Processing (NLP):** - AI Developers apply NLP techniques to process, understand, and generate human language data, including text analysis, sentiment analysis, named entity recognition (NER), text summarization, machine translation, and chatbots. - They use libraries and models such as spaCy, NLTK, Gensim, BERT, and GPT to perform NLP tasks and develop conversational AI systems. 7. **Computer Vision and Image Processing:** - AI Developers work on computer vision projects to analyze, interpret, and extract information from digital images and videos. - They implement image classification, object detection, segmentation, feature extraction, and image generation algorithms using frameworks like OpenCV, TensorFlow Object Detection API, and PyTorch Vision. 8. **Model Evaluation and Performance Optimization:** - AI Developers evaluate the performance of AI models using metrics such as accuracy, precision, recall, F1 score, ROC curve, and confusion matrix. - They fine-tune model parameters, adjust training strategies, apply regularization techniques, and handle overfitting or underfitting to improve model generalization and robustness. 9. 
**Deployment and Integration:** - AI Developers deploy trained models into production environments, containerize them using platforms like Docker, and deploy them to cloud services like AWS, Azure, or Google Cloud Platform (GCP). - They integrate AI capabilities into existing software applications, systems, or workflows using APIs, SDKs, microservices, or serverless architectures. 10. **Continuous Learning and Innovation:** - AI Developers engage in continuous learning, experimentation, and innovation to explore new AI technologies, algorithms, and applications. - They participate in AI communities, forums, conferences, and workshops to share knowledge, collaborate on projects, and contribute to the advancement of AI research and development. In summary, an AI Developer plays a pivotal role in leveraging artificial intelligence techniques and technologies to build intelligent systems, automate tasks, make data-driven decisions, and create value for businesses, industries, and society as a whole. By combining expertise in AI algorithms, programming languages, data science, and domain knowledge, they tackle complex challenges and drive innovation in various domains such as healthcare, finance, manufacturing, transportation, and entertainment.
r4nd3l
1,887,721
Creating an AI-driven experience using Twilio
Creating an AI-driven experience using Twilio can open up many possibilities for interactive and...
0
2024-06-13T21:55:56
https://dev.to/hussein09/creating-an-ai-driven-experience-using-twilio-31he
devchallenge, ai, twilio
Creating an AI-driven experience using Twilio can open up many possibilities for interactive and automated services. One compelling application is setting up an AI-powered SMS chatbot that can handle customer inquiries, book appointments, or provide information. Here’s a step-by-step guide on how to build this experience using Twilio and OpenAI: **Step 1**: Set Up Twilio Account 1. **Create a Twilio Account**: Sign up for a Twilio account if you don't have one. 2. **Get a Twilio Phone Number**: Purchase a phone number from Twilio capable of sending and receiving SMS. **Step 2**: Set Up Python Environment Ensure you have Python installed. Install the required libraries: ```bash pip install twilio flask openai ``` **Step 3**: Create a Flask Application Set up a Flask web application to handle incoming SMS messages and interact with the OpenAI API. Create `app.py` ```python from flask import Flask, request, jsonify from twilio.twiml.messaging_response import MessagingResponse import openai app = Flask(__name__) # Set your OpenAI API key openai.api_key = 'YOUR_OPENAI_API_KEY' @app.route("/sms", methods=['POST']) def sms_reply(): """Respond to incoming SMS messages with a friendly AI-powered message.""" # Get the message from the request incoming_msg = request.form.get('Body') resp = MessagingResponse() # Use OpenAI to generate a response ai_response = openai.Completion.create( model="text-davinci-002", prompt=f"Respond to this message: {incoming_msg}", max_tokens=150 ) # Extract the text from the AI response response_text = ai_response.choices[0].text.strip() # Create the Twilio response resp.message(response_text) return str(resp) if __name__ == "__main__": app.run(debug=True) ``` **Step 4**: Configure Twilio Webhook 1. **Deploy the Flask Application**: You can deploy it on a cloud platform such as Heroku, AWS, or any other hosting service. 2. **Set Up the Webhook**: In your Twilio console, configure your phone number's webhook to point to your Flask application's URL. For example, if deployed on Heroku, it might be `https://your-app.herokuapp.com/sms`. **Step 5**: Test the Chatbot Send an SMS to your Twilio number and see the AI respond based on the prompt it receives. You should see the chatbot's responses generated by OpenAI's GPT-3. **Step 6**: Enhance the Chatbot To improve the chatbot's capabilities, consider the following: 1. **Context Handling**: Implement session management to maintain the context of conversations. 2. **Custom Prompts**: Customize prompts to make responses more relevant to your use case. 3. **Additional Features**: Add functionalities like appointment booking, FAQs, or connecting to other APIs for richer interactions. Here’s an example of enhancing the chatbot to handle basic conversation context: ```python from flask import Flask, request, jsonify, session from twilio.twiml.messaging_response import MessagingResponse import openai app = Flask(__name__) app.secret_key = 'your_secret_key' # Set your OpenAI API key openai.api_key = 'YOUR_OPENAI_API_KEY' @app.route("/sms", methods=['POST']) def sms_reply(): incoming_msg = request.form.get('Body') resp = MessagingResponse() # Retrieve the conversation history from the session if 'conversation' not in session: session['conversation'] = [] session['conversation'].append(f"User: {incoming_msg}") # Use OpenAI to generate a response conversation = "\n".join(session['conversation']) ai_response = openai.Completion.create( model="text-davinci-002", prompt=f"The following is a conversation with an AI assistant. 
The assistant is helpful, creative, clever, and very friendly.\n\n{conversation}\nAI:", max_tokens=150, stop=None, temperature=0.9 ) response_text = ai_response.choices[0].text.strip() session['conversation'].append(f"AI: {response_text}") # Create the Twilio response resp.message(response_text) return str(resp) if __name__ == "__main__": app.run(debug=True) ``` This code snippet maintains a conversation context by storing the history in a session variable. This way, the AI can provide more contextually relevant responses based on the conversation history. By following these steps, you can create a sophisticated AI-driven SMS chatbot leveraging Twilio and OpenAI, providing an interactive and automated experience for users.
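One practical tip for Step 4: if you want to exercise the webhook before deploying anywhere, a tunneling tool such as ngrok can expose your local Flask server (which listens on port 5000 by default) to the internet:

```bash
ngrok http 5000
```

Then point your Twilio number's SMS webhook at the generated HTTPS URL with `/sms` appended.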
hussein09
1,878,548
Let's Build HTTP Client From Scratch In Rust
Previously, we dived into how to make an HTTP parser with no dependencies. In this blog let's extend...
0
2024-06-13T21:45:26
https://dev.to/hiro_111/lets-build-http-client-from-scratch-in-rust-46ad
rust, dns, http
[Previously](https://dev.to/hiro_111/lets-build-http-parser-from-scratch-in-rust-5gih), we dived into how to make an HTTP parser with no dependencies. In this blog let's extend our knowledge further by building an HTTP client CLI😉 (I call our brand-new CLI [`fetch`](https://github.com/hirotake111/rust_fetch), by the way) It would be very time-consuming if we implemented all the feature requirements for modern HTTP client CLI. Instead let’s just build a simple one that only supports HTTP/1.1 protocol using IPv4 Address (no HTTP/2, 3, and IPv6. Also some features in HTTP/1.1 will be missing). ## API Design Let's first think about how our brand-new CLI will be used. It should be similar to a well-known tool curl. But I like to have an HTTP method as an argument explicitly. Also we may want to POST data to a server using HTTP body (or not). So we have possibly three arguments passed to the CLI: - HTTP method - URL - HTTP body (Optional) And here is an example command. ```bash fetch post example.com '{"foo": "bar"}' ``` Now we can see what our internal API look like. Let's write our `main` function: ```rust fn main() { let mut args = std::env::args(); let program = args.next().unwrap(); if args.len() <= 1 { display_usage(&program); exit(1); } let method: Method = args.next().unwrap().parse().unwrap(); let url = args.next().unwrap(); let body = args.next(); let client = Client::new(); // Initialize a client // Make an HTTP request let response = client.perform(method, url, body).unwrap(); println!("{response}"); } ``` That's enough now. ## DNS Client Now let's write our client struct and its corresponding method called `perform`: ```rust pub struct Client {}; impl Client { pub fn new() -> Self { Self {} } } impl Client { pub fn perform( &self, method: Method, url: String, body: Option<String>, ) -> Result<String, String> { let (protocol, url) = url.split_once("://").unwrap_or(("http", &url)); let (hostname, url) = match url.split_once('/') { Some((hostname, url)) => (hostname, format!("/{url}")), None => (url, "/".to_string()), }; // ...!? } } ``` Finally, I realized we need a way to resolve DNS in our project😓 Don't worry! I have implemented it [before](https://dev.to/hiro_111/building-a-simple-dns-client-in-rust-5bp0). Let's just copy-and-paste the code for the DNS client from my repository. ## HTTP Request Now we have an HTTP method, a server’s hostname, a URL path, a payload to be sent, and a server’s IP address. So what to do next? Well we need to create our HTTP request out of some parameters above. Let’s do it. 
```rust let request = HTTPRequest::new(method, hostname, &url, body); ``` In short, our HTTP Request struct looks like below: ```rust #[derive(Debug, Clone)] pub struct HTTPRequest { request_line: RequestLine, headers: HTTPHeaders, body: Option<String>, } impl HTTPRequest { pub fn new(method: Method, hostname: &str, url: &str, body: Option<String>) -> Self { let request_line = RequestLine::new(method, url); let headers: HTTPHeaders = vec![("Host".to_string(), hostname.to_string())].into(); Self { request_line, headers, body, } } } ``` Once we create an HTTP request, we can serialize and send it to the server via TCP stream: ```rust // Connect to a server let mut stream = match protocol { Protocol::HTTP => TcpStream::connect((addr, 80)).map_err(|e| e.to_string())?, Protocol::HTTPS => unimplemented!(), }; // Send an HTTP request to the server let request = HTTPRequest::new(method, hostname, &url, body); let n = stream .write(request.to_string().as_bytes()) .map_err(|e| e.to_string())?; println!("sent {n} bytes"); ``` ## HTTP Response Finally, we can receive HTTP response from the server, and make something processable (our HTTP Response) out of it: ```rust // After sending HTTP request, create a buf reader and get data in it let reader = BufReader::new(stream); let response = HTTPResponse::try_from(reader)?; println!("{:?}", response); ``` And our HTTPResponse struct should be like below: ```rust #[derive(Debug, Clone)] pub struct HTTPResponse { status_line: StatusLine, headers: HTTPHeaders, body: Option<String>, } impl<R: Read> TryFrom<BufReader<R>> for HTTPResponse { type Error = String; fn try_from(reader: BufReader<R>) -> Result<Self, Self::Error> { let mut iterator = reader.lines().map_while(Result::ok).peekable(); let status_line: StatusLine = iterator .next() .ok_or("failed to get status line")? .parse()?; let headers = HTTPHeaders::new(&mut iterator)?; let body = if iterator.peek().is_some() { Some(iterator.collect()) } else { None }; Ok(HTTPResponse { status_line, headers, body, }) } } ``` We are done! Unfortunately, not really. ## Test When I `cargo run` it, I found the program doesn’t finish after receiving an HTTP response from the server. ```bash cargo run -- get example.com // -> this just blocks and never ends... ``` Why? Our HTTPResponse parser implementation worked in the previous project. So it's supposed to work this time too... Well, it turned out, in my previous project I tested my HTTPResponse parser using data in a text file. However, in read world HTTP response there is no `end of file` section. So it turns out we need somehow stop reading the byte stream when we find an `empty line`. Here is updated version of my implementation: ```rust impl<R: Read> TryFrom<BufReader<R>> for HTTPResponse { type Error = String; fn try_from(reader: BufReader<R>) -> Result<Self, Self::Error> { // ... let mut body = vec![]; for data in iterator { // Break if it's just an empty line if data.is_empty() { break; } body.push(data); } Ok(HTTPResponse { status_line, headers, body: Some(body.join("\n")), }) } } ``` Then, let’s try it again. ```bash cargo run -- get example.com // -> works! ``` Congratulations!! However here is another challenge - when I run it for other URL (say, google.com) it again doesn’t finish… And this is the biggest lesson learned in the project for me. ## **HTTP/1.1 Persistent Connections** By default, HTTP/1.1 uses persistent connections. This means a single TCP connection can be used to send and receive multiple HTTP requests and responses. 
This improves efficiency by avoiding the overhead of re-establishing a connection for each request. Many servers, including `google.com`, use chunked transfer encoding for the body. This encoding allows the server to send the body in chunks of variable size, with each chunk preceded by its size information. Simply put, some HTTP servers never send an empty line to mark the end of the body.

My idea, after understanding the cause of the issue, was to read the `Content-Length` HTTP header and then read exactly that many bytes from the byte stream. So here is my final implementation of the `HTTPResponse` struct:

```rust
impl<R: Read> TryFrom<BufReader<R>> for HTTPResponse {
    type Error = String;

    fn try_from(reader: BufReader<R>) -> Result<Self, Self::Error> {
        // The use of .lines() splits the stream by new line (\n, or \r\n),
        // but this makes it impossible for us to parse the HTTP body.
        // So instead, leverage .split(b'\n')
        let mut iterator = reader.split(b'\n').map_while(Result::ok).peekable();
        let status_line: StatusLine = iterator
            .next()
            .ok_or("failed to get status line")?
            .try_into()?;
        let headers = HTTPHeaders::new(&mut iterator)?;
        // The length of the HTTP body
        let mut length = headers
            .0
            .get("Content-Length")
            .ok_or("HTTP header doesn't have Content-Length header in it")?
            .parse::<usize>()
            .map_err(|e| e.to_string())?;
        let mut body = vec![];
        for mut data in iterator {
            // .split() consumed the newline, so put it back
            data.push(b'\n');
            // saturating_sub avoids an underflow panic if the last chunk
            // is longer than the remaining length
            length = length.saturating_sub(data.len());
            body.push(data);
            if length == 0 {
                break;
            }
        }
        let body = body.into_iter().flatten().collect::<Vec<u8>>();
        let body = String::from_utf8(body).map_err(|e| e.to_string())?;
        Ok(HTTPResponse {
            status_line,
            headers,
            body: Some(body),
        })
    }
}
```

And here is the final result:

![final result of our HTTP client CLI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwv9fh1r2ly4iia2b59y.png)

🎉 Thanks for reading 😉
hiro_111
1,887,719
Babylon.js Browser MMORPG - DevLog - Update #10- Entities interpolation
Hey! Last two days I was working on syncing server and client clocks. On top of it I implemented...
0
2024-06-13T21:44:55
https://dev.to/maiu/babylonjs-browser-mmorpg-devlog-update-10-entities-interpolation-2a9g
babylonjs, indiegamedev, indie, mmorpg
Hey! For the last two days I was working on syncing the server and client clocks. On top of that I implemented entity interpolation. In the video I'm showing a situation where the server network tick is set to 4, meaning it sends messages to players every 250 ms, which simulates quite significant lag. Without interpolation the movement is super shaky; after enabling it, it looks very good considering the 250 ms lag. Next I'll work on client-side prediction and reconciliation, which will make a similar difference, but for the player.

{% youtube UvLH-9sCKiw %}
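For anyone curious what entity interpolation boils down to: the client buffers the timestamped snapshots it receives and renders each entity slightly in the past, blending between the two snapshots that surround the render time. The sketch below is not code from the actual project; the snapshot shape, the 250 ms delay, and all names are assumptions for illustration:

```typescript
// Hypothetical snapshot-interpolation sketch (not from the real project).
interface Snapshot {
  time: number; // server timestamp in ms (requires synced clocks)
  x: number;
  y: number;
  z: number;
}

// Render this far in the past, roughly one server tick at tick rate 4
const INTERPOLATION_DELAY_MS = 250;

class InterpolatedEntity {
  private buffer: Snapshot[] = [];

  // Called whenever a state update arrives from the server
  addSnapshot(snapshot: Snapshot): void {
    this.buffer.push(snapshot);
  }

  // Returns the position to render at (clock-synced) server time `now`,
  // or null if we don't have enough snapshots yet
  positionAt(now: number): { x: number; y: number; z: number } | null {
    const renderTime = now - INTERPOLATION_DELAY_MS;
    // Drop snapshots older than the one just before renderTime
    while (this.buffer.length >= 2 && this.buffer[1].time <= renderTime) {
      this.buffer.shift();
    }
    if (this.buffer.length < 2) return null;
    const [a, b] = [this.buffer[0], this.buffer[1]];
    if (b.time === a.time) return { x: b.x, y: b.y, z: b.z };
    // Linear interpolation factor, clamped to [0, 1]
    const t = Math.min(Math.max((renderTime - a.time) / (b.time - a.time), 0), 1);
    return {
      x: a.x + (b.x - a.x) * t,
      y: a.y + (b.y - a.y) * t,
      z: a.z + (b.z - a.z) * t,
    };
  }
}
```

Each render frame, the game loop would call `positionAt` with the current synced server time and apply the result to the entity's mesh.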
maiu
1,887,296
Contributing to ODK: the go-to mobile form data collection tool
This is a post with very little code, but instead a higher level retrospective on contribution to a...
0
2024-06-13T21:36:16
https://dev.to/spwoodcock/contributing-to-odk-the-go-to-mobile-form-data-collection-tool-3mco
This is a post with very little code, but instead a higher-level retrospective on contributing to a mature open-source project.

## Intro to ODK

[ODK](https://getodk.org) is a collection of open-source tools that are used to collect data via forms designed for a mobile phone. It is at the forefront of tools available for this purpose, commonly used in sectors such as public health, global development, crisis response, environmental research, and more.

The key tools to be aware of are:

- **ODK Collect**: the Android application written in Java/Kotlin, used to collect data.
- **ODK Central**: the Node.js web server written in JavaScript, used to receive the data from mobile devices. There is a companion web frontend written in Vue.

## Additional requirements from field mappers

At the Humanitarian OpenStreetMap Team (HOT), we use ODK to underpin one of our core tools, the Field Mapping Tasking Manager ([FMTM](https://github.com/hotosm/fmtm)). Mappers in the field use ODK Collect to gather information about features on the ground (buildings, roads, etc).

However, there are two major features that would significantly improve the workflow for mappers:

1. An easy way to load base satellite imagery in the background of ODK Collect.
2. The ability to open ODK Collect from the click of a button in FMTM, with information pre-filled, such as a selected building.

Luckily, the ODK community and developers are both very welcoming and very receptive to feedback and new ideas.

### Implementing easier MBTile usage

ODK has supported loading basemaps in MBTile format for a while now; however, adding them to a project requires connecting to a computer and manually copying the files over. After fruitful [discussion](https://forum.getodk.org/t/provide-a-way-to-get-mbtiles-to-collect-without-having-to-connect-to-a-computer/42206) around this task on the ODK forums, a plan was made by the ODK devs to improve the implementation.

HOT works closely with [NAXA](https://naxa.com.np) for development on FMTM, and NAXA's current Chief Operating Officer is incidentally one of their most senior Android devs! Nishon managed to put in a big shift to implement most of the requirements for loading the MBTiles file directly from a device [here](https://github.com/getodk/collect/pull/5917).

Unfortunately, due to a lack of capacity on our side, we were not able to make the final touches to the PR 😅 Grzegorz from the ODK team picked up the feature and finalised it for their most recent beta release (2024.2.0-beta.2).

However, overall it was a very successful collaboration & we are extremely grateful to the ODK team for being so responsive and receptive to contribution.

### Loading ODK Collect by intent

More recently we have been focusing on our second requirement: loading ODK Collect from an external app, with data pre-filled. We started [discussion](https://forum.getodk.org/t/launch-odkcollect-from-a-browser-with-feature-pre-selected/43190/8) on this a while ago, gathering community input and requirements.

Ping, a very experienced dev who used to be part of ODK, helped us to create a [proof of concept](https://github.com/hotosm/odkcollect/pull/2). After further bug fixing by Samir at NAXA, we finally have a built dev APK with all of our requirements.

Obviously this proof of concept would not pass ODK's very strict code quality requirements, which include extensive testing and covering a well-researched combination of user requirements.
However, we plan to continue the discussion and development (primarily by Samir), hopefully leading to another successful collaboration with the ODK team.

### Other contributions

During this process I have also been providing additional contributions to documentation and build configs (as my experience does not lie in Android development), to help build goodwill and aid the long-term success of ODK. At both the organisation level and the individual level, we are very keen to see the suite of ODK tools grow and the team continue the exemplary job they are doing managing an open-source project!

Some minor contributions:

- Helping debug the ODK Central deployment on ARM-based systems.
- Upgrading the Python version for pyxform-http.
- Tweaking the containerised deployment of Central.
- Proofreading and adding content to docs.
- Testing all the tools in various scenarios.
- Early adoption of new features and providing feedback.

## Retrospective

Contributing to an established application with many complex user requirements is no easy task. Thankfully, the developers at ODK are extremely professional about this process, and while they expect a very thorough and professional contribution, they are open to assisting and providing guidance throughout the entire process.

It may take some extra time to gather user requirements, provide comprehensive tests, and review the code more thoroughly than you might be accustomed to, but in the end it has been very rewarding.

ODK is, and remains, the most reliable and complete suite of tools for form-based data collection, and hopefully with the support of the open-source community it will continue to be long into the future!
spwoodcock