Many of you as parents may think, "not much," when asked this question. But in reality, it's probably a lot more than you think. So it should come as no surprise that McAfee's 2013 study Digital Deception: Exploring the Online Disconnect between Parents and Kids, which examines the online habits and interests of tweens, teens, and young adults, finds a significant disconnect between what kids do online and what their parents believe they do.

The phrase "liar liar, pants on fire" comes to mind when I hear this topic, and it applies to both parents and kids. Parents are lying to themselves if they think they know what their kids are doing online: 80% said they would not know how to find out what their kids are doing online, and 62% do not think that their kids can get into deep trouble online. As for our kids, let's face it: kids sometimes lie. The study found that 69% of kids say they know how to hide what they do online from their parents, and disturbingly, 44% of them cleared their browser history or used private browsing sessions to hide their activity.

While youth understand the Internet is dangerous, they still engage in risky (and sometimes illegal) behavior. Not only are they hiding this activity from their parents in a variety of ways, but almost half (46%) admit that they would change their behavior if they knew their parents were paying attention.

- 86% of youth believe that social sites are safe and are aware that sharing personal details online carries risk, yet kids admit to posting personal information such as their email addresses (50%) and phone numbers (32%)
- 48% have viewed content they know their parents would disapprove of
- 29% of teens and college-aged youth have accessed pirated music or movies online

Adding to this problem is how clueless parents are regarding technology and their kids' online lives. 54% of kids say their parents don't have time to check up on their online behavior, and 42% say their parents don't care what they do online. Even worse, only 17% of parents believe that the online world is as dangerous as the offline world, and 74% of parents simply admit defeat, claiming they do not have the time or energy to keep up with their kids, and hope for the best.

So how do you bridge this divide? Parents, you must stay in the know. Since your kids have grown up in an online world, they may be more online savvy than you, but giving up isn't an option. Challenge yourselves to become familiar with the complexities of the online universe and stay educated on the various devices your kids use to go online. Here are some things you can do as parents to get more tech savvy:

- Get device savvy: Whether you're using a laptop, desktop, Mac, tablet, mobile, wired Internet, wireless, or software, learn it. No excuses. No more, "My kids know more than I do," or "All I know how to do is push that button-thingy." Take the time to learn enough about the devices your kids are using.
- Get social: One of the best ways to get savvy is to get social. By using your devices to communicate with the people in your life, you inevitably learn the hardware and software. Keep in mind that "getting social" doesn't entail exposing all your deepest, darkest secrets, or even telling the world you just ate a tuna sandwich, but it is a good way to learn a key method your kids use to communicate.
- Manage your/their online reputation: Whether you are socially active or not, whether you have a website or not, there are plenty of websites that know who you are and are either discussing you or listing your information in some fashion. Google yourself and your kids to see what's being said. Teaching your kids what is and is not appropriate online is a must these days.
- Get secure: There are more ways to scam people online than ever before. Your security intelligence is constantly being challenged, and your hardware and software are constant targets. Invest in a comprehensive security solution that includes antivirus but also protects your kids, identity, and data on ALL your devices, like McAfee LiveSafe™ service.

Or you can be like me and tell your kids that once they turn 10 they will be locked in a box in my basement until they turn 30. Just kidding (maybe). But seriously, parents: it's time to make this a priority, for you and your kids.

Robert Siciliano is an Online Security Expert to McAfee. He is the author of 99 Things You Wish You Knew Before Your Mobile was Hacked! Disclosures.

Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
The word botnet or bot is short for robot network. A botnet is a group of Internet-connected personal computers that have been infected by a malicious application (malware) that allows a hacker to control the infected computers or mobile devices without the knowledge of the device owners. When malware is launched on your computer or mobile device, it "recruits" your infected device into a botnet, and the hacker is then able to remotely control your device and access all the data on it.

A botnet can consist of as few as ten computers, or tens or hundreds of thousands. Millions of personal computers are potentially part of botnets. Computers that aren't properly secured are at risk of being turned into bots, or zombies. Consumers' and small businesses' relaxed security practices give scammers a base from which to launch attacks, allowing them to create botnets without being detected. Hackers use botnets to send spam and phishing emails and to deliver viruses and other malware, and thus make money.

To stay protected, you should:

- Avoid clicking on links from people you don't know
- Be cautious downloading content from peer-to-peer sites
- Be wary of free downloads (is it really free?)
- Keep your operating system and browser updated
- Make sure you have updated security software for all your devices, like McAfee All Access

Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
Advancements in cutting-edge technologies have resulted in some influential developments. Artificial Intelligence and Machine Learning are making hyper-automation a reality, and immersive technologies like Augmented Reality (AR) and Virtual Reality (VR) will soon offer kinesthetic interfaces and multiple touchpoint experiences. These technologies now promise far more than was expected of them even a few years ago. They are revolutionizing the conventional operational methodologies of different sectors and ushering in changes to logistics and supply chain businesses as well, especially in the field of cargo monitoring.

The logistics and transportation sectors are among the biggest playing fields for cutting-edge technologies, with more than 95 percent of all manufactured goods moved by container at some point during their shipping lifecycle. The Internet of Things (IoT) and blockchain are anticipated as game changers due to their remote monitoring and decentralization capabilities, respectively. While the former allows businesses to track the status and condition of the cargo being transported, the latter creates a secure and fast framework for digital contract storage and quick transactions. Additionally, the two technologies complement and enhance each other: IoT creates a widespread network of interconnected devices, and blockchain allows the ingestion and sharing of data among these devices over a secure platform. Their combination makes an ideal solution for logistics companies to remotely manage their fleets, monitor the condition of cargo, and ensure the delivery of products before deadline.

The role of IoT in cargo monitoring

IoT creates transparency in fleet operations, allowing companies to monitor the location and the status of the cargo being shipped. The technology is particularly valuable when used to track the journey and condition of perishable cargo. Devices like GPS and temperature sensors are attached to the freight to track its location and condition. This data is transferred via gateways to a platform where fleet operators and cargo handlers can monitor and manage shipments. They gain visibility into their transportation operations, which allows them to make smart decisions to improve the efficiency of their supply chain and ensure timely delivery of products. They'll have key data about the location and status of the products even as cargo is transferred from port to port.

The sensors also help them manage the movement of their vehicles based on demand and supply conditions, weather predictions, route options, and the type of cargo being transported. They can make smart decisions to keep the goods moving and see the live location of the cargo on a computer or smartphone, allowing them to estimate when the shipment will be delivered. While the ETA matters a lot to the recipient of perishable cargo, they also want it to be fresh and undamaged when it arrives. Throughout the journey, IoT-powered cargo monitoring systems enable handlers to ensure that the cargo stays fresh until it reaches the customer. Parameters like airflow, temperature, humidity, and condensation can be tracked in the container to ensure that the package is all right.
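To make the idea concrete, here is a minimal sketch of the kind of threshold check a cargo-monitoring backend might run on each incoming sensor reading. The field names and limits are illustrative assumptions, not taken from any particular IoT platform.

```python
# Toy telemetry check for a refrigerated container: flags readings that
# fall outside safe ranges so handlers can intervene. All thresholds and
# field names are illustrative assumptions.

SAFE_RANGES = {
    "temperature_c": (2.0, 8.0),    # e.g., a fresh-produce shipment
    "humidity_pct": (60.0, 90.0),
}

def check_reading(reading, last_weight_kg=None):
    """Return a list of human-readable alerts for one sensor reading."""
    alerts = []
    for field, (low, high) in SAFE_RANGES.items():
        value = reading.get(field)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{field}={value} outside [{low}, {high}]")
    # A sudden weight drop between readings may indicate cargo theft.
    if last_weight_kg is not None and reading.get("weight_kg") is not None:
        if last_weight_kg - reading["weight_kg"] > 50:  # illustrative threshold
            alerts.append("weight dropped sharply: possible theft")
    return alerts

# Example: one reading arriving from a container's sensor gateway.
print(check_reading(
    {"temperature_c": 9.4, "humidity_pct": 72.0, "weight_kg": 18900},
    last_weight_kg=19000))
```

A real deployment would run checks like these on a streaming platform and push the resulting alerts to fleet operators; the logic, however, stays this simple.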
Beyond real-time monitoring, IoT can also be used for other applications in the shipping process. A two-way IoT solution could empower fleet managers to remotely control some functions of the containers themselves. For instance, the presence of moisture, water, and condensation can damage the packing and result in the growth of mold and bacteria. Fleet managers could remotely operate an evaporator fitted in the cooling engine to remove condensation and moisture from the storage area and prevent degradation of products without compromising the cooling capacity of the engine. IoT systems can also be used to prevent theft of cargo from the truck trailer. Weight monitoring sensors, when embedded on the axle of the trailer, can monitor the weight of the cargo and share it with fleet operators. If the weight of the cargo unexpectedly decreases, an alert could be sent to fleet handlers warning them of possible cargo theft.

Why use blockchain with IoT for cargo monitoring?

Most documentation in logistics businesses is still paper-based and, even with the implementation of IoT systems, cannot be leveraged to the fullest extent without including blockchain technology. Blockchain allows the creation of smart Bill of Lading (BoL) documents that are digitally stored and shared with the different parties associated with a cargo transfer. Hence, all parties can check whether the terms and conditions specified in the digital BoL are adhered to, using data gathered from IoT sensors. Furthermore, since blockchain stores data in a decentralized and immutable ledger, the data cannot be manipulated once it is entered. There won't be a need for parties to use their credit cards to facilitate transactions, since they can use their cryptocurrency wallets. Hence, standard paper-based operations are digitized, reducing the physicality of contracts and the operational costs associated with cargo supervision.

The combination of IoT and blockchain technologies offers extraordinary applications for cargo monitoring and security, and the market is expected to grow at a compound annual growth rate of 92.92 percent, from USD 113.1 million in 2019 to USD 3,021 million by 2024. It's clear that shippers and all parties in the supply chain and global shipping ecosystem see what lies ahead for these technologies, and this transition and digitalization is already taking place. Combining IoT with blockchain increases the efficiency of supply chains and helps fleet operators manage their cargo handling operations seamlessly.

From time to time, we invite industry thought leaders, academic experts and partners to share their opinions and insights on current trends in blockchain on the Blockchain Pulse blog. While the opinions in these blog posts are their own, and do not necessarily reflect the views of IBM, this blog strives to welcome all points of view to the conversation.

How to get started with IBM Blockchain now
Gain Complete, Dynamic Control Over Which Traffic Flows are Forwarded from FPGA to Application

At a high level, flow shunting allows an application to programmatically turn packet transmission on or off for a given flow (based on its 5-tuple). In other words, the application can decide from which flow(s) it does and does not want to receive data traffic. By intelligently "toggling" the flow shunting switch, an application can greatly reduce the amount of data it has to analyze, thereby freeing up CPU resources for more pressing tasks.

The flow shunting process works as follows. Inside a security appliance there is an Accolade ANIC adapter (in a PCIe slot) and the security application. The security application communicates with the adapter via a well-defined API (natively integrated into Suricata, PF_RING, etc.) which it uses to configure and control the adapter. The adapter classifies each flow and initially sends the entire packet (header + data payload) to the application. The application in turn examines the packet (or, more likely, many packets in a row) and decides whether this particular flow requires further analysis. If the flow is not of interest, the application tells the adapter to turn flow shunting on, in other words, to stop sending any packets from that flow. If the situation changes, flow shunting can always be turned off for that flow, in which case packets will resume being forwarded to the application.

There are many reasons why an application may not want to continue receiving traffic for a given flow. For example, if the application cannot process encrypted traffic, there is no point in receiving encrypted flows. An application may not want to examine video traffic (e.g., Netflix) because it doesn't pose a threat or wastes too much disk space, so all video traffic could be shunted away. Or perhaps the application operates on an IP blacklist (or whitelist), and any flows which don't match the list should be shunted aside. The value of flow shunting is that it puts control into the hands of the application, so that dynamic decisions about which traffic flows should be analyzed can be made based on programming logic.
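As a rough illustration of that programming logic, the sketch below shows what an application-side shunting decision loop might look like. The AnicAdapter class and its set_shunt method are hypothetical stand-ins for the vendor API (real integration would go through Accolade's SDK or a framework such as PF_RING), and the decision rules are the examples from the text.

```python
from collections import namedtuple

# A flow is identified by its 5-tuple.
FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

class AnicAdapter:
    """Hypothetical stand-in for the adapter API exposed to the application."""
    def set_shunt(self, flow, enabled):
        # Real code would issue a vendor API call here; we just log the decision.
        state = "ON (stop forwarding)" if enabled else "OFF (forward packets)"
        print(f"shunt {state} for {flow}")

IP_BLACKLIST = {"203.0.113.7"}  # illustrative

def looks_like_video(packets):
    """Placeholder classifier; a real one would inspect the payload."""
    return False

def decide_shunt(flow, first_packets):
    """Return True if this flow should be shunted (no longer forwarded)."""
    # Example rule 1: we cannot inspect encrypted traffic, so shunt TLS flows.
    if flow.dst_port == 443:
        return True
    # Example rule 2: blacklisted endpoints stay under full inspection.
    if flow.src_ip in IP_BLACKLIST or flow.dst_ip in IP_BLACKLIST:
        return False
    # Example rule 3: bulk video traffic wastes disk space; shunt it away.
    if looks_like_video(first_packets):
        return True
    return False  # default: keep analyzing

adapter = AnicAdapter()
flow = FlowKey("198.51.100.2", "192.0.2.10", 51812, 443, "tcp")
adapter.set_shunt(flow, decide_shunt(flow, first_packets=[]))
```

The point of the pattern is that the adapter does the heavy per-packet work in hardware, while the application only has to make (and occasionally revisit) one decision per flow.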
Db2 for z/OS and SQL concepts Many structures and processes are associated with a relational database. The structures are the key components of a Db2 database system, and the processes are the interactions that occur when applications access the database system. In a relational database, data is perceived to exist in one or more tables. Each table contains a specific number of columns and a number of unordered rows. Each column in a table is related in some way to the other columns. Thinking of the data as a collection of tables gives you an easy way to visualize the data that is stored in a Db2 database. Tables are at the core of a Db2 database. However, a Db2 database involves more than just a collection of tables; a Db2 database also involves other objects, such as views and indexes, and larger data containers, such as table spaces. With Db2 for z/OS and the other Db2 products, you can define and manipulate your data by using structured query language (SQL). SQL is the standard language for accessing data in relational databases.
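As a brief, concrete illustration of defining and manipulating data with SQL, the sketch below creates a table, inserts a row, and reads it back from Python. The ibm_db driver is one common way to reach Db2 from Python; the connection values are placeholders, and the SQL itself is standard and would behave the same from any Db2 client.

```python
# Minimal sketch: define and query a Db2 table via the ibm_db driver.
# Connection details are placeholders for a real Db2 for z/OS or Db2 LUW system.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=SAMPLE;HOSTNAME=myhost;PORT=50000;PROTOCOL=TCPIP;"
    "UID=myuser;PWD=mypassword", "", "")

# A table is a fixed set of columns holding unordered rows.
ibm_db.exec_immediate(conn, """
    CREATE TABLE emp (
        emp_no INTEGER NOT NULL PRIMARY KEY,
        name   VARCHAR(40),
        dept   CHAR(3)
    )""")

ibm_db.exec_immediate(conn, "INSERT INTO emp VALUES (1, 'SALLY KWAN', 'C01')")

# SQL is also how the data is read back.
stmt = ibm_db.exec_immediate(conn, "SELECT name FROM emp WHERE dept = 'C01'")
row = ibm_db.fetch_assoc(stmt)  # dict keyed by column name, or False when done
while row:
    print(row["NAME"])
    row = ibm_db.fetch_assoc(stmt)
```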
Why use RBF Learning rather than Deep Learning in an industrial environment

One of today's most overused buzzwords is "Artificial Intelligence". Both the technical and the general press are full of articles about machines that drive autonomous cars and invent new languages. Many also talk about intelligent machines being a threat to humanity. Machine Learning is an essential part of the AI puzzle, and Deep Learning is one of the most popular approaches to implementing Machine Learning.

Interestingly, Deep Learning is not new. Geoffrey Hinton demonstrated the use of back-propagation of errors for training multi-layer neural networks in 1986, more than 30 years ago. Even earlier, in the 1960s, Kelley, Bryson and Ho published research papers about dynamic optimization which many consider the basis for back-propagation. Generations of researchers have shown that, given enough data, neural networks can be trained to recognize things. This training consists of slow, progressive, iterative adjustments that allow the network to progressively configure itself to produce the desired answer. Deep Learning is not new, but it recently became popular because of the availability of GPU/TPU/VPU architectures which offer some level of parallelism and therefore deliver acceptable performance for some applications.

What are the key differences between Deep Learning and RBF-based solutions?

Online vs Offline learning

Deep Learning is an offline learning process. The learning phase and the execution (inference) phase are separate and, very often, are not even processed on the same machine. Typically, the learning phase happens in a data center. A massive data set is crunched to generate a neural network. This takes huge computing resources and can take days depending on the size of the data set and the number of levels in the network. Once the network has been generated, it can be executed to perform the required recognition tasks. Such inference execution can sometimes be achieved on relatively low-power devices (Intel Movidius or Nvidia Jetson are good examples of embedded inference processing platforms that are not capable of embedded learning). More often, powerful PCs with GPU accelerators are used, leading to significant cost and power consumption. Moreover, as the training dataset grows during the learning phase, there is no guarantee that the target hardware will remain sufficient, and users may have to upgrade their inference hardware after a new network has been generated. In a way, this is similar to the PC world, where you have to upgrade your hardware regularly if you want to run the newest games. This continuous and fast upgrade cycle drives a healthy consumer business but is unacceptable in an industrial environment.

The most important limitation of this approach is that new training data cannot be incorporated directly and immediately into the executable knowledge. In a fairly static environment where the training data is not changing often, this may not be a problem. For example, speed limit road signs are always the same, so you don't need to learn new ones dynamically. However, in an industrial environment, novelty is very common. New components, new suppliers, new configurations appear almost every day, and it's critical to be able to train an industrial machine dynamically, just as we train operators, on the job.
In fact, modern manufacturing techniques tend to encourage novelty, with smaller volumes per product and a higher level of customization. Our approach, using a neuromorphic technology, solves that problem. Training can happen online, at any time, dynamically. Also, unlike Deep Learning networks, RBF networks are free of convergence problems, and they can be easily mapped onto hardware because the structure of the network does not change with the learned data. This ability to map the complete network onto specialized hardware allows RBF networks to reach unbeatable performance in terms of speed and power dissipation, both for learning and recognition. In contrast, any other neural network solution based on back-propagation of errors for learning needs to be mapped (and remapped after each learning process) onto programmable hardware (CPU, GPU or FPGA, with specific hardware assist or not), which is a lot costlier in terms of complexity and power dissipation. Deep Learning is fundamentally a software technology which requires powerful, expensive and power-hungry hardware to achieve reasonable levels of performance. It often also requires a fair amount of hand coding and tuning to deliver useful performance on the target hardware, and is therefore not easily portable.

Local vs Remote learning

Another issue with Deep Learning is that data is crunched in a data center, which usually means that it is handled on someone else's computer. This may create confidentiality or security issues. Many industrial customers prefer to keep their precious data local. The data used by industrial customers is very sensitive because it may contain their process and quality secrets. Ownership and control over this data are, more often than not, very critical to their business. With our approach, precious data stays local. It is learned and then recognized on the same machine, in a totally controlled environment. This gives domain experts the ability to train the machines themselves without having to outsource the training process to IT specialists who do not necessarily understand the meaning behind the data. The domain expert has much more control over the training of the machine, its qualification, and its release to production.

Additive learning vs Forget-and-Learn-from-scratch learning

When a Deep Learning based system needs to learn something new, it has to forget everything it knew before and learn from scratch, based on the new dataset. In a way, it's similar to the "old manufacturing" style in which you have to break the old mold and build a new one if you want a different plastic casing. In our modern world of additive manufacturing and flexible production chains, it is paradoxical to introduce a machine learning technology which is not additive in nature. Besides the lack of flexibility, this creates another potential problem. When a Deep Learning based system "batch-learns" from a new, incrementally better dataset, there is no guarantee that previous results will be maintained. In an inspection system, parts that were good before may be bad now, and vice versa. RBF learning, unlike Deep Learning, is an additive process.

It is also important to note that Deep Learning requires a lot of training data to produce acceptable results. Even with minimal training, an RBF classifier will output the closest match along with a confidence factor. It is also capable of pinpointing uncertainties and unknowns, thereby enabling dynamic learning.
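To illustrate this additive, confidence-aware behavior, here is a toy RCE/RBF-style classifier in Python. It is a simplified sketch of the general technique, not Cogito Instruments' implementation: each training example either falls inside an existing prototype's influence field or is stored as a new prototype, so learning never requires retraining from scratch, and queries far from every prototype are reported as unknown instead of being force-classified.

```python
import numpy as np

class TinyRBFClassifier:
    """Toy RCE/RBF-style classifier: additive learning, unknown detection."""
    def __init__(self, radius=1.0):
        self.radius = radius   # influence-field size (illustrative, fixed)
        self.prototypes = []   # list of (vector, label) pairs

    def learn(self, x, label):
        x = np.asarray(x, dtype=float)
        # Additive: only store a new prototype if no same-label prototype
        # already covers this example. Nothing previously learned is touched.
        for p, lbl in self.prototypes:
            if lbl == label and np.linalg.norm(x - p) <= self.radius:
                return
        self.prototypes.append((x, label))

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        if not self.prototypes:
            return "unknown", 0.0
        dists = [(np.linalg.norm(x - p), lbl) for p, lbl in self.prototypes]
        d, lbl = min(dists)
        if d > self.radius:
            return "unknown", 0.0            # novelty: flag instead of guessing
        return lbl, 1.0 - d / self.radius    # closest match + confidence factor

clf = TinyRBFClassifier(radius=1.0)
clf.learn([0.0, 0.0], "good_part")
clf.learn([3.0, 3.0], "bad_part")
print(clf.classify([0.2, 0.1]))    # ('good_part', high confidence)
print(clf.classify([10.0, 10.0]))  # ('unknown', 0.0) -> prompts new learning
```

Note how an "unknown" answer can immediately trigger a new learn() call on the very same machine: that is the on-the-job learning loop described above.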
Redundant RBF classifiers can also work in parallel, using different features, to produce more robust decisions. The ability to detect unknown situations is essential for implementing anomaly detection in predictive maintenance applications.

Predictable recognition latency

For all industrial applications, low and constant latency is a very desirable feature because it guarantees high and predictable productivity. With Deep Learning, latency varies. Typically, the more the system learns, the slower it gets. This is due to the Von Neumann architecture bottlenecks found in all computers, which are sequential by nature. Even the most modern multi-core architectures, even the best GPU/TPU/VPU architectures, have limitations to their parallelism because some resources (cache, external memory access bus, etc.) are shared between the cores and therefore limit their true parallelism. The neuromorphic architecture goes beyond the Von Neumann paradigm and, thanks to its in-memory processing and fully parallel nature, does not slow down when the training dataset grows. In addition, the shallow nature (3 levels) of RBF networks is not a disadvantage for such applications, as researchers have shown that 3 layers are sufficient to solve any pattern classification problem. The quality of the recognition is therefore not compromised.

Deep Learning is an exciting field of research, and it has produced amazing results in many Cloud-based applications where its limitations are not critical. However, in an industrial, real-time, high-productivity, high-predictability but high-flexibility environment, we consider that Deep Learning is not the best approach to the machine learning problems the market is facing in inspection, monitoring, maintenance and robotics applications. In fact, any environment which needs dynamic on-the-job learning, fast and predictable latency, and easy auditing of decisions is likely to be better served by RBF neural networks than by Deep Learning neural networks.

Philippe Lambinet is a senior executive in the semiconductor industry with a proven track record of developing successful large businesses and leading global international teams, now engaged in a new adventure. With a few friends, he created Cogito Instruments to deliver embedded machine intelligence. Philippe believes that cognitive processing can be, and should be, done inside the machines, at the edge of the Industry 4.0 network.
The scale of the U.S. census is breathtaking. For 2020, the United States Census Bureau set the total national population at 331,449,281. At 4K resolution, a TV screen devoting one pixel to each person would be six stories high and, at 145 feet, longer than a Boeing 737. Meanwhile, more than 18,000 variables are collected. At 5 seconds per variable, it would take more than 25 hours straight just to read the variable names, much less consider their descriptions!

When data gets this big and complex, interactive visual analytics goes from "nice to have" to "essential." While use cases and specialized tools may vary, there are at least three common characteristics to consider:

- The patterns you see in census data vary tremendously by spatial and temporal scale, and can be biased by collection methodologies. For example, important privacy protections in place within the census aggregate sensitive data at various levels. The only realistic way to understand such patterns and make effective decisions based on them is to leverage the power of your visual cortex.
- Like many other data types, census data often works best in combination with other datasets. For example, businesses need to know where and when to target potential customers given current or potential future retail locations. Government planners and NGOs want to know which special-needs populations need to be considered relative to a particular program or policy decision.
- Last but perhaps most important, these data not only vary over historic time, but are critical in many kinds of future projections. Both commercial and public sector analysts need to make decisions today about infrastructure and programs lasting many years. So a telco looking to locate 5G antennas efficiently and a local government looking to site a public school both need predictive analytics based in part on census data.

Fortunately, new tools are making census analysis not only far more granular, but orders of magnitude faster as well. Advanced techniques that leverage the parallel processing capabilities of GPUs (graphics processing units) allow billions of data points to be interrogated in milliseconds. When applied to U.S. census information and presented in a geospatial format, business analysts and researchers of all types can see what is happening across the country at speeds and scales never before possible. Analytics dashboards powered by these processing technologies allow analysts, data scientists and even casual users to view changes in U.S. demographics at any scale, even by neighborhood block, at the speed of their natural curiosity. The applications for these capabilities are endless.

One Person, One Dot

For the first time ever, accelerated analytics can produce practical dot density maps of the entire United States. Dot density is an intuitive method for showing how humans cluster together or disperse across an area. Typically, dot density maps are slow to generate; plotting hundreds of millions of points quickly is daunting, to say the least. Moreover, conventional dot density plots don't easily reflect changes over time, much less support interactive demographic targeting workflows. Thanks to new analytics techniques, however, today's dot density maps can plot over a billion points at one-person-to-one-dot resolution, and these can be filtered interactively to focus on particular regions, time periods or demographics.
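The core of a dot-density rendering is simple: expand each census geography into one point per person, scattered within the geography's footprint. The toy NumPy version below makes the expansion explicit; real GPU pipelines perform the same expansion and filtering in parallel, and the block geometries here are simplified to bounding boxes with made-up populations for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each block: (min_x, min_y, max_x, max_y, population). Illustrative values;
# a real pipeline would pull geometries and counts from census shapefiles.
blocks = [
    (-71.06, 42.35, -71.05, 42.36, 1200),
    (-71.05, 42.35, -71.04, 42.36,  300),
]

def dot_density(blocks):
    """Return an (N, 2) array with one x/y point per person."""
    points = []
    for min_x, min_y, max_x, max_y, pop in blocks:
        xs = rng.uniform(min_x, max_x, pop)  # one dot per person,
        ys = rng.uniform(min_y, max_y, pop)  # scattered within the block
        points.append(np.column_stack([xs, ys]))
    return np.concatenate(points)

pts = dot_density(blocks)
print(pts.shape)  # (1500, 2) -> ready to hand to a renderer
```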
These maps can also be animated over time to build a visual understanding of which patterns are static and which are rapidly changing. This type of demographic geospatial analysis is transformational. Real estate developers can immediately understand the changing needs of neighborhoods. Housing advocates can remarket older neighborhoods to emerging populations, or better justify initiatives for affordable housing. Urban planners can see where former manufacturing sites have converted to multifamily housing, or where specific populations have left a particular area. Such tools also allow governments and social service organizations to evaluate, or reevaluate, the efficacy of policies based on changes in ethnic or racial percentages or special-needs populations.

Deep Dives in Seconds

The U.S. Census offers data that goes far beyond location and ethnicity. The American Community Survey (ACS), a longstanding initiative of the bureau, provides yearly information about education, internet access, transportation options, housing age and construction, home values, income, language proficiency, disabilities, migration patterns and much more. ACS data helps the U.S. government distribute $675 billion in spending each year, but for business analysts and researchers it's a treasure trove of additional value. It can reveal how gentrification works, or where local construction is changing demographics. It can determine how to best serve a community with retail, schools, daycare, healthcare and other community services. Consumer goods manufacturers can use ACS information to anticipate, on a block-by-block basis, which streets will be most likely to need new appliances, roofing materials or power saws. Accelerated geospatial analytics is delivering these insights at a scale and resolution that, until now, simply didn't exist.

Moreover, census data is ripe for cross-referencing with other repositories. One of the most promising applications for this combined approach involves digital twinning: the construction of a "virtual duplicate" that mirrors the three-dimensional configuration, physical properties, and environmental conditions surrounding a real-world object or place. Understanding the impact of the natural environment on populations, and vice versa, can change the direction of public policy, environmental management, disaster relief, homeland security and emergency services. Knowing at a moment's notice, for example, which specific populations will be impacted by a hurricane, wildfire or other disaster can save lives, e.g., who on a street is disabled, or the languages spoken in a given neighborhood. Digital twinning also supports better utility planning, cellular and public Wi-Fi deployment, even forestry and park services planning.

One Household, One Dot

When census data is combined with parcel and building data, we can combine visualizations of population demographics with built form and property values. This too generally requires GPU analytics, both for the geoenrichment and for rendering. For example, Florida has more than 16 million parcels, and at national scale, the open dataset of Microsoft Building Footprints contains approximately 150 million features. This is simply not feasible to process with conventional desktop tools, but is easily done with modern GPU hardware and appropriate software.
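Conceptually, the geoenrichment step is a spatial join: each building footprint inherits the attributes of the census geography it falls inside. The GeoPandas sketch below shows the idea; at the scales discussed here (roughly 150 million footprints), the same join would run on GPU-accelerated tooling rather than a desktop library, and the file names are placeholders.

```python
import geopandas as gpd

# Placeholder inputs: building footprints plus census block groups carrying
# demographic attributes. File names are illustrative.
buildings = gpd.read_file("building_footprints.gpkg")
block_groups = gpd.read_file("block_groups.gpkg")[
    ["GEOID", "median_income", "geometry"]]

# Spatial join: each building takes the attributes of the block group that
# contains it. This produces the "hybrid dataset" described in the text.
enriched = gpd.sjoin(buildings, block_groups, how="inner", predicate="within")

# Analysis can now run per building rather than per polygon, e.g., flagging
# likely-residential structures that sit inside a flood-risk zone.
print(enriched[["GEOID", "median_income"]].head())
```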
This kind of hybrid dataset has two strong advantages. First, it gives a much more accurate spatial picture of actual residential locations. This is particularly valuable in lower-density locations like rural residential areas, where census block groups may not correspond with built locations. For example, commercial and industrial areas, wetlands and parks are all included within census geographies, but are not normally inhabited places. When combining environmental risk maps, such as fire or flood risk, it often becomes important to understand where residential buildings are actually situated within a large census block group. The second advantage is socioeconomic. In states or areas with large retirement populations, such as Florida, census median income figures do not reflect consumer purchasing power nearly as well as measures that include parcel values.

A Historic Development

Dot density and building-level maps of the U.S. Census, powered by GPU-based analytics in the cloud and delivered via online dashboard, have the potential to democratize our largest national demographic database like never before. From business to academia, public safety to social services, accelerated analytics is giving every kind of user the ability to better understand our nation in ways as dynamic, diverse, rich, and numerous as the census count itself.
Data Visualization: Inspiring the Greatest Number of Ideas

Written by: Clay McNeff, Senior Consultant, Tableau Developer

This is a continuation of our 3-part series on Data Visualization Best Practices, where we're discussing one of Edward R. Tufte's principles of graphical excellence, which is "… in the shortest time and with the least ink… give to the viewer the greatest number of ideas". To view part one on inducing conclusions in the "shortest time", visit here. To view part two on producing visualizations with the "least ink", visit here.

In part three of our series, we'll be talking about Tufte's concept of inspiring the "greatest number of ideas". How do we increase the number of ideas someone can have when interacting with our visualizations? After all, most of our designs are built upon defined requirements in which the key questions have already been identified. While this may be the case, we can still design a more efficient dashboard that not only answers these predetermined questions but also provides a tool that answers questions the end user may not even know he has yet. I'm going to focus on three ways in which we can inspire the greatest number of ideas from our visualizations:

- make better use of the data-ink you already have
- make better use of the real estate you've been provided
- incorporate elements of interactivity that keep the end user engaged

Make Better Use of the Data Ink You Have

We've already established that ink that doesn't serve a distinct purpose should be removed from a visualization. We can take that a step further and suggest that ink should actually serve more than one graphical purpose when suitable. Graphical elements that serve multiple functions can effectively display much more complex data and provide users with tools capable of answering a wider array of questions. The question is "How?" The limitation we face in two-dimensional space is that we're often confined to two-dimensional analysis, e.g., a bar chart that presents one quantitative measure across a set of qualitative "bars", or a scatter plot that places dots at the intersection of two quantitative measures. These charts, while certainly valuable when telling a story, are a bit limited in that they really only tell one story. Reworking these charts and allowing other aspects of your data to drive the color, size, and shape of your marks can deepen your understanding of the underlying data without adding more ink.

Make Better Use of the Real Estate You've Been Provided

Some visualization tools are limited in that they can only display a certain level of complexity per chart. In order to provide an additional layer of information, you may feel inclined to create a second chart and ask the user to jump back and forth between the two. The downside of this approach is that each time the user breaks her eye line to shift to another chart, part of her focus is lost along the way, and she may not be able to make the necessary connections between the two to maintain appropriate context. One way to engineer your way around this problem is to layer charts on top of one another to create some interesting, albeit complex, views.

Let's say you have a request from someone wanting to know his monthly sales numbers compared to his $50,000 target. He seems to have two primary goals: first, to know how much time he's spent over and under this goal, and second, to know to what degree the difference between his sales and his target has accumulated over time.
You begin with a simple line chart and a $50,000 reference line, but this doesn't really provide many quick, clear insights into either of his questions. You could calculate the difference between his sales and his target and put it on a second graph, and perhaps calculate the percentage of the time he was above or below his target and put it on a third, but you start to lose focus while bouncing around all these visualizations. You'd find yourself capable of answering only very simple, one-dimensional questions, often without additional context from the other charts.

My eventual solution did incorporate multiple charts, but it layered them in a way where the eyes can stay focused in one space. The resulting visualization contains three separate charts, but they've been combined so that they appear to be a single chart. First, I converted the line chart to two area charts, one colored blue for all marks above the $50,000 target, and one colored orange for all marks below it. Area charts were chosen to better answer his second question (to what degree does the difference between his sales and his target accumulate over time?) because they show a combination of how long, and to what degree, he made or missed his target. A third line chart was added at the top of the visualization that strips out the degree to which he made or missed his target and does a better job of answering his first question: how much time did he spend over and under his goal? This layered chart provides clear answers to all his questions without ever asking the user to look elsewhere for more information.
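The same layering idea can be reproduced in most charting libraries. As a rough Matplotlib sketch (not the Tableau workbook described above, and with made-up sales figures), the snippet below draws the sales line, a flat target line, and two conditional area fills, one color above target and one below:

```python
import matplotlib.pyplot as plt
import numpy as np

months = np.arange(1, 13)
sales = np.array([62, 55, 48, 41, 58, 71, 66, 49, 44, 53, 60, 57]) * 1000.0
target = 50_000.0

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(months, sales, color="gray", linewidth=1)
ax.axhline(target, linestyle="--", color="black", label="$50K target")

# Two area fills layered on one chart: blue where sales beat the target,
# orange where they miss it. interpolate=True closes gaps at the crossings.
ax.fill_between(months, sales, target, where=sales >= target,
                color="steelblue", alpha=0.6, interpolate=True, label="over")
ax.fill_between(months, sales, target, where=sales < target,
                color="darkorange", alpha=0.6, interpolate=True, label="under")

ax.set_xlabel("Month")
ax.set_ylabel("Sales ($)")
ax.legend(loc="upper right")
plt.tight_layout()
plt.show()
```

The layering does the analytical work: the same ink that traces monthly sales also encodes, through color and area, how long and how badly the target was missed.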
Another great example of chart layering is a visualization authored by Chris DeMartini and shared on Tableau Public. His analysis of the 2017-2018 NBA playoffs leverages a concept called small multiple flows, which combines a set of events, provides an intense data visualization about those events, and connects one event to the next via a flow element. His visualization features bar charts of individual plays that depict each game's narrative, circle charts that show how many games a team had won in the series up to that point, and it's all organized to reflect the playoff bracket format of the NBA playoffs.

Incorporate Elements of Interactivity to Keep the End User Engaged

As data literacy continues to progress across all industries, we're finding a continued push away from ad hoc analysis and towards self-service analytics. There's an ever-growing need not for simple reports that serve a singular purpose for one person, but for analytic tools that can be leveraged across departments, if not entire organizations. As data visualization experts, we can do our part by promoting the interactive components of our dashboards that open them up to wider audiences.

One common way to promote interactivity is, instead of designing a single dashboard for whoever is making the request, to design a series of dashboards intent on highlighting the data at varying levels of granularity. Upper management may only want to see a certain month's result, but lower management will need to know details. Individual analysts may need to go further and drill all the way into record-by-record data. Designing a series of dashboards that allows users not only to jump into the data at different levels of aggregation, but also to move seamlessly up or down these levels with just a few clicks of a button, will help transition entire teams from slow, ad hoc analysts to more knowledgeable, self-servicing aces in no time.

Another design feature I leverage to promote interactivity and engagement within Tableau is the user-driven parameter. I use these to essentially swap out entire portions of a dashboard to fit different needs. A more modular design gives a user, even one with more general interest than defined questions, a sandbox in which he can interact with the data that matters most to him. In many of my designs, you'll find dropdown boxes that allow the user to swap out the metrics on the chart in front of him. In more complex cases where it's not just a metric swap, but perhaps an entire chart type that needs to change, I'll design my dashboard with a series of charts all occupying the same space, of which only one populates once the user defines what he wants to see. Tableau is smart enough to minimize unused space, so once a user defines the details he wants, the chart he wants appears before him and all others are minimized.

By inspiring the greatest number of ideas, we can make data accessible and insightful for all types of audiences. Looking for help bringing your data to life? Schedule a free consultation today.
Last year, the City of New Bedford, Massachusetts was targeted by an attacker that came with a $5.3 million ransom demand, the largest ever recorded for a local government. While the ransom might have been notable, the fact that New Bedford was the victim of a ransomware attack was seemingly inevitable due to the City's lack of proper cyber resources. Unfortunately, this is the case for many government organizations. Fortunately, there are steps governments can take to help prevent ransomware attacks. Recently, we explored the ransomware trends that agencies need to better understand and how IT pros can help bolster their security posture; today, we provide a simple security assessment that can help governments identify areas of concern.

The Path Forward for Government report takes a deep dive into how agencies can reduce their attack surface, better protect their assets and infrastructure, mitigate cybersecurity risk, and build a stronger incident response capability. These efforts are top of mind for federal, state, and local governments as ransomware attacks continue to grow in both frequency and sophistication. According to the report, 32 percent of state and local agencies and approximately 30 percent of federal agencies have experienced a ransomware attack. Moreover, it's likely that the actual number of ransomware attacks is much higher: according to the report, only about 10 percent of attacks are reported.

As ransomware attacks become an established part of the threat landscape for public sector organizations, government agencies can forge a safer path towards mission success by focusing on four key areas of cyber mitigation. Dr. William Kennedy, a cybersecurity expert with Verizon, recommends the following:

- Create a Risk Mitigation Strategy. Using best practices and understanding the current threat environment can help agencies be prepared for an attack. By using threat intelligence, governments can establish which areas of their environment need the most attention to defend against attacks. It's important that agencies have an overview of their devices to fully understand risk. With an effective understanding of what and who is in your environment, agencies can mitigate risk and use cybersecurity resources most efficiently.
- Create a Strong Cybersecurity Program. With the help of industry partners, governments can create an effective cyber program that takes the environment, approach, policy and compliance all into account. Agencies should look to build a cyber hygiene training program, implement network segmentation, back up critical data, and keep software up to date.
- Monitor Your Environment. To create an effective cybersecurity program, governments need to understand the vulnerabilities, threats, and cybersecurity measures relevant to their environment. Agencies should look to a partner that can conduct cybersecurity assessments to identify threats and provide a list of devices that are connected to the network.
- Be Prepared in the Event of a Threat. Do you have a response plan? How will you notify staff? What about clients? A response plan should include these answers, as well as your decision on paying ransom and how data recovery will be done.

Using these four areas, agencies can work to build a stronger cybersecurity posture to help prevent and mitigate ransomware attacks. Is your agency ready to defend against attacks? Read this paper to learn how.
The discovery of the SolarWinds hack at the end of 2020 and the recent ransomware attacks on the Colonial Pipeline Company and the meatpacker JBS have brought to light how critically important cybersecurity is and demonstrated the disastrous consequences of a vulnerable IT infrastructure. While cyberattacks have been a potential threat since we first went online, these latest high-profile attacks have brought home the reality and seriousness of cyberattacks on America's critical infrastructure and assets. With the SolarWinds hack, it was the realization that a foreign government entity had been roaming undetected in federal agency networks for about nine months that caused the shock. With Colonial and JBS, it was the sharp impact on the price and availability of essential commodities that gave these attacks added weight.

In the five months following the public disclosure of the SolarWinds hack, these cyberattacks effected real change in U.S. cyber policy, culminating on May 12, 2021, when President Biden issued an Executive Order (EO) on Cybersecurity that charts a new course for national cybersecurity and the protection of federal government agencies. This cybersecurity EO recognizes not only the increasing sophistication of the nation-state actors initiating the attacks but also the "insufficient cybersecurity defenses" that have made public sector agencies more vulnerable to them.

However, it's not just that the attackers have grown more sophisticated, or that investment in cyber defenses has not kept pace with the attacks; the problem is compounded by the fact that federal agencies are creating, storing, and protecting more data than ever, residing in more places than ever. As federal agencies have become more data-driven, they have built sophisticated data management environments to move this data from the data center to the edge so that it can be used to drive the mission forward, whether that's protecting the warfighter, supporting America's farmers, or empowering medical researchers to solve the challenges of COVID-19. Not only is this data of great value to America's adversaries, but the multi-cloud environments federal agencies rely on to store, move, and process it create more opportunities for exploitation.

"The Cybersecurity Executive Order couldn't have come at a better time for federal agencies," shared Cameron Chehreh, Federal Chief Technology Officer at Dell Technologies. "It's a perfect opportunity for federal agencies to modernize their cyber postures and build defenses that protect not just the network but the devices, data, apps, and clouds that they rely on to deliver on today's mission."

Mismatched Cyber Defenses

In many ways, the cyber defenses agencies depend on today are from a completely different era. At the dawn of the Internet age, it was perfectly feasible to protect an agency's valuable assets using an Intrusion Detection System (IDS), an Intrusion Prevention System (IPS), and a strategically placed firewall or two. "Back when agencies developed proprietary software in house and all data resided in a data center, and was only used in-network, that sort of configuration provided adequate security," explained Chehreh.

But today's federal agency is very different. "In addition to generating so much more data today than ever before, agencies have become much less centralized," explained Rob Davies, Chief Operating Officer at ViON.
"Not only do they need to run multi-cloud environments in order to manage and move the petabytes of data it takes to deliver on today's mission, but they've also made the move from proprietary tools to Commercial Off the Shelf (COTS) solutions to drive digital transformation, and have enabled access to data from phones, laptops, and tablets to support remote work." In other words, with data being stored in many locations, moving in many different directions, and with agencies part of other organizations' supply chains, there's no longer only one point of entry and exit and one attack surface for cyber attackers to choose from, but thousands.

Despite all these potential vulnerabilities, Chehreh and Davies are confident that it is possible to secure today's agency and its multi-cloud environments. First and foremost, the federal government has made security a priority for cloud solutions with the FedRAMP authorization process. "Since 2011, FedRAMP certification has provided trusted cloud services to federal agencies," explained Davies. "This is a great foundation, but securing a multi-cloud environment requires more than simply validating the security of the cloud."

Zero Trust, More Security

Zero Trust architecture is quickly becoming the de facto standard for securing multi-cloud environments. Zero Trust, which covers a variety of strategies to protect data at rest and in motion, at the edge, in the core, or in the cloud, works by limiting the "internal lateral movement" of a user. Zero Trust has received widespread acceptance because, as well as supporting today's work and IT environments, it doesn't require a wholesale rip and replace of existing security infrastructure. Instead, Zero Trust works with existing security infrastructure and focuses on improving key areas over time. Typically, the first two areas to be addressed are device security and identity management.

"Zero Trust has gained significant acceptance in the private sector as the most effective way to prevent a data breach or cyber attack because it provides a continuous process of authentication and validation," noted Chehreh. "With a traditional network defense posture, once a user was validated, they were free to move about the network. But with a Zero Trust posture, not only can I decide if I'm going to let you into the network, but I can then decide where you have permission to go and what you can access. Zero Trust provides a much more granular level of control."

In creating these zones of authentication and validation, Zero Trust architecture creates a defense-in-depth approach to security with a localized kill chain that contains a threat. That ability to contain a threat prevents widespread data exfiltration or alteration and reduces the likelihood of a long dwell time for an APT within a multi-cloud environment.
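In pseudocode terms, the shift is from "authenticate once at the perimeter" to "evaluate every request against identity, device posture, and per-resource policy." The sketch below is a deliberately simplified illustration of that control flow, not any agency's or vendor's actual implementation; the policy contents and field names are made up.

```python
# Toy Zero Trust gate: every request is re-evaluated, and network location
# alone never grants access. Policy contents are illustrative.
RESOURCE_POLICY = {
    "payroll-db":  {"roles": {"finance"}, "min_device_trust": 0.8},
    "public-site": {"roles": {"any"},     "min_device_trust": 0.0},
}

def authorize(user, device, resource):
    """Return True only if identity, device posture, and policy all pass."""
    policy = RESOURCE_POLICY.get(resource)
    if policy is None:
        return False                      # default deny
    if not user.get("token_valid"):
        return False                      # identity must be freshly verified
    if device.get("trust_score", 0.0) < policy["min_device_trust"]:
        return False                      # unhealthy device: no access
    roles = set(user.get("roles", [])) | {"any"}
    return bool(roles & policy["roles"])  # least privilege, per resource

user = {"token_valid": True, "roles": ["engineering"]}
device = {"trust_score": 0.9}
print(authorize(user, device, "public-site"))  # True
print(authorize(user, device, "payroll-db"))   # False: no lateral movement
```

The granularity Chehreh describes lives in that last check: access is granted per resource, not per network, so a compromised account or device is contained instead of roaming freely.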
While Zero Trust might be disruptive, it offers federal agencies the best path forward to secure data in a multi-cloud environment. For federal agencies facing a 60-day window to comply with the Cybersecurity Executive Order, this fundamental change in approach might seem unachievable, but ViON's Davies disagrees. "In the next 60 days, agencies aren't required to identify the technology they're going to use or stand up a whole new solution," he explained. "What agencies need to do in the next 60 days is to make a plan that starts with their data and specifically these two questions: Where does my data reside? Where does it travel?"

For Chehreh, the other important part of this planning stage is for agency IT teams to understand and rationalize the agency's app portfolio. "Agencies can minimize risk and maximize reward by understanding what apps they have today, what apps they will be retiring, and what apps they are considering adding to their portfolio," he added.

The other good news for agencies is that the private sector has led the way in Zero Trust adoption and has a lot of information to share about products, tools, and ways to make seemingly overwhelming tasks much simpler. "The key is to understand that moving to a Zero Trust posture is a marathon, not a sprint, because you can make significant improvements in your agency's security posture with a few small steps and then a few more small steps," explained Chehreh. "Start with a roadmap that outlines not only what you want to achieve overall but who you need to get on board to deliver success. It's important to have business stakeholders involved in the process as well as IT stakeholders so you can not only demonstrate the business and mission value, but also identify interdependencies that may change how you execute on your new security posture. And it's vital to apply a Zero Trust framework not just to technology (your devices, workloads, network, and cloud) but also to your processes, and most of all your people."

Chehreh also shared that the federal government's commitment to implementing Zero Trust as the cybersecurity standard across all agencies has already resulted in both NIST and the NSA releasing guidelines on how to build a Zero Trust architecture.

Of course, there are still other challenges to overcome. "Agencies might be overwhelmed by the idea of identifying where their data resides and where it travels," explained Davies. "However, with the right tools it's not complicated to know where data is, identify it, understand how it transits, and provide the foundational blueprint for a Zero Trust posture. We have the tools for that at ViON and we're here to help."

Federal agencies will always face cybersecurity challenges, and the threats will only increase in sophistication. As cyberspace evolves into an ever more contested domain, the need to secure not only infrastructure but also data will continue to be of paramount importance. The recent decision to advance Zero Trust as the centerpiece of the federal government's cybersecurity strategy is a smart one for many reasons. A Zero Trust approach aligns with, and supports, the way the federal government operates today as a data-driven enterprise, and the way it will evolve as AI and IoT become even more fundamental to driving American innovation and mission success.

At the heart of any data-driven organization is a multi-cloud environment. This complex web of private and public clouds, needed to support the applications and workloads that are at the heart of mission success, demands a Zero Trust architecture in order to operate at speed and scale. "Securing the mission today has never been more important," concluded Davies. "It's definitely a complex problem, but by investing in trusted partnerships we can solve these challenges and ensure the security of federal agencies together."
There is a highly practical and accessible resource to help resolve this: Liberating Structures, a method for enhancing business creativity and team innovation. Specifically, Liberating Structures are 33 microstructures that help teams work together and interact better. This is a proven method for unleashing the intelligence of each member of a team in order to obtain better and more creative ideas. These are patterns that encourage, enrich and deepen the ways in which people interact in teamwork, in order to facilitate the deployment of individual and group innovation, stimulating the participation of all members of the organization. These microstructures are adaptable to diverse situations and are ideal as a complement to teamwork practices commonly used in companies, such as discussions, report presentations and brainstorming, among others. In times when collaborative work is the leitmotif, Liberating Structures are necessary tools, since they have the ability to change the way in which people collaborate and discover solutions together. The microstructures are especially useful as support for Scrum Masters’ work – the facilitators responsible for ensuring that the values and good practices are established within a Scrum framework. This approach puts into practice several proposals so that collaborative teamwork can obtain the best results on a complex project, and it enables Scrum Masters to lead from a position of service, encouraging the group to find its own solutions. In addition, Liberating Structures offer boundaries for group self-organization and help teams define a goal and a joint strategy to work with the Scrum model without going astray. These microstructures have the advantage of being simple and easy to learn and understand, and they are also “open source”, available to everyone (you can easily get them on the web at www.liberatingstructures.com). Every one of the Liberating Structures proposes teamwork dynamics with clear patterns for distributing space and people, and precise timings for each action. But in order to fully understand the power of these microstructures, it is best to review some concrete examples. The “1-2-4-All” microstructure, for instance, is designed to involve the members of a team in generating questions, ideas and suggestions simultaneously. You only need 12 minutes to run it, and it can be used for thinking as a group about an issue at the company, planning how to pursue an initiative, presenting innovative solutions or simply sharing ideas. In the context of this microstructure, we start by introducing a challenge or situation that involves the team, and each member is invited to think about it for 1 minute and contribute ideas. These are then shared in pairs for 2 minutes, and new ideas are created from that interaction; each pair then shares its ideas with another pair for about 4 more minutes, noting similarities and differences and building on them. Finally, each group of 4 people has 5 minutes to share its idea with everyone. As you can see, the “1-2-4-All” microstructure allows a team’s proposals to be examined in very little time.
Other interesting microstructures are “Troika Consulting” (used for obtaining creative solutions from colleagues, with a focus on personal challenges), “Crowdsourcing” (taking advantage of the learning of large groups to propose creative ideas faster), and “Impromptu Networking” (to quickly share the challenges and expectations of a large group and create new connections). There are also Liberating Structures known as “Conversation Café” (used to raise awareness among the whole team about the meaning of certain deep challenges and to generate the conditions for new strategies to emerge) and “Open Space” (used to build a shared agenda of priorities). If your company is not effective at team communication, decision-making, or group ideation – either because it does not encourage everyone’s participation, or because it is too chaotic – Liberating Structures can open the door to an interesting evolution. By linking several of these microstructures together, a highly productive flow of interactions can be generated and sustained, which enhances the daily dynamics of teamwork and elevates the organization to a whole new level.
A child or parent reports bullying during the parent-teacher conference. Then what? What constitutes bullying? What should a teacher do with this report? Who does the information get shared with? What type of investigation needs to be done? Do you talk with the bully and victim together, separately, or both? Should you call in the other parents? How much information can you give them? What are the consequences for bullying? Is the situation getting better? How do you know? Has it gotten worse? Are you monitoring behaviors on an ongoing basis? Maybe the bullying stopped in your classroom, but do you know if it is going on elsewhere? Do you need to share this report with administration? Does this qualify as bullying for the state report? Does this involve a special needs student? Do you have evidence or written statements from either party? How has the student been affected at school? Lower grades? Less participation? Have they been absent more often? Do they need to be referred to outside services? What if there was a physical injury? School personnel are busy. It is difficult for them to know the right thing to do in every situation, so it is critical for schools to develop clear policies and steps for investigation so all the right information is gathered and shared with the right people and appropriate actions are taken to proactively resolve incidents before they escalate. Does your school have a clear procedure for investigating incidents of bullying and harassment? If you do, do you know if your teachers and staff have read the policy and understand their individual responsibilities? How are you ensuring this procedure is followed? To learn how a leading school district is working to improve their ongoing investigation process, click here to listen to Tulsa Public Schools’ Student Services Director, Tenna Whitsel, discuss their efforts.
This holiday season, buyers everywhere will flock to the Internet to rack up savings on deals and avoid the hassles of shopping in malls and department stores. Unfortunately, shopping online without caution can lead to great headaches due to the prevalence of criminal activity. One of the most devastating identity theft techniques comes in the form of email phishing. Phishing involves the use of phony links, emails and websites for the purpose of gaining access to sensitive consumer information – usually by installing malware on the target system. This data is then used to steal other identities, gain access to valuable assets and overload inboxes with email spam. Phishing does not affect only desktop computers; using a mobile device does not protect you from phishing attempts either. As with suspicious SMS notifications, if you feel an email could be legitimate, log directly in to that account and do not click the link. Currently there exists a misconception among consumers that phishing is not something that could happen to the average user. However, it was recently reported in the APWG Phishing Activity Trends Report that as of June 2013, 38,110 websites were identified as hosted phishing domains. To make matters worse, as many as 425 brands were recently targeted by phishing attempts. The following tips can help you avoid the pitfalls of being targeted by phishing campaigns during the holidays: - Trust your spam filter Browsing through your junk email box is important, as your spam filter might occasionally send important emails to the trash. However, more often than not an item is sent to the spam filter because it is dangerous and filled with malware. Trust your spam filter. If an important email winds up there, you can always ask the sender to re-send the information. To protect your critical information, avoid clicking on ANY links from an email sent to the spam box. - Beware of misspellings in email subject lines When you get an email with incorrect or misspelled names, or the email is a grammatical disaster, there is a strong likelihood that it is a phishing attempt. These emails are not hard to identify. Chances are, if you get an email from an official company and it looks like the content was written by an individual with a poor grasp of the language, do not click or open it. - Look out for random or misspelled hyperlinks If you are presented with a link that is shortened and contains jumbled letters – or appears to take you to a nefarious website – these are common signs of phishing. Always examine the link before you click on it to avoid clicking on malware and infecting your computer. A helpful way of avoiding malevolent links is to investigate the website in question by safely performing a Google search.
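One simple way to “examine the link before you click” is to compare the text a link displays with the hostname its underlying address actually points to. The short Python sketch below is illustrative only (the suspicious URL is made up) and uses only the standard library:

from urllib.parse import urlparse

# Made-up example: the email displays a familiar address, but the
# link's real destination is a completely different host.
displayed_text = "www.mybank.com"
actual_href = "http://www.mybank.com.account-verify.example.net/login"

host = urlparse(actual_href).hostname
if host != displayed_text:
    print("Warning: displayed text and real destination differ:", host)

Mail clients expose the real address when you hover over a link; the point is the same either way: trust the destination hostname, not the text on the screen.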
The Intergovernmental Panel on Climate Change (IPCC) released a summary Friday of its latest report on global warming. The report, compiled with the help of 2,000 scientists, calls for the world to cut emissions of CO2 and shift energy sources to renewable fuels. The report also calls for the stabilization of greenhouse gases in the atmosphere by 2015 at the level of 445 parts per million, which scientists say will keep global temperatures from rising high enough to cause environmental and economic chaos. In addition, the IPCC has set a goal to reduce emissions between 50 and 85 percent by 2050. The writers of the report warn that failing to keep greenhouse gas emissions under strict control through the use of biofuels, better fuel efficiency, and increased use of renewable power like solar and wind will result in a global temperature climb of 2 degrees Celsius or more over pre-industrial levels, leading to widespread environmental disasters. The report is the third in a series by the IPCC. The first focused on confirming that global warming was indeed occurring, and the second focused on the impact on people and the environment. The third focuses on economics and technological solutions. Technology to the Rescue The IPCC notes that the technology to reduce greenhouse gases and generate power via renewable resources is currently available and affordable. “In the U.S. over the last several years, we’ve put up a couple thousand megawatts of wind turbines a year, and globally, it’s been in the tens of thousands of megawatts, so it’s possible to put in significant amounts of wind capacity in a short period of time,” Chris Namovicz, an operations research analyst for the U.S. Energy Information Administration, Office of Integrated Analysis and Forecasting, told TechNewsWorld. “There are some indications that we’re straining the productive capacity in the wind industry, but that capacity isn’t static. If there’s long-term demand, it would encourage industry to invest in building new factories and to train engineers to run these things,” he added. Willing to Pay? The key problem that critics raise, however, is the cost. “They think we can level emissions by 2015? Have they not looked at China’s reporting, that by the end of the year they are going to surpass the U.S.? I did not see any place where they said that China was not going to build a coal-fired power plant a week for the next five years,” H. Sterling Burnett, a senior fellow with the National Center for Policy Analysis, told TechNewsWorld. “I didn’t see that in the report. I didn’t see where they said India was going to halt development. And I didn’t see where they said in the U.S. that the next presidential candidate was going to run on a platform of zero [economic] growth. “And that’s the only way you actually stop emissions growth … is stopping economic growth,” he continued. “They are wrong — we don’t have the technology today to separate energy use from economic growth. And if you can’t reduce it, you can’t reduce greenhouse gases in the time frame they are talking about.” The key problem in reaching the IPCC goals, Burnett noted, is that the world would not only have to build new, cleaner energy plants to meet new energy demands, but it would also have to replace existing plants. “It defies reality,” he said. “The U.S. has been saying all along … that if humans are the cause of global warming, the only way to solve it is through technological change and innovation.
It’s not technology we have today, it’s the technology of the future,” Burnett explained. “We need to pour money into that technology and then disperse it into developing countries where you can get the most bang for the buck,” he said. “It doesn’t make sense to replace coal-fired plants in the U.S. with new technology, but it does make sense to not build dirty coal-fired plants in China and instead build with new technology.” If the technology is indeed present and capable of being leveraged by 2015, the issue becomes political, because it turns into a question of what technologies the world’s population and governments are willing to invest in. Wind power, for example, runs into problems because energy is only generated when the wind blows, making it much more difficult to control or store. “It all comes down to cost,” Namovicz said.
Dark data is a major challenge in enterprises, and it’s not going away soon. Fortunately, there are ways to reduce dark data and the risks that come with it. Amid today’s seemingly endless data volume and complexity, many businesses unknowingly ignore an entire category of data essential to their data protection and management policies. According to recent research by Veritas, Data’s Dark Side, a company’s data is typically more than 50% “dark,” meaning information that is stored in data repositories but has no assigned or established value. Dark data poses severe threats to an enterprise’s security and compliance efforts and carries high storage costs, making it more crucial than ever to solve the underlying problems that lead to it. Dark data threatens protection The majority of firms are unclear about the data they must secure. Dark data reservoirs, which may house sensitive and important data, constitute a tempting target for hackers and ransomware attacks, since dark data is frequently out of sight and out of mind for many companies. Clearly, understanding the data’s nature, location, and value is essential for surviving any ransomware assault. Organizations will be more successful in knowing how to safeguard data from risk and recover after an attack if they have a greater understanding of their data. Dark data threatens compliance Untagged and unstructured data makes it challenging to comply with continually changing regulatory environments. To meet data compliance regulations, businesses may strategically implement data gathering, archiving, and surveillance capabilities. Companies will be able to adhere to strict standards and adopt retention rules throughout their whole data estate with better management of dark data. Dark data and sustainability Dark data plays a significant role in an enterprise’s environmental compliance. The environmental cost of dark data must be given attention as businesses attempt to create sustainability initiatives that will fulfill carbon reduction objectives. Companies must eliminate unnecessary data from their data centers and clouds to reduce dark data’s environmental impact. By appropriately handling dark data, enterprises have a great chance to lower their carbon footprint, adhere to industry environmental requirements, and achieve sustainability goals that are increasingly relevant to various stakeholders. Managing and protecting dark data Dark data compromises an organization’s compliance and security. Here is how data managers can better identify, manage and protect dark data within their company: Proactive Data Management For enterprises to gain visibility into their data, manage data-related risks, and decide which data to keep versus delete before a critical security event occurs, data officers must adopt a proactive data management mindset. Two tactics data managers can implement to establish a proactive mindset are data mapping and data minimization. These tactics can discover all sources and locations of collected and stored data, reduce the amount of data being stored, and confirm that retained data is directly related to the purpose for which it was collected. Using Technology Advancements Businesses can benefit from technological improvements as well. Large pools of untagged, unstructured data may be efficiently identified, managed, and protected thanks to Artificial Intelligence (AI) and Machine Learning (ML), which also play a crucial part in data management procedures.
The ultimate goal is to collect the information, not just the data, at the source (the edge), so that sensitive or risky data is correctly managed and safeguarded regardless of where it resides. This can be accomplished by rapidly scanning, labeling, and classifying information. By identifying vulnerabilities and mitigating risks in this way, transparent AI and ML rules help enterprises gain complete visibility into their data. That is the next frontier. By lowering costs and enabling action through untapped intelligence, properly managed dark data gives enterprises a safer and more compliant future. It also opens up opportunities for organizational optimization and innovation within any company.
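The scanning-and-labeling step described above can be pictured with a deliberately simple sketch. Real products use far richer ML classifiers; the patterns, labels, and example path below are all hypothetical, but the flow (walk a repository, flag files matching sensitive-data patterns, and tag everything unclassified as “dark”) captures the essence of the approach:

import os
import re

# Hypothetical patterns for sensitive content (real classifiers are far richer).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_file(path: str) -> str:
    """Label a file 'sensitive' if it matches a pattern, else 'dark'."""
    try:
        with open(path, "r", errors="ignore") as f:
            text = f.read(65536)  # sample the first 64 KB
    except OSError:
        return "unreadable"
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            return f"sensitive:{label}"
    return "dark"  # stored, but no established value or classification

def scan(root: str) -> dict:
    """Walk a repository and tally classifications."""
    counts = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            label = classify_file(os.path.join(dirpath, name))
            counts[label] = counts.get(label, 0) + 1
    return counts

ROOT = "/var/data"  # example path
print(scan(ROOT))   # e.g. {'dark': 812, 'sensitive:email': 40}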
Until a few weeks ago, the general public was not privy to data showing how many people visit government websites, which websites they visit, and whether they’re using a desktop or smartphone, or an Android or Windows operating system. If you’re wondering, there were 1.4 billion visits to government websites within the past 90 days, and most people were looking for information on taxes, weather and federal jobs. More often than not, they used a desktop computer running Windows 7, but some visitors navigated federal websites from their smartphones and tablet computers. “One of the reactions from people is they just didn’t know the amount of traffic that was coming to government websites,” says Gwynne Kostin, director of the digital government division within the General Services Administration. “People think there is a perception that government is far behind or not providing services.” At any given moment, tens of thousands of people are perusing government websites, and that number is prominently displayed at the top of the analytics.usa.gov website. (At the time this story was written, there were 126,754 people on government websites.) “Just that big number at that time really does show a level of impact about what is happening on government [websites],” says Kostin, one of the pioneers of the Digital Analytics Program within the GSA. The program began in October 2012 as a pilot within the agency’s Office of Citizen Services and Innovative Technologies and has since evolved into the analytics dashboard that citizens can view. The data “comes from a unified Google Analytics profile that is managed by the Digital Analytics Program,” according to one of the federal entities that had a hand in building the site. Development of the new analytics website was a collaborative effort among several entities: the GSA’s internal digital services team, 18F; the Digital Analytics Program; the U.S. Digital Service; and the White House. “It’s a tool for transparency, and it allows the public to get an inside view of how the government works and see what sites people are using,” Kostin says. She expects all agencies will join the program after they work through the onboarding process and align it with their internal timelines. Analytics data for healthcare.gov, one of the government’s more high-profile websites, is not yet publicly available, but Kostin says that information will eventually be included on the new site. A recent GovFresh article notes that "despite a digital strategy issued by the White House two years ago calling for more mobile-friendly citizen services, the top four most-visited federal government websites over the past 30 days fail this test.” In response to questions about the lack of mobile optimization for the sites and how the analytics data would help to usher in changes, Kostin says many organizations are working to introduce mobile-friendly websites and some of those updates will be rolled out in the coming weeks. Making analytics data public empowers citizens to ask tough questions and encourages agencies to make changes, Kostin adds. “There is always a question about how to apply resources.” What’s Next for Analytics.usa.gov? Short term, the analytics program wants to introduce geographic data to the site to show where people are accessing government sites. The site will not collect personally identifiable information, and data will be anonymized, Kostin explains.
According to 18F, which was part of the development team, “individual visitors are not tracked across websites, and visitor IP addresses are anonymized before they are ever stored in Google Analytics. The Digital Analytics Program has a privacy FAQ.”
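The dashboard’s numbers are also published as machine-readable data, so a citizen can pull them programmatically. The sketch below is illustrative only: the exact JSON file name and response structure are assumptions, not details confirmed by the article, so check analytics.usa.gov for the current data paths.

import json
import urllib.request

# Assumed report URL and structure - verify against the live site.
URL = "https://analytics.usa.gov/data/live/top-pages-realtime.json"

with urllib.request.urlopen(URL) as resp:
    report = json.load(resp)

# Print a few of the most-visited pages right now.
for row in report.get("data", [])[:5]:
    print(row)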
BEAST stands for Browser Exploit Against SSL/TLS. It is an attack against network vulnerabilities in TLS 1.0 and older SSL protocols. The attack was first demonstrated in 2011 by security researchers Thai Duong and Juliano Rizzo, but the theoretical vulnerability was discovered in 2002 by Phillip Rogaway. Why do we want to talk about such an old attack technique? According to research done for the 2020 Acunetix Web Application Vulnerability Report, 30.7% of scanned web servers still have the vulnerable TLS 1.0 enabled, which means that they are susceptible to the BEAST attack. This shows how IT security is still a major issue for businesses: no matter how many new security features are introduced in software, old attacks remain a major problem. This situation also applies to SSL/TLS vulnerabilities such as BEAST, BREACH, POODLE, and OpenSSL Heartbleed. How Does the BEAST Attack Work The Transport Layer Security (TLS) protocol is the successor to Secure Sockets Layer (SSL). Both are cryptographic protocols that let you use different cipher suites to encrypt the communication between a web browser and a web server. This makes it impossible for someone to listen in on the communication and steal confidential data. Attackers may be able to tap into the conversation between a web server and a web browser using man-in-the-middle attack techniques. If they do and if there is no encryption, they have access to all the information exchanged between the web server and web browser: passwords, credit card numbers, etc. However, even encryption might have its weaknesses and be broken. This is exactly the case with the BEAST attack. The researchers found that TLS 1.0 (and older) encryption can be broken quickly, giving the attacker an opportunity to listen in on the conversation. If your server supports TLS 1.0, the attacker can make it believe that this is the only protocol that the client can use. This is called a protocol downgrade attack. Then, the attacker can use the BEAST attack to eavesdrop. Technical Details of BEAST The TLS protocol uses symmetric encryption with block ciphers. Symmetric encryption means that the same key is needed to encrypt and decrypt the message. Block ciphers mean that information is encrypted in blocks of data that have a fixed length. If there is not enough data for the last block, that last block is padded. Some popular block ciphers are DES, 3DES, and AES. If the same data and the same key always gave the same encrypted content, an attacker could easily break any encryption. That is why TLS uses initialization vectors. This means that encryption is seeded using random content. This way, if you use the same data and the same key many times, every time you end up with different encrypted content. However, it would not be efficient to use random data to seed every block in a block cipher. That is why SSL/TLS also uses cipher block chaining (CBC). Blocks are chained with one another using a logical XOR operation. In practice, this means that the value of each block depends on the value of the previous block. So, an encrypted value representing some original data depends on the previous block of that data. The Attack Technique The basic principle of breaking codes is: everything can be broken, it’s just a matter of how long it takes. The same principle applies to SSL/TLS ciphers. A good cipher is not impossible to break. It is simply impractical to break – impossible to break in a sensible amount of time using current computing resources.
The attacker could break a block cipher by trying different combinations and seeing if they get the same result with the same initialization vector (which they know). However, they can only check that for a whole block at a time, and a block can have, for example, 16 bytes. This means that for the block to be checked, the attacker would have to test 256^16 combinations (about 3.4 × 10^38) for every block. What the BEAST attack does is make this much simpler: the attacker only needs to guess a single byte at a time. This can be done if the attacker can predict most of the data (for example, HTML code) and needs just one piece of secret information, for example, a password. The attacker can then test the encryption carefully, selecting the right length of the data, so that they have just one byte of information in a block that they do not know. Then, they can test the block for just 256 combinations of this byte. They repeat the process for the next byte, soon coming up with the entire password. Is BEAST a Practical Attack? The difficulty of the attack is why this vulnerability is rarely exploited, despite a third of websites still supporting the vulnerable TLS 1.0 protocol (according to our statistics). However, it is possible, and therefore you should protect yourself against it. How to Discover if Your Web Server Is Vulnerable to BEAST Discovering whether your web server is vulnerable to BEAST is very easy. If it supports TLS 1.0 or any version of SSL, it is vulnerable to BEAST. You can easily discover if your web server supports TLS 1.0 or any version of SSL using Acunetix or manually. The advantage of using Acunetix is that you will also find all the web vulnerabilities that other tools won’t discover. And what’s the point of fixing just one vulnerability and not even knowing about others, which may be just as dangerous? BEAST shows the major difference between web vulnerabilities and network vulnerabilities: network vulnerabilities are very easy to detect even using free tools, and the only way to eliminate them is to upgrade affected software or hardware. Web vulnerabilities must be detected by specialized software like Acunetix, and they can be eliminated by fixing application code. How to Fix the BEAST Vulnerability Originally, the RC4 cipher was recommended to mitigate BEAST attacks (because it is a stream cipher, not a block cipher). However, RC4 was later found to be unsafe. Currently, PCI DSS (Payment Card Industry Data Security Standard) prohibits the use of this cipher. Therefore, you should never use this method to protect yourself from BEAST. Just as with other network vulnerabilities, there is one simple fix for BEAST: turn off TLS 1.0 and older protocols. Here is how you can do it for the most popular web server software. What we recommend is also disabling TLS version 1.1 and leaving just TLS 1.2 running (all major browsers such as Google Chrome, Firefox, and Safari support TLS 1.2). Apache Web Server Edit the SSLProtocol directive in the ssl.conf file, which is usually located in /etc/httpd/conf.d/ssl.conf. For example, if you have:
SSLProtocol all -SSLv3
change it to:
SSLProtocol -all +TLSv1.2
Then, restart httpd. Nginx Edit the ssl_protocols directive in the nginx.conf file. For example, if you have:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
change it to:
ssl_protocols TLSv1.2;
Then, restart nginx. Microsoft IIS To disable TLS 1.0 in Microsoft IIS, you must edit the registry settings in the Microsoft Windows operating system.
- Open the registry editor.
- Find the key HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server
- Change the DWORD value of the Enabled entry to 0.
- Create a DisabledByDefault entry and change the DWORD value to 1.
Repeat the above steps for all versions of SSL and for TLS 1.1 (if you want to go along with our recommendation and disable it, too).
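If you prefer to check a server by hand, you can attempt a handshake pinned to TLS 1.0 and see whether the server accepts it. The following is a small illustrative Python sketch of that idea; the hostname is a placeholder, and note that the OpenSSL build behind your own Python may itself refuse to negotiate TLS 1.0, in which case a dedicated scanner is the more reliable test:

import socket
import ssl

HOST = "www.example.com"  # placeholder: the server you want to test

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
# Pin both ends of the allowed range to TLS 1.0:
ctx.minimum_version = ssl.TLSVersion.TLSv1
ctx.maximum_version = ssl.TLSVersion.TLSv1

try:
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST):
            print("Handshake succeeded: TLS 1.0 is enabled (BEAST-vulnerable).")
except (ssl.SSLError, OSError):
    print("Handshake failed: TLS 1.0 appears to be disabled.")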
Materials science is an interdisciplinary field involving the properties of matter and its application to various areas of science and engineering. It includes elements of applied physics and chemistry, as well as chemical, mechanical, civil and electrical engineering. Materials science focuses on the relationship between the atomic and molecular structure of a material, the properties of the material (such as strength, electrical conductivity or optical properties), and the ways in which the material is manufactured or processed into a shape or product. Quantum technologies will bring new capabilities to the sector as they are adopted in the coming years and decades. Discover how ID Quantique’s infrared single-photon detectors and superconducting nanowire detectors, coupled with the ID900 Time Controller, can be used to provide greatly improved observations.
What is a DDoS Attack and How Can I Prevent One? Have you ever merged onto a freeway only to find it flooded with a sea of red tail lights? Or you’ve run inside a Starbucks for a quick latte, and the order line wraps around the counter? Immediately, you have a change of heart, and all service comes to a grinding halt. A DDoS attack affects your business in the same way, except it’s not only you who’s frustrated, but your customers as well. Your customers share the disappointment, losing access to a product or service – a far bigger deal, with higher stakes than a $5 latte. To make matters worse, DDoS attacks have nearly tripled since 2014, and over 80% of attacks use several tactics, rendering a defense plan harder to deploy for IT teams. Despite the high skill level of modern hackers, a DDoS attack can be a manageable threat with the right strategy. To ensure that your business stays protected, here’s what you need to know about DDoS attacks and how you can prevent one: What is a DDoS Attack? A DDoS attack is a distributed denial-of-service strike that disrupts the way a website functions. Hackers try to flood a server or network with tons of traffic, making online services impossible to reach. Here are three types of DDoS attacks that can occur: - Protocol Attacks: target server vulnerabilities, exposing and attacking resources. - Application Attacks: target applications, infecting software and hard drives. - Volumetric Attacks: overload a network with traffic until it is crippled. Whether it’s a tidal wave of inbox messages or fake contact requests, the sheer number of requests overwhelms your system and causes it to shut down. The bigger issue, on top of the system shutting down, is discovering what happened and where the breach came from. This burdens your team with the task of identifying which piece of equipment or application led to the breach, which unknown location was vulnerable, and how to prevent such events from happening in the future. Am I at Risk? The risk level of a DDoS attack depends on the complexity of your infrastructure. For example, an e-commerce business facing a present-day DDoS attack is subject to all three attack types (volumetric, protocol, and application). The main defense against this type of attack is a firewall, which creates a barrier against malicious threats. This defense method’s downfall is that firewalls are ineffective against a dynamic strike (a combination of attack methods). Arbor’s 12th Annual Worldwide Infrastructure Security Report states that 36% of e-commerce businesses fell prey to DDoS attacks in 2017. Other industries, such as government, financial services, and education, were within a 5% difference of threat levels. In addition, Neustar’s 2016 Worldwide DDoS Attacks and Protection Report claims that 73% of retail businesses suffered from a DDoS attack, with 83% of respondents undergoing many attacks per year. Both tech and financial service respondents had similar results, except roughly 30% of them experienced an attack every month. With the modern savvy that attackers have today, a single defense strategy is no longer a safe option. Security tactics like firewalls, cloud services, and on-premises appliances must work together to offer protection on all fronts. How Can I Prevent an Attack?
Considering the attack types above, here are a few things you can do to make a DDoS attack less likely: Hire a DDoS Protection Service One of the most effective steps you can take to prevent a DDoS attack is to use a dedicated DDoS protection service. By merging outside and in-house resources, a DDoS-protection-as-a-service provider offers a security plan that responds to threats immediately, maintains compliance, and prevents attacks by filtering out malicious traffic and downloads. The only downside to a DDoS protection provider is that services are limited to a specific security threat, rather than a multitude of protection features to counter various attacks. Update Firewalls and Routers As a primary line of defense, you should configure your firewalls and routers with security updates to better detect faulty traffic. With mainstream hardware, you can scan network traffic before it makes contact with a server. In return, data receives an identity that allows IT teams to locate threats and remove them. Although they’re useful, keep in mind that firewalls and routers only identify security threats, whereas extra measures are needed to prevent them from making an impact. Use Cloud Technologies A cloud-based solution gives you more bandwidth and cutting-edge resources to watch your traffic at all times. Also, cloud-based applications sift through data before it lands at a destination, flagging and removing malicious files on sight. IT teams who manage cloud services conduct thorough searches for DDoS attacks and other harmful ploys carried out by hackers. Overall, the benefits that cloud providers offer are versatile, but the cost for businesses to ditch legacy strategies and make a full cloud-based transition is often too high. Hire a Managed Security Services Provider (MSSP) Investing in an MSSP gives you the freedom to focus on parts of your business that drive customer engagement. More importantly, reliable protection means that you can engage with your audience to increase profits, rather than losing viewership to major site failures. With 24/7 support, expertly managed security initiatives, and affordable rates, you can trust that when an attack strikes, someone is there to thwart the blow. Integrative strategies that use anti-virus software, firewalls, secure routers, managed services, and cutting-edge technologies almost guarantee security from the majority of DDoS attacks. New attacks are being generated all the time, and what’s current one day may not be suitable the next. There’s only so much you can do when the technological landscape is always changing, which is why finding the right support is paramount to your success. ArmorPoint protection places threat intelligence at the helm of your business, making DDoS attacks less of a threat. Are you ready to Armor Up? Contact a representative today to learn more. ArmorPoint is a security information and event management solution that provides a cost-effective and reliable way to continually protect your business from emerging threats. Through its customizable service pricing model, ArmorPoint’s cost-effective packages and dynamic levels of expert management support the security strategies of all companies, regardless of available budget, talent, or time. And since ArmorPoint offers 24/7 security support with a team of dedicated specialists, they can provide you with the manpower you need to expertly manage all of your cybersecurity initiatives. See how ArmorPoint can make a difference in your security posture with a risk-free 30-day free trial.
Privilege escalation is a type of network attack used to gain unauthorized access to systems within a security perimeter. Attackers start by finding weak points in an organization’s defenses and gaining access to a system. In many cases that first point of penetration will not grant attackers the level of access or data they need. They will then attempt privilege escalation to gain more permissions or obtain access to additional, more sensitive systems. In some cases, attackers attempting privilege escalation find the “doors are wide open” – inadequate security controls, or failure to follow the principle of least privilege, with users having more privileges than they actually need. In other cases, attackers exploit software vulnerabilities, or use specific techniques to overcome an operating system’s permissions mechanism. We’ll cover the five most common privilege escalation attack vectors, and show specific examples of privilege escalation techniques attackers use to compromise Windows and Linux systems. There are two types of privilege escalation: horizontal, in which attackers take over additional accounts or systems at the same privilege level, and vertical, in which they gain higher privileges such as administrator or root. For attackers, privilege escalation is a means to an end. It allows them to gain access to an environment, persist and deepen their access, and perform more severe malicious activity. For example, privilege escalation can transform a simple malware infection into a catastrophic data breach. Privilege escalation also allows attackers to open up new attack vectors on a target system. When security teams suspect privilege escalation, it is important to perform an in-depth investigation. Signs of privilege escalation include malware on sensitive systems, suspicious logins, and unusual network communications. Any privilege escalation incident must be dealt with as a severe security incident and, depending on the organization’s compliance obligations, might have to be reported to authorities. Privilege escalation attacks typically involve the exploitation of vulnerabilities such as software bugs, misconfigurations, and incorrect access controls. Every account that interacts with a system has some privileges. Standard users typically have limited access to system databases, sensitive files, or other resources. In some cases, users have excessive access to sensitive resources, and may not even be aware of it, because they do not try to gain access beyond their entitlements. In other cases, attackers can manipulate weaknesses of the system to increase privileges. By taking over a low-level user account and either abusing excessive privileges or increasing privileges, a malicious attacker has an entry point to a sensitive system. Attackers might dwell in a system for some time, performing reconnaissance and waiting for an opportunity to deepen their access. Eventually, they will find a way to escalate privileges to a higher level than the account that was initially compromised. Depending on their goal, attackers can continue horizontally to take control of additional systems, or escalate privileges vertically, to gain admin and root control, until they have access to the full environment. Here are the most important attack vectors used by attackers to perform privilege escalation. Single-factor authentication leaves the door wide open to attackers planning on performing privilege escalation. If attackers obtain a privileged user’s account name – even without the password – it is a matter of time before they obtain the password.
Once they obtain a working password, they can move laterally through the environment undetected. Even if the attacker is detected and the organization resets the password or reimages the affected system, the attacker may have a way to retain a persistent presence – for example, via a compromised mobile phone or rootkit malware on a device. This makes it important to thoroughly eradicate the threat and continuously monitor for anomalies. Attackers have many ways to gain access to credentials, from phishing to credential dumping. Attackers can also perform privilege escalation by exploiting vulnerabilities in the design, implementation, or configuration of multiple systems – including communication protocols, communication transports, operating systems, browsers, web applications, cloud systems, and network infrastructure. The level of risk depends on the nature of the vulnerability and how critical the affected system is. Only a small fraction of vulnerabilities allow vertical privilege escalation. However, any vulnerability that can allow an attacker to change privileges should be treated with high severity. See the following sections for examples of vulnerabilities that can lead to privilege escalation on Windows and Linux. Privilege escalation very commonly results from misconfiguration, such as failure to configure authentication for a sensitive system, mistakes in firewall configuration, or open ports. Attackers can use many types of malware, including trojans, spyware, worms, and ransomware, to gain a hold on an environment and perform privilege escalation. Malware can be deployed by exploiting a vulnerability, packaged with legitimate applications, delivered via malicious links or downloads combined with social engineering, or introduced via weaknesses in the supply chain. Malware typically runs as an operating system process and has the permissions of the user account from which it was executed. So there are two directions for exploitation: compromise a privileged account and run malware with its elevated permissions, or run malware from a low-privileged account and escalate the privileges of the malware process itself. Social engineering is used in almost all cyber attacks. It relies on manipulating people into violating security procedures and divulging sensitive or personal information. It is a very common technique used by attackers to gain unauthorized access and escalate privileges. Social engineering is highly effective because it circumvents security controls by preying on human weaknesses and emotions. Attackers realize that it is much easier to trick or manipulate a privileged user than break into a well-defended security system. There are many privilege escalation methods in Windows operating systems. Here is a brief review of three common methods and how you can prevent them. Windows uses access tokens to determine the owners of running processes. When a process tries to perform a task that requires privileges, the system checks who owns the process and whether they have sufficient permissions. Access token manipulation involves fooling the system into believing that the running process belongs to someone other than the user who started the process, granting the process the permissions of the other user. There are three main ways to achieve access token manipulation: duplicating and impersonating an existing token, creating a new process with a stolen token, and creating a token from stolen credentials. In this last method, an adversary has a username and password, but the user is not logged in, so the attacker uses the credentials to create a logon session and token for that user. There is no way to disable access tokens in Windows.
However, to perform this technique an attacker must already have administrative-level access. The best way to prevent the attack is to assign administrative rights in line with the least-privilege principle, regularly review administrative accounts, and revoke access that is no longer needed. Also, monitor privileged accounts for any sign of anomalous behavior. The Windows user account control (UAC) mechanism creates a distinction between regular users and administrators. It limits all applications to standard user permissions unless specifically authorized by an administrator, to prevent malware from compromising the operating system. However, if UAC protection is not at the highest level, some Windows programs can escalate privileges or execute COM objects with administrative privileges. Review IT systems and ensure UAC protection is set to the highest level, or if this is not possible, apply other security measures. Regularly review which accounts are in the local administrator group on sensitive systems and remove regular users who should not have administrative rights. Attackers can perform “DLL preloading”. This involves planting a malicious DLL with the same name as a legitimate DLL, in a location which is searched by the system before the legitimate DLL. Often this will be the current working directory, or in some cases attackers may remotely set the working directory to an external file volume. The system finds the DLL in the working folder, thinking it is the legitimate DLL, and executes it. There are several other ways to achieve DLL search order hijacking, such as replacing the legitimate DLL outright or tampering with the search path. To reduce the risk of a DLL search order hijack, consider enabling safe DLL search mode and auditing for unsigned DLLs loaded from unexpected locations. In Linux systems, attackers use a process called “enumeration” to identify weaknesses that may allow privilege escalation. Enumeration involves gathering information such as the kernel and distribution version, installed software and services, user and group memberships, SUDO rights, and world-writable files and directories. Attackers use automated tools to perform enumeration on Linux systems. You should also use the same tools to pre-empt an attack, by scanning your own system, identifying weaknesses, and addressing them. Below are two specific techniques for escalating privilege on Linux and how to mitigate them. From time to time, vulnerabilities are discovered in the Linux kernel. Attackers can exploit these vulnerabilities to gain root access to a Linux system, and once the system is infected with the exploit, there is no way to defend against it. Attackers typically identify a vulnerable kernel, transfer exploit code onto the target device, compile it, and execute it to gain root. Follow security reports and promptly install Linux updates and patches. Restrict or remove programs that enable file transfers, such as FTP, SCP, or curl, or restrict them to specific users or IPs. This can prevent transfer of an exploit onto a target device. Remove or restrict access to compilers, such as GCC, to prevent exploits from executing. You should also limit which folders are writable or executable. SUDO is a Linux program that lets users run programs with the security privileges of another user. Older versions would run as the superuser (SU) by default. Attackers can try to compromise a user who has SUDO access to a system, and if successful, they gain root privileges. A common scenario is administrators granting access to some users to perform supposedly harmless SUDO commands, such as ‘find’. However, the ‘find’ command contains parameters that enable command execution, and so if attackers compromise that user’s account, they can execute commands with root privileges. Never give SUDO rights to programming language compilers, interpreters, or editors, including vi, more, less, nmap, perl, ruby, python, and gdb.
Do not give SUDO rights to any program that enables running a shell, and severely limit SUDO access using the least-privilege principle. In this article, we were only able to cover a few common privilege escalation attacks. For more attacks and additional details on how to mitigate and detect each attack, refer to the MITRE ATT&CK privilege escalation tactics. Cynet 360 is a holistic security solution that can help with three important aspects of privilege escalation—network security, endpoint security, and behavioral analytics. 1. Network Analytics Network analytics is essential to detect and prevent initial penetration and privilege escalation on your network. The challenge—sophisticated attackers target an organization’s weak points. Following an initial endpoint compromise, the attacker looks to expand their reach and gain privileges and access to other resources in your environment. Their ultimate aim is to access your sensitive data and transfer it to their own premises. Key parts of these attack vectors can only be discovered via the anomalous network traffic they generate. The solution—Cynet Network Analytics continuously monitors network traffic to trace and prevent malicious activity that is otherwise invisible, such as credential theft and data exfiltration. 2. Endpoint Protection and EDR Unauthorized access to endpoints is a common entry point in a privilege escalation attack. The challenge—attackers with strong motivation will eventually bypass the prevention measures on the endpoint. They will use several tools to work undetected until they achieve their desired outcome. The solution—Cynet EDR continuously monitors the endpoints, so defenders can detect the active malicious presence, immediately understand its impact and scope, and respond. 3. User and Event Behavioral Analytics Behavioral analytics can help you detect anomalous activity on organizational systems or user accounts, which may indicate intrusion attempts or privilege escalation. It is also especially important for detecting privilege escalation conducted by malicious insiders. The challenge—attackers with clear objectives in mind, or those with insider privileges, might bypass detection, succeed in compromising user accounts, and use them for data access and lateral movement. The solution—Cynet User Behavior Analysis monitors and profiles user activity continuously, to establish a legitimate behavioral baseline and detect anomalous activity that suggests compromise of user accounts or privilege escalation.
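Returning to the “find” example above: the danger is well documented, because find’s -exec parameter runs an arbitrary program for each matched file, so a user allowed to run “sudo find” can spawn a root shell with a single command such as sudo find /etc/passwd -exec /bin/sh \;. As a defensive counterpart, here is a small illustrative Python sketch that flags sudoers entries granting such programs; the list of risky programs is a hypothetical subset drawn from the advice above:

import subprocess

# Programs that offer shell escapes or command execution (subset,
# mirroring the editors/interpreters named above).
RISKY = {"find", "vi", "more", "less", "nmap", "perl", "ruby", "python", "gdb"}

def audit_sudo_privileges() -> list:
    """List 'sudo -l' entries for the current user that mention risky programs."""
    # Note: 'sudo -l' may prompt for the user's password.
    out = subprocess.run(["sudo", "-l"], capture_output=True, text=True).stdout
    return [line for line in out.splitlines()
            if any(cmd in line for cmd in RISKY)]

for entry in audit_sudo_privileges():
    print("Review this sudoers entry:", entry)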
In this tutorial, we’ll show how to create and use streams in Snowflake. (This article is part of our Snowflake Guide.)
Streams in Snowflake explained
A Snowflake stream—short for table stream—keeps track of changes to a table. You can use Snowflake streams to:
- Emulate triggers in Snowflake (unlike triggers, streams don’t fire immediately)
- Gather changes in a staging table and update some other table based on those changes at some frequency
Tutorial use case
Here we create a sample scenario: an inventory replenishment system. When we receive replenishment orders, we need to increase on-hand inventory. We run this task manually. In actual use, you would want to run it as a Snowflake task on some kind of fixed schedule.
Create the data, stream & tables
In order to follow along, create the orders and products tables:
- Orders are inventory movements.
- Products holds the inventory on-hand quantity.
If you start with 25 items and make three replenishment orders of 25, 25, and 25, you would have 100 items on hand at the end: sum the three orders to get 75 and add that to the starting balance of 25.
Create these two tables:

CREATE TABLE orders (
customernumber varchar(100) PRIMARY KEY,
ordernumber varchar(100),
comments varchar(200),
orderdate date,
ordertype varchar(10),
shipdate date,
discount number,
quantity int,
productnumber varchar(50)
);

create table products (
productnumber varchar(50) primary key,
movementdate datetime,
quantity number,
movementtype varchar(10));

Now, add a product to the products table and give it a starting 100 units of on-hand inventory.

insert into products(productnumber, quantity) values ('EE333', 100);

Now create a stream on the orders table. Snowflake will start tracking changes to that table.

CREATE OR REPLACE STREAM orders_STREAM on table orders;

Now create an order.

insert into orders (customernumber,ordernumber,comments,orderdate,ordertype,shipdate,discount,quantity,productnumber) values ('855','533','jqplygemaq','2020-10-08','sale','2020-10-18','0.10503143596496034','65','EE333');

Then query the orders_stream table:

select * from orders_stream;

Here are the results. You can see that we added one record.
Update on-hand inventory
We have received 65 more items into inventory, so we need to update the inventory balance. The procedure is as follows. Start a transaction using the begin statement:

begin;

Then run this update statement, which basically:
- Sums the orders for each product
- Adds the sum of order quantities to the original inventory balance.
This update statement gets the product numbers from the orders stream table. That’s the table that tells Snowflake which products need to have their inventory updated.

update products set quantity = onhand
from (select distinct p.productnumber, p.quantity as dquantity, o.quantity, p.quantity + o.quantity as onhand
from products p inner join orders o on p.productnumber = o.productnumber) as z
where z.productnumber = (select productnumber from orders_stream);

At this point the orders_stream table is emptied: a stream’s change records are consumed when the stream is read by a DML statement inside a transaction, as it is here. Finish with a commit:

commit;

(Note: Begin and commit make a transaction, which is a logically related set of SQL statements. They lock the tables involved. Without that, you could end up with a mismatched situation, like an incorrect inventory balance because one transaction worked and the other did not.)
Now query orders_stream and you will see that the table is empty.
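As noted above, in production you would normally schedule this refresh as a Snowflake task. Purely as an illustration of another option, the sketch below uses the snowflake-connector-python package to run the same transaction from an external scheduler; the connection parameters are placeholders, and this is not part of the original tutorial:

import snowflake.connector

# Placeholder credentials - supply your own account details.
conn = snowflake.connector.connect(
    user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT",
    warehouse="YOUR_WH", database="YOUR_DB", schema="PUBLIC",
)

REFRESH = """
update products set quantity = onhand
from (select distinct p.productnumber, p.quantity + o.quantity as onhand
      from products p inner join orders o
        on p.productnumber = o.productnumber) as z
where z.productnumber = (select productnumber from orders_stream)
"""

cur = conn.cursor()
try:
    cur.execute("begin")
    cur.execute(REFRESH)  # consuming the stream advances its offset
    cur.execute("commit")
finally:
    cur.close()
    conn.close()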
In 2009, Volodymyr V. Kindratenko, et al., presented a paper on using Graphics Processing Unit (GPU) clusters for High Performance Computing (HPC). In this paper, they note the rise of GPU-based high performance compute clusters, and consider several of the problems, including processing, speed, and power efficiency across GPUs connected to the same cluster. Many of these challenges, it seems, were traceable back to a pair of problems. The first problem is the design of the software itself; building software to run on multiple cores is, to this day, a lot more challenging than it might initially appear. The second is what any network engineer with long experience might expect: the network. The problem the CAP theorem exposes, that there are efficiency and consistency costs to moving data through space, will always rear its head when processing large scale problems across multiple processors connected through some sort of network, even GPUs in an HPC cluster. A more recent paper led by Youngsok Kim considers a series of proprietary connectivity options designed by GPU vendors to reduce the inefficiency when clustering GPUs in this way. These proprietary systems are purpose built for graphics applications, which means they may, or may not, work in other HPC situations, such as neural networks used for deep learning applications. The proprietary interfaces also limit the user to utilizing GPUs from a single vendor, and likely even a single generation of chips from a single vendor. Of course, these vendor limitations are what general purpose networks are designed to resolve. The network world, right now, revolves around Ethernet for data, and Fibre Channel for storage. Ethernet is simple and fast, but to go from the GPU to the physical Ethernet network, a new chip, and control plane, must be inserted along the way—the Network Interface. This is clearly not ideal when dealing with high speed data, as each chip in the middle requires some form of processing, including looking the destination up, imposing headers, switching based on the headers, and then stripping the headers. Data must be carried off the internal PCI bus connecting the GPU to the network interface, then through the physical interface, serialized onto the Ethernet, copied back off the physical wire, and then copied back onto another PCI bus. When a Bus Isn’t a Bus The obvious question here is: why not just use the PCI bus as a connection network? Traditional PCI (such as PCI-X) was designed as a true bus, which means there were parallel wires to carry each bit. PCIe, or PCI Express, is not a bus. Rather it is a network. The figure below illustrates. In order to serialize the data onto a single set of wires, the data must be marshaled, which involves framing the data and placing some sort of destination address on the frame. Unlike Ethernet, however, PCIe does not use Media Access (MAC) addresses to determine the destination of the framed data. Instead, it maps each particular device into a memory space. For instance, if you have 4 devices, and 256 bits of memory (a radical simplification of the real world, just for illustration), you can map the information flowing from the first device into the first 64 bits of memory space, the second into the second 64 bits of memory space, etc. When some other device on the system wants to read from or write to a specific device, it can simply pull from or push to that specific device’s memory space.
The neat property of PCIe is that every device built to run inside any sort of compute platform has a PCIe interface built into it, from memory to network interfaces to… GPUs. Given this, each device already knows how to translate its data to and from the PCIe network, including any required framing, serialization, and deserialization. The problem you will face when using PCIe as a network, however, is the rather interesting form of addressing just described: there are no addresses, just memory ranges.

This suggests a possible solution, similar to the one used in all-optical systems, where the data carried over one wavelength of light can be switched onto another data stream using optical devices to change its color. To switch PCIe frames, then, what you need is a system that can virtualize the available memory space and set up channels between devices by switching the destination memory address on the fly. This would allow a PCIe switch to connect a wide range of devices to one another, where each device has every other device mapped into what appears to be a local memory space. One GPU, for instance, could address a particular piece of information to another GPU by placing it on the PCIe network, using the destination GPU's memory block as a sort of destination address. The PCIe switch could remap the memory from the source GPU's address space to the destination GPU's address space, allowing the two devices to communicate at very high speed across a close-to-native interface. In the GPU use case, this functionality is referred to as peer-to-peer communication.

The Liqid Switch

Liqid makes just such a device: a PCIe switch. This kind of switch can be used to compose a system out of a number of different components, such as shared memory space, GPUs, display outputs, network interfaces, and large-scale storage (such as solid state or spinning drives). Such a system would allow multiple GPUs to communicate with one another, register to register, or through an apparently shared memory space (remember, the memory spaces are virtualized and remapped in the switching process, rather than being directly correlated as they might be in more standard applications). This enables the construction of multicore GPU systems, limited only by the scale and speed of the PCIe switch connecting everything together.

This kind of system may not be as fast as the proprietary GPU interconnections being designed and shipped by GPU designers today, but it is much more flexible, in that a single multicore task can be assigned the "correct amount" of processing power, along with enough memory to carry out the required processing and access to a set of input and output devices. All of this could be done on the fly by configuring the PCIe switch ports into what would look (to the average network engineer) like a set of virtual networks, or VLANs.

This kind of system could provide a huge breakthrough in the ability of even mid-sized companies that do a lot of data analytics to build cost-effective, flexible systems to take on analytics jobs, such as searching for patterns in customer data or doing speech recognition chores. This is an interesting field of engineering, and something every engineer should be keeping an eye on, as the network in the rack might, in the future, be a PCIe bus rather than Ethernet.
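To make the remapping idea concrete, here is a small sketch of a switch-side remap table. Everything here is invented for illustration; an actual PCIe switch performs this translation in hardware at line rate.

```python
# Sketch of switch-side address remapping for peer-to-peer transfers:
# each device sees its peers as windows in its own address space, and
# the "switch" translates a write into the destination's local memory.
WINDOW = 64

class ToyPCIeSwitch:
    def __init__(self, devices):
        self.local_mem = {d: [0] * WINDOW for d in devices}
        self.window_owner = dict(enumerate(devices))  # window n -> device n

    def write(self, src: str, addr: int, value: int) -> None:
        dst = self.window_owner[addr // WINDOW]
        offset = addr % WINDOW
        self.local_mem[dst][offset] = value  # remapped into dst's space
        print(f"{src} wrote {value} -> {dst}[{offset}]")

switch = ToyPCIeSwitch(["GPU0", "GPU1", "GPU2"])
switch.write("GPU0", 64 + 3, 123)  # GPU0 targets GPU1's window
```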
Major Challenges in Managing IoT Data

Fremont, CA: The Internet of Things (IoT) has become one of the most prevalent technologies of modern times, leading toward a digital revolution. IoT collects ample amounts of information, which it passes on as data. The purpose of the technology is to make things like appliances, everyday household items, and industrial machinery smarter with the help of data.

Most online connections are protected using a form of encryption, which implies that the data is locked behind the software and cannot be unlocked without the appropriate authorization. Breaking basic forms of encryption is possible, but time-consuming. Encrypted data is clearly more secure than raw data, yet many companies are still reluctant to use encryption to protect sensitive information such as account details, personal details, and passwords. Worse, IoT platforms can collect, transmit, and use data that is entirely unencrypted, which makes it completely vulnerable. The biggest offenders are platforms or devices created and maintained by inexperienced developers. If a company deals with sensitive information, the improper handling of data can affect privacy for both customers and employees.

Power outages, natural disasters, or emergencies can affect data centers, which disrupts the operation of most IoT solutions. These inconveniences can lead to many different consequences: sometimes systems or devices become unusable, or they work with reduced capacity, and the data collection and reporting process can be interrupted.

Like all electronics, IoT devices require electricity to operate. These devices are programmed to transmit data 24/7, which means constant support from other technologies, including network adapters, gateways, and more. Besides electricity, data requires physical storage. Even with edge and cloud computing solutions, a remote server connected to the network is used to house the digital content. Servers and data centers consume a massive amount of energy, and data centers require large-scale cooling systems to operate under heavy loads.
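The encryption gap described above is striking because encrypting a payload before it leaves a device takes only a few lines of code. The sketch below uses the Python `cryptography` package's Fernet recipe; the sensor name and payload are invented, and a real deployment would also need key provisioning and rotation, which is omitted here.

```python
# Encrypt an IoT sensor reading before transmission; only holders of
# the shared key can decrypt it. Key management is the hard part in
# practice and is not shown.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in reality, provisioned per device
cipher = Fernet(key)

reading = b'{"sensor": "thermostat-7", "temp_c": 21.5}'
token = cipher.encrypt(reading)   # safe to transmit or store

print(token)                  # opaque ciphertext
print(cipher.decrypt(token))  # original reading, key holders only
```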
What is Deep Learning?

Deep Learning (sometimes known as deep structured learning or hierarchical learning) is part of Machine Learning. It is focused on algorithms, inspired by the structure and function of the brain, called artificial neural networks. The heart of deep learning is using today's superfast computers and masses of data to train large neural networks that, like a brain, can learn over time.

Why use Deep Learning?

If you are interested in AI or Machine Learning, Deep Learning is a vital part of both. The idea of having a super-brain of networks is obviously a contentious, highly complex, and nuanced subject, but whichever way you look at it, Deep Learning has the potential to radically transform how you use your data and infrastructure in every aspect of business.

Latest Deep Learning Insights

Which technology trends will be game-changing for your business in 2019? Read our predictions, in which we look at everything from how Python is tightening its grip on the world of machine learning, to the rise of data-centricity, to new ways to merge BI and Data Science.

It is time to start with data-driven decision making: selling the right products to the right clients, matching resources better, and recognizing trends more quickly. Derive value from your data with Predictive Analytics.

AI and ML are disrupting a number of areas, including point of entry, automated backend processes, and knowledge management. A mix of IT and business decision makers were surveyed on their use of data analytics in order to fully understand the variances in perception and actions between these two pivotal decision-maker groups.

Interested in learning more? Whether you're looking for more information about our fast, in-memory database or want to discover our latest insights, case studies, video content, and blogs, we can help guide you into the future of data.
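For readers who want to see the core idea in code rather than prose, here is a minimal sketch: a tiny two-layer neural network, written with plain NumPy, that learns the XOR function from four examples. It is illustrative only; production deep learning uses specialized frameworks and models with millions or billions of parameters.

```python
# A minimal two-layer neural network learning XOR with plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent update.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
```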
Our data is everywhere. Literally and figuratively. It's simultaneously hopping oceans and crossing borders through wires while being packaged and redistributed among internet service providers, government organizations, retailers, financial institutions, medical institutions, and of course, social media providers.

We're all familiar with data breaches, when this information is intercepted and stolen. But what about a situation where we freely hand our data over and that information is used in ways that we didn't expect? Recent news about UK-based firm Cambridge Analytica's relationship with Facebook has generated headlines that have stirred a tumultuous conversation about the responsibilities of social media companies, app developers, and individual users in ensuring the honest use of data. In 2014, Cambridge Analytica gleaned data through a third party quiz app on Facebook, including data about the users' friends, who themselves had never granted access. And, while Facebook has since updated its rules to prevent this, many third party apps are still collecting important personally identifiable information willingly, though perhaps unwittingly, from users. Identity protection begins with an individual's commitment to making thoughtful choices about where they use their data and to understanding some of the potential consequences.

What is a Third Party App?

Facebook and other social media sites use an army of "third party" apps to help expand the platform's ability to engage their users. In fact, they've been an essential part of what has made many of these social media giants, well, giants. Third party apps can make log-in processes for regularly-visited websites easier or help personalize the content you see on Facebook and other websites. Many appear as quizzes or games that request access to photos, timeline history, email addresses, friend lists, and current location. Some of the most recently popular apps allow users to compare their profile image to famous works of art. Perennial favorites are quizzes such as "Where Should You Live?", "Which Hogwarts House Do You Belong In?", and "What's Your True Personality Type?" Users agree to terms of service up front (many of which are now receiving new scrutiny). Some apps provide services, such as grocery delivery and music downloads, and many of these have access to your financial information.

Where Does My Third Party App Data Go?

According to Facebook's guidelines, a third party app can only collect data that it uses directly for the purpose of the app itself. Each app must undergo a compliance review, but this leaves a potential gap in interpretation and the opportunity for abuse. In some cases, answers to online quizzes may be used by identity thieves to secure valuable data that can unlock sensitive accounts. Companies not caught early by Facebook can sell or illegally distribute personal data. According to Top10VPN's Privacy Central, Facebook account details will fetch around $10 each on the dark web, where identity thieves buy and sell the information used to commit fraud.

What Can You Do?

Review your current third party apps. Facebook offers a "privacy checkup" tutorial that assists in reviewing the settings that allow access to your data.
You can access it on your mobile device by:
- Navigating to the Main Menu (the three lines at the bottom right hand corner of the screen)
- Selecting Account Settings
- Selecting Privacy
- Selecting Check a Few Important Settings
- Clicking Continue
- Adjusting information about posts and profile information
- Reviewing your app settings and removing any apps that you no longer need

You can access what information apps can see from a desktop here, or by:
- Navigating to the Main Menu (the triangle at the top right hand corner of the page)
- Selecting Settings
- Clicking Apps

It's important to note that just because you remove an app from Facebook, it doesn't automatically delete any information the app has already collected. From this page, you can update the personal information that apps can see.

Facebook continues to review and refine its data security and integrity policies. While your connections on Facebook can no longer grant a third party access to your data, you still can, and that has risks. Being proactive is an important first step in identity protection. However, nothing is ever fully secure. Our identity monitoring services scour the deep and dark web looking for these stolen personal data points and alert you at the first sign of potential danger. This early warning allows you to take action quickly, whether that means changing passwords or reviewing forgotten accounts for suspicious activity, and reduces your risk of costly, time-consuming identity theft issues.
Originally developed by Google and now maintained by the Cloud Native Computing Foundation, Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized workloads, services, and applications across host clusters. Kubernetes organizes application containers into pods, which run on nodes (physical or virtual machines) grouped into clusters. Each cluster is controlled by a master (control plane) that coordinates cluster-wide tasks such as scaling and updating applications.

Named Siloscape by Unit 42 researcher Daniel Prizmant, the malware is the first known to attack Windows containers. It exploits known vulnerabilities in web servers and databases with the ultimate goal of compromising Kubernetes nodes and installing backdoors. "Siloscape is a highly obfuscated malware that attacks Kubernetes clusters through Windows containers. Its main purpose is to open a backdoor in poorly configured Kubernetes clusters in order to launch malicious containers," explained Prizmant.

According to Unit 42 researchers Ariel Zelivansky and Matthew Chiodi, until recently their colleagues had recorded malware attacking only Linux clusters, due to the prevalence of that platform in the cloud. And while most cloud malware is designed to mine cryptocurrencies or carry out DDoS attacks, Siloscape has a different purpose. First, it evades detection far better; second, its main task is to install a backdoor that opens the way for the compromised cloud infrastructure to be used for malicious actions such as the theft of credentials and personal data, ransomware attacks, and even supply chain attacks.
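Since Siloscape preys on poorly configured clusters, one basic hygiene step is auditing for over-privileged workloads. Below is a hedged sketch using the official Kubernetes Python client to flag containers that run privileged. It assumes a working kubeconfig, and it checks only one of many misconfigurations that matter; it is an illustration, not the method described by Unit 42.

```python
# Audit sketch: list pods whose containers run privileged, a common
# misconfiguration that widens the blast radius of container malware.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for c in pod.spec.containers:
        sc = c.security_context
        if sc is not None and sc.privileged:
            print(f"privileged container: {pod.metadata.namespace}/"
                  f"{pod.metadata.name} -> {c.name}")
```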
DDoS: The Past is the Future

Distributed denial-of-service (DDoS) attacks are one of the oldest weapons in the hacker's arsenal, nearly as old as the Internet itself. The U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency defines DDoS as the following: "A denial-of-service (DoS) attack occurs when legitimate users are unable to access information systems, devices, or other network resources due to the actions of a malicious cyber threat actor. Services affected may include email, websites, online accounts (e.g., banking), or other services that rely on the affected computer or network. A denial-of-service condition is accomplished by flooding the targeted host or network with traffic until the target cannot respond or simply crashes, preventing access for legitimate users. DoS attacks can cost an organization both time and money while their resources and services are inaccessible."

DDoS attacks come in many flavors; one report counted 26 kinds, although these can be organized into three general categories:
- Volume-based (volumetric) attacks that overwhelm the target network's bandwidth
- Protocol attacks that exhaust a server or firewall
- Application layer attacks that attack a specific application rather than the entire network

Motivations for such attacks vary, from hacktivist protests of political, social, or economic initiatives to financial gain. In the past, DDoS attacks were often carried out by hacktivists and other parties with grievances and agendas, but ransom DDoS attacks are growing increasingly common, with malicious actors threatening companies and organizations with attacks if they don't pay up.

DDoS attacks continue to evolve, becoming more sophisticated and complex every year. Indeed, attackers can now use malware to create global networks of enslaved devices, or bots, which they can use to launch massive DDoS attacks on unsuspecting victims. Because existing security measures cannot prevent all DDoS attacks, you always need to prepare for new threats.

A time-honored tradition

The first recorded DDoS attack took place in 1996, when a hacker used a spoofed IP address to overwhelm the server of Panix, New York's oldest commercial internet service provider. Fake packets flooded the company's server, rendering it unable to process legitimate traffic. Some 36 hours later, a global network of internet specialists was able to regain control of Panix, but a tradition was born.

In fact, the origins of DDoS attacks are even older. Way back in 1974, a 13-year-old high school student by the name of David Dennis successfully shut down 31 PLATO terminals at the Computer-Based Education Research Laboratory (CERL) of the University of Illinois Urbana-Champaign with some mischievous programming, reportedly as a well-intentioned experiment. Then there was the Morris worm of 1988, when Cornell University graduate student Robert Morris released into the wild a self-replicating program in a well-intentioned but ultimately destructive attempt to bring existing network weaknesses to attention. According to the U.S. government, the Morris worm resulted in anywhere from USD 100,000 to USD 10,000,000 in damages.

From humble beginnings, though, mighty cybercrime weapons grow. According to one report, there were 5.4 million DDoS attacks in the first half of 2021 alone.
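Defenses differ by category. Against application-layer floods, one classic building block is per-client rate limiting; the sketch below is a minimal token-bucket limiter in Python. It is purely illustrative, with invented rate numbers; production DDoS mitigation happens in dedicated network hardware and edge services, not in application code like this.

```python
# Minimal per-client token-bucket rate limiter, the kind of primitive
# application-layer DDoS mitigations build on.
import time
from collections import defaultdict

RATE = 10.0    # tokens refilled per second per client (invented)
BURST = 20.0   # bucket capacity, i.e., the maximum burst size

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow(client_ip: str) -> bool:
    b = buckets[client_ip]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True
    return False   # drop or challenge this request

# A flood from one address is throttled once the burst is spent.
for i in range(25):
    print(i, allow("203.0.113.9"))
```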
In recent years, some of the world's biggest companies have fallen victim to DDoS attacks, including major global internet companies and financial institutions. One attack on internet giant Google clocked in at 2.54 Tbps, with Google warning that such attacks are only likely to increase as the internet itself grows. And you don't need to be a criminal mastermind to launch a DDoS attack, a factor in their popularity with malicious actors. In fact, a recent report by the UK National Crime Agency warned that children as young as nine were launching DDoS attacks against their own school networks.

Cloudbric ADDoS: Edge computing to the rescue

Cloudbric ADDoS: Advanced DDoS Protection is a cutting-edge DDoS attack protection and mitigation service. Cloudbric ADDoS leverages decentralized edge locations closest to the client, rather than centralized cloud infrastructure, to provide a more effective, efficient defense against the newest DDoS attack patterns. With the fastest response time, largest capacity, and affordable costs, Cloudbric ADDoS is the optimal choice against organized DDoS attacks.

By processing huge volumes of traffic, Cloudbric ADDoS mitigates and resolves large and complex DDoS attacks. By mitigating attacks before they reach the application, the solution minimizes the impact on web services. Cloudbric ADDoS responds to a full range of DDoS attacks, from frequent DDoS attacks to multi-vector attacks and application attacks. Learn more here.
The Internet of Things (IoT) is a concept almost as loose in definition as the two terms that make it up: "Internet", which has come to denote, in whole or in any part, the vast system of electronic communications that now connects nearly every area of human endeavor; and "Things", which could only be more inclusive if we broadened the term to the Internet of Anything. Perhaps that is coming.

Gartner defines the IoT as "…the network of dedicated physical objects (things) that contain embedded technology to sense or interact with their internal state or external environment. The IoT," it explained, "comprises an ecosystem that includes things, communication, applications and data analysis." McKinsey Global Institute simplifies this a little to say the IoT comprises "sensors and actuators connected by networks to computing systems". The International Data Corporation (IDC), meanwhile, identifies each of these sensors and actuators as "a uniquely identifiable device with its own IP address that connects over a network." In short, the Internet of Things is, well, all of the things connected to and by the Internet. The implications, however, are not so simple.

Take, for example, the connected "smart" office, says Lance Spellman, president at Workflow Studios. Many of these things have long been connected to the Internet: the phones, the computers, the alarm system. What is changing is that these things are now also connected to each other through the Internet. The phone system now ties in with the CRM software, which is now online (SaaS). The alarm system, lights, window blinds, entry systems, and thermostats now communicate with your smartphone, allowing you to control them remotely, or allowing them to control each other, so that when motion sensors or the keyless entry system detect someone in the building, the A/C kicks on and the lights wake up. Your printers and HVAC system now tell you when they need maintenance and, in some cases, schedule their own resupplies. Even your planters have something to say, letting you know when your office plants need water.

And this is just in the office. In the warehouse, the Internet of Things is helping to keep track of inventory, track orders, and identify and schedule maintenance for faulty equipment. According to McKinsey, this last has been known to reduce maintenance costs by as much as 40% and cut unplanned downtime in half.

On the customer end, devices are making it possible to track customer behavior in almost invasive detail. The data gathered at the point of sale and through apps, in-store monitoring systems, and inventory management devices enables companies to predict the buying habits of individual customers and make super-personalised suggestions. What's more, companies are able to see clearly how a product is being used, and even how people are trying but failing to use a product, giving unprecedented insight into how to make products better fit customer expectations. Experts in every industry are noting the difference this makes in product development. They're all saying it: a product is no longer just a product. As more and more Things get connected, products are becoming services. Where a thermostat used to be something you bought at the hardware store and used until it stopped working, today's thermostat comes with software, and with the software a support team. Things that used to get outdated regularly can now be upgraded with a simple software update.
In the next few years, customers will come to expect some level of connectivity in almost everything they buy, whether it be the ability to scan a food item for nutritional information or a function that allows their running shoes to send a notification when it's time to get a new pair. Businesses will have to jump on this bandwagon from both sides: equipping their products with the connected services customers will come to expect, and outfitting their offices, stores, and warehouses with the connected equipment that will help them keep up.

There are a few key areas of concern that seem always to come up whenever we talk about the onward rush of the Internet of Things.

Security

More connected devices inevitably result in what is referred to as an increased "attack surface". Where before, a hacker had only a few points of entry to access your company network, poorly managed connectedness could mean that they can now gain access through your potted plant or your vending machine.

Privacy

While privacy and security generally have a close relationship, in the case of the IoT, the concern seems less to be about the security of private information gathered by connected devices than about the gathering of that information to begin with. Suddenly, an office can monitor the work (and other) habits of the staff. Apps track every move a customer makes in a store. The potential for devices and sensors to gather data is almost endless, and some folks are not so comfortable with this.

Data Storage and Analysis

For those who are comfortable with the gathering of these masses of data, there arise the conjoined twin problems of where and how to store it and how to analyse it. After all, what better way to help folks get comfortable with the collection of personal data than to show them how it is being used to enhance their personal experience.

Interoperability

As it stands, there are already hundreds and thousands of devices, sensors, gizmos, gadgets, and thingamajigs that connect to the Internet. Unfortunately, almost every one of them currently requires its own app or other software. This is manageable up to a point, but opening 7 different apps just to shut down the office for the day gets tiresome and quickly starts to negate any added efficiency being connected may have afforded. As the Internet of Things starts to become just "things", we will need to find ways to plug these things into a dashboard of some kind that will make it easier for us to control them and enable them to interact optimally.

Energy and Bandwidth Needs

We are already seeing how a world of connected devices is affecting our connectivity and energy needs. From coffee shops to airplanes, outlets at every seat and Wi-Fi everywhere are becoming standard amenities. Datacenters are already consuming enough energy to power a large city. The question then arises: as all the "Things" get plugged in and hooked up, how will energy and bandwidth requirements be affected, on both the wide and individual scale?

Reliability

If your entire office functions on connected devices, what happens when you lose power or, heaven forbid, Internet service? Cut lines, bad weather, provider outages: how do you protect a connected business when so much of it depends on things outside your control?

A.I. Armageddon and a Robot-Controlled Dystopian Future

We'd like to say "nobody's saying it but everybody's thinking it" and then laugh it off. But the truth is, people are saying it, and they're pretty serious.
As we give up more and more control of the basic functions of our daily lives to software and computer-controlled devices, how do we know when it becomes too much? Where do we draw the line between automated efficiency and the kind of lazy complacency that stifles creativity and mires innovation?

The author of this blog is Lance Spellman, president at Workflow Studios. Comment on this article below or via Twitter: @IoTNow_ or @jcIoTnow
Initially introduced on computers as a replacement for LPT, PS/2, and serial ports (used by peripherals such as printers, scanners, and mice), USB ports and USB devices have now been part of our lives for more than 25 years. They are the standard when it comes to charging mobile and portable devices; we can find USB charging ports and stations pretty much anywhere, from a Starbucks to an airport. USB memory (flash) sticks have largely replaced writeable discs as the medium to exchange all sorts of documents quickly, easily, and, to a certain extent, safely between devices. However, one must be careful what and where they "plug in," as USB remains one of the easiest ways for hackers to get physical access to a system with the purpose of hacking it.

Don't Use USB Just Anywhere

Although most charging stations or ports you find in public places are legitimate, some could have been tampered with to damage your device, or could even include a micro-computer (e.g., a Raspberry Pi) that would attempt to connect to your device and clone or retrieve data from it. To avoid this, the best option remains using your own USB charger and plugging it into a power socket.

Ensure the USB is Trustworthy

The same goes for connecting a USB stick or HDD to a computer to copy or download documents. Make sure you know who and where the USB device comes from and what it supposedly contains. Ensure your device has some kind of virus/malware protection that will scan through it, looking for viruses, trojans, or any other threats.

These attack vectors have been used in Hollywood as plot devices for movies and shows, for example, in a Mr. Robot episode where the protagonist drops USB sticks in the parking lot of a police precinct with the goal of infiltrating the police department to alter prison records. The idea is that someone at the precinct would eventually pick up a USB stick and, out of curiosity, plug it into his computer. Like all great plot devices, the protagonist's plan works and the records are altered. In real life, hopefully, the police department's security software would detect the malware in time, before an infection occurs.

Ideally, a dedicated computer, isolated from your local network, could be used to first plug in an unknown USB device and scan it. Then, if safe, documents could be copied, via another trusted device, to your computer. This may seem impractical, but it does ensure that your main device does not fall victim to a USB kill device, which resembles a regular USB stick but sends high-voltage power surges into the device it is connected to, likely damaging hardware components. These precautionary steps might sound like too much for a simple, seemingly innocuous USB key, but the hacking potential is well-established.

Protect Your Devices

One of the best measures you can take to ensure your data is not compromised is to secure your devices. From an enterprise perspective, you want to prevent any potential unauthorized access to any device, whether personally owned or corporate-owned, depending on where the corporate data resides. Your UEM software should already include some USB-related policies to control how the USB port can be used.
Some examples include:
- Disable the USB port when the device is locked
- Require the device password when connecting to a computer
- Only plug in trusted USB sticks
- Be wary of USB cords with an unnecessarily large hood
- Disable file transfer using Media Transfer Protocol (MTP)
- Disable USB OTG (host storage) so no external storage can be mounted

If you have any questions or concerns about how to improve your security posture, please feel free to contact us.

(C) Rémi Frédéric Keusseyan, Global Head of Training, ISEC7 Group
Autonomous vehicles are coming. Former Transportation Secretary Anthony Foxx told The Verge last year that by 2021, "we will see autonomous vehicles in operation across the country in ways that we [only] imagine today … Families will be able to walk out of their homes and call a vehicle, and that vehicle will take them to work or to school." Research firm IHS Automotive predicted last year that nearly 21 million autonomous vehicles will be sold annually in 2035, and that nearly 76 million vehicles with some level of autonomy will have been sold by then.

What role will the federal government have in shaping that world? According to a recent report from consultancy Deloitte, a potentially big one, depending on how agencies approach the technology. Agencies will need to interact with states, cities, private companies, academics, and others as new mobility models are developed. The report from the Deloitte Center for Government Insights, "Governing the Future of Mobility," highlights three main ways in which the government has a stake in shaping this debate:

- As policymakers and regulators, agencies can ensure public safety and bolster cybersecurity as companies and state and local governments navigate the world of autonomous and semiautonomous transportation. "The policies and regulations the federal government creates and implements can have make-or-break impacts on the maturation of mobility innovations," the report states.
- As researchers and developers, they can help foster technological innovation, Deloitte notes, and federal research dollars can significantly influence the market. Researchers can also "serve as important arbiters balancing technological development with public safety and security as citizens adopt these new technologies."
- As end users, agencies can "improve government-operated vehicle fleets, invest in new related infrastructure, and spur state and local adoption of autonomous vehicles, shared mobility, and other new types of travel through their procurement decisions," the report adds.

The Role of Regulation for Autonomous Vehicles

The government has been thinking about these ideas for several years. In September 2016, DOT released the Federal Automated Vehicles Policy, a set of guidelines for carmakers pursuing autonomous vehicle initiatives. "We envision in the future, you can take your hands off the wheel, and your commute becomes restful or productive instead of frustrating and exhausting," Jeffrey Zients, then director of the National Economic Council, told The New York Times, adding that highly automated vehicles "will save time, money and lives."

"DOT decided to put out a living document to start creating a framework to understand the technology," Vinn White, a specialist leader at Deloitte Consulting and a former DOT deputy assistant secretary, tells GCN. DOT Secretary Elaine Chao is expected to release a revised version of the autonomous vehicle report in the next few months, based on comments received from industry and government stakeholders, White says. Deloitte hopes to shape the conversation leading up to that release.

The Deloitte report agrees with Zients' assertions, noting that auto accidents killed more than 35,000 people in the United States in 2015 and left another 2.44 million with injuries, figures autonomous vehicles could reduce. Fewer crashes also mean lower costs across the board, the report adds.
The National Highway Traffic Safety Administration estimates that the price tag for motor vehicle crashes (including medical, legal, emergency services, and insurance costs, plus lost productivity) reached $277 billion in 2010.

Federal regulators and Congress will need to address numerous factors related to automated vehicles, including vehicle safety standards, liability, data management and privacy, and, of course, the cybersecurity of such vehicles and infrastructure. Another big wild card is connectivity and interoperability, including how the Federal Communications Commission will govern the 75 megahertz of wireless spectrum in the 5.9 gigahertz band reserved for connected vehicle-to-vehicle communications. Such spectrum could be used for new crash avoidance technology, which could potentially address 81 percent of crashes involving unimpaired drivers, the report notes.

The report recommends that agencies consult with industry and state and local regulators "to make sure that the resulting rules are relevant to states and cities, and to avoid creating a patchwork of contradictory regulations that prevent progress and ultimately hurt consumers." Such regulations also should be flexible, "allowing for timely exceptions so they do not inhibit advances in technology." Regulators also should revisit and refine rules often and remain technology neutral, the report recommends.

Researchers Can Shift the Market for Self-Driving Cars

Federal researchers and those agencies that provide grants can also shape the landscape, the report notes. Deloitte says the government should incentivize the private sector and provide seed money "to engage and create spillover benefits that far exceed the initial outlay." For example, DOT's Federal Highway Administration's Intelligent Transportation Systems Joint Program Office used its own funding to "encourage pilot efforts in advanced mobility through competition and public-private partnerships when it created the Smart City Challenge," which ultimately awarded the prize to Columbus, Ohio, the report notes.

Government researchers can also focus on areas overlooked by the private sector, the report notes, including how autonomous vehicles could affect rural parts of the country. Federal researchers can explore the societal implications of self-driving vehicles and help identify the unintended consequences of such a shift in transportation. For instance, members of five labs funded by the Energy Department "are studying how to make sure that autonomous vehicles — whose 24/7 availability might increase overall road miles traveled — don't significantly increase energy consumption." The Energy and Transportation departments recently initiated a new collaboration within the National Renewable Energy Laboratory "to accelerate research, demonstration, and deployment of innovative transportation and alternative fuel technologies."

Federal Buying Power Could Reshape the Autonomous Vehicle Market

The government is the nation's largest employer and operates some of the world's largest vehicle fleets: more than 600,000 non-tactical vehicles, the report notes. For example, the Postal Service owns nearly 228,000 vehicles, and the General Services Administration leases more than 200,000 vehicles to agencies.
Given that buying power, the report says the government "has an opportunity to support industry sales while shaping the evolution of offerings and solutions by making clear their criteria for purchase." Additionally, by testing and deploying autonomous vehicles, agencies "can send markets and consumers a powerful vote of confidence in these systems' viability and reliability."

Moving forward, the report says, agencies "should consider ways to modify procurement processes to accommodate the fast pace of change" in smart transportation technology. Shorter leases could be a possibility, or modular systems that make upgrades easier. Agencies should also explore creative financing options, such as Mobility as a Service, cost sharing, or other approaches that could allow them to "introduce new technologies without requiring a huge up-front investment." For example, a ridesharing service for federal employees could be created instead of assigning cars to individuals, or charging stations could be set up for federally owned electric vehicles that citizens would pay to use.
The company in question is NuTec Energy, serving the oil and gas industry with seismic imaging services. Based in Houston, Texas, NuTec employs 35 staff. In early 2000, the company struck a deal with IBM to develop a massively parallel supercomputing system capable of dealing with the ever-increasing demands of seismic signal processing for oil and gas industry applications.

The system initially consisted of 3,000 Power3/3+ CPUs, with AIX on each server and each CPU running its own analysis. The Network File System (NFS) file server utilized two IBM "Shark" units connected to three B80 servers, with shared file access for all CPUs. By 2003, however, the system was not keeping up with the demands being made on it, and so a project was established to specify a replacement. According to Sampath Gajawada, manager of software development at NuTec Energy, "The target was a super-scalable SAN – a high-performance, single image storage environment using Intel, Linux, Fibre Channel and Ethernet."

He defined several key objectives for the SAN:
– Software tuned to be latency-tolerant and massively parallel, with buffered asynchronous communication and I/O
– High I/O bandwidth (>500 processing nodes)
– High computing power (processing power >2 teraflops)
– A large flat file system (10-100TB), with easy storage management
– Cost effectiveness, with a balanced price/performance ratio and scalability at low incremental cost

The main issues with the incumbent UNIX system were the high cost of the proprietary software and its associated support and management, barely adequate computing power and bandwidth for some of the processing requirements, and a bottleneck on the storage NFS. "The existing system just couldn't cope with the demands of our Depth-domain Analysis and Time-domain Analysis," said Gajawada. "We had reached the stage where business requirements were forcing us to reconsider our entire system. We looked at all the alternatives, and settled on a combination of Intel and Linux."

The Switch

NuTec adopted Minneapolis-based Sistina Software's GFS (Global File System) Linux cluster file system. Its cluster nodes physically share storage over Fibre Channel or shared SCSI, and while each node thinks the file system is local, file access is synchronized across the whole cluster. In effect, GFS can pool storage onto cheap, efficient machines. NuTec's system resides on a Fibre Channel SAN infrastructure from LSI Logic for high I/O performance. Processing consists of 350 dual-processor P4-based nodes, providing 750 CPUs running on Linux, each one four times faster per box than the previous AIX processors. The following table, prepared by NuTec, compares the two systems:

One of the main challenges NuTec experienced in the changeover was porting imaging software from UNIX to Linux. Though there were risks involved, the company saw it as an opportunity to reduce costs and management overhead, and it made the transition in just four weeks. As a result, definite cost savings have been achieved. The headlines are 50 percent fewer administrators and a 90 percent reduction in data center space needed, down from 10,000 to just 1,000 sq ft. "The bottom line is overall cost savings of 84 percent, including hardware and software," said Gajawada. "And, as a bonus, a higher adoption of Linux elsewhere in the company as a direct result of this implementation."
Humans and AI: Why AI Won't Take Your Job

Could you do your job without a computer? As a child in the 1970s, I was told that computers would take all of our jobs. Yet here I am, working in a career that wouldn't exist without computers. Most modern jobs require computers for emails, report writing, or videoconferences. Rather than replacing our jobs, computers have created new jobs and made existing jobs more human-centric, as we delegate tedious mechanistic tasks to machines.

I love watching the movie Hidden Figures, showcasing the social and technological revolutions of the 1960s. During the movie's early moments, we are introduced to three computers: Mary Jackson, Katherine Johnson, and Dorothy Vaughan. In contrast to the popular science fiction theme of human-like robots, these computers were humans. Back then, "computer" was a job title. Towards the end of the movie, we see NASA install its first machine computer. But there was a twist to the story: the human computers adapted and taught themselves to become highly valued programmers of the new machines.

AI Hype Versus Narrow AI

The topic of artificial intelligence can be very divisive. Some people think that AI is going to help us solve new problems. Another school of thought believes that AI might one day turn against us. This more pessimistic school of thought is based upon the premise that computers will become more intelligent than humans, making us obsolete and maybe even becoming our masters. As far back as 1965, Irving Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, speculated:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind… Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously."

The singularity is a hypothetical point in time when technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. In one version of this narrative, an AI will enter a "runaway reaction" of self-improvement. Each new and more intelligent generation of AI will appear more and more rapidly, resulting in a powerful superintelligence that far surpasses all human intelligence.

But we are nowhere near ultraintelligence. At present, we have narrow AI. You can train an AI to do one narrowly defined task under certain controlled circumstances. It has no common sense. It has no general knowledge. It has no idea of right and wrong. It does not know when it's getting things wrong. It just knows what it was taught. It's a bit like a sausage machine: perfectly fine as long as you're making sausages with it. Try it out on anything else, and you're just going to make a mess. But the beauty of narrow AI is that many of those tasks are things we were wasting human talent on. For example, I'm old enough to remember when we were not allowed to use calculators in high school.
Calculators were going to destroy our ability to do mathematics. Teachers taught us to use log tables. It's a wonder that all these manual calculations didn't turn us away from maths forever!

Humans have general intelligence, the ability to do many diverse tasks. Narrow AI, however, cannot multitask. Organizations need dozens, if not hundreds, of narrow AIs, each doing a single task and contributing to the whole. Even a single use case could require multiple AIs. For example, a customer churn reduction use case could involve several AIs. One could predict which customers are likely to churn. Another could predict which customers might change their minds if the customer retention team took action. A third could choose the optimal action to take to retain each customer. A fourth could predict which customers are worth keeping, i.e., which customers will be profitable in the future.

There are three reasons why we don't have ultra-intelligent AI:

- Computing power limitations: The complexity of AI models is growing faster than improvements in computer hardware. For example, state-of-the-art language models are increasing in size by at least a factor of ten every year. Yet, according to Moore's Law, computing power doubles only every two years.
- Cost limitations: The cost of training and running AI models increases with their complexity and size, and it is prohibitively expensive to train the most complex models. For example, training a 175-billion-parameter neural network requires 3.114e23 FLOPS (floating-point operations), which would theoretically take 355 years on a V100 GPU server with 28 TFLOPS capacity and cost $4.6 million at $1.5 per hour (a back-of-the-envelope check of these figures appears below). In addition to the monetary cost, there is an environmental cost: researchers estimated that the carbon footprint of training OpenAI's giant GPT-3 text-generating model is similar to driving a gasoline-engined car to the Moon and back. Even if it were technically possible, using current technology to replicate human intelligence would take up more than the world's entire energy budget.
- Technical limitations: The current generation of AI is powered by machine learning, a form of pattern recognition. Narrow AI doesn't understand what it is saying or doing; it is merely following patterns found in data. AI systems still make simple errors that any human can spot. For example, when asked which is heavier, a toaster or a pencil, GPT-3 declared that a pencil is heavier than a toaster. Sam Altman, a leader at OpenAI, tweeted about his AI model GPT-3: "The GPT-3 hype is way too much. It's impressive (thanks for the nice compliments!), but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out."

To move AI forward, we need to find a fundamentally different approach, and at the moment, we don't have any promising ideas of how to do that.
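As promised above, the quoted training-cost figures are easy to sanity-check with a few lines of arithmetic. The constants below are the ones stated in the text; the small gap against the quoted 355 years comes down to rounding in the original estimate.

```python
# Sanity check of the figures quoted above: 3.114e23 floating-point
# operations on a 28 TFLOPS V100 server billed at $1.50 per hour.
total_flops = 3.114e23
flops_per_second = 28e12          # 28 TFLOPS
price_per_hour = 1.50             # USD

seconds = total_flops / flops_per_second
years = seconds / (3600 * 24 * 365)
cost = (seconds / 3600) * price_per_hour

print(f"~{years:,.0f} years on a single server")   # ~353 years
print(f"~${cost:,.0f} to train")                   # ~$4.6 million
```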
The critical reason that AI won't simply replace humans is the well-known economic principle of comparative advantage. David Ricardo developed the classical theory of comparative advantage in 1817 to explain why countries engage in international trade even when one country's workers are more efficient at producing every single good than workers in other countries. It isn't absolute cost or efficiency that determines which country supplies which goods or services. Optimal production allocation follows the relative strengths, or advantages, of producing each good or service within each country, and the opportunity cost of not specializing in your strengths.

The same principle applies to humans and computers. Computers are best at repetitive tasks, mathematics, data manipulation, and parallel processing. These comparative strengths are what propelled the Third Industrial Revolution, which gave us today's digital technology. Many of our business processes already take advantage of these strengths. Banks have massive computer systems that handle transactions in real time. Marketers use customer relationship management software to store information about millions of customers. If a task is repetitive, frequent, has a predictable outcome, and you have the data to reach that outcome, automate that workflow.

Humans are strongest at communication and engagement, context and general knowledge, common sense, creativity, and empathy. Humans are inherently social creatures. Research shows that customers prefer to deal with humans, especially for issues that generate emotion, such as when they experience a problem and want help solving it.

Jobs Versus Tasks

The key reason that an AI will not take your job is that AIs can't do jobs. A job is not the same thing as a task. Jobs require multiple tasks. A narrow AI can do a single well-defined task, but it cannot do a job. Thankfully, the tasks that narrow AI is best at are also the mundane, repetitive, inhuman tasks that humans least like doing. The future of work will be transformed task by task, not job by job. By analyzing which tasks will be automated or augmented, organizations can determine how each job will be affected. Rather than replacing employees, a successful organization will redesign jobs to be human-centric rather than process-centric. The future isn't AI versus humans; it is AI-augmented humans doing what humans are best at.

Humans and AI Best Practices

Fearful employees can be blockers to organizational change. Human-centric job redesign can be a win/win situation for employers and employees. Since narrow AI does tasks, AI use case selection and prioritization are vital to success. With few people having experience in AI transformation, find a partner who will design an AI success plan. They can host use case ideation workshops and educate your employees. The goal is for your organization to become self-sufficient, in control of its AI destiny.

You need to manage all of the AIs deployed across your organization. But with dozens or hundreds of narrow AI models in production, ad-hoc AI governance can rapidly become overwhelmingly complex. Modern MLOps systems and practices can tame this complexity, empowering and augmenting human employees to practically manage your AI ecosystem.
Securing public clouds such as Amazon Web Services (AWS) poses unique challenges for cloud network security, as the physical infrastructure sits in AWS's data centers and is controlled by AWS, not by the customer.

Security in Amazon Web Services, like most public cloud security, operates under a shared-responsibility model. According to AWS, "When you move computer systems and data to the cloud, security responsibilities become shared between you and your cloud service provider. In this case, AWS is responsible for securing the underlying infrastructure that supports the cloud, and you're responsible for anything you put on the cloud or connect to the cloud."

AWS is responsible for protecting the global infrastructure that runs all of the services in the AWS cloud, as well as the security configuration of its products that are considered managed services. Examples of managed services include Amazon DynamoDB, Amazon RDS, and Amazon Redshift. AWS customers, however, are responsible for managing their credentials and user accounts. According to AWS, it is also the responsibility of customers, not AWS, to manage the infrastructure under their control: the services that fall under Infrastructure-as-a-Service (IaaS), such as Amazon EC2, Amazon VPC, and Amazon S3.

Securing Your Workloads and Firewalls in AWS

According to AWS, the IT infrastructure that AWS provides is managed in alignment with security standards and regulations including SOC 1, SOC 2, and SOC 3; FISMA and FedRAMP; PCI DSS; ISO 27001; and many others. This helps ensure cloud compliance and data security. However, security is about more than just compliance.

Amazon VPC "supports a complete firewall solution enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination. Traffic can be restricted by any IP protocol, by service port, as well as source/destination IP address (individual IP or Classless Inter-Domain Routing (CIDR) block)."

Cloud providers' built-in configurations, such as security groups and network ACLs, affect your security posture. The need to protect cloud assets, such as virtual machines, RDS instances, and Lambda functions, leads to network complexity. This increases the likelihood of misconfigurations, and misconfigurations can introduce security risks.

AWS allows integration with next-generation firewalls and intrusion protection systems offered by third-party security vendors, such as Check Point, Palo Alto Networks, and Fortinet, which are an important part of your network security. But when using multi-vendor firewalls as part of your AWS or hybrid environment, managing them through each vendor's standalone management tools creates a fractured and risky environment. Organizations have multiple AWS cloud accounts spread across their networks, including "rogue IT" accounts created without the approval of the IT and security teams. This creates numerous challenges that organizations need to face (check out how one customer was able to use AlgoSec's Security Management Solution to gain control of unauthorized AWS accounts while supporting business agility). Each vendor's security policy management system also fails to provide holistic management or change automation for multi-vendor and multi-cloud deployments. Each firewall vendor may have its own security controls, but how are those controls integrated with your multi-vendor hybrid estate?
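As an illustration of the kind of misconfiguration check discussed above, the sketch below uses boto3 to flag security group rules that are open to the entire internet. It assumes AWS credentials are already configured, the region is arbitrary, and a real audit would cover far more conditions than this single rule.

```python
# Flag security group rules that allow inbound traffic from anywhere.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{sg['GroupId']} ({sg['GroupName']}): "
                      f"{rule.get('IpProtocol')} "
                      f"{rule.get('FromPort', 'all')}-"
                      f"{rule.get('ToPort', 'all')} open to 0.0.0.0/0")
```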
Managing Security in the Cloud and On-Premises Estate

To manage hybrid networks – multi-vendor, public and private cloud, and on-premises – you need a centralized security policy management and automation solution that provides visibility and change automation across the entire network, maintaining a single security model over the whole hybrid environment. This is where AlgoSec can help maintain a strong security posture in Amazon Web Services, other public clouds, and across your entire security estate.

How AlgoSec helps with AWS Security

With the AlgoSec Security Management Suite, including AlgoSec CloudFlow, users get visibility of their entire network estate – on-premises, in public clouds such as AWS, and in private clouds such as Cisco ACI and VMware NSX. AlgoSec addresses AWS security concerns by delivering business-driven security management across on-premises, hybrid, and multi-cloud environments. With AlgoSec, enterprises fend off AWS and other public cloud security threats by maintaining a uniform security policy across their entire network and cloud estates. From a single console, security teams can see across their on-premises and virtual networks and into all of their clouds. They obtain accurate policy change automation across their physical and virtual firewalls as well as their public cloud deployments, via cloud-vendor and third-party controls. Within AWS, AlgoSec CloudFlow lets AWS users manage network security controls, such as security groups, in one system across multiple clouds, accounts, regions, and VPCs.

The AlgoSec approach offers numerous AWS security benefits for the enterprise:
- Central management of the multiple layers of multi-cloud – public and private cloud – and on-premises security controls. Manage network security controls, such as security groups, in one system across multiple clouds, accounts, regions, and VPCs. Leverage a uniform network model and change-management framework that covers the hybrid and multi-cloud environment.
- Automatic discovery, mapping, and migration of application connectivity to Amazon Web Services Security Group rules through easy-to-use workflows. Get a holistic view of all your cloud accounts, assets, and security controls in a single platform. As part of the AlgoSec Security Management Solution, get a full network map of your entire network estate – on-premises as well as public and private clouds.
- A minimized attack surface: before any change is made, all proposed network security policy changes are assessed for risk, to ensure secure network access and avoid application outages. Proactively detect misconfigurations in access and other settings to protect cloud assets, including cloud instances, databases, and serverless functions. Identify risky rules and their last usage date so that you can remove them with confidence, avoid data breaches, and improve your overall security posture.

AlgoSec delivers unified security policy management across traditional and next-generation firewalls deployed on-premises, as well as cloud security controls, to ensure that the entire enterprise environment is always secure and compliant.

Achieving Visibility and Security in AWS and across the Hybrid Network | AWS & AlgoSec Joint Webinar

As enterprises rapidly migrate data and applications to public clouds such as Amazon Web Services (AWS), they achieve many benefits, including advanced security capabilities, but they also face new security challenges.
AWS lets organizations operate applications in a hybrid deployment mode by providing multiple networking capabilities. To maintain an effective security posture while deploying applications across complex hybrid network environments, security professionals need a holistic view and control from a single source. Yet security isn't the responsibility of the cloud providers alone. Organizations need to understand the shared-responsibility model and their role in maintaining a secure deployment. While AWS secures the cloud framework itself, the challenge of using the cloud securely falls to your organization's IT team and CISO. With multiple DevOps and IT personnel making frequent configuration changes, understanding the shared-responsibility model helps organizations maintain visibility and cloud security. In this webinar, Yonatan Klein, AlgoSec's Director of Product, and Ram Dileepan, Amazon Web Services' Partner Solutions Architect, will share best practices for network security governance in AWS and hybrid network environments.

CSA Study: Security Challenges in Cloud Environments

Cloud computing provides improved security, agility, and flexibility. However, integrating this new service into legacy IT environments comes with significant concerns.
Everywhere there are smart devices collecting and sharing data. It is estimated that by 2020, there will be more than 50 billion such devices. Can you imagine the amount of data that will be produced in the upcoming years? According to recent findings by Statista, 50.5 zettabytes of data are predicted to be created by 2021, and Statista's figures chart the volume of data created from 2010 through 2025.

Most of this data is unstructured data of unknown value. Companies deal with terabytes of data, a big portion of which is of low quality. Unlike traditional data, today's data is difficult to fit into databases. Separating good data from bad has become crucial, because poor-quality content can have a negative impact on business analytics. This can be extremely costly for companies, as today data is equivalent to money: the more good-quality data a company has, the more money it is capable of generating. There are a variety of cleansing techniques that every company, whether an enterprise or a startup, is practicing. Big data needs analysis that leaves your business working with only the best-quality information, and let me tell you, it is not easy to segregate this data after analysis.

Some of the benefits of big data analytics are:
- Enhances productivity: Business user productivity increases with big data tools like Spark and Hadoop. These tools allow users to analyze data quickly, which enhances their personal productivity.
- Saves cost: Businesses of all sizes, from large to small, are using big data analytics to cut the cost of their operations.
- Improves decision making: Insight into the data allows companies to grow and outpace the competition. Big data analytics helps analysts make better decisions.
- Better customer service: In contemporary times, customer contact points like social media and CRM systems provide companies with rich information, which can be used efficiently with big data analytics.

Challenges to big data analytics:
- Difficult to store: Big data is so big that it becomes tough to find a secure place to store it.
- Difficult to clean: The huge volume of data is hard for data scientists to clean up, which in turn makes problems such as fraud difficult to detect.
- Security: These issues directly hamper the security of the ever-growing stream of data. The huge amount of data poses a threat to security, and it is tough to manage and secure big data.

There is another technology whose integration can overcome the challenges of big data analytics: blockchain.

Blockchain: a weapon to overcome big data analytics challenges

Before giving you the reasons why blockchain and big data form the perfect relationship, I want to explain the actual meaning and certain benefits of blockchain technology. It is a trending technology that allows storing data with high security. Blockchain technology is being adopted in various industries like healthcare, education, recreation, and many others. According to a survey by Statista, it is estimated that by the year 2022 more than 12 billion dollars will be spent on blockchain programming. The data in the blockchain is uneditable, which drives the risk of theft and fraud down to zero. Let's explore the top benefits that big data analysts can reap.

Benefits of blockchain

The data stored in the blockchain is not centralized, which means that it is not owned by one entity.
Therefore, compromising one entity can never allow someone to steal the data. All kinds and types of data are stored in the blockchain ledger. This is why blockchain development services are being adopted in almost every arena, from psychology to medicine. In the blockchain, it is not tough to trace data: you can easily follow the thread and find its point of origin. The biggest benefit of blockchain is the security that it provides; there is no other technology capable of providing security at this level.

How do these benefits of blockchain serve big data analytics? In a nutshell, the unique advantageous characteristics of blockchain provide clean, fraud-proof data at the end. This ability gives companies a golden chance to get their big data analytics done in an efficient way. Here are the properties of blockchain that make its relationship with big data close to PERFECT.

Transparency. There are many misconceptions about blockchain transparency. Do you know what exactly happens in a blockchain? Two keys safeguard the data: a private key, which is kept secret, and a public key, which is shared with others. To access a blockchain account you need to present both keys, and matching them is the only way to authenticate a user. So, with a big company's public key, you can keep a record of its transactions. This forces top-notch brands to stay accountable and transparent; the transactions you see come from the public addresses of those companies.

Immutability. The word immutability refers to objects that cannot be changed or altered. The data stored in the blockchain is of exactly this kind: it can be viewed but cannot be tampered with. This makes it extremely valuable for the finance industry, and other industries such as education and healthcare also benefit from this feature of blockchain.

Decentralization. Centralized technologies force everyone to interact with one entity, and interacting with that entity is the only way to attain or store any kind of data. In the blockchain, which is completely decentralized, the story is a bit different. For example, when you search for something over the web, you send a copy of your query to a server. In a blockchain network, there is no third-party involvement; it is like sending money to someone without the knowledge of the bank.

Conclusion: integration of big data with blockchain

The combination of these technologies is unbeatable. The requirements and challenges of big data are perfectly met by blockchain technology, with its ability to provide supreme transparency and security. Blockchain developers can integrate big data with blockchain. This is the only way to improve your business analytics.
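To make the immutability property above concrete, here is a minimal, illustrative sketch in Python (a toy, not any production blockchain) showing the hash-linking that makes stored records tamper-evident: each block embeds the hash of its predecessor, so altering one record invalidates every block after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)  # commit to contents + predecessor
    chain.append(block)

def is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False  # block was altered after the fact
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to predecessor is broken
    return True

chain = []
add_block(chain, "sensor reading: 42")
add_block(chain, "sensor reading: 43")
print(is_valid(chain))          # True
chain[0]["data"] = "tampered"   # try to edit history
print(is_valid(chain))          # False: tampering is detected
```

Real blockchains add consensus, digital signatures built on the public/private key pairs described above, and distributed storage on top of this hash-linking, but the tamper-evidence idea is the same, and it is this property that promises analysts a clean, unaltered data trail.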
In an information technology landscape full of platforms, the 'data science platform' is now an official thing – but what is it, or what is one? The curiously named John Snow Labs company is a specialist in Artificial Intelligence DataOps (that's operations with a specific focus on data crunching) and its team thinks it can define what the term actually means. The company says that professionals in healthcare, pharmaceutical, finance and other sectors (presumably legal is in there too) are using data science platforms when trying to extract factual information from long, free-text documents. So can we pin down a more complete definition? According to KDnuggets, "A data science platform is a cohesive software application that offers a mixture of building blocks used for creating many kinds of data science solutions, as well as drawing actionable insight for business processes, products and services." According to Ali Naqvi, data lab product manager at John Snow Labs, "A solid data science platform includes big data integration, data wrangling tools, data discovery and exploration, machine learning algorithms, experimentation tools, team collaboration tools and automated tools to deploy, test and monitor trained models in production. It will have a unified user interface, centralised administration and security controls — and an infrastructure that is highly robust and scalable to meet the demands of enterprise, business-critical needs." The firm emphasises that organisations should use data science platforms to create maturity and discipline around data science as an organisational capability, instead of only a technical skill held by a select few. MarketsandMarkets estimates that the data science platform market will grow from USD 19.58 billion in 2016 to USD 101.37 billion by 2021. Magical analyst house Gartner now uses data science alongside machine learning to create one of its quadrants. As a piece of terminology in the ever-widening lexicon of IT, the data science platform has now come to be.
Feb. 12 — For the first time, scientists have observed ripples in the fabric of spacetime called gravitational waves, arriving at the Earth from a cataclysmic event in the distant universe. This confirms a major prediction of Albert Einstein's 1915 general theory of relativity and opens an unprecedented new window onto the cosmos. Gravitational waves carry information about their dramatic origins and about the nature of gravity that cannot otherwise be obtained. Physicists have concluded that the detected gravitational waves were produced during the final fraction of a second of the merger of two black holes to produce a single, more massive spinning black hole. This collision of two black holes had been predicted but never observed. The gravitational waves were detected on Sept. 14, 2015, at 5:51 a.m. Eastern Daylight Time (9:51 UTC) by both of the twin Laser Interferometer Gravitational-wave Observatory (LIGO) detectors, located in Livingston, La., and Hanford, Wash. The LIGO Observatories are funded by the National Science Foundation, and were conceived, built, and are operated by Caltech and the Massachusetts Institute of Technology. The discovery, accepted for publication in the journal Physical Review Letters, was made by the LIGO Scientific Collaboration (which includes the GEO Collaboration and the Australian Consortium for Interferometric Gravitational Astronomy) and the Virgo Collaboration using data from the two LIGO detectors.

NCSA's Role in the Discovery

Thirty years ago, the National Center for Supercomputing Applications (NCSA) was founded at the University of Illinois at Urbana-Champaign by Larry Smarr based on the premise that numerically modeling scientific problems, such as the colliding of black holes, required high-performance computing to make progress. Smarr's doctoral thesis had itself been on the modeling of the head-on collision of two black holes. In 2014, Smarr was honored with the Golden Goose award, which highlighted the impact his black hole research had on creating NCSA and the NSF supercomputing centers program; that program led to the public Internet revolution via the creation of the NCSA Mosaic web browser, the first browser to offer visual features like icons, bookmarks, and pictures, and to be easy to use. At NCSA, Smarr formed a numerical group, led by Edward Seidel—the current NCSA director. The group quickly became a leader in applying supercomputers to black hole and gravitational wave problems. For example, in 1994 the very first three-dimensional simulation of two colliding black holes that computed gravitational waveforms was carried out at NCSA by this group in collaboration with colleagues at Washington University. NCSA as a center has continued to support the most complex problems in numerical relativity and relativistic astrophysics, including working with several groups addressing models of gravitational wave sources seen by LIGO in this discovery. Even more complex simulations will be needed for anticipated future discoveries such as colliding neutron stars and black holes or supernova explosions. NCSA has also played a role in developing the tools needed for simulating relativistic systems. The work of Seidel's NCSA group led to the development of the Cactus Framework, a modular and collaborative framework for parallel computing which since 1997 has supported numerical relativists, as well as other disciplines, developing applications to run on supercomputers at NCSA and elsewhere.
Built on the Cactus Framework, the NSF-supported Einstein Toolkit developed at Georgia Tech, RIT, LSU, AEI, Perimeter Institute and elsewhere now supports many numerical relativity groups modeling sources important for LIGO on the NCSA Blue Waters supercomputer. "This historic announcement is very special for me. My career has centered on understanding the nature of black hole systems, from my research work in numerical relativity, to building collaborative teams and technologies for scientific research, and then also having the honor to be involved in LIGO during my role as NSF Assistant Director of Mathematics and Physical Sciences. I could not be more excited that the field is advancing to a new phase," said Seidel, who is also Founder Professor of Physics and professor of astronomy at Illinois. Gabrielle Allen, professor of astronomy at Illinois and NCSA associate director, previously led the development of the Cactus Framework and the Einstein Toolkit. "NCSA was a critical part of inspiring and supporting the development of Cactus for astrophysics. We held our first Cactus workshop at NCSA and the staff's involvement in our projects was fundamental to being able to demonstrate not just new science but new computing technologies and approaches," said Allen. Eliu Huerta, member of the LIGO Scientific Collaboration since 2011 and current leader of the relativity group at NCSA, is a co-author of the paper to be published in Physical Review Letters. Huerta works at the interface of analytical and numerical relativity, specializing in the development of modeled waveforms for the detection and interpretation of gravitational wave signals. Huerta uses these models to infer the astrophysical properties of compact binary systems, and to shed light on the environments in which they form and coalesce. "The first direct observation of gravitational waves from a binary black hole system officially inaugurates the field of gravitational wave astronomy. There can be no better way to celebrate the first centenary of Einstein's prediction of gravitational waves. We can gladly say that Einstein is right, and that the beautiful mathematical framework he developed to describe gravity is valid even in the most extreme environments. A new era has begun, and we will be glad to discover astrophysical objects we have never dreamt of," said Huerta. Stuart Shapiro, a professor of physics and astronomy at Illinois, was appointed an NCSA research scientist by Smarr two decades ago. A leading expert in the theory that underpinned the search for gravitational waves, he has developed software tools that can simulate, on NCSA supercomputers like Blue Waters, the very binary black hole merger and gravitational waves now detected by LIGO. Shapiro said he is thrilled by the discovery. "This presents the strongest confirmation yet of Einstein's theory of general relativity and the cleanest evidence to date of the existence of black holes. The gravitational waves that LIGO measures can only be generated by merging black holes—exotic relativistic objects from which nothing, including light, can escape from their interior," said Shapiro. "Work at NCSA helps open windows into the universe," said Peter Schiffer, vice chancellor for research at the University of Illinois at Urbana-Champaign.
“This is a wonderful fundamental discovery, and it’s exciting that the high performance computing capabilities that we developed to address challenges like this one are also being used to solve other significant societal problems.” Black holes are formed when massive stars undergo a catastrophic gravitational collapse. The gravitational field of these ultra compact objects is so strong that not even light can escape from them. Gravitational waves are generated when ultra compact objects—black holes, neutron stars or white dwarfs—are accelerated to velocities that are a significant fraction of the speed of light. Gravitational waves couple weakly to matter, which means that they can travel unimpeded throughout the Universe and that only extremely sensitive detectors such as LIGO can detect them. LIGO research is carried out by the LIGO Scientific Collaboration, a group of more than 1,000 scientists from universities around the United States and in 14 other countries. More than 90 universities and research institutes in the collaboration develop detector technology and analyze data; approximately 250 students are strong contributing members of the collaboration. The LIGO Scientific Collaboration’s detector network includes the LIGO interferometers and the GEO600 detector. The GEO team includes scientists at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, AEI), Leibniz Universität Hannover, along with partners at the University of Glasgow, Cardiff University, the University of Birmingham, other universities in the United Kingdom, and the University of the Balearic Islands in Spain. LIGO was originally proposed as a means of detecting these gravitational waves in the 1980s by Rainer Weiss, professor of physics, emeritus, from MIT; Kip Thorne, Caltech’s Richard P. Feynman Professor of Theoretical Physics, emeritus; and Ronald Drever, professor of physics, emeritus, also from Caltech. Virgo research is carried out by the Virgo Collaboration, consisting of more than 250 physicists and engineers belonging to 19 different European research groups: six from Centre National de la Recherche Scientifique (CNRS) in France; eight from the Istituto Nazionale di Fisica Nucleare (INFN) in Italy; two in the Netherlands with Nikhef; the Wigner RCP in Hungary; the POLGRAW group in Poland and the European Gravitational Observatory (EGO), the laboratory hosting the Virgo detector near Pisa in Italy. The discovery was made possible by the enhanced capabilities of Advanced LIGO, a major upgrade that increases the sensitivity of the instruments compared to the first generation LIGO detectors, enabling a large increase in the volume of the universe probed—and the discovery of gravitational waves during its first observation run. The U.S. National Science Foundation leads in financial support for Advanced LIGO. Funding organizations in Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council, STFC) and Australia (Australian Research Council) also have made significant commitments to the project. Several of the key technologies that made Advanced LIGO so much more sensitive have been developed and tested by the German UK GEO collaboration. Significant computer resources have been contributed by the AEI Hannover Atlas Cluster, the LIGO Laboratory, Syracuse University, and the University of Wisconsin-Milwaukee. 
Several universities designed, built, and tested key components for Advanced LIGO: The Australian National University, the University of Adelaide, the University of Florida, Stanford University, Columbia University in the City of New York and Louisiana State University.
When we hear the term Blockchain, we tend to think about cryptocurrencies, such as Bitcoin, or more recently non-fungible tokens (NFTs). But there are many other use cases for Blockchain technology that have the potential to provide answers to challenges that are very difficult to resolve without it. In manufacturing, for instance, there is a great need for transparency and trust across the entire value chain. There are several areas in manufacturing that can benefit from the trust and transparency provided by Blockchain:

1. Supply Chain Monitoring
Manufacturers need to be able to trust all the components that go into the final product. If the wrong part, or even a counterfeit, is introduced into the product somewhere in the supply chain, then it isn't possible to trust the integrity of the produced goods. Modern manufacturing companies are now reliant on worldwide supply chains, and manufacturing plants must often be placed in locations far removed from natural resources and raw materials. As a product makes its way through the value chain, often crossing international boundaries, it becomes very difficult to track the provenance of materials and manufactured goods. Trading relationships can be strained by a lack of information and visibility, and when a single link in the chain is disrupted, an entire operation's resilience can be called into question. What is needed is an indisputable record of every transaction between every party that participates in the workflow. In this way, trading partners can be comfortable knowing that conflicts can be resolved quickly and fairly. IBM has delivered Blockchain solutions for transparent, resilient supply chains used in vaccine distribution, food, and container logistics.

2. Predictive Maintenance
Using IoT and predictive analytics, manufacturers can monitor service parts throughout the process. This is particularly difficult in very large global supply chains, where parts pass through multiple partners and even national borders. By using Blockchain to help ensure visibility at every step of the way, a global service supply chain can ensure that repairs are made quickly and at the right point in the process.

3. Quality Assurance
To guarantee and document the quality of a finished good, the manufacturing company must be able to account for the quality of all the components or raw materials throughout the production lifecycle. Sharing this kind of documentation across a broad range of partner companies can be very costly, because it normally requires a central IT platform shared by all those companies. Innovations offered by Blockchain provide a less costly way to share this information that all parties involved can trust and rely on.

4. Regulatory Compliance
Regulatory compliance can be a very complex process, making it a prime candidate for Blockchain improvements. One key aspect of compliance is the auditing of change, which is exactly what Blockchain is designed for: every transaction is recorded and documented as to who did what, and when. Regulators could automate the analysis of the audit trail provided by Blockchain, eliminating countless hours of manual work normally required by compliance regimes.

5. Sustainability
One of the biggest hurdles to widespread sustainability is the difficulty of reporting and the lack of trust between organizations that have differing ideas, agendas, and needs. Without Blockchain technology, this communication requires agreement on a central party as an intermediary.
If that intermediary exerts excessive control, then trust is lost, and it becomes very difficult to attract other parties to the collaboration. For a comprehensive analysis of Blockchain's capabilities on sustainability, see this article.

Once Blockchain technology is put into place, there will inevitably be a flood of data made available. Sorting through all this information to foster transparency and provide business insights requires sophisticated analysis capabilities. Artificial Intelligence is the ideal solution, with the ability to leverage machine learning models to return actionable insights and recommendations. The reverse is also true: Blockchain can assist AI by providing a trusted, transparent means to spread the machine learning process across a wide range of devices and applications, rather than relying on a central AI system.

One caveat: manufacturing companies tend to have very long upgrade cycles, and the operational technology in place may not be modern enough to support Blockchain process monitoring. While new manufacturing facilities should be able to incorporate Blockchain with relative ease, replacing outdated equipment could be a much longer road.
Researchers at the University of Liverpool made a splash in the media two weeks ago when they announced that they had demonstrated the first virus to infect a wireless network. In a laboratory setting, the virus, dubbed Chameleon, moved from wireless access point to wireless access point, and while it didn't affect the network, it did report the credentials of connected users. Apparently, however, the virus was not able to infect access points that were encrypted and password protected. So basically what the researchers demonstrated was that vulnerable networks are … well … vulnerable. "First, what they did is theoretical. They haven't proved to anybody that they can do it," noted Martin Lindner, principal engineer in the CERT Division of the Carnegie Mellon University Software Engineering Institute. "What I think they're alluding to is that they can compromise access points themselves. But that would be no different than compromising a PC, a router or any other device on the network. The new part is that they are talking about taking control of a piece of hardware that most people don't really think is worth taking control of." And in any case, Lindner said, the security community is already well aware of the vulnerability of access points. "If I'm the IT guy at an agency, I should have a regimen in place that tracks what access points I own and operate, and I'll be surveying the building on a regular basis looking for things that claim to be my network that I don't know about," Lindner said. "If you are doing your due diligence looking for rogue access points, you have little risk that one of your employees is going to connect to a network you don't control." If there's a lesson to be learned from Chameleon – apart from the obvious one not to assume you're secure on a public Wi-Fi network – it is the importance of implementing end-to-end encryption. "You still might have WPA2 for wireless encryption, but you then would be tunneling a direct path between the client and the server using end-to-end encryption. So even if the guy had control of the access point, the information would still be garbage," Lindner said. Unfortunately, Lindner added, some federal agencies have lagged in implementing end-to-end encryption. "It's probably not as prevalent as it could be," he said. "But it is clearly on the radar." Another thing that would help is adoption of IPv6, which natively supports end-to-end encryption. "There is a push – slow, but it is there – for IPv6," Lindner noted.
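For readers who want to see what Lindner's "tunneling a direct path between the client and the server" looks like in practice, here is a minimal sketch in Python using the standard ssl module; the host name is a placeholder. The point is that WPA2 only protects the wireless hop, while TLS encrypts traffic end to end, so even an attacker who controls the access point sees only ciphertext.

```python
import socket
import ssl

# Placeholder host; any HTTPS endpoint works the same way.
HOST = "example.com"

context = ssl.create_default_context()  # verifies server certificates by default

with socket.create_connection((HOST, 443)) as raw_sock:
    # Wrap the TCP socket in TLS. Passing server_hostname enables SNI and
    # hostname checking, so a spoofed access point cannot impersonate HOST
    # without a valid certificate for it.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        tls_sock.sendall(
            f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode()
        )
        response = b""
        while chunk := tls_sock.recv(4096):
            response += chunk

print(response.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'
```

Everything after the handshake, including the request and response bodies, crosses the compromised access point as ciphertext, which is exactly why end-to-end encryption blunts an attack like Chameleon.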
Wombat released its social engineering training module to defend against social engineering threats, including spear phishing and social media-based attacks. Commonly defined as the art of exploiting human psychology to gain access to buildings, systems or data, social engineering is evolving so rapidly that technology solutions, security policies, and operational procedures alone cannot protect critical resources. A recent Check Point-sponsored survey revealed that 43 percent of the IT professionals surveyed said they had been targeted by social engineering schemes. The survey also found that new employees are the most susceptible to attacks, with 60 percent citing recent hires as being at "high risk" for social engineering. A combination of social engineering assessments, which stage mock attacks on employees for training purposes, and a library of in-depth training modules to educate and reinforce concepts works together to deliver measurable employee behavior change. Employees who fall for mock attacks are highly motivated to learn how to avoid real attacks. The social engineering training module explains the psychology behind these attacks and gives practical tips for recognizing and avoiding them, which employees apply immediately during the training to improve retention. The social engineering training module is the latest module available in Wombat's Security Training Platform, which helps companies foster a people-centric security culture and provides security officers with effective education tools. With the platform, security officers can:
- Take a baseline assessment of employee understanding
- Help employees understand why their security discretion is vital to corporate health
- Create a targeted training program that addresses the riskiest employees and/or most prevalent behaviors first
- Empower employees to recognize potential threats and independently make correct security decisions
- Improve knowledge retention with short interactive training sessions that work easily into employees' busy schedules and feature proven effective learning science principles
- Monitor employee completion of assignments and deliver automatic reminders about training deadlines
- Show measurable knowledge improvement over time with easy-to-read reports for executive management.
I get so, so tired of explaining that Linux isn't that hard. Indeed, if you're reading this on an Android phone or on a Chromebook, congratulations! You're using Linux, and you very well might not have known it. But then there are the Linux distributions that do require expertise to make the most of. Why would you want to go to the trouble? Because you're a programmer, an engineer, or a system administrator who wants to get the most from Linux. Or, you're a power user, and you want to push your computer as far as you can take it. If that's you, then these are the distributions for you.

You knew Fedora, Red Hat's community Linux distribution, would be first on my list. It's the mainstream distro that pushes Linux's limits. It comes powered by the newest Linux kernel and with the latest open-source software. In particular, Fedora is the Linux of choice for programmers. No less a figure than Linus Torvalds uses Fedora for his development work. Need I say more? Sometimes, however, when you're running a leading-edge distro, you can cut yourself. There's a reason why Fedora's also known as a bleeding-edge Linux. On the other hand, Fedora is easy to install and set up. You don't need to be a Linux expert to get up and running with it. For programmers, Fedora also boasts an excellent Developer Portal. It features dedicated guides on developing command line, desktop, mobile, and web apps. The Fedora Developer Portal also comes with an excellent guide for developing hardware devices such as Arduino and Raspberry Pi. Last but not least, it comes with such development tools as the top-notch integrated development environment (IDE) Eclipse for Java, C/C++, and PHP, and Vagrant, a tool for creating reproducible, portable container or virtual machine (VM)-based development environments. Unless you're working on programs for the Debian/Ubuntu family, Fedora should be your first choice for a development operating system. For developers in that group, I recommend the newest version of Ubuntu.

Do you want to set up a Linux desktop to work and look exactly the way you want it to? If that's you, then Arch Linux deserves your attention. With Arch, everything is under your control. That's both the good news and the bad news. While Arch's slogan is "Keep it simple," simple is in the eye of the user. As someone whose first "desktop" was the Bourne shell, it's not that hard. But, for those who didn't grow up with a command line, it's another matter. You see, Arch only comes with a command shell. It's entirely up to you which desktop environment you'll use and exactly how it will be customized. With sweat and toil, you can get it to fit your exact requirements and needs. That's not easy. Even with the help of its excellent ArchWiki documentation site, you're in for a lot of work. But, when you are done, you'll have a unique desktop to call your own.

Or, if that sounds like too much work, you can use Manjaro Linux. This distro takes much of the blood, sweat, and tears out of installing and running Arch. It comes in three main desktop editions: GNOME, KDE Plasma, and XFCE. At the same time, though, if you want to switch Linux kernels, Manjaro is one of the few distros that makes it easy to switch operating system gears. It supports multiple kernels simultaneously. You just reboot your system, make your selection in the boot menu, and you're back to your desktop with a new kernel running underneath. Is this something most people will want to do? No.
But, if you're serious about testing the Linux kernel, then Manjaro is for you.

Do you really, really want to get deep in the weeds with Linux? If so, then the source-code-based distro Gentoo is for you. For starters, there is no installation program for Gentoo. As its developers say, "You're the installer." That means "you can apply all the customizations you desire" — once, that is, you've absorbed the Gentoo Handbook. Unless you're an expert Gentoo user, I urge you to keep a copy of the Handbook up and running on another computer. You're going to need all the help you can get to get Gentoo up and running. Once you do, you'll also need to learn the ins and outs of the Portage package system. Unlike almost all other Linux distributions, which use binary software packaging systems such as Red Hat's RPM and Debian's APT, Portage is source-code based. So, for example, if you want to install a program in Portage, you actually compile the application's source code on your machine. You can also "edit" the source by using USE flag customizations. Easy to do? Heck no! But if you want absolute control over what's on your desktop, Gentoo is for you.

But, say you want a lot of power but not quite so much work? Then, just like Arch and Manjaro, you can use Sabayon Linux with Gentoo. This distro's developers' goal is to deliver the best "out of the box" user experience "by providing the latest open-source technologies in an elegant format. In Sabayon everything should just work. We offer a bleeding edge operating system that is both stable and reliable." Essentially, Sabayon makes most of the Gentoo setup decisions for you. You still get a lot of control, but you don't have to turn every knob and flip every switch to get a working system. Looking ahead, Sabayon is rebranding as MocaccinoOS. The main difference between this and Gentoo is that it uses the new container-based packaging system, Luet. This is still in beta, and I can only recommend this version for experienced developers and users.

And, now for something different. Kali Linux is a Linux distribution designed for penetration testing or — yes — hacking. Thanks to Mr. Robot, Kali Linux is the best known of the hacking distributions. Kali Linux is the work of developers at the security firm Offensive Security. It's built on Debian. Historically, it goes back to the Knoppix-based digital forensics and penetration testing distro BackTrack. While installing and setting up Kali is as easy as setting up any Debian distribution, its default software packages are where things take a different course. For example, you won't find LibreOffice as a default office suite or Thunderbird as an e-mail client; neither is provided by default. Instead, it comes with such security programs as OWASP ZAP, for beating on websites for security problems; SQLMAP, which automates detecting and exploiting SQL injection vulnerabilities; and THC Hydra, a popular password cracker. Now Kali Linux can't turn you into a hacker or a security maven. To do that, you really must know what's what with computers, coding, and security. It just provides you with the tools an expert needs to get started. If you just want to pretend to be a hacker, start at Hacker Typer. Enjoy!

The flip side of breaking into systems, or checking to see if they can be broken into, is repairing already busted systems. The best of these repair Linux distros is SystemRescue.
This operating system, also known as SystemRescueCD, which gives you an idea of how old it is, is designed to repair busted computers. This is the distribution I, and other Linux experts, turn to when helping our Windows cousins who run into failed Windows installations and corrupted hard drives. It is not meant as a permanent operating system. Instead, you boot it from a USB drive, DVD drive, or, yes, even now, a CD drive. Once up, you can use it to explore a semi-dead computer and attempt to bring it back to life. It's not simple to use. Like Kali, it gives you the tools you need to get the job done. In this case, it comes with programs such as GNU Parted, for manipulating disk partitions and filesystems; ddrescue, a data recovery tool that works by copying data at the block level from corrupted storage devices; and rsync, a program for cloning data from a failing drive across your local network to another, stable computer. None of these tools are easy. I cannot recommend enough that you read the SystemRescue manual before trying to rescue a failing system. That said, once you know what you're doing, you can expect to hear from friends and family anytime their Windows PCs go seriously wonky.

What are the best books on Linux?

Nothing beats working with Linux, but there are books that can help you master it. The best way to learn Linux is to use it. And to use the "man" command, as in RTFM. That said, there are also some helpful books out there to take you from someone who knows a thing or two about Linux to a real pro. I've one word of warning: Be sure to get the most recent edition of any of these books. A book that brings you up to speed on how init gets a Linux instance running won't do you any good since it's largely been replaced by systemd. Here are some of my favorites:
- How Linux Works: What Every Superuser Should Know, 3rd Edition. This book by Brian Ward covers the historic basics and their modern equivalents. So, for example, besides just covering Linux disk partitions, it also covers Logical Volume Manager (LVM).
- The Linux Command Line: A Complete Introduction, 1st Edition, by William Shotts delivers just what it promises. After you absorb this, you'll know not only how to make your way around the Bash shell, the most popular Linux shell, but the fundamentals of how to use such powerful shell programs as sed, grep, and awk. There was a time I made a living from having mastered that last trio.
- Linux Command Line and Shell Scripting Bible, 4th Edition, by Richard Blum and Christine Bresnahan. Mastered everything in Shotts' book? OK then, you're ready to move on to this massive tome. This new edition, published in early 2021, walks you through the basics and moves from there to more advanced topics. It does this with easy-to-follow tutorials and examples.
- Linux Cookbook: Essential Skills for Linux Users and System & Network Administrators, 2nd Edition, by Carla Schroder. Carla knows her Linux. She's been at it for almost as long as I have. This new update to her earlier classic delivers the goods. Essentially its recipes are mini-how-tos for some of the most common situations you're likely to run into if you're a Linux power user or system administrator. It's written in an amusing style, and I highly recommend it.
To really know what's going on with the Linux kernel, you must keep an eye on the Linux Kernel Mailing List (LKML). Note I don't say read it. I'm not sure anyone can actually read everything posted to the list. Its message volume is insane. But, as you gain experience with it, you'll be able to separate the wheat from the chaff. For instance, it's a safe bet that anything Linus Torvalds posts is worth at least a glance. I recommend getting a handle on the LKML by reading its FAQ. It will make understanding what's going on much easier.

If that's too much for you — like, I don't know if you have a life or something — you can subscribe to LWN.net. There are many Linux news sites, but there's only one LWN. Run by Linux kernel maintainer Jon Corbet, LWN goes deep into the ins and outs of the Linux kernel, open-source software, and coding. For example, I can tell you about the latest Fedora release; LWN will tell you about the Fedora community debate over whether non-free Git forges should be used in developing the distribution.

Let's say, though, that you just want to keep up on general Linux news and not the hardcore tech and programming information. If that's you, the aggregation site Linux Today does a good job of gathering up Linux news stories, features, and the latest tutorials. Here, I'll add, you'll also find links to many of my stories.

Do you want to know exactly how your new processor might work with Linux? Then Phoronix is for you. This site covers kernel news, but it's best known for its detailed reporting and benchmarking on the latest Linux distros and hardware. So, if you want to know the current state of Linux support for Intel's Software Guard Extensions (SGX), or how Linux and Mesa drivers compare with each other in raw performance on an Intel Core i5 12600K/UHD Graphics 770, this is the site for you.

Finally, for those of you who like to know about every Linux distribution out there, your site of choice is DistroWatch. It tracks every — and I mean every — Linux distribution out there. By my count, there are about 600 distros out there these days, and most of them are still being actively developed. This is the place to go to keep track of them all.

I've been running Linux for 29 years. Linux is 30 years old. I know this operating system every which way you can. Before that, I'd cut my teeth on Version 7 Unix. In other words, I have a clue about Linux. The opinions I give here are based on all that experience and the experience of the many Linux kernel developers and distribution programmers I've known over the years. That said, if there are any mistakes, they're all mine.
Twitter has a credibility problem. Fake information and photos proliferate on the platform, especially after a natural disaster. For example, Hurricane Sandy photos depicted flooding of the whole East Coast and the Statue of Liberty being felled by a tsunami. For every problem, there's a startup. Tweetcred appends a 'credibility ranking', rating each tweet from one to seven. The algorithm looks at 45 inputs, including tweet length, whether a URL was included, and the number of followers of the tweet's source. Tweetcred also learns over time, and users can tweet their own ratings to improve its accuracy. The Tweetcred score can have a positive impact for those who use Twitter in their professional work. Journalists would find it helpful to concentrate on the most credible sources in breaking news events. Traders might use it as another data point in assessing, say, the impact of Hurricane Sandy on insurance claims. There remains the underlying issue of Twitter's use as a social outlet. Fake pictures went viral because they were the most incredible, not the most credible. Most people didn't post these fake photos; they simply retweeted them. It remains to be seen how Twitter addresses the problem of the platform being used for both professional and social purposes, and how to ensure a positive trait for one use case doesn't translate into a negative impact for the other. Read the full article here.
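As a toy illustration of the kind of feature-based scoring described above, here is a short Python sketch. The features and weights are invented for this example; Tweetcred's real model uses 45 inputs with machine-learned weights, none of which are reproduced here.

```python
# Invented feature weights for illustration only.
WEIGHTS = {
    "has_url": 0.9,            # linked sources suggest credibility
    "tweet_length": 0.004,     # per character
    "follower_count": 0.0001,  # per follower, capped below
    "exclamations": -0.5,      # sensationalism hurts the score
}

def credibility_score(tweet: str, follower_count: int) -> int:
    """Map a tweet to a 1-7 credibility rating, like the ranking described above."""
    raw = 0.0
    raw += WEIGHTS["has_url"] * ("http" in tweet)
    raw += WEIGHTS["tweet_length"] * len(tweet)
    raw += WEIGHTS["follower_count"] * min(follower_count, 20_000)
    raw += WEIGHTS["exclamations"] * tweet.count("!")
    # Squash the raw score into the 1-7 range.
    return max(1, min(7, round(1 + 6 * raw / 5)))

print(credibility_score("Flooding reported downtown http://example.com", 12_000))  # 4
print(credibility_score("OMG the Statue of Liberty is GONE!!!", 40))               # 1
```

Even this crude version captures the article's point: a sourced, link-bearing report from an established account scores higher than a sensational post from a brand-new one.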
We live in an age where we have unprecedented access to almost any information we need. With the emergence of new technology like artificial intelligence (AI), facial recognition, big data and more, the human experience is being changed forever. Almost anything you need is just a tap away; but this access comes at a price—data for data. A simple online search may seem harmless, but before you know it, you're being bombarded with ads offering you exactly what you were looking for. How exactly does this work? In basic terms, most times you use a technology application, you—often unknowingly—reveal your personal data including your age, location, and sex, amongst other things. This data is collected and analyzed and then used to personalize your online experience. For example, an article on Watchblog talks about how your online shopping habits are monitored by tracking companies, which later leads to better personalized online experiences. This information is then sometimes sold by those tracking companies to larger entities for future use.

AI, although a huge boon, comes with a real risk: the potential breach of human rights, specifically, your privacy. Today, AI and facial recognition are being used to generate or mine sensitive personal information as well as identify and profile people. With such huge implications, the very topic has caused a societal divide—with some arguing that AI and facial recognition should be banned entirely, and others touting the world of possibilities it opens up to humans, albeit with some risk.

AI and facial recognition allow for new possibilities when it comes to diagnosis and treatment. For instance, AI helps keep track of and identify patterns in past medical records and links this information to possible diagnoses and treatment options. As noted in an article on Towards Data Science, facial recognition is currently being used in healthcare to dispense a patient's prescriptions based on their biometric face scan. This allows for a more efficient medication pick-up system. That's not all though—the future of facial recognition in healthcare is very promising. To quote the article: "Some facial recognition software providers claim that their products can help monitor blood pressure or pain levels by identifying key facial markers, and this could prove a useful tool in the future for both physicians and end-users."

Driving technology was one of the first arenas AI ventured into. In the past few decades, it has allowed the possibility of cruise control, autopilot, lane tracking, increased driver safety, and GPS systems. Many companies are also using it to increase vehicle management efficiency for business owners and encourage safer driving and worker productivity. However, with the introduction of autonomous vehicles comes the fear of how safe self-driving cars actually are, and whether the general public at large can trust a robot to make safe, traffic-friendly choices. While the pros and cons of self-driving vehicles are still up in the air, there is much to be said about how autonomous vehicles will improve commercial fleet safety, especially once all the bugs are worked out. For instance, self-driving car fleets are likely to have AI-powered safety features that reduce car accidents. These autonomous fleets will also utilize advanced GPS tracking software that transfers data to organizations, which can then make business decisions regarding fleet movement so as to operate at maximum productivity and safety.
Around the world, government security agencies and corporations are using AI and facial recognition to help reduce crime. Using this technology, officials are now able to identify anything from the exact source location of a gunshot to patterns of fraud, likely criminal behavior, and areas and occasions prone to criminal activity. The sheer number of possibilities this opens up for law enforcement to correctly assess and deal with criminals means leaps and bounds for national and personal security. These are just a handful of the many benefits of AI and facial recognition technology, in terms of bettering our daily lives and offering more convenience in modern society.

As with all other things, AI and facial recognition come with their fair share of disadvantages. For one, AI can only glean the insights it needs to function optimally from unfettered access to data. This often results in limited cybersecurity and increased avenues for cybercriminals to gain access to sensitive information. In fact, even seemingly safe home devices like baby monitors, smart fridges, and home assistants all pose a threat to one's security by providing hackers with a way to infiltrate one's personal space.

The AMA Journal of Ethics further lists some of the implications of AI and facial recognition technology in terms of violating a user's privacy. Informed consent is one such issue, with an increasing need for healthcare providers that utilize facial recognition to inform their patients about the potential uses of patient data. To quote the journal, "In particular, patients might not be aware that their images could be used to generate additionally clinically relevant information. While FRT systems in health care can de-identify data, some experts are skeptical that such data can be truly anonymized; from clinical and ethical perspectives, informing patients about this kind of risk is critical."

Secondly, the journal article delves into the ethical issue of bias. When it comes to diagnosis based on facial recognition, there is a large margin for biased, and even racially profiled, results. When the data pool for AI and facial recognition is not racially diverse, the results too can be skewed in nature. This issue is not only pertinent to healthcare but is also important when it comes to the criminal justice system's use of AI and facial recognition. For instance, consider the usage of a facial recognition system to identify gay men. The technology, as quoted in the AMA Journal of Ethics, "simply identified the kind of grooming and dress habits stereotypically associated with gay men", illustrating how bias can shape facial recognition findings.

Another issue to consider is the role of facial recognition and AI when it comes to democratic freedom. As stated in the Towards Data Science article, "The principles behind democratic freedom mean having the right to choose, the freedom to gather and share views. Despite the many uses of facial recognition tech, this is one area it takes away." Facial recognition and AI make it very possible for governments to actively spy on their citizens, without the citizens' knowledge, under the guise of national safety protocols. This may sound like a dystopian movie plot, but the reality is that advanced technologies like these can very easily be used for less-than-ideal means. This is something we need to be aware of before allowing technology to further permeate aspects of our daily lives.
As noted above, AI and facial recognition technology have many advantages. However, when used maliciously, these technologies can do more harm than good. It is thus vital that they be implemented with the utmost caution, keeping ethical frameworks around user privacy in mind. Providing notice, asking for consent, and ensuring adequate privacy and data protection are essential to the successful use of AI and facial recognition in the future.
Virtualization Jumpstart – Lecture 4:
- Configuring and managing data centres with Microsoft
- Configuring and managing data centres with VMware
- Why Microsoft virtualisation
- Why VMware virtualisation
- Exam review

For more Free Events

Virtualization in Windows Server is one of the foundational technologies required to create your software-defined infrastructure. Along with networking and storage, virtualization features deliver the flexibility you need to power workloads for your customers.

Many versions of Windows 10 include the Hyper-V virtualization technology. Hyper-V enables running virtualized computer systems on top of a physical host. These virtualized systems can be used and managed just as if they were physical computer systems; however, they exist in a virtualized and isolated environment. Special software called a hypervisor manages access between the virtual systems and the physical hardware resources. Virtualization enables quick deployment of computer systems, a way to quickly restore systems to a previously known good state, and the ability to migrate systems between physical hosts. The following documents detail the Hyper-V feature in Windows 10, provide a guided quick start, and contain links to further resources and community forums.
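Before following those guides, it can help to confirm whether Hyper-V is available on a machine. Here is a minimal sketch (assuming a Windows 10 host and an elevated administrator session) that queries the feature state from Python via PowerShell:

```python
import subprocess

# Ask Windows for the current state of the Hyper-V optional feature.
# Get-WindowsOptionalFeature is part of the DISM PowerShell module that
# ships with Windows; it must run from an elevated session.
result = subprocess.run(
    [
        "powershell",
        "-NoProfile",
        "-Command",
        "Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V",
    ],
    capture_output=True,
    text=True,
)

print(result.stdout)  # look for "State : Enabled" in the output

# Enabling the feature (illustrative; a reboot is required afterwards):
#   Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```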
Cloud computing, currently one of the most in-demand technologies, gives every organization a new shape by providing on-demand virtualized services and resources. From small to medium to large, every firm uses cloud computing services to store information and access it from anywhere using only the internet. We shall learn more about cloud computing's internal architecture in this post. Transparency, scalability, security, and intelligent monitoring are just a few of the critical constraints that any cloud infrastructure must address. Ongoing research into other significant constraints helps cloud computing systems develop new features and tactics that can provide more advanced cloud solutions.

What is Cloud Computing, and how does it work?

Internet-based services like storage, software, analytics, and databases are collectively called cloud computing. It involves all the services that can be delivered without being physically near the hardware. Netflix, for example, employs cloud computing to provide video streaming services. G Suite is another example of a cloud computing service. Cloud computing, simply put, is the distribution of on-demand resources (such as servers, databases, software, and so on) over the internet. It also enables the creation, design, and management of cloud-based applications. You can learn more by taking up a few Cloud Courses online.

A mix of event-driven and service-oriented architecture makes up cloud computing. The architecture of cloud computing is separated into two parts: the back end and the front end.

The back end is used by the service provider. It oversees all of the resources needed to deliver cloud computing services. It includes a massive quantity of data storage and security measures, virtual machines, deployment models, servers, and traffic management mechanisms, among other things.

- Application – An application is a piece of software or a platform that a client can use in the back end. That is, it delivers the service in the back end in accordance with the client's needs.
- Service – Back-end service refers to the three major categories of cloud-based services: SaaS, PaaS, and IaaS. It also controls which services the user has access to.
- Runtime Cloud – In the back end, runtime cloud refers to providing an execution and runtime platform/environment to the virtual machine.
- Storage – Storage in the back end refers to the provision of a flexible and scalable storage solution and data administration.
- Infrastructure – In the back end, cloud infrastructure refers to the hardware and software components of the cloud, such as servers, storage, network devices, virtualization software, and so on.
- Management – Back-end management refers to the administration of back-end components such as applications, services, runtime clouds, storage, and infrastructure, along with other security methods.
- Security – Back-end security refers to installing various security techniques in the back end to provide secure cloud resources, systems, data, and infrastructure to end users.
- Internet – An internet connection serves as a medium or bridge between the front end and the back end, allowing them to interact and communicate.

The client interacts with the front end. It includes client-side interfaces and applications for interacting with cloud computing services. Web browsers (such as Chrome, Firefox, and Internet Explorer), thin and fat clients, tablets, and mobile devices make up the front end. For instance, to access the cloud platform, you may utilize a web browser.
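A browser is just one kind of front end; a few lines of code can play the same role. As a minimal sketch (the bucket name is hypothetical, and it assumes the boto3 library is installed with AWS credentials already configured), here is a client-side script acting as the front end, storing and retrieving data from a provider-managed storage back end over the internet:

```python
import boto3

# The client (front end) talks to the provider's storage service (back end)
# over the internet; servers, disks, and replication are the provider's job.
s3 = boto3.client("s3")

BUCKET = "example-notes-bucket"  # hypothetical bucket name

# Store data in the back end
s3.put_object(Bucket=BUCKET, Key="notes/hello.txt", Body=b"Hello, cloud!")

# Retrieve it from anywhere with an internet connection
obj = s3.get_object(Bucket=BUCKET, Key="notes/hello.txt")
print(obj["Body"].read().decode())  # -> Hello, cloud!
```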
- Client Infrastructure – The front-end components are referred to as client infrastructure. It includes all of the apps and user interfaces needed to use the cloud platform.

Although no two clouds are the same, there are a few common cloud architectural models. Public, private, hybrid, and multi-cloud architectures are among them. Here's how they stack up:

- Public cloud architecture is one in which computing resources are owned and maintained by a cloud services provider. The internet is used to exchange and redistribute these resources among multiple tenants. Reduced operational expenses, easy scalability, and little to no maintenance are among the advantages of the public cloud.
- Private cloud architecture describes a privately owned and controlled cloud, typically in a company's on-premises data center. Although private cloud architectures are often more expensive than public cloud systems, they are more customizable and provide stricter data security and compliance alternatives. Private clouds can also span many server locations or leased space at widely dispersed colocation facilities.
- Hybrid cloud architecture combines the public cloud's operational efficiencies with the private cloud's data security features. Hybrid clouds consolidate IT resources by combining public and private cloud architectures, allowing enterprises to shift workloads between environments based on their IT and data security needs.
- A multi-cloud architecture makes use of several different public cloud services. It provides more flexibility in selecting and deploying the cloud services most likely to meet varying organizational requirements.

Cloud Computing Architecture Benefits:

By using cloud computing architecture, organizations can minimize or remove their dependency on on-premises server, storage, and networking infrastructure. Organizations that implement cloud architecture frequently move IT resources to the public cloud, obviating the need for on-premises servers and storage, as well as IT data center real estate, cooling, and power, and replace them with a monthly IT expense. This transition from capital investment to operating expense is one of the main reasons for cloud computing's current popularity. Other benefits are given below:

- It simplifies the overall cloud computing infrastructure.
- It reduces the amount of data processing required.
- It assists in the provision of strong security.
- It makes the architecture more modular.
- It improves disaster recovery.
- It provides easy access to users.
- It reduces the cost of IT operations.

Components of the Cloud Computing Architecture

In the following section, the key components of the cloud computing architecture are discussed:

Hypervisor: A virtual machine monitor that delivers virtual operating platforms to all users. It also handles guest operating systems in the cloud. On the back end, it operates separate virtual machines with software and hardware. Its major goal is to divide and distribute resources.

Software for Management: Its job is to oversee and monitor cloud operations while implementing various techniques to improve cloud performance. It also looks after disaster contingency plans and compliance auditing.

Software for Deployment: It includes all of the essential installs and configurations needed to start a cloud service.
A deployment software is used for every cloud service deployment.

Network: The link between the front end and the back end, through which every user accesses cloud resources. It assists users in connecting to the network and customizing the route and protocol.

Cloud Server: A cloud-based virtual server that is highly adaptable, secure, and cost-effective.

Cloud Storage: A cloud computing approach to storing data over the internet, with the storage managed and administered as a service by the provider. Every piece of information is saved in the cloud and accessible by a user from anywhere on the internet. It can be scaled at runtime and is accessed automatically; cloud storage data can be changed and retrieved via the internet.

Thanks to cloud computing architecture, organizations may securely construct applications and employ cloud services according to customer requirements. In this article, we briefly covered cloud computing, the benefits of cloud computing architecture, and the components that make up that architecture. As a result, we now have a thorough understanding of cloud computing architecture. Several Cloud Certification Courses are open to any cloud architect who wants to learn more about Amazon Web Services.
Protecting the Sanctity of the Ballot Box Against Cyberthreats Depends On Legislation, Enforcement, and Sharing Up-To-Date Threat Intelligence Data

Confidence in the honesty of election systems has hit record lows, yet the federal executive branch still has not articulated an overarching strategy and plan of action to secure them. Disparate election systems operate with little standardization and no unified oversight, making them particularly vulnerable in the face of growing cybersecurity threats. Government entities will need to guarantee every citizen a secure vote so that all constituents have confidence that their votes will count.

Here are some of the issues addressed:
- The varying levels of cyber-readiness at the state and municipal levels
- Different legislation that has been passed or halted around election security
- How threat intelligence technology can be adopted as the first line of defense

Get the report!
People talk about "the cloud" all the time these days, but what do they really mean? There's no agreed-on definition, which can render some conversations nearly inscrutable. We can't pretend to have the final answer (if there will ever be such a thing), but here's how we think of "the cloud." (And now we'll stop quoting it.)

At a basic level, many people seem to equate the cloud with anything that's online or with the Internet as a whole. That's not incorrect, since everything in the cloud does take place online and is on the Internet, but it's also not helpful.

Cloud Services Replace Local Hardware and Software

It's more useful to think of the cloud as a way of referring to services made available over the Internet as a replacement for hardware or software on your Mac. These services largely fall into three broad categories: storage and backup, data syncing, and apps.

- Storage and backup: To add storage directly to your Mac, you'd connect an external hard drive or SSD. Cloud-based services like Dropbox, Google Drive, iCloud Drive, and OneDrive all provide the same basic function: more space to store data. Of course, they also go further, providing syncing between your devices and sharing with other people. Plus, just as you probably use Time Machine to back up to an external drive, you can use Backblaze to back up to the cloud.
- Data syncing: Before the cloud was a thing, syncing your contacts, calendar, and email between two Macs generally required either special software (like ChronoSync) or going through the export/import dance. Cloud-based services for such bits of data, including Apple's iCloud syncing for Calendar and Contacts, and Google Calendar, make it so the same information is available on all your devices all the time. They often provide a Web-based interface as well so you can access your data from someone else's computer.
- Apps: An app like TextEdit runs on your Mac, but cloud-based apps like Google Docs provide app-like functionality while running in a Web browser. These days, many things that can be done directly on a computer can be done in a Web browser: word processing, spreadsheets, image editing, video streaming, video chat, and more.

Cloud Services Rely on "Cloud Computing"

Apps on your Mac use its processor and memory. You might also have used a network server; you use the apps on the server over the network, but they're running on that particular server. In contrast, cloud services run on massive clusters of computer resources spread across many computers and even multiple data centers. When you're typing into Google Docs, the processing resources that make that possible don't come from a single computer dedicated to you; they're provided to you and millions of others simultaneously by Google's worldwide computer clusters.

Pros of the Cloud

There's a lot to like about the cloud and what it makes possible:

- It's accessible from nearly anywhere: As long as you have a high-speed Internet connection, you can access cloud-based services from anywhere in the world. And while not everywhere in the world has high-speed Internet access, it's becoming more widely available all the time. Heck, you can now use the Internet on many commercial airplanes.
- It's somebody else's problem: That's not entirely true, of course, but using a cloud-based service means the staff of the data center deals with failing computers or hard drives, network problems, and other maintenance. You just need a functional computer and Internet connection.
- It's easy to switch devices and even platforms: Moving to a new iPhone or iPad is nearly trivial these days, thanks to being able to restore from an automatically created iCloud backup. And if you use Gmail, for instance, it would work just the same if you wanted to switch from an Android phone to an iPhone.
- It's more flexible: If you decide to try a cloud service, it's usually just a matter of setting up an account or signing in with an existing one. There's no need to download and install software, or to clean up after the installer. Plus, if you need more storage space or additional features, it's usually just a matter of upgrading an account and paying more; you don't have to buy another hard drive or a whole new app.
- Costs are lower and more predictable: Many cloud services are entirely free, like Gmail and Google Docs, whereas others rely on monthly or annual subscriptions. Generally speaking, such subscriptions cost less than buying equivalent desktop software and all its upgrades. Whether or not a cloud app is cheaper, it's a predictable expense you can build into a budget.

Cons of the Cloud

Of course, not everything about the cloud falls into the silver-lining category. Some problems include:

- You can't control when apps are upgraded: With desktop software, you can pick and choose when to upgrade, at least to some extent. Cloud apps, on the other hand, are upgraded whenever the developer wants, sometimes at inconvenient times or in major ways that might be hard for you to adjust to. On the other side of the equation, you don't have to spend time downloading and installing upgrades, or even thinking about whether to install them.
- You have limited control over your data: Although well-run cloud services are significantly less vulnerable to failure, damage, or theft than your Mac is, there's no avoiding the fact that you can't do much to prevent such problems. Backing up cloud-based data can be challenging, as can exporting it for use elsewhere.
- Subscriptions can add up: Any one cloud service may be reasonably priced, but if you end up with 10–15 subscriptions, the total annual cost may seem exorbitant. To be fair, major software packages used to cost hundreds or even thousands of dollars, and we all use many more apps and services than we did in the past.
- Security is a concern: While cloud providers may do a better job than you could of guaranteeing uptime and even backing up data, the fact remains that everything in the cloud is protected by passwords. If you reuse passwords or rely on weak ones, you could be in for a world of hurt. That's why we always bang the drum for relying on a password manager for strong, unique passwords and turning on two-factor authentication whenever possible.
- Privacy can be a problem: Many free and ad-supported cloud services (most notably Facebook and Google) make their money by collecting data about you and using it to sell advertisers access to you. One reason to pay for a cloud service is that then you're the customer, and as the saying goes, if you're not the customer, you're the product.

We're not here to sell you on the cloud in general or scare you away from using it. In today's world, there's almost no way to avoid it, nor should you try to do so. Hopefully, now that you have a better idea of what the cloud really is, you can make more informed decisions about which cloud services can improve your technological life and which ones won't.
This is the CTOvision guide to the megatrend of Artificial Intelligence. This report gives a high-level overview of the most important factors of the trend, updated insights into the activities of the major AI companies, and succinct descriptions of AI tech. It also points to key CTOvision reporting to help readers dive deeper into the topic.

We start with a definition of AI: Artificial Intelligence (AI) is the application of thinking machines to real world problems.

This definition is unique. We like it because it focuses on practitioners. Once you take a practitioner's view, you see AI is really about far more than algorithms. There is a wide range of technical and non-technical factors that must come together to deliver results. Examples of tech components and non-tech components of real world AI solutions follow:

Tech Components:
- Analytic Algorithms (including Machine Learning, Deep Learning)
- Natural Language Processing
- Computer Vision
- Data Management
- Hardware architectures
- Technical security measures

Non-Tech Components:
- New business strategies
- Cybersecurity policies
- Business risk policies
- Legal and regulatory regimes
- Training and testing
- Operation and maintenance
- Hiring, promotion, career management

Business Impact of AI:

Here are the key points we recommend any enterprise tech professional consider regarding AI:
- With business models returning profit now, all indications are AI will continue to improve.
- The evolution of AI has been accelerating due to its coupling with incredibly low-cost cloud computing.
- Creators use a "generate and test" approach to creating functionality, with no accepted protocol for security or testing in AI. This is a huge negative.
- There are four major problems with AI today: 1) some of the most capable AI is inscrutable (you can't see how it works), 2) AI can be easy to deceive, trick, or hack, 3) AI can be unfair, unethical, and unwanted, and 4) AI can be leveraged by competitors and even criminals to your detriment.
- There are ways to balance the risks and opportunities around AI.
- AI, especially Machine Learning, is playing a huge role in modernizing the cybersecurity industry.
- AI is also being used by cyber criminals, with many in the security community predicting AI-enabled malware coming soon.
- AI can be easier to deceive than conventional computer software (see Generative Adversary Networks: A very exciting development in Artificial Intelligence).
- There are many lessons that can be learned from others on ways to improve your corporate governance over AI, including ethics around AI.
- More on optimizing AI for business can be found at OODAloop.com

Open AI questions decision-makers should track:
- Will job displacement caused by AI be a crisis? Will government put regulations on companies because of this?
- Will companies use AI in ways their customers regard as ethical?
- Will there ever be a widely accepted security framework for AI?
- Can behavioral analytics enhance security?
- How can machine learning improve cybersecurity?

The field is growing dramatically with the proliferation of high-powered computers in homes and businesses, and especially with the growing power of smartphones and other mobile devices. AI requires lots of data to be effective, and with the proliferation of mobile devices there is more data now than ever.

Due Diligence Assessments and Artificial Intelligence

The trend of Artificial Intelligence is an increasingly important element of corporate due diligence since it is so disruptive to business models.
- On the sell side: Firms should ensure their use of AI is done securely and ethically (see our special report at OODAloop.com on "When AI Goes Wrong" for insight into issues and mitigation strategies). This applies to any firm that uses any AI-enabled capability. However, firms that produce AI (vendors) should pay particular attention to this; it will make a big difference in how well a firm is valued.
- On the buy side: Buyers should pay particular attention to the use of AI in the target to ensure a well-thought-out architecture that mitigates risks. External and independent verification and validation of AI ethics and security policies and practices are key, as is the degree to which the target is complying with appropriate compliance regimes.

Strategically, the acquisition of technology firms is an art requiring assessment of how unique the capability is and how much in demand it will be in the market. We provide a special focus on due diligence for artificial intelligence companies via our parent company, OODA LLC.

The Technologies of Artificial Intelligence

There are many key technologies used in fielding AI. These are the components of AI technologies we recommend tracking:

Machine Learning: Machine Learning is a subset of AI focused on giving computers the ability to learn without being explicitly programmed. Machine Learning involves the automated training and fitting of models to data. ML is the most widely used AI-related technology and is frequently the front end of more complex solutions. This is a broad technique with many methods. The methods commonly taught and applied in ML solutions all have different strengths and weaknesses, and part of the art of ML is knowing which applies to the need at hand.

Neural Networks: Considered a more complex form of Machine Learning, this approach uses data flow mappings similar to artificial "neurons" to weigh inputs and relate them to outputs. This approach views problems in terms of inputs, outputs, and variables that associate inputs with outputs.

Deep Learning: Highly evolved neural networks with many layers of variables and features. Important to most modern image and voice recognition, and for extracting meaning from text. Deep learning models use a technique called "back propagation" to optimize the models that predict or classify outputs, which adds to the complexity of the end model. The end model may have so many thousands of variables that no human can really understand how the model functions or how a conclusion was arrived at.

Natural Language Processing: This class of technology analyzes and understands human speech and text. It is used in modern applications of speech recognition, including chatbots and intelligent agents. NLP also requires training data; in this case the output is knowledge about how language relates, often referred to as a "knowledge graph" for a particular domain.

Rule-based expert systems: This is an older approach to AI solutions. It involves establishing sets of logical rules derived from the way people actually work, and it is used in many processes where rules can be clearly defined. This was the dominant form of AI in the past and is still around today, but it is really just complex programming. Imagine a large number of "if-then" statements in a program, but in this case the rules were built by domain experts.

Robots and Robotics: This is the automation of physical tasks, primarily used in factory and warehouse work but with growing use in health care, small businesses, and homes. Training data for robots is critically important; in this case the training data may include locations for movement or a wide variety of expected changes in the environment.

Robotic Process Automation: This is the automation of structured digital tasks in the enterprise or factory. It is a highly evolved form of scripting actions: a combination of software and workflows built to help automate business processes. RPA is at its best when it provides users with the benefits of other AI capabilities like Machine Learning.

Other Related AI Terms and Concepts:

Supervised Learning: The most common type of training for AI models. Data is labeled by humans so the algorithm can be taught based on what was established by humans. This is very similar to older statistical techniques like regression analysis. Once a model has been developed using supervised learning, it can be used with new data to provide predictions. This is called "scoring". Training models on labeled data generally takes large quantities of data that have known outcomes, and in many use cases the outcome being sought is actually a rare occurrence (this is called a "class imbalance").
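To make training and scoring concrete, here is a minimal sketch in Python using scikit-learn (the library choice is ours for illustration; the data is synthetic, with a deliberate class imbalance of the kind described above):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic labeled data standing in for human-labeled training examples.
# weights=[0.9, 0.1] simulates a class imbalance: the outcome of interest
# is rare, as is common in real-world use cases.
X, y = make_classification(
    n_samples=5000, n_features=10, weights=[0.9, 0.1], random_state=0
)

# Hold out a validation set with known outcomes to check the fitted model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", model.score(X_test, y_test))

# "Scoring": applying the trained model to new, unlabeled data
new_data = X_test[:5]
print("predictions:", model.predict(new_data))
```

The held-out split plays the role of labeled data with known outcomes; calling `predict` on fresh rows is the scoring step.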
Unsupervised Learning: This is the development of AI models in ways that detect patterns in data that are not labeled and whose results are not known.

Training Data: The data used for the development of the model. This is often validated using another subset of data for which the outcome to be predicted is known.

The methods and concepts above are almost always combined in any real world AI solution.

The AI Vendor Community

Most AI capabilities today have their roots in academia, but real implementations are being driven by the corporate world. There are many reasons for this. One is the profit motive. Another is the large collections of data available to the big firms. We provide more focused reporting on the firms driving AI forward in our Disruptive IT Directory in the categories of Tech Titans and Artificial Intelligence Companies. It is especially important to track the AI developments of Google, Microsoft, IBM, Amazon, and Apple. You can see our reference to Truly Useful AI You Can Use Right Today.

For Further Study

Artificial intelligence software is assisting people in almost every discipline. The many functions of AI are considered to be threatening jobs across multiple industries, but others consider it a great producer of jobs since it will help create entirely new industries and free more humans to innovate and create. This impact on jobs is best considered in conjunction with the megatrend of Robotics, since together those two trends are going to impact some of the largest sectors of jobs in the U.S. (consider, for example, the impact on retail and shipping). For alerts on future posts on this topic see CTOvision Newsletters.

Some of the AI companies we are tracking include:

For more on these topics see:
- Recent Crashes of Boeing 737 Max 8 Aircraft Could Have Been Death By Algorithm: If so they are not the first
- Amazon opens its internal machine learning courses to all for free
- Is Artificial Intelligence Dangerous? Probably not, but here are a few categories to noodle on
- Crafting an AI-Relevant, Data-First, Agile Methodology for AI & ML Projects
- Eric Schmidt Provides Insights Into The Future of Artificial Intelligence and Machine Learning at RSAC2017
- From Machine Learning to Machine Reasoning
- The next big intersection of AI and business will be at AI World
- Could this be the real reason to be afraid of AI?
- Super Bowl Ads Indicate Big Businesses Think You Are Afraid of AI
- MYCIN, Watson, and AI History

There are seven key megatrends driving the future of enterprise IT. You can remember them all with the mnemonic acronym CAMBRIC, which stands for Cloud Computing, Artificial Intelligence, Mobility, Big Data, Robotics, Internet of Things, and CyberSecurity.
Technological change drives economic growth. Technology extends the science of discovery and produces artifacts used in everyday life. It's the small technical discoveries that make larger scientific endeavors possible. It's also these seemingly unrelated breakthroughs that make their way into our daily lives.

Apparently insignificant discoveries become significant

In the 1960s, NASA conducted an extensive test program to investigate the effects of pavement texture on wet runways. The goal was to better understand braking performance and reduce the incidence of aircraft hydroplaning. The result of years of technical scientific studies was that, in 1967, grooving of pavement became an accepted technique for improving the safety of commercial runways. One of the first runways to feature safety grooving was the Kennedy Space Center's landing strip. However, the applications of this technique extended well beyond NASA facilities. According to the Space Foundation, safety grooving was later included on such potentially hazardous surfaces as interstate highway curves and overpasses; pedestrian walkways, ramps and steps; playgrounds; railroad station platforms; swimming pool decks; cattle holding pens; and slick working areas in industrial sites such as refineries, meat-packing plants and food processing facilities.

If you asked a cattle rancher in 1970 whether his work would be affected by NASA's research on braking patterns exploring ground vertical load, instantaneous tire ground friction coefficient or free-rolling wheel angular velocity, the answer would probably have been an emphatic "not a chance." Likewise, if you had told the workers on a road crew in 1970 that they'd be spending many years of their lives adding grooves to the surfaces of existing highways, bridges and exit ramps, their response would have been less than welcoming. It would have been impossible to convince these professionals of the coming changes. The impact of technology on daily life starts with scientific and technological discoveries that initially appear isolated or narrow in context. But we know better.

5 MIT projects to watch

The MIT Internet Trust Consortium, established in 2007, focuses on developing interoperable technologies around identity, trust and data. The consortium's mission is to develop open-source components for the Internet's emerging personal data ecosystem, in which people, organizations and computers can manage access to their data more efficiently and equitably. The goal is to build emerging personal data ecosystems for individuals and organizations. That ideological desire fits in nicely with the growth of blockchain technologies.

Currently, there are five cutting-edge MIT projects that could change the future of the internet: MIT ChainAnchor (permissioned blockchains), Project Enigma (autonomous control of personal data), OpenPDS 2.0 (a personal metadata management framework), DataHub (a platform for collaboratively analyzing data) and Distributed User Managed Access Systems (DUMAS) (a protocol for authorizing and accessing online personal data). The white papers for each project are interesting to read. When thinking in a healthcare mindset, it's easy to think of their applications to health and wellness. Here are links to the project white papers:

The proposed permissioned blockchain system stands in contrast to the permissionless, public blockchains in Bitcoin.
The system addresses identity and access control within shared permissioned blockchains, providing anonymous but verifiable identities for entities on the blockchain. When applied to healthcare, ChainAnchor could, for example, enable participants in a medical study to maintain their privacy by allowing them to use a verifiable anonymous identity when contributing (executing transactions on the blockchains).

Enigma is a peer-to-peer network that allows users to share their data with cryptographic privacy guarantees. The decentralized computational platform enables "privacy by design." The white paper says that, for example, "a group of people can provide access to their salary, and together compute the average wage of the group. Each participant learns their relative position in the group, but learns nothing about other members' salaries." Sharing information today is irreversible; once shared, a user is unable to take that data back. With Enigma, data access is reversible and controllable. Only the original data owners have access to raw data. In the context of healthcare, patients could share information regarding personal genomics linked to disease registries and clinical treatments aligned to healthcare outcomes, knowing that their original data was not shared.

OpenPDS introduces SafeAnswers, an innovative way to protect metadata (application, document, file or embedded) at an individual level. As the white paper explains, SafeAnswers "allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata." SafeAnswers gives individuals the ability to share their personal metadata safely through a question-and-answer system. Previous mechanisms for storing personal metadata (cloud storage systems and personal data repositories) don't offer data aggregation mechanisms; the challenge is that once access is enabled, the data is broadly accessible. SafeAnswers reduces the dimensionality of the metadata before it leaves the safe environment, thereby ensuring the privacy of the data.

Healthcare metadata examples could include patient account number, patient first and last name, and date of admission. Healthcare research could benefit from using aggregated metadata from patients without sharing the raw data. Research entities would send questions to an individual's personal data store (PDS), and the PDS would respond with an answer. Today, if metadata was provided to researchers or accessed by a phone application, the patient could disable (uninstall) the app but wouldn't know what information was shared. With the SafeAnswers system, a patient would potentially use their PDS URL to provide access to the health app. All of the patient's metadata, therefore, could be tracked and recorded, visible to the patient. Later the patient could see which metadata the application was using to create patient inferences. The patient could also permanently remove the metadata the application was consuming by either limiting or permanently restricting future access. No trusted third party. No entity to monitor access. Anonymously shared data.
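To illustrate the question-and-answer pattern, here is a toy sketch in Python (our invention for illustration, not the actual OpenPDS code; the class name and the permitted questions are hypothetical) of a personal data store that returns answers, never raw records:

```python
# Toy sketch of a SafeAnswers-style personal data store (PDS): raw metadata
# never leaves the store; only low-dimensional answers do.
class PersonalDataStore:
    def __init__(self, admissions):
        # Raw metadata stays private inside the store
        self._admissions = admissions  # admission dates as ISO strings

    def answer(self, question):
        # Only aggregate, pre-approved questions are answered
        if question == "admission_count_2018":
            return sum(1 for d in self._admissions if d.startswith("2018"))
        if question == "was_admitted_recently":
            return any(d >= "2018-06-01" for d in self._admissions)
        raise ValueError("question not permitted")

pds = PersonalDataStore(["2017-11-03", "2018-02-14", "2018-07-09"])
print(pds.answer("admission_count_2018"))   # -> 2
print(pds.answer("was_admitted_recently"))  # -> True
# A researcher receives the answers, never the underlying admission records.
```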
Discoveries that transform society

The DataHub project and the Distributed User Managed Access Systems (DUMAS) project offer additional pieces to solve the challenge of exchanging information while maintaining identity anonymity. Maybe they can apply to healthcare, if we're creative.

Highly technical advances have shaped the social economy for centuries. The creation of the sickle, a handheld agricultural tool with a curved blade typically used for harvesting grain crops, had a profound impact on the Neolithic Revolution (the Agricultural Revolution). Who would have imagined when it was invented (18000 to 8000 B.C.) that the sickle would form the basis for modern kitchen knives with serrated edges? Small, seemingly insignificant discoveries transform societies. How blockchain technologies will affect people on a daily basis remains to be discovered. When blockchain applications enhance our lives, they may become as commonplace as highway safety grooving.
The CIA has declassified documents about a reconnaissance drone project from the 1960s, Inceptive Mind reported Monday. The Aquiline project sought to develop reconnaissance drones based on birds' flight characteristics. The drone had a 10-foot wingspan and was designed to carry infrared cameras, radio equipment, radar and other payloads. The agency envisioned Aquiline as a long-range vehicle that could navigate stealthily into denied locations to gather technical intelligence and support other agency missions, according to the CIA's news release. The project did not advance to the operational phase, but the agency said the concept for Aquiline "proved invaluable as a forerunner" to the multicapability unmanned aerial vehicles in use today.
Multi cloud vs hybrid cloud – a detailed overview

To understand the differences between multi cloud and hybrid cloud, we should start by discussing what cloud computing is. Cloud computing is the delivery of on-demand services, such as compute and storage, that do not require direct management by the user. This means an organization can create cloud computing infrastructure internally within the organization (private), use external suppliers of computing services (public), or use both private and public clouds (hybrid). A cloud comprises defined pools of computing resources that can be accessed on demand, with broad network access, rapid elasticity, measured service, and resource pooling.

Cloud computing has three basic technology types and many derived types based on those three. The three basic types are Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). These three basic types are used to support all other types, or what can be called "anything as a service" (XaaS): anything that utilizes the basic cloud types can be delivered as a service.

What does hybrid cloud mean?

Let's start with what hybrid means at face value. A hybrid is a blend of at least two different things to form one new thing, such as what happens when cross-breeding plants or animals. A hybrid is created from different types of things.

A hybrid cloud can actually be multiple clouds integrated to support one service, such as a web application. This single service can contain any of the basic components of SaaS, PaaS, and IaaS configured on different cloud platforms, either public or private, to form a hybrid solution. Using two or more different clouds to support a basic cloud type (SaaS, PaaS, IaaS) can be considered a hybrid cloud solution from an architecture perspective. A hybrid cloud is the joining of two cloud systems together. A hybrid cloud can also combine cloud services that are not managed by the end user with on-premises infrastructure, which is hardware traditionally owned and maintained in a data center environment.

What does multi cloud mean?

The term multi means more than one, and implies more than one of the same type of thing, such as multiple oranges or multiple chairs. With multi cloud, organizations utilize services from multiple cloud service providers, such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform. So, to keep it simple, multi cloud is just the use of more than one cloud service provider. Many organizations have multiple clouds that are not integrated but deliver a service on their own. For example, an organization may choose to have its email service in a public cloud, as many organizations that use the Microsoft Office 365 suite do. It may also leverage public cloud environments for product development, and may choose Google Cloud Platform for component benefits such as Kubernetes containerization.

Key differences between multi cloud and hybrid cloud

Multi cloud and hybrid cloud are sometimes used interchangeably in discussions of cloud computing. An organization may say that it has a hybrid cloud strategy in which it uses both private clouds and public clouds for different reasons.
A business unit in the organization may say it has a hybrid cloud implementation because it uses both public and private clouds for the same service or application, segmenting sensitive data in the private cloud and non-sensitive data in the public cloud for the delivery and support of that one service. In either case, the organization is using more than one cloud. Multi clouds can evolve into hybrid clouds as the organization begins combining private cloud resources with public cloud resources. The biggest difference between multi cloud and hybrid cloud is the number of cloud service providers supporting the IT solution. When private and public clouds are used together in an integrated, collaborative fashion to create a hybrid solution, this should be considered a hybrid cloud.

Can a hybrid cloud also be multi cloud?

Hybrid multi clouds are a mix of public and private clouds, or of public and private clouds along with on-premises data centers. Multi clouds are not hybrid clouds unless the combination or integration of the clouds, such as using both a public and a private cloud, creates the dynamics that make them hybrid. A hybrid does not have to consist of only public clouds or only private clouds; the synergies between the clouds are what make the combination hybrid, forming a new cloud out of the constituent clouds. Sometimes it is thought that a hybrid cloud must combine a public cloud with a private cloud. As mentioned before, a hybrid is just a hybrid, so using multiple public vendors with different capabilities or characteristics can also be a hybrid cloud solution. When the organization uses both vendor clouds in one architecture to support a service, it can be said to have a hybrid solution. Although it is using multiple clouds (a multi cloud solution), combining the clouds in this fashion makes it a hybrid solution.

Why the hybrid cloud model is the best approach

Hybrid clouds allow flexibility in the capabilities needed for business services. Having multiple clouds with the same capability can be very restrictive to the organization. Organizations may need specific capabilities only available in a particular cloud, and maybe only from a specific cloud vendor. A hybrid cloud approach can help organizations better manage governance, risk, compliance (GRC), and cost by providing the flexibility to keep sensitive capabilities in private clouds and move less risky capabilities to public clouds. This can help with cost management of the infrastructure and other resources needed internally for private clouds.

Multi cloud vs hybrid cloud

The best cloud solution is derived from using good clouds to deliver the capabilities the organization needs. There are many good clouds for both public and private usage. The organization should first develop its requirements based on business needs; this is sometimes considered a cloud-first strategy. Then it should review the capabilities of each cloud. After this analysis is done, a hybrid cloud may be the best approach for organizational coordination and collaboration between applications and services. If an integrated solution is not needed, the organization may want to look initially at using multiple clouds for their individual benefits. Organizations with a multi cloud solution but no integration between the clouds may want to look closer at cloud data management solutions that can help with needed integration at the data level between different multi cloud implementations.
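As a toy illustration of data-level integration across clouds (a sketch only: the bucket names are hypothetical, and it assumes the boto3 and google-cloud-storage libraries are installed with credentials configured), a thin adapter can give application code one interface over two providers:

```python
import boto3
from google.cloud import storage as gcs

# One small write interface over two public clouds; bucket names are
# hypothetical placeholders.
class S3Store:
    def __init__(self, bucket):
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key, data: bytes):
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

class GCSStore:
    def __init__(self, bucket):
        self._bucket = gcs.Client().bucket(bucket)

    def put(self, key, data: bytes):
        self._bucket.blob(key).upload_from_string(data)

# Application code writes through the same interface regardless of provider,
# which is the kind of data-level integration layer described above.
stores = [S3Store("example-aws-bucket"), GCSStore("example-gcp-bucket")]
for store in stores:
    store.put("reports/q3.csv", b"region,revenue\nus-east,100\n")
```

Commercial data-management products package this idea at a much larger scale.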
The Avalanche Real-Time Connected Data Warehouse solution from Actian is the industry's first and only hybrid cloud data warehouse to offer integration capabilities built into the product, enabling you to harness diverse data sources for use in high-performance analytics, in multiple clouds (AWS, Azure, Google Cloud) and on-premises.
There are few things that strike greater fear into the leadership of small businesses and nonprofit organizations than the thought of someone hacking into their private network. As a result, when a relatively easy way to help prevent data breaches by looking at passwords in a different light is developed, it makes sense to take the time to learn about it. One such new method that deserves your attention is known as three random words.

Three Random Words

The name of this new password technique, three random words, reinforces its simplicity and effectiveness. Instead of using easily cracked passwords or difficult-to-remember complex passwords, the concept behind three random words is to use basic, easy-to-remember words that make no particular sense together and have no relationship to the user. The UK National Cyber Security Centre (NCSC) is one of the main backers of this strategy, and it has been heavily promoting it in the UK. Other cybersecurity experts also support this method as an effective way to improve password security.

Cybersecurity Threats To Common Password Techniques

The reason that three random words and other innovative password methods have been developed is the weaknesses associated with traditional password techniques. The most common user passwords, which involve a birthday, a nickname, a relative or pet's name, or a favorite hobby or activity, can be easily guessed by hackers who pore over publicly available information on social media and other online sources. Techniques such as substituting an exclamation point for 1, zero for the letter O, or $ for an S are also well known to cybercriminals. Complex passwords are also somewhat ineffective, as they are hard for users to remember, and hackers have developed software tools that can reveal them.

In addition, the advance of technology has assisted cybercriminals in their nonstop effort to breach networks and devices. With the increasing power of programs using algorithms designed to try countless random combinations of letters, numbers, and special characters, skilled cybercriminals are now able to crack a random alphanumeric password in less than three days. As a result, organizations must consider these significant threats when deciding upon an effective password protocol.

Strengths Of Three Random Words

The main strength of this technique is the length of the password, which makes it more difficult and time-consuming for hackers and their powerful algorithmic programs to crack. Cybersecurity experts note that while these programs can quickly guess a shorter password, they estimate it would take them hundreds of years to determine the letter combinations in three random words. These random letter combinations are much harder for hackers and their programs to predict. Even if these passwords aren't as effective as the projections suggest, a cybercriminal will quickly turn their attention to another target when they discover how difficult it is to crack a three random words password.

In addition, the ease of remembering a unique three random words password makes it user-friendly and readily adopted by technical and non-technical employees. One cybersecurity expert noted that for technology and security methods to be effective, they have to be easily understood and implemented by users. Three random words accomplishes these objectives perfectly.
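A minimal sketch of a generator in Python (the short wordlist here is a stand-in; a real implementation would load a large dictionary such as the EFF long wordlist of 7,776 words, for which three words give roughly 38.8 bits of entropy):

```python
import secrets

# Tiny stand-in wordlist; a real generator would load thousands of words
# from a dictionary file so the combinations are hard to predict.
WORDLIST = [
    "copper", "anchor", "meadow", "pixel", "walrus",
    "thunder", "biscuit", "orbit", "velvet", "lantern",
]

def three_random_words(count=3, sep="-"):
    # secrets.choice draws from a cryptographically secure random source,
    # unlike random.choice, which is predictable.
    return sep.join(secrets.choice(WORDLIST) for _ in range(count))

print(three_random_words())  # e.g. "orbit-copper-lantern"
```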
Improving The Three Random Word Method

No password technique is perfect, and three random words is no exception. Some users, for example, might choose words that aren't as random as they think or that are relatively short. Cybersecurity experts recommend that your organization use this technique as one part of a password strategy, which could also utilize the power of Two-Factor Authentication, or 2FA. The added component of account authentication provides an extra layer of protection even if a password is compromised.

In addition, experts recommend combining the three random words technique with what is known as a compromised password deny list. This list specifically defends against a password dictionary attack, where a hacker uses a multitude of passwords obtained from a previous data breach to gain unauthorized access to an account or network. Previously breached passwords on the deny list are blocked from further use, adding extra password protection.

Consult With A Trusted IT Support Partner

The most important recommendation we can give regarding password techniques and protocol, as well as cybersecurity in general, is to consult with a trusted IT Support partner like Network Depot. Before undertaking any significant changes to your network security, please discuss them thoroughly with an experienced cybersecurity expert. A reliable IT partner has the experience to assist you in all aspects of your network security and will help you make the choices that are right for your unique business. Your IT partner is a valuable resource that can expertly advise your organization on how to implement and execute an effective password strategy that will keep your network secure against cybercriminals.
Scientists using a powerful new technology that sequences RNA in 20,000 individual cell nuclei have uncovered new insights into biological events in heart disease. In animal studies, the researchers identified a broad variety of cell types in both healthy and diseased hearts, and investigated in rich detail the "transcriptional landscape," in which DNA transfers genetic information into RNA and proteins.

"This is the first time to our knowledge that massively parallel single-nucleus RNA sequencing has been applied to postnatal mouse hearts, and it provides a wealth of detail about biological events in both normal heart development and heart disease," said study leader Liming Pei, Ph.D., a molecular biologist in the Center for Mitochondrial and Epigenomic Medicine (CMEM) at Children's Hospital of Philadelphia (CHOP) and an assistant professor in the Department of Pathology and Laboratory Medicine in the Perelman School of Medicine at the University of Pennsylvania. "Ultimately, our goal is to use this knowledge to discover new targeted treatments for heart disease. In addition, this type of large-scale sequencing may be broadly applied in many other fields of medicine."

Pei and co-study leader Hao Wu, Ph.D., also of the CMEM and an assistant professor of Genetics at Penn Medicine, published their findings online Sept. 25, 2018 in Genes & Development.

While massively parallel single-cell RNA sequencing (scRNA-seq) has been available to researchers for the past three years, it is technically challenging to study single cells in postnatal hearts due to the large size of cardiac muscle cells. To enable single-cell analysis of large cells such as muscle cells, or cells with complex morphology such as neurons, robust massively parallel single-nucleus sequencing (snRNA-seq) methods have been developed recently in Wu's laboratory, as well as by others in the field. To date, massively parallel snRNA-seq had been applied only to the central nervous system; Pei and colleagues are the first to adapt the technology for use in postnatal heart tissue.

The research team used the snRNA-seq method termed sNucDrop-seq to analyze nearly 20,000 nuclei in heart tissue from normal and diseased mice. "We are excited to further develop sNucDrop-seq and apply it to mammalian postnatal hearts, which are of critical medical relevance but difficult to study with standard scRNA-seq," said Wu.

The current study focused on cardiomyopathy, a group of diseases characterized by progressive weakening of the heart muscle and representing a leading worldwide cause of heart failure. Pei and colleagues used mice developed to model a type of pediatric mitochondrial cardiomyopathy.

"The heart is a complex organ, with a multitude of cell types, and much still remains poorly understood about mammalian heart development and heart disease, especially during the postnatal period," said Pei. "Our study provides key insights in three areas: normal heart development, heart disease, and gene regulatory mechanisms of a heart hormone called GDF15."

The sequencing tool identified major types of heart cells, such as cardiomyocytes, fibroblasts and endothelial cells, as well as rarer cardiac cell types. The study team found great variety within each cell type, as well as indications of functional changes in the heart cells during both normal and diseased conditions. For example, the researchers detected metabolic changes in fibroblasts, the fibrous cells that make the heart abnormally stiff in heart disease.
Another finding concerned gene networks that regulate production of cardiac hormones in heart disease, specifically GDF15, which slows overall body growth, presumably to reduce the energetic demands on a damaged heart. Such signaling, said Pei, could reveal more about the biological mechanisms that underlie the growth restriction commonly seen in children with congenital heart disease.

Greater understanding of cardiac biology, as provided in this research, said Pei, may lead to targeted therapies aimed at key gene networks that could offer better treatments for heart patients. "This research was a first step in defining the transcriptional landscape of normal and diseased heart at high resolution," said Pei, who added that future work in his and his collaborator's laboratories will investigate how heart disease progresses over a longer timespan than the early postnatal period. The research tool may also offer opportunities to investigate diseases in organs and systems beyond the heart.

More information: Peng Hu et al, Single-nucleus transcriptomic survey of cell diversity and functional maturation in postnatal mammalian hearts, Genes & Development (2018). DOI: 10.1101/gad.316802.118

Provided by: Children's Hospital of Philadelphia
Agile software development is the organization of people to work independently, make their own decisions, and contribute to the greater whole. Of course, the overarching purpose of agile development is to allow teams to adapt more quickly to the demands of the market. Pair programming has emerged as a useful tool in the agile software development toolbox.

Drawbacks of working alone

Traditionally, programmers work alone, building a feature, a function, or a single app. But that's not always ideal. Working by oneself can be monotonous, while the reverse—working with a large group of people—limits your individual autonomy. In groups larger than four, one person's intelligence tends to dominate. So working in a pair, for many programmers, is just right.

Pair programming: two heads are better

Pair programming is intentionally putting two developers together to accomplish a task. Both business managers and the devs themselves appreciate this two-person approach, but often for different reasons.

For business managers with their eyes on KPIs, pair programming looks like a productivity win. Pair programming:
- Reduces mistakes
- Gets more done, faster
- Allows the company to act when problems arise
- Boosts programmers' morale

For the programmers, pair programming:
- Lets you talk with a partner
- Allows you to share your struggles and accomplishments when solving a problem
- Promotes continuous cross-training
- Builds team and coder relationships
- Encourages both learning and teaching, increasing your sphere of value

For the coder, pair programming can feel like you're sticking it to the man, giving the pair a sense of purpose. Because, while the company leaders believe this work practice increases productivity, which may well be true, the coder gets to have more fun.

How pair programming works

Pair programming is a practice where two programmers work together. Often one writes code while the other reviews it. They trade roles back and forth, much as in British Primary Pedagogy (multi-age classrooms), getting to be both the player and the coach, both the writer and the reviewer; or, in the words of the pair, the driver and the navigator.

The pairing method is great for learning. Developers who program in pairs report they:
- Learn faster
- Make fewer mistakes
- Spend less time on small problems and more time on large problems

While the driver gets to show off their talents, or simply receive personal attention from another in order to learn, each person in a pair gets to feel special. How and when breaks are taken is up to the pair. Improvise; trust your instincts. If improvising fails, there is research to lean on: some swear by the Pomodoro Technique or the 52/17 method to define their work-to-break ratio.

Pair programming increases security

Perhaps the single largest benefit of programming in pairs is that it increases the security of whatever you're building. The resulting product is likely to have security benefits like:
- Fewer bugs
- Internal auditing against backdoors

Through pair programming, an extra set of eyes catches potential bugs in the code. We know that when code works, it's excellent. But code can work for many, many reasons, and in those ways the code can work, it can perform its task in unsuspected ways, too. When code works the way a developer wants it to, but has many methods of working, or works for the wrong reasons, it becomes a vulnerability for adversaries. Cheats, or hacks, are made.

If Mom said, "Go to bed at 8pm," and her only means of verifying the time were the clocks in the house, then it's possible to change all the clocks in the house so bedtime is later. The sketch below makes this idea concrete.
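Here is a hypothetical, minimal Python illustration of code that "works for the wrong reasons": the check passes under normal conditions, but it silently trusts an input (the local clock) that the user controls. A navigator reviewing this code would flag that assumption.

from datetime import datetime

def past_bedtime() -> bool:
    # Trusts the local system clock -- exactly the "clocks in the house."
    # Anyone who can reset the clock can move bedtime.
    return datetime.now().hour >= 20

if past_bedtime():
    print("Lights out!")
else:
    print("You may stay up.")

# A reviewer's fix: verify the time against a source the user does not
# control (for example, a trusted NTP server) before deciding.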
When multiple eyes look at code, they can catch simple bugs and write better code to prevent easy hacks before they occur. And while the members of a pair review each other's work, they can also notice if the other has built a backdoor into the software. Unless they're a George Clooney and Brad Pitt duo, pair programming has a built-in security mechanism society calls accountability, or integrity, that helps ensure the submitted code is clean.

When to use pair programming

Not every task or project is well suited for pair programming. Programming in pairs is great for training. The back-and-forth dialogue helps the pair understand a new concept or where the other person is coming from. It builds teams, and trust among teams. Pair programming, then, is ideal for:
- Teaching students
- Training new staff
- Solving cross-functional problems
- Promoting continuous cross-functional training
- Ideating and innovating

But when the task ahead is well-defined, the sprint needs to be made, and everyone knows what they're doing, sometimes it's the good old-fashioned headphones-on, coffee-at-the-ready grind that a programmer needs to get the job done.

For more on this topic, check out the BMC DevOps Blog.
Securing the nation's Critical National Infrastructure (CNI) is no easy task. Encompassing everything from our electricity and water supply, power plants, and emergency services through to our transportation facilities, these are the systems that keep the proverbial wheels turning for our nation. Because of their criticality, it is imperative that they are protected against not only physical threats but the risks that come from the cyber world as well.

For the security practitioners responsible for CNI, the focus is to ensure the business is protected from both external and internal cyber attacks and that any incident doesn't leave business-critical data vulnerable to theft, compromise or exfiltration. Moreover, when it comes to securing supervisory control and data acquisition (SCADA) and other CNI systems, the security, or lack of it, could mean a potential life-or-death situation. Take, for example, the recent incident in Oldsmar, Florida, where a hacker attempted to poison the water supply by raising the sodium hydroxide level to a lethal concentration via a remote access solution that enabled the hacker to control an operator's machine. The attempt, which was fortunately spotted before any harm could be done, highlights the threat these facilities face.

Bridging the digital divide

The challenge in securing CNI boils down to the fact that many of these systems were never designed to be connected to the Internet and integrated with a slew of other solutions and devices. Built on legacy technology, they ran standalone, separate from other parts of the network, and used air gapping as the primary defence. With no connection to the wider internet, there was no way for a hacker to interfere without physically accessing the machinery. Yet this changed as organisations undertook digital transformation projects and, due to the pandemic, deployed remote working solutions that encourage workers to connect to systems from anywhere, at any time. As a result of this rapid digitalisation, many CNI systems have become vulnerable to cyber attack. At the end of 2020 we conducted research to see how big a threat these connected, yet unprotected, SCADA and IoT-related devices really are.

Assessing the scale of the threat

What was evident was the sheer scale of critical devices that were open to potential attack due to a lack of security controls. We conducted a search on Shodan, a security-focused search engine for Internet-connected devices, to hunt for visible connected devices, focusing specifically on six groups of devices using SCADA protocols. Despite a number of high-profile attacks on SCADA systems, we discovered 43,546 unprotected devices online. The majority of these were using protocols produced by Tridium (15,706) and BACnet (12,648). The rest consisted of protocols from EtherNet/IP (7,237), Modbus (5,958), S7 (1,480) and DNP (517).

There was some evidence that Modbus and S7 are being taken more seriously from a security perspective. The reason? Modbus and S7 are both mature technologies that have demonstrated continuous improvement in their security posture – perhaps as the result of many years in the public eye. However, other SCADA protocols do not appear to have made any concessions to cyber security.

Delving further into the findings revealed that the United States topped the table with the biggest attack surface, a total of 25,523 unprotected devices. Others high up the list of the top ten countries with unprotected devices included Canada as well as European countries such as Spain, Germany, France, and the UK. The majority of the devices found in the UK were Tridium devices, of which there were 583.
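As a rough illustration of how such a census can be run, the snippet below uses the official Python client for Shodan to count exposed hosts per protocol. The API key is a placeholder, and the port-to-protocol mappings are common defaults assumed for illustration; they are not taken from the original research.

# pip install shodan
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

# Assumed default ports for the six SCADA protocol families discussed.
queries = {
    "Tridium Niagara Fox": "port:1911",
    "BACnet": "port:47808",
    "EtherNet/IP": "port:44818",
    "Modbus": "port:502",
    "S7": "port:102",
    "DNP3": "port:20000",
}

for protocol, query in queries.items():
    try:
        total = api.count(query)["total"]  # count() avoids paging full results
        print(f"{protocol:20s} {total:>8,} exposed hosts")
    except shodan.APIError as err:
        print(f"{protocol}: query failed ({err})")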
How can we plug the security gap?

Taking a proactive approach to CNI security is imperative, but the first mistake security teams make is assuming they can clone their existing IT security strategy and implement it in exactly the same way. This will not work. Instead, the security team needs to develop a specific security strategy that encompasses all of the Operational Technology (OT) elements and works alongside the IT security strategy, while also accounting for the specialised and distinct systems and technology involved.

The best place to start is ensuring the organisation has full visibility of the entire network, infrastructure and assets that are within and connected to the business. Without this, vulnerabilities are missed, giving a hacker a clear route into the network. The importance of mapping the network and keeping a constantly updated, live list of active and dormant assets should not be underestimated. Furthermore, this asset inventory needs to be constantly maintained and updated to keep track of possible vulnerabilities as the infrastructure develops and grows.

Secondly, the importance of a properly secured infrastructure cannot be overstated. Critical SCADA, IoT and CNI-related devices should be isolated from the company's general IT network, usually behind a second firewall. The idea is that the networks are "separate but together", not just one big network. Continuous security monitoring of the network and environment is critical.

Finally, continuous improvement of the networks is necessary. Firmware patches should be applied to firewalls and switches as soon as possible after testing, with perimeter devices (such as firewalls or machines exposed to the Internet) being a priority. Strong internal controls should be applied to restrict traffic that might not be trusted, and networks should always follow the rule of least privilege, not only for devices but for users as well.

Establishing full visibility and control of all Internet-connected devices and networks is key as technology continues to become more digitally intertwined to accommodate the change in working practices. This is a global problem, and one that threat actors will continue to pressure-test and launch targeted attacks against. Knowing what they have and where means security teams will be much better informed and equipped to identify and mitigate cyber threats that seek to cause havoc to the foundations of a nation.
At heart, a virtual classroom is simply a class that meets virtually. In comparison, distance learning and e-learning classrooms are self-paced, requiring no set meeting times. While virtual learning sounds simple, it does come with a number of unique challenges, especially for children.

Before Covid-19, adults who could not get to a physical classroom made up the majority of virtual learning students. This population is self-motivated and has hopefully already had the social interactions in early life that sustain mental and emotional well-being. For children stuck at home, however, the virtual classroom must be designed so that their individual emotional needs can be met while they are also asked to study at home, where they might not have had to do so before.

A good classroom requires attendee/student data: names, class name or number, grades, participation rates, and so on. The software supporting the virtual classroom should record this data during every class period, ideally on a real-time basis. Most online classrooms use third-party cloud-based software systems to provide a classroom-like environment and to record data. Every classroom should have a secure and responsive system that meets every student's needs and quickly analyzes student data, especially student behavior data such as how much time they spent in the virtual classroom and how much they participated in forums. Other useful data, especially for children, may include parental input, games, or NLP programs to identify depressed mood. Software integrations with machine translation or transcription services may also provide a lot of help to students.

Students may find it difficult to be in class on time, no matter their age, if they are learning at home; the boundaries between school and home are difficult for anyone to navigate. If the virtual classroom is in a different time zone, this difficulty only increases. Additionally, not all households have access to the technology needed to reach the internet, nor to reliable internet service. Finally, moving to a virtual classroom setting unprepared is very difficult, even for teachers. Collecting and making use of student data in a new way comes with a learning curve that even the most user-friendly software service can't make entirely smooth. Furthermore, when parents provide input to teachers, those teachers with no data science training may find it very difficult to integrate and analyze all the information.

Sources: MDPI, "Improvement of an Online Education Model with the Integration of Machine Learning and Data Analysis in an LMS"; Guide2Research, "50 Online Education Statistics: 2020 Data on Higher Learning & Corporate Training"

The researchers surveyed 105 educators and 10 administrators to determine their expectations and concerns before running a privacy and security analysis of 23 popular platforms, including Zoom and Microsoft Teams… The researchers found that 41% of the 23 platforms assessed had policies that "permitted a platform to share data with advertisers, which conflicts with at least 21 state laws". Around a quarter (23%) allowed a platform to share location data.
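Stepping back to the data-collection requirements discussed above, a platform might capture per-class behavior data in a structure like the following. This is an entirely hypothetical sketch; the field names are invented for illustration and do not correspond to any particular product.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ParticipationRecord:
    """One student's behavior data for a single class period."""
    student_name: str
    class_id: str
    joined_at: datetime
    minutes_present: int = 0
    forum_posts: int = 0
    questions_asked: int = 0
    notes: list[str] = field(default_factory=list)  # e.g., parental input

record = ParticipationRecord("A. Student", "MATH-101", datetime.now())
record.minutes_present = 42
record.forum_posts = 3
print(record)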
It is estimated that over 5 billion people are using mobile telecommunications services globally. The sheer number of people using these mobile services makes them a prime target for scams of all sorts. The fastest-growing scam aimed at consumers is called smishing: SMS phishing. As we become systemically reliant on mobile devices, the increased connectivity of the populace means that we are all at greater risk from cybercriminals.

The goal of a smishing text message is to trick the mobile device user into willingly sharing sensitive information so it can ultimately be used for financial gain. These malicious vehicles come in the form of an inconspicuous SMS text and normally contain some material that prompts you to click on a link within the message. Cybercriminals will even go to the extent of creating identity-fraud monitoring alerts that look real and trigger your fight-or-flight response. If you click on the link, it will take you to a website that solicits your login details. They do this by making the landing page look very similar to a website you would normally use, like a credit bureau site, or even your bank, Netflix, or social media. The aim is for you to willingly supply the sign-in profile information that cybercriminals will exploit at some point in the future.

This smishing threat is ever present, and cybercriminals will not stop using it because it is an effective method of data collection for them. So, what can you do to prevent yourself from falling victim to this type of scam?
- Do not open text messages from unknown people. If you receive a text from an unknown number, particularly one that contains a link, don't engage with it.
- Don't open links in text messages if you accidentally open an unknown text.
- Never log in to an account from a link in a text message. Even if you believe the text to be from a reliable and trusted source, it's safer to go directly to the website in your browser to log in rather than logging in from a link.
- Do not provide sensitive information via text message.
- If you inadvertently clicked on an SMS link and provided information, take immediate action. Depending on what you provided, this can mean speedily changing your password, editing your account information, freezing your credit cards, or contacting your bank to report fraud.
- See whether you can filter or block texts from unknown numbers on your phone (a bare-bones version of this idea is sketched after this article). Although not all smartphones have this feature, using it can prevent mishaps in the future.

Smishing is widely used by cybercriminals because it is a simple, inexpensive, and effective method of collecting information. It operates by taking advantage of human error and innate trust, which is the greatest weakness in cybersecurity and can never be eliminated, because everyone makes mistakes. Cybercriminals just need to send malicious links, hidden in ostensibly useful messages, to catch a few customers off guard and gain access to their information.

LibertyID provides expert, full service, fully managed identity theft restoration to individuals, couples, extended families* and businesses. LibertyID has a 100% success rate in resolving all forms of identity fraud on behalf of our subscribers.

*Extended families – primary individual, their spouse/partner, both sets of parents (including those that have been deceased for up to a year), and all children under the age of 25
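For the curious, here is a hypothetical, bare-bones Python version of the filter-unknown-senders idea from the list above. Real carrier- and OS-level filters are far more sophisticated; this only flags messages from unknown numbers that contain a link.

import re

# Very rough link detector; real smishing filters use many more signals.
URL_PATTERN = re.compile(r"https?://\S+|\bwww\.\S+", re.IGNORECASE)

def looks_like_smishing(sender: str, body: str, known_contacts: set[str]) -> bool:
    """Flag texts from unknown senders that try to get a click."""
    return sender not in known_contacts and bool(URL_PATTERN.search(body))

contacts = {"+1-555-0123"}
msg = "Unusual activity on your account. Verify now: http://bank-check.example"
print(looks_like_smishing("+1-555-9999", msg, contacts))  # True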
The impact of AI on cybersecurity

Over the past few years, there has been rapid growth in instances of cybercrime. Data reveals that cybercriminals can penetrate 93% of business networks. Businesses are trying new strategies to secure their networks, systems, and devices. Unfortunately, hackers are getting smarter and keep outpacing these strategies.

In the wake of the pandemic, most organizations are adopting a digital-first approach, as it helps them overcome the limitations of legacy systems and brings agility*. The thrust toward digital transformation across the globe has given impetus to cyberattacks on data, networks, and devices. The increased exposure to cyberattacks stems from the use of technologies such as IoT and the cloud for the exchange of data. The cloud environment and embedded technologies do not, by themselves, provide a secure environment for data, making businesses vulnerable to cyberattacks.

Most businesses currently deploy conventional software to secure their data, networks, and devices. Unfortunately, hackers keep finding innovative methods to break in. This is the reason that 68% of business leaders feel their cybersecurity risks are increasing. Businesses are thus looking to leverage innovative technologies like AI to mitigate cybersecurity threats. Some of the AI technologies that have come to the forefront for this purpose are Machine Learning (ML), Natural Language Processing (NLP), and Context-Aware Computing. These technologies can be applied on-premise or in the cloud. AI-based cybersecurity systems leverage a variety of software, including APIs for speech, vision, and language recognition, to name a few. In addition, machine learning algorithms are used for effective cybersecurity applications. AI in the cybersecurity market is projected to grow to a whopping $38.2 billion by 2026, from $8.8 billion in 2019.

AI-based technologies can analyse many events in minimal time and identify cyber threats. This analysis helps prevent cyberattacks. The advantage of using technologies like AI and ML is that they continually learn and improve from analysis and experience to predict cyber threats before they occur.

How does AI work in Cybersecurity?
- AI extracts large volumes of structured and unstructured data.
- Machine learning and deep learning technologies are leveraged to process this data.
- AI learns from this process, facilitating software development to mitigate cybersecurity risks.
- AI software executes the needed steps in a matter of minutes and provides cybersecurity professionals with valuable insights for informed action.

Technologies such as AI and ML automate threat detection and provide valuable insights to cybersecurity professionals. A collaboration of artificial intelligence and human intelligence facilitates quick and effective responses to cyber threats and minimizes instances of cyberattacks. A minimal sketch of what such automated detection can look like follows.
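The sketch below illustrates the mechanics with an unsupervised anomaly detector, scikit-learn's IsolationForest, flagging an unusual network event. The feature choices and numbers are invented for illustration; production systems use far richer data and models.

# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per network session:
# [bytes_sent, bytes_received, failed_logins, session_minutes]
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 800, 0, 30],
                            scale=[100, 150, 0.5, 10],
                            size=(1000, 4))

# Learn what "normal" looks like, then score new events.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

suspicious = np.array([[50_000, 120, 12, 2]])  # exfiltration-like burst
print(detector.predict(suspicious))  # [-1] means flagged as anomalous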
How can the cybersecurity industry benefit from artificial intelligence?

Reduces manpower requirements

Businesses across the globe face the risk of cyberattacks, so there is a surge in demand for cybersecurity professionals. However, there is a dearth of skilled cybersecurity professionals. Automation with AI and ML in cybersecurity reduces the dependence on cybersecurity professionals. These technologies support cybersecurity governance in the organisation by reducing manpower requirements.

Speeds up threat detection

An enterprise needs to continually monitor its cybersecurity data to prevent cyberattacks. Traditionally used software takes time for data analysis, causing delays in the detection of cyber threats. Data reveals that in 2021 the time to identify a data breach was 212 days! AI-based technologies extract and analyse data in a few minutes. The availability of this data enables cybersecurity professionals to respond to cyber threats in a timely manner. Quick action can reduce the instances of cyberattacks.

Detects new threats

AI offers predictive analysis and pattern recognition to detect even small changes that can lead to a data breach. This information enables cybersecurity professionals to take action to prevent cyberattacks. AI-based systems can also identify the areas that are vulnerable to data breaches so that cybersecurity professionals can allocate tools to prevent them.

Prevents fraudulent transactions

AI and ML keep sensitive data safe to avoid fraudulent transactions. For instance, in the case of credit card fraud, these technologies are quick to detect any unusual activity, uncustomary purchases made from a different device, or any odd transactions. In such cases, AI helps verify credit-card holders and minimize the number of fraudulent transactions.

AI-based technologies go far and beyond to empower cybersecurity professionals to mitigate risks. Artificial Intelligence provides the most concrete cost mitigation in data breaches, helping businesses save $3.81 million per breach. However, these risks continue to rise because cybercriminals also deploy AI to achieve their ends. They use technology to predict the security measures that are likely to be implemented and look for more advanced hacking techniques. Hence, the cybersecurity landscape is constantly evolving to find new methods of preventing cyberattacks. Human and technology collaboration is the way ahead to provide a safe and secure environment in which a digital framework can thrive.

*For organizations on the digital transformation journey, agility is key in responding to a rapidly changing technology and business landscape. Now more than ever, it is crucial to deliver and exceed on organizational expectations with a robust digital mindset backed by innovation. Enabling businesses to sense, learn, respond, and evolve like a living organism will be imperative for business excellence going forward. A comprehensive, yet modular suite of services is doing exactly that. Equipping organizations with intuitive decision-making automatically at scale, actionable insights based on real-time solutions, anytime/anywhere experience, and in-depth data visibility across functions leading to hyper-productivity, Live Enterprise is building connected organizations that are innovating collaboratively for the future.
Malware-infected computers are a problem that affects all Internet users, as the infected machines can be strung together in botnets to launch denial-of-service attacks and spam campaigns. Currently, cleaning up infected computers is the sole responsibility of individual users, and ISPs are often reluctant to get involved. Under a scheme newly proposed by a researcher at the University of Cambridge, the government could be a place to turn.

Richard Clayton of the Computer Laboratory at the University of Cambridge recently presented a paper titled "Might Governments Clean-up Malware?" at the Ninth Workshop on the Economics of Information Security (WEIS 2010), in which he called for government to provide subsidies for malware clean-up efforts.

"For the Internet to be safer for everyone, 'something must be done' to clean up the infected computers," he said. "But there are a number of barriers to this – mainly to do with incentives."

A central concern for individuals who are the victims of malware infection is the cost of cleaning their machines. "The cost of cleaning up malware is obviously a key issue," Clayton said. "The perception of it being a complex task, with expert help expensive and essential, goes a long way to explaining why customers delay malware removal and why ISPs are generally so reluctant to offer assistance."

To address the issue of cost, Clayton proposes that the government provide subsidies to encourage malware clean-up. "There might be a role here for government to step in and subsidize the clean-up," he said. "Such a subsidy will go a long way towards improving the incentive issues."

The malware plague can hit even the most cautious Internet user and can affect other users who are not directly infected. Users generally discover infections via detection software or by being notified of a problem, generally by an ISP or system administrator. "Once the user is aware that they have malware on their computer then they should always wish to remove it, and if well-enough informed they will generally do so," Clayton said. However, the complexities and perceptions surrounding malware generally mean that users consult others to clean their machines. Often, individuals consult family and friends, computer stores, or their ISP.

Within Germany, Holland and Australia, ISPs have entered into mutual agreements to deal with botnets. In the United States, Comcast has partnered with McAfee to provide clean-up services to customers.

In Clayton's model, an ISP would receive an abuse report about a user and notify the customer of the infection. The customer could then clean their system using free tools or have their computer cleaned by a technician for a nominal fee. Beyond that nominal fee, the government would pay the rest of the bill for cleaning up the malware. In addition to reducing data loss among citizens, "the rapid correction of the malware infection should prevent any loss of confidence in using the Internet," Clayton said. "Keeping confidence in the Internet high is an essential prerequisite to tempting people online, and keeping them there," he added.

While the private sector would provide the clean-up services, the involvement of the government would provide added credibility, according to Clayton. "The involvement of the government makes it easier to cajole ISPs into doing their part, and provides important assurance to citizens that the scheme is bona fide and that quality controls will be in place," he said.
For Clayton, the lack of action by the majority of the industry and the interconnectedness of the Internet keep this issue within the purview of government. "Given that almost every wickedness on the Internet is underpinned by malware-infected computers … this is clearly a legitimate area for government to consider getting involved in, and putting up money to improve," he said.
Alternate Data Stream (shortened as ADS) is a feature of the Windows New Technology File System (NTFS) that, surprisingly, has both good and bad aspects. In this article, we'll uncover both sides so that you are prepared to use it.

What are Alternate Data Streams?

An Alternate Data Stream is a little-known feature of the NTFS file system. It has the ability to fork data into an existing file without changing that file's size or functionality. Think of ADS as a 'file inside another file'. ADS exists in all versions of Microsoft's NTFS file system, and it has been available since Windows NT was released. It was originally intended to allow for compatibility with Macintosh's Hierarchical File System (HFS). Currently, all Windows operating systems, including the latest Windows 10, support the ADS feature.

So, what can you do with Alternate Data Streams? ADS allows you to store any type of file, such as texts, audios, videos, images, or even nefarious code like viruses or trojans. ADS contains metadata for identifying files according to various attributes, such as author, title, date modified, and more. Furthermore, hackers can use Alternate Data Streams to launch Denial of Service (DoS) attacks.

Benefits of ADS

Before we look at how an attacker can hijack ADS for malicious purposes, let's talk about some of its benefits, as described below.
- Windows Resource Manager leverages ADS to identify high-risk files that shouldn't be accessed.
- The Windows operating system uses ADS to encrypt and store files in a secure manner.
- The Windows Attachment Manager uses ADS as a file scanner. This explains why you sometimes receive warnings when you open a file downloaded from the Internet.
- The SQL database server uses ADS to maintain database integrity.
- Citrix's virtual memory uses ADS to boost DLL loading speed.
- Anti-virus applications, such as Kaspersky, use ADS to enhance the scanning of files.

Creating an Alternate Data Stream

Creating an Alternate Data Stream is not rocket science; it's extremely easy. Basic DOS commands like type can be used, in conjunction with the [ > ] redirect symbol and the [ : ] colon symbol, to fork a file into another file. Let's demonstrate the steps of using ADS to hide information in a file.

Step 1: Open the terminal and create a text file

C:> echo Today is going to be a great day > file1.txt

This command saves the given string to a text file called file1.txt.

Step 2: Confirm the contents of the file

Let's now confirm the contents of the file by using the type command, as shown below.

C:> type file1.txt
Today is going to be a great day

Everything is working well, just as expected. Then, let's check the directory listing.

C:> dir file1.txt

Step 3: Append new content to the hidden file

Let's execute the following command:

C:> echo The sun is all up and the coast is clear > file1.txt:hidden

It appears that we have created a new file called file1.txt:hidden, but that is not the case. We have just created an Alternate Data Stream within the file1.txt file under the name 'hidden'. A file named file1.txt:hidden does not exist. In fact, if we try to examine its contents, the Windows prompt will return an error, as illustrated below.

C:> type file1.txt:hidden
The filename, directory name or volume label syntax is incorrect

However, we can reveal the contents of the stream, as shown below.

C:> more < file1.txt:hidden
The sun is all up and the coast is clear

Remember, the 'original' data stream is still there.

C:> type file1.txt
Today is going to be a great day

Yet, when we check the directory, there's only one file, which is file1.txt.

C:> dir file1*

Here are three interesting points to note about the last directory listing.
- The timestamp has changed after adding the Alternate Data Stream to the existing file. That is the only indication that a change has indeed happened.
- The file size remains unchanged, as evidenced by the 36 bytes shown for file1.txt in the directory listing. This implies that a file could carry many Alternate Data Streams without your knowledge.
- Because of these subtle changes, it's difficult to notice Alternate Data Stream files in everyday use unless you go looking for them.
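That said, more recent Windows versions do ship with built-in ways to enumerate streams; assuming Windows Vista or later for dir /R, and PowerShell 3.0 or later for Get-Item -Stream, the following work:

C:> dir /R file1.txt

C:> powershell -command "Get-Item .\file1.txt -Stream *"

The first command lists each stream after the main file entry (for example, file1.txt:hidden:$DATA); the second shows the stream names together with their sizes.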
Risks Associated with Alternate Data Streams

Alternate Data Streams enable information to be hidden within other files. As such, they can be a security risk. An attacker can easily store malicious code or payloads in them and use them to damage your system. Let's consider this example.

C:> type c:\windows\system32\calc.exe > file1.txt:calc.exe

The above command copies the Windows calculator program into an ADS named calc.exe, which is attached to file1.txt. To launch the hidden calc.exe copy from its ADS in file1.txt, an attacker can run a command along the lines of start c:\file1.txt:calc.exe (this direct form worked on older Windows versions; newer releases block it, though workarounds exist). Now, suppose that was not a calc.exe file but destructive malware; it could lead to extensive damage to your system.

The greatest challenge with Alternate Data Streams is that, if used for nefarious purposes, they are extremely difficult to detect unless you go looking for them with the commands above or with third-party applications. Additionally, ADS cannot be turned off. Therefore, it's critical to institute robust measures to prevent its abuse.

Do you have any questions or comments? Please post them below.

Prevent Unauthorized Access to Sensitive Windows Folders!
- No more unauthorized access to sensitive data
- No more unclear permission assignments
- No more unsafe data
- No more security leaks
There are many places in an operating system where it might be desirable for you (or third-party application providers) to extend the functionality of what the operating system does. To provide this extensibility, many operating systems support invoking user-written programs at specific points. On IBM i, these are called exits or exit points. An exit point is a predefined interface where your program can get control; an exit program is the program you write that gets control from operating system functions at those predefined points.

In general, finding information about exit points in the Information Center is difficult; you have the following options:
- Use the API Finder and review All Exit Programs.
- Go to APIs by Category and select the category you are interested in. At the end of the list of APIs there is a section on Exit Points for that category (if there are any).
- Search on something you know about the exit point – its name, for example.

Unless you are specifically looking for exit point information, you probably won't stumble upon it. But the purpose of this blog isn't to talk about i exit points in a general manner.

There are two command exit points that have existed on IBM i for some time – the Command Analyzer Change exit and the Command Analyzer Retrieve exit. These exit points allow you to have a program run when a command is invoked. The Command Analyzer Change exit allows your program to run before the operating system passes control to the prompter component; the Command Analyzer Retrieve exit is called after the validity checking program is run but before control is passed to the command processing program. Other articles have been written on these topics that an Internet search will find for you.

PTFs released earlier this year enhanced the Command Analyzer Retrieve exit point to allow the exit program to optionally be called after control returns from the command processing program (CPP); this means your program can get control after the command processing program has completed, allowing you to take whatever action you may want after a command has been run. Of course, not all command processing may be complete even though control has returned from the CPP; there are many examples where the command initiates processing that completes asynchronously. This is a nice enhancement that provides some simple, yet potentially very valuable, function. PTFs are available for 6.1 and 7.1; the function is in the base release in 7.2 and later. For more information, see the "CL retrieve exit programs can run after command completion" article on the IBM i Technology Updates site.

This blog post was edited for currency on February 21, 2020. This blog post was originally published on IBMSystemsMag.com and is reproduced here by permission of IBM Systems Media.
Audio Monitoring for Sleep Studies

Polysomnography, also known as a sleep study, is a test used to diagnose patients with possible sleep disorders. Devices record brain waves, heart rate, blood oxygen levels, and breathing, along with eye and leg movements, throughout the night. During these studies, cameras that capture both video and audio are used in conjunction with medical sensors to verify the information and give sleep physicians a more complete picture.

Unfortunately, audio from the built-in microphones inside video cameras can present challenges at playback. From unclear speech to picked-up background noise, the recordings are not always clear enough for doctors and technicians to analyze. "It can be a challenge to get a good system that records audio and video simultaneously," said Bettina Stiles, clinical supervisor for sleep center manager SleepMed at Chest Medicine Associates, in an article for Sleep Review magazine.

By utilizing specialty audio equipment, such as an external microphone that ties into a video camera feed, physicians can get clear, uninterrupted audio to go with their video. For medical labs designed for sleep studies, external microphones are recommended for optimal sound capture. Sound solutions like Louroe Electronics' Verifact® A microphone can be placed on ceilings in patient rooms. For rooms with higher ceilings, the Verifact® B can drop down to hang closer to the bed and better pick up all audio. Capturing exceptional sound quality, Louroe's suite of audio systems allows physicians to more effectively monitor and assess their patients.

Below are a couple of examples that illustrate the important role of clear audio and video in helping doctors make a more accurate diagnosis.

While patients are hooked up to a variety of sensors during their sleep study, certain disorders, like REM behavior disorder, can be better diagnosed through video and audio recordings paired with sensor data. Patients with this disorder, or disorders like it, will attempt to act out their dreams. While some sensors may only show the patient as "asleep," audio and video recordings will clearly show how the patient is moving and what they may be trying to say. These recordings can also help to identify whether a patient is having seizure activity or just tossing and turning. The technology can also help to confirm sleep talking and sleep walking disorders.

Getting good-quality signals from the sensors placed on pediatric patients can be a challenge, especially with younger children, making audio and video verification a must for proper diagnosis. Clear audio paired with video can help physicians determine if a child's breathing is labored, or if it even matches what the sensors are indicating. In some cases, it can help determine whether perceived issues, such as sleep apnea, are even valid. In labs that allow co-sleeping with parents, audio monitoring and video can also help to isolate the pediatric patient's rhythms and ensure that the parent is not influencing the results of the test. This also helps to document the types of interactions happening between parent and child during co-sleeping that they may not be aware of and that could be harmful to the child, such as rolling over.
The applications for computer vision are practically infinite, replicating any visual task—and with Chooch's computer vision platform, it's never been easier to deploy fast, highly accurate AI models for everything from defect detection to security to complex counting. That's why organizations of all sizes and industries are now applying computer vision to a wide range of tasks to improve efficiency and accuracy, boost their productivity, and cut costs. Computer vision solutions from Chooch AI integrate training and deployment into a single AI platform, making it simpler than ever to get started. Edge AI enables businesses to run AI models on a variety of embedded systems, creating a network of fast, lightweight, interconnected devices enhanced with computer vision capabilities.

To step back for a moment: computer vision is a subfield of artificial intelligence that seeks to give computers the ability to "understand" aspects of images and videos. Below, we'll discuss everything you need to know about computer vision—including how you can implement it for your organization.

These days, nearly all computer vision models are built using a class of machine learning algorithms known as deep learning. The basic building block of deep learning is an artificial neuron, which is connected to other neurons in a system called an artificial neural network, a rough approximation of the human brain. A single neural network may contain thousands, millions, or billions of neurons, which are organized into more complex layers or structures, depending on the network's architecture. Each neuron receives signals from other neurons, performs a mathematical calculation on these inputs, and sends the result to other connected neurons. Neurons also have internal parameters known as "weights" that affect the strength of their output on the rest of the network.

Depending on the task, computer vision models are trained on large datasets of images and/or videos. For each input data point, the network produces a prediction as output, which is then compared against the ground-truth output. If the network's prediction is incorrect, the weights of the neurons in the network are adjusted via a process known as backpropagation, making it more likely that the system will produce the correct answer next time.

Training computer vision models can be highly intensive in terms of both time and effort, since datasets for the task you want may not initially be available. For example, if you want to train an AI model to recognize human faces in an image, you'll need a large dataset of images, each one annotated with the location of the face(s). Chooch dramatically simplifies the AI model training process. From within the Chooch dashboard, you can easily annotate your images and videos with bounding boxes or polygons. Since having a large, diverse set of images is essential for peak performance, you can even use Chooch's synthetic data and data augmentation features if you need to increase the size of your dataset.

Once training is complete and the network has reached a satisfactory level of accuracy, the network's architecture and weights are saved in a condensed format known as the "model." You can then deploy this model in a production environment, e.g. in the cloud or on an edge device, and use it for real-world data. With Chooch's cutting-edge AI platform, it's never been easier for anyone—regardless of technical skill or experience—to build and deploy powerful AI models. When you send an image or video to the Chooch API, the Chooch Smart Network selects the appropriate AI model and generates the relevant prediction.
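In practice, calling a hosted vision API of this kind usually amounts to an authenticated HTTP request. The sketch below is a generic illustration only; the endpoint URL, field names, and response shape are invented placeholders, not Chooch's documented interface.

# pip install requests
import requests

API_URL = "https://api.example.com/v1/predict"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                        # placeholder credential

with open("apple.jpg", "rb") as image_file:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": image_file},
        timeout=10,
    )

response.raise_for_status()
# Hypothetical response: a list of labels with confidence scores.
print(response.json())  # e.g. [{"label": "Granny Smith", "confidence": 0.97}]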
You can train and deploy AI models on an extremely wide range of tasks, giving you a great deal of flexibility in how you use them. For example, consider a photo of an apple: you could train an AI model to distinguish apples from other fruits, to recognize the color of the apple (e.g. green), or even to determine the type of apple (e.g. Granny Smith). The Chooch integrated, end-to-end AI platform generates highly accurate predictions in just a fraction of a second. Using imagery from cameras, drones, cell phones, medical imaging devices, and more, Chooch AI models can deliver results when and where you need them, whether on the edge or in the cloud.

Thanks to technological advances and new research in deep learning and neural networks, the field of computer vision has made great strides in both accuracy and speed, with state-of-the-art models that can equal or even exceed the performance of humans on many tasks. According to forecasts by market intelligence firm Research & Markets, the global computer vision market is predicted to skyrocket from $16 billion in 2019 to $51 billion in 2026, with a breakneck annual growth rate of 26 percent. Why is this? Because there's real business value in AI and computer vision.

It's no exaggeration to say that you can use computer vision for any kind of visual data—from still images to videos, from infrared to X-rays. You can train an AI model to recognize any kind of pattern or trait in this visual data—including objects, concepts, faces, actions, and more. Need some ideas? The Chooch AI app (available for iPhone and Android) lets you demo some of our platform's capabilities, all in the palm of your hand. The app can recognize over 200,000 classes of objects in still images and videos.

Whatever your field or industry, computer vision can help your organization thrive in a constantly evolving business landscape. Three common use cases are healthcare, workplace safety, and manufacturing. The field of healthcare is using computer vision for a wide range of applications. Workplace safety and security is another domain in which computer vision can be tremendously helpful and effective—and even save lives. Last but not least, computer vision for industrial and manufacturing AI is rapidly growing in popularity. Beyond these use cases, the field of computer vision spans many different subfields and tasks.

We've barely scratched the surface of what's possible with computer vision. Sectors such as media, geospatial, retail, and many more have all successfully implemented transformative computer vision solutions. No matter your company's size and industry, computer vision can help you achieve greater efficiencies, cut costs, beat your competitors, and better serve your customers.

Are you looking to implement AI within your own organization? We're here to assist with the next steps. Get in touch with Chooch's team of AI experts for a chat about your business needs and objectives, or sign up today to create a free account on the Chooch platform.
One of the most common ways bad actors gain access to digital environments is by guessing passwords. With so many devices being interconnected, cracking into one device could mean access to several devices, as well as extensive access to sensitive information. It is always a good idea to change your password often. When you change your password, pause to consider the following.

Is your password long and complex enough? Long and complex passwords require more effort and time for a hacker to guess. Consider using passphrases, which are long and exceed even the most stringent password length requirements. For example: "My first car was a 1977 chevy camaro". This passphrase will meet any password complexity and length requirement. It makes use of upper and lowercase letters and numbers, and the space is considered a special character.

Is money tight for new software and hardware solutions? Passphrases make use of existing technological capabilities. Microsoft Active Directory allows for a 64-character password length; the example above is just 36 characters. (A rough comparison of guess spaces is sketched after this article.)

Are you experiencing resistance in implementing more stringent password policies, or frequent user lockout issues? Many institutions acknowledge the importance of security but also resist security practices that are too stringent, for various reasons including employee morale. Passphrases can counter many of those objections, as well as reduce the time wasted by IT support staff who have to address account lockouts.

Are you concerned about network security at your healthcare organization? Fortified Health Security can help. Contact our team of cybersecurity specialists to discuss potential risk and compromise across your organization today. Fortified Health Security is committed to strengthening the security posture of healthcare organizations. In the spirit of Cybersecurity Awareness Month, we will be posting daily information for you to consider when maintaining your organization's cybersecurity program.
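To make the length argument concrete, here is a small back-of-the-envelope comparison of brute-force guess spaces. The symbol-set sizes are illustrative assumptions: 94 printable ASCII characters for a "complex" 8-character password, and a deliberately pessimistic 27-symbol alphabet (lowercase plus space) for the 36-character passphrase.

import math

password_space = 94 ** 8     # random 8-char password, 94 printable symbols
passphrase_space = 27 ** 36  # 36-char passphrase, lowercase + space only

print(f"8-char complex password: ~2^{math.log2(password_space):.0f} possibilities")
print(f"36-char passphrase:      ~2^{math.log2(passphrase_space):.0f} possibilities")

# Even if an attacker guesses whole words instead of characters (say, 7 words
# from a 20,000-word vocabulary, roughly 2^100 combinations), the passphrase
# still dwarfs the ~2^52 space of the short password.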
With the development of data communication and telecommunication technology, the metropolitan area network (MAN) has become a focus of attention in its own right rather than an afterthought to the wide area network. Due to the high bandwidth and data-transmission transparency of DWDM (Dense Wavelength Division Multiplexing) technology, people naturally hoped to introduce DWDM as the platform of the MAN. However, in the MAN an optical amplifier is not needed, on account of the short transmission distance. If the same DWDM devices as those of the WAN (Wide Area Network) were adopted in the MAN, the game would not be worth the candle; thus DWDM is seemingly unsuitable for the MAN. But as a simplified version of DWDM technology, CWDM can satisfy the practical requirements of the metropolitan area network over short transmission distances without the need for expensive devices such as amplifiers and transceivers. Its adoption is therefore undoubtedly the best choice for saving cost.

What Is CWDM?

CWDM (Coarse Wavelength Division Multiplexing, or sparse wavelength division multiplexing), as its name implies, is a close relative of dense wavelength division multiplexing. Because the transmission distance of the metropolitan area network is usually not more than 100 km, the system has low requirements for the transmission attenuation of single-mode fiber, and no fiber amplifier is needed. In this way, the bandwidth window of 1200~1700 nm can be used, and the adjacent wavelength interval is relaxed to 10~20 nm, which can still form a wavelength division multiplexing system of dozens of wavelengths. This is the coarse wavelength division multiplexing (CWDM) system.
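The coarse grid is easy to picture. Drawing on the ITU-T G.694.2 definition (stated here from the standard, not from the text above), CWDM channel centers sit 20 nm apart, from 1271 nm to 1611 nm:

# ITU-T G.694.2 CWDM grid: 18 channels, 20 nm spacing.
channels_nm = [1271 + 20 * i for i in range(18)]

print(len(channels_nm))  # 18 wavelengths, no amplifier required
print(channels_nm)       # [1271, 1291, 1311, ..., 1591, 1611]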
What Are the Advantages of CWDM?

Compared with DWDM communication in the backbone network, CWDM widens the interval between the multiplexed optical wavelengths. It can take advantage of light sources without temperature control and can be used to build communication systems at low cost. The features and advantages of CWDM are as follows:
- The wide wavelength interval greatly reduces the performance requirements for optical devices, such as the corresponding lasers and filters.
- Lasers need no cooling function and no wavelength locking, and can be directly modulated.
- Due to the short transmission distance (within 50 km), there is generally no need for an amplifier.
- Low cost: CWDM technology makes full use of the short transmission distances characteristic of the metropolitan area network. It can be applied directly across the entire optical fiber transmission window of 1310~1560 nm and performs wavelength division multiplexing on a wider wavelength interval than a DWDM system. On account of the wide wavelength interval and short transmission distance, CWDM does not require expensive lasers, which greatly reduces laser cost. Besides, CWDM needn't utilize complex control techniques to maintain demanding system requirements; it just needs low-cost coarse wavelength division multiplexers/demultiplexers and multi-channel laser receiving/transmitting devices as the relay. The reduction in component cost and system requirements makes a CWDM system cheaper than a DWDM system: the cost of a CWDM filter is 70% less than that of a DWDM filter, and the adoption of new filters and multiplexers/demultiplexers may further reduce costs. The initial design of CWDM was aimed at low-cost wavelength division multiplexing, so cost advantage is the highlight of this technology.
- Low power consumption: the operating cost of an optical transmission system depends on the maintenance and the power consumption of the system. In this respect, the power consumption of a CWDM system is much lower than that of a DWDM system. For example, a DWDM laser with a cooler and control circuit consumes about 4 W per wavelength, while an uncooled CWDM laser consumes only about 0.5 W. A four-wave CWDM optical transmission system consumes about 10~15 W, while a comparable DWDM system consumes up to 30 W. In a DWDM system, as the total number of multiplexed wavelengths and the single-channel transmission rate increase, power consumption and temperature management become key problems in circuit board design.
- Small size: a CWDM laser is much smaller than a DWDM laser. CWDM uses a laser without a cooler; the laser is usually composed of a laser chip and a monitoring photodiode sealed in a metal container with a glass window. The size of a DWDM laser transmitter is about five times that of a CWDM laser transmitter.

In addition to the above aspects, CWDM has further obvious advantages in security, network flexibility and so on.
- Flexible service interface: the CWDM system is mainly used in the metropolitan area network and provides multi-service transparent transmission. It is flexible in the client-side service interface and can support the following services:
  - SDH services: the SDH services of STM-x (x = 1, 4 and 16) based on ITU-T G.707;
  - Ethernet services: 10M/100M and Gigabit Ethernet services;
  - ATM services: ATM services of STM-x (x = 1, 4 and 16) based on ITU-T G.707;
  - other services likely to be widely applied in the future, such as Fibre Channel, ESCON, FICON, Digital Video and 10GE, as well as PDH services of 8/34/140 Mb/s.

With these advantages, CWDM technology is naturally in great favor with many manufacturers and suppliers in the optical communication industry. It is believed that CWDM will gain further advantages as the technology develops. In addition, Gigalight, as a professional supplier of optical components, has launched a series of CWDM products to follow the market trend, among which the QSFP28 CWDM4 optical transceiver is the hottest one.
Identity management and access control are two sides of a coin; both are essential for security, but neither is adequate by itself. Identity management allows a network or system to authenticate the identity of a user through some type of credentials, which can range from a simple user name and password to digital certificates, physical tokens, biometric factors (fingerprints, iris scans, facial recognition, etc.), or some combination of these factors. The strength of the authentication required will depend on the sensitivity of the material being accessed as well as the impact should these resources fall into unauthorized hands. Public information might require little or no authentication, while proprietary or classified data or accounts with administrative privileges should require stronger authentication, possibly using multiple factors. Single Sign-On and Maximum Security But authenticating identity is only the first step. Each user should receive only the appropriate access privileges, based on the need and the level of authentication that has been performed. The fact that someone has established his or her identity as an employee should not result in unfettered access. Studies have shown as many as 35 percent of all hacking attempts are made by employees, and the insider threat to enterprises is serious. These threats can be the result of malicious activity or of errors, but both scenarios present real risk to the enterprise. In addition to threats from otherwise legitimate insiders, there also is a risk that user credentials can be compromised and that the ID authentication process can be exploited to let malicious outsiders into the system. For these reasons, the principle of least privilege is considered a best practice in access control. As the name implies, this means that every user – whether an individual, a device, a program or a process – is granted access only to the resources necessary to accomplish the job at hand. The concept is simple. A low-level clerk does not need and should not have administrative privileges on IT systems; a worker in sales does not need access to sensitive financial information. In practice, however, it frequently is difficult to manage. Users often are assigned access privileges based on their role in an organization, but individuals seldom fit neatly into single roles. They often need special one-time access, and each person fulfilling the same role might need slightly different types of access. Effectively managing access requires not only authentication and secure connections, but granular controls for each user and the ability to monitor their activities. Hypersocket Framework and Single Sign-On All actions within the Hypersocket Framework (HSF) require a permission, and business domains are defined by specific resources for the system being developed. Users are placed in roles that are associated with a specific set of permissions by assigning resources to each role. All actions within the HSF generate Events, which allows detailed reporting and analysis of the system by the applications using it. Events also can trigger responses, such as notifications or alerts. To ease the burden of managing multiple accounts, Hypersocket Single Sign-On lets users log on once to the Hypersocket server to get access to all cloud-based and web-based sites. The SAML protocol seamlessly connects users to cloud services once identity is authenticated by the server.
When a cloud service does not support SAML, the Hypersocket Single Sign-On server provides a browser plugin to automate the login process. Bringing it all together Managing the identity and access privileges of users is essential to cybersecurity, and Hypersocket IDM and Hypersocket SSPR are available to help provide identity and user management with the granularity needed to make them effective and the ease of use needed to make them practical.
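The role-and-permission model described above can be illustrated with a short, framework-agnostic sketch. The role names and permission strings below are hypothetical, not the actual Hypersocket API; the point is only to show how least privilege falls out of checking every action against a role's explicit permission set, with deny as the default.

# Hypothetical least-privilege check; not the real Hypersocket API.
ROLE_PERMISSIONS = {
    "clerk": {"records.read"},
    "finance": {"records.read", "ledger.read", "ledger.write"},
    "admin": {"records.read", "ledger.read", "ledger.write", "users.manage"},
}

def is_allowed(role, permission):
    # Every action requires an explicit permission; anything else is denied.
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("clerk", "ledger.write"))    # False: least privilege
print(is_allowed("finance", "ledger.write"))  # True: needed for the job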
This post is about Train/Test Split and Cross Validation. As usual, I am going to give a short overview of the topic and then give an example of implementing it in Python. These are two rather important concepts in data science and data analysis and are used as tools to prevent (or at least minimize) overfitting. I'll explain what that is — when we're using a statistical model (like linear regression, for example), we usually fit the model on a training set in order to make predictions on data that wasn't trained on (general data). Overfitting means that we've fit the model too closely to the training data. It will all make sense pretty soon, I promise! What is Overfitting/Underfitting a Model? As mentioned, in statistics and machine learning we usually split our data into two subsets: training data and testing data (and sometimes into three: train, validate and test), and fit our model on the train data in order to make predictions on the test data. When we do that, one of two things might happen: we overfit our model or we underfit our model. We don't want either of these things to happen, because they affect the predictability of our model — we might end up using a model that has lower accuracy and/or is ungeneralized (meaning you can't generalize your predictions to other data). Let's see what under- and overfitting actually mean: Overfitting means that the model we trained has trained "too well" and is now, well, fit too closely to the training dataset. This usually happens when the model is too complex (i.e. too many features/variables compared to the number of observations). This model will be very accurate on the training data but will probably not be very accurate on untrained or new data. That is because this model is not generalized (or not AS generalized), meaning you can't generalize the results and can't make any inferences on other data, which is, ultimately, what you are trying to do. Basically, when this happens, the model learns or describes the "noise" in the training data instead of the actual relationships between variables in the data. This noise, obviously, isn't part of any new dataset, and cannot be applied to it. In contrast to overfitting, when a model is underfitted, it means that the model does not fit the training data and therefore misses the trends in the data. It also means the model cannot be generalized to new data. As you probably guessed (or figured out!), this is usually the result of a very simple model (not enough predictors/independent variables). It could also happen when, for example, we fit a linear model (like linear regression) to data that is not linear. It almost goes without saying that this model will have poor predictive ability (on training data, and it can't be generalized to other data). An example of overfitting, underfitting and a model that's "just right!" It is worth noting that underfitting is not as prevalent as overfitting. Nevertheless, we want to avoid both of those problems in data analysis. You might say we are trying to find the middle ground between under- and overfitting our model. As you will see, train/test split and cross validation help to avoid overfitting more than underfitting. Let's dive into both of them! As I said before, the data we use is usually split into training data and test data.
The training set contains a known output and the model learns on this data in order to be generalized to other data later on. We have the test dataset (or subset) in order to test our model's predictions on this subset. import pandas as pd from sklearn import datasets, linear_model from sklearn.model_selection import train_test_split from matplotlib import pyplot as plt Let's quickly go over the libraries I've imported: - Pandas — to load the data file as a Pandas data frame and analyze the data. If you want to read more on Pandas, feel free to check out my post! - From Sklearn, I've imported the datasets module, so I can load a sample dataset, and the linear_model, so I can run a linear regression - From Sklearn, sub-library model_selection, I've imported the train_test_split so I can, well, split to training and test sets - From Matplotlib I've imported pyplot in order to plot graphs of the data OK, all set! Let's load in the diabetes dataset, turn it into a data frame and define the columns' names: columns = "age sex bmi map tc ldl hdl tch ltg glu".split() # Declare the columns names diabetes = datasets.load_diabetes() # Call the diabetes dataset from sklearn df = pd.DataFrame(diabetes.data, columns=columns) # load the dataset as a pandas data frame y = diabetes.target # define the target variable (dependent variable) as y Now we can use the train_test_split function in order to make the split. The test_size=0.2 inside the function indicates the percentage of the data that should be held over for testing. It's usually around 80/20 or 70/30. X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2) print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape) (353, 10) (353,) (89, 10) (89,) Now we'll fit the model on the training data: lm = linear_model.LinearRegression() model = lm.fit(X_train, y_train) predictions = lm.predict(X_test) As you can see, we're fitting the model on the training data and trying to predict the test data. Let's see what (some of) the predictions are: predictions[0:5] array([ 205.68012533, 64.58785513, 175.12880278, 169.95993301, Note: because I used [0:5] after predictions, it only showed the first five predicted values. Removing the [0:5] would have made it print all of the predicted values that our model created. Let's plot the model: plt.scatter(y_test, predictions) And print the accuracy score: print("Score:", model.score(X_test, y_test)) There you go! Here is a summary of what I did: I've loaded in the data, split it into a training and testing sets, fitted a regression model to the training data, made predictions based on this data and tested the predictions on the test data. Seems good, right? But train/test split does have its dangers — what if the split we make isn't random? What if one subset of our data has only people from a certain state, employees with a certain income level but not other income levels, only women or only people at a certain age? (imagine a file ordered by one of these). This will result in overfitting, even though we're trying to avoid it! This is where cross validation comes in. In the previous paragraph, I mentioned the caveats in the train/test split method. In order to avoid this, we can perform something called cross validation. It's very similar to train/test split, but it's applied to more subsets. Meaning, we split our data into k subsets, and train on k-1 of those subsets. What we do is to hold the last subset for test. We're able to do it for each of the subsets.
There are a bunch of cross validation methods; I'll go over two of them: the first is K-Folds Cross Validation and the second is Leave One Out Cross Validation (LOOCV). K-Folds Cross Validation In K-Folds Cross Validation we split our data into k different subsets (or folds). We use k-1 subsets to train our data and leave the last subset (or the last fold) as test data. We then average the model against each of the folds and then finalize our model. After that we test it against the test set. Visual representation of K-Folds. Again, H/t to Joseph Nelson! Here is a very simple example from the Sklearn documentation for K-Folds: from sklearn.model_selection import KFold import numpy as np X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]]) # create an array y = np.array([1, 2, 3, 4]) # Create another array kf = KFold(n_splits=2) # Define the split - into 2 folds kf.get_n_splits(X) # returns the number of splitting iterations in the cross-validator And let's see the result — the folds: for train_index, test_index in kf.split(X): print("TRAIN:", train_index, "TEST:", test_index) X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] ('TRAIN:', array([2, 3]), 'TEST:', array([0, 1])) ('TRAIN:', array([0, 1]), 'TEST:', array([2, 3])) As you can see, the function split the original data into different subsets of the data. Again, a very simple example, but I think it explains the concept pretty well. Leave One Out Cross Validation (LOOCV) This is another method for cross validation, Leave One Out Cross Validation (by the way, these methods are not the only two; there are a bunch of other methods for cross validation. Check them out on the Sklearn website). In this type of cross validation, the number of folds (subsets) equals the number of observations we have in the dataset. We then average ALL of these folds and build our model with the average. We then test the model against the last fold. Because we would get a big number of training sets (equal to the number of samples), this method is very computationally expensive and should be used on small datasets. If the dataset is big, it would most likely be better to use a different method, like k-fold. Let's check out another example from Sklearn: from sklearn.model_selection import LeaveOneOut X = np.array([[1, 2], [3, 4]]) y = np.array([1, 2]) loo = LeaveOneOut() for train_index, test_index in loo.split(X): print("TRAIN:", train_index, "TEST:", test_index) X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] print(X_train, X_test, y_train, y_test) And this is the output: ('TRAIN:', array([1]), 'TEST:', array([0])) (array([[3, 4]]), array([[1, 2]]), array([2]), array([1])) ('TRAIN:', array([0]), 'TEST:', array([1])) (array([[1, 2]]), array([[3, 4]]), array([1]), array([2])) Again, a simple example, but I really do think it helps in understanding the basic concept of this method. So, what method should we use? How many folds? Well, the more folds we have, the more we reduce the error due to bias but increase the error due to variance; the computational price would go up too, obviously — the more folds you have, the longer it would take to compute and you would need more memory. With a lower number of folds, we're reducing the error due to variance, but the error due to bias would be bigger. It would also be computationally cheaper. Therefore, in big datasets, k=3 is usually advised. In smaller datasets, as I've mentioned before, it's best to use LOOCV. Let's check out the example I used before, this time using cross validation. I'll use the cross_val_predict function to return the predicted values for each data point when it's in the testing slice.
from sklearn.model_selection import cross_val_score, cross_val_predict # (in older sklearn versions these lived in sklearn.cross_validation) from sklearn import metrics As you remember, earlier on I created the train/test split for the diabetes dataset and fitted a model. Let's see what the score is after cross validation: scores = cross_val_score(model, df, y, cv=6) print("Cross-validated scores:", scores) As you can see, the last fold improved the score of the original model — from 0.485 to 0.569. Not an amazing result, but hey, we'll take what we can get 🙂 Now, let's plot the new predictions, after performing cross validation: predictions = cross_val_predict(model, df, y, cv=6) plt.scatter(y, predictions) You can see it's very different from the original plot from earlier. It has six times as many points as the original plot because I used cv=6. Finally, let's check the R² score of the model (R² is a "number that indicates the proportion of the variance in the dependent variable that is predictable from the independent variable(s)". Basically, how accurate is our model): accuracy = metrics.r2_score(y, predictions) print("Cross-Predicted Accuracy:", accuracy)
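Putting the pieces together, here is a compact, self-contained version of the workflow from this post: split, fit, score on the hold-out set, then cross-validate. It assumes a recent scikit-learn, where these utilities live in model_selection.

import pandas as pd
from sklearn import datasets, linear_model, metrics
from sklearn.model_selection import train_test_split, cross_val_score, cross_val_predict

diabetes = datasets.load_diabetes()
columns = "age sex bmi map tc ldl hdl tch ltg glu".split()
df = pd.DataFrame(diabetes.data, columns=columns)
y = diabetes.target

# Simple 80/20 train/test split, fit, and hold-out score
X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2)
lm = linear_model.LinearRegression().fit(X_train, y_train)
print("Hold-out R^2:", lm.score(X_test, y_test))

# 6-fold cross validation on the same estimator
print("Cross-validated scores:", cross_val_score(lm, df, y, cv=6))
predictions = cross_val_predict(lm, df, y, cv=6)
print("Cross-predicted R^2:", metrics.r2_score(y, predictions))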
Crustaceans are strange. They're basically giant bugs that live underwater, but somehow, we aren't repulsed by them like we are by roaches and spiders. Unlike bugs, crustaceans have much tougher armor, called chitin, that they use to keep seawater on the outside. This armor, however, has unique strengths and properties that have captured the imagination of scientists. You won't believe the new uses they're finding for lobster shells.
This article discusses the difference between layer 2 and layer 3 switches and the appropriate use cases for each. Traditional switching operates at layer 2 of the OSI model, where packets are sent to a specific switch port based on destination MAC addresses. Routing operates at layer 3, where packets are sent to a specific next-hop IP address, based on destination IP address. Devices in the same layer 2 segment do not need routing to reach local peers. What is needed however is the destination MAC address which can be resolved through the Address Resolution Protocol (ARP) as illustrated below: Here, PC A wants to send traffic to PC B at IP address 192.168.1.6. It does not know the unique MAC address however, until it discovers it through an ARP, which is broadcasted throughout the layer 2 segment: It then sends the packet to the appropriate destination MAC address which the switch will then forward out the correct port based on its MAC-Address-Table. Within a layer 2 switch environment exists a broadcast domain. Any broadcast traffic on a switch will be forwarded out all ports with the exception of the port the broadcast packet arrived on. Broadcasts are contained in the same layer 2 segment, as they do not traverse past a layer 3 boundary. Large layer 2 broadcast domains can be susceptible to certain unintended problems, such as broadcast storms, which have the ability to cause network outages. Also, it may be preferable to separate certain clients into different broadcast domains for security and policy reasons. This is when it becomes useful to configure VLANs. A layer 2 switch can assign VLANs to specific switch ports, which in turn are in different layer 3 subnets, and therefore in different broadcast domains. VLANs allow for greater flexibility by allowing different layer 3 networks to be sharing the same layer 2 infrastructure. The image below shows an example of a multi-VLAN environment on a layer 2 switch: Since VLANs exist in their own layer 3 subnet, routing will need to occur for traffic to flow in between VLANs. This is where a layer 3 switch can be utilized. A Layer 3 switch is basically a switch that can perform routing functions in addition to switching. A client computer requires a default gateway for layer 3 connectivity to remote subnets. When the computer sends traffic to another subnet, the destination MAC address in the packet will be that of the default gateway, which will then accept the packet at layer 2, and proceed to route the traffic to the appropriate destination based on its routing table. The diagram below shows an example of a layer 3 switching routing between VLANs through its two VLAN interfaces. As before, the layer 3 device will still need to resolve the MAC address of PC B through an ARP request broadcasted out to VLAN 20. It then rewrites the appropriate destination MAC address and forwards the packet back out the layer 2 segment: Layer 3 switch overview - An overview of how to configure layer 3 routing on Cisco Meraki switches Layer 3 switch example - A configuration example using layer 3 routing on Cisco Meraki switches Best practices for 802.1q VLAN tagging - Information regarding the appropriate use of VLAN tags
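As a rough illustration of the layer 2 behavior covered above, the following Python sketch models how a switch learns source MAC addresses and decides between forwarding and flooding. It is a teaching model only; real switches implement this logic in hardware tables, not code like this.

# Toy model of layer 2 switching: learn source MACs, forward or flood.
mac_table = {}  # MAC address -> switch port

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port          # learn where the sender lives
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]       # known unicast: send out one port
    # Unknown unicast or broadcast: flood out every port except the ingress
    return [p for p in all_ports if p != in_port]

ports = [1, 2, 3, 4]
print(handle_frame("aa:aa", "bb:bb", 1, ports))  # unknown: flood to [2, 3, 4]
print(handle_frame("bb:bb", "aa:aa", 2, ports))  # learned: forward to [1]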
Several industries, most notably manufacturing, have seen robotics and automation disrupt their production operations, leading to job losses even among highly skilled individuals. Likewise, there's concern over how artificial intelligence (AI) could affect millions of jobs. As environmental parameters change, will machines come to mimic the "rational thinking" of a human brain and alter their reactions? Analytical jobs such as that of a data analyst require critical thinking, and rethinking decisions, based on dynamically changing situations. Can artificial intelligence and machine learning (AI/ML) systems branch off to different thinking modes, as humans do, based on changing parameters? Should data analysts and other "knowledge sector" employees feel threatened by AI? According to many prominent experts observing the AI industry, there's no need to worry. While AI will indeed bring significant changes, AI advances will continue to require human attention to ultimately make efficient and productive decisions. AI Can Deal With the Data Deluge While most existing BI solutions can process and store a huge amount of data with many dimensions, they don't offer an easy way to get insights from the data. To find ways for the business to improve its key KPIs, data analysts simply don't have the capacity to keep up with the increasing demand to crunch all the data. In fact, BI solutions have largely left the "I" – the intelligence – completely in the hands and minds of the data analysts. The human brain is limited in the number of data points it can process and correlate. According to Gartner, Inc., "More than 40 percent of data science tasks will be automated by 2020, resulting in increased productivity and broader usage of data and analytics by citizen data scientists." AI stands to play a greater role in BI, where intelligent systems pore over more data than any human could reasonably examine. "With millions of metrics coming in daily, companies don't have the ability to efficiently track vast amounts of customer data without risking the potential of missing essential insights, which leads to damage monetarily and reputationally," said David Drai, CEO and Co-founder of Anodot. The faster data analysts can identify good and bad deviations from the norm, the more quickly they can react to changes in the business and take necessary action. New Tools, Same Disruptions AI analysis is not unlike previous technological disruptions; the printing press made calligraphers obsolete but introduced the new role of the professional printer. While AI analysis stands to disrupt BI, it opens the door for new jobs. David Crawford writes in VentureBeat, "The work of an analyst, however, does not just involve conducting data analysis within closed environments. The analysis must be applied to the outside world where there is much more context influencing the interpretation. For example, while AI connected to sensors might be able to analyze the soil on a plot of land and optimize yield more efficiently than a human, it doesn't know what impact the soil conditions have on the flavor of the resulting crop." Going forward, AI will help provide focused insights for data analysts by reading more deeply into data and identifying patterns. Carrying out exploratory tasks, such as recognizing specific deficiencies or untapped opportunities among the data, will help human professionals to interpret these discoveries and make more informed decisions.
The Value of Data Analytics is Growing Big Data thought leader Bernard Marr adds, "As the value of data analytics becomes apparent in all fields of activity, a growing number of people will want to be able to extract insights from their data. They might not want to take three or four years out to learn advanced computer science and statistics, and with the advances in cognitive computing that won't be necessary. All that is required might be a brief introduction to NLP technologies." Joel Shapiro, executive director of the data analytics program at Northwestern University's Kellogg School of Management, says, "Analytics still rests fundamentally on good critical thinking skills — how to ask good questions and rigorously assess evidence that can lead to action." Artificial intelligence addresses today's data deluge better than humans, since a human analyst can't sift through all of this data unaided. You can't have a person, or even whole teams, sitting there watching dashboards to protect a brand, or expect them to zero in on business incidents as they happen. You need AI tools. AI Enhances Data Analyst Job Security This doesn't mean AI is coming to eliminate jobs for those involved in BI. While AI can do the work that no one has the time for, companies will come to see much stronger benefits in BI and be more inclined to further invest time and effort — creating more jobs in the field as a result. AI is good for job security. AI and data analytics were developed by humans, for our own benefit. David Crawford adds that, "Understanding what it means to be human and caring about the human experience are intrinsically related to the analysis process." Human data analysts aren't going away as long as other humans remain their ultimate consumers. Data analysts will become 'managers' of teams of AI 'employees', leveraging the AI's algorithms to comb through data and even get answers to questions that weren't asked. As these systems collect and interpret greater volumes of data than we ever could, they advance, learning from past analyses to see what's worked well. As David Drai observed in VentureBeat, "All advances in A.I. are built on the premise that if we can teach machines to learn from their "experiences," then they will be able to more effectively sort through new information and help us flag the pieces that we need to know about immediately. Obvious steps forward, like the capacity to more effectively recognize seasonality or expect "unexpecteds," will help lower the number of false positives and enable a far greater reliance on BI." These systems still require a human to design and maintain them, to ask them the most important questions for the business, and to communicate their results with colleagues in other specialties. As AI solutions are able to dig deeper and more quickly link a cause with an effect, they can drastically reduce the time it takes to prevent or handle a crisis. This empowers the business, uncovering unforeseen opportunities while creating new means of driving revenue and enabling far more insightful decisions for data analysts. With highly scalable machine learning-based algorithms, we now have software that can learn the normal pattern of any number of data points and correlate different signals to accurately identify anomalies that require action or investigation – by the data analysts.
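As a greatly simplified illustration of the anomaly detection idea described above, the sketch below flags metric values that deviate sharply from a learned baseline using a z-score. Production systems such as Anodot's use far more sophisticated machine learning that accounts for seasonality and correlated signals; this only shows the underlying principle.

# Minimal sketch of metric anomaly detection with a z-score threshold.
import statistics

baseline = [100, 102, 98, 101, 99, 103, 97, 100]  # "normal" metric values
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomaly(value, threshold=3.0):
    return abs(value - mean) / stdev > threshold

for observed in [101, 140, 95]:
    print(observed, "anomaly" if is_anomaly(observed) else "normal")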
Cisco CCENT IP Addressing & Subnetting Part I One of the most important topics in any discussion of TCP/IP is IP addressing. An IP address is a numeric identifier assigned to each machine on an IP network. It designates the specific location of a device on the network. An IP address is a software address, not a hardware address. IP addressing was designed to allow a host on one network to communicate with a host on a different network, regardless of the type of LANs the hosts are participating in. Cisco CCENT IPv4 Addressing Before we get into the more complicated aspects of IP addressing, you need to understand some of the basics: Defining basic IP addressing terms: Bit = 1 digit (either a one or a zero) Byte = 7 or 8 bits (depends on parity). From an IP address perspective, assume 8. Octet = Always 8 bits IPv4 addresses are 32-bit (4-byte) addresses consisting of two parts, a network portion and a host portion, and are typically represented in dotted decimal notation. An example is 188.8.131.52. Each octet has a value between 0 and 255, where 0 is all bits set to 0 and 255 is all bits set to 1. Cisco CCENT IPv4 Addressing Defines Class A and Class B IP address characteristics. Cisco CCENT IPv4 Addressing Defines Class C, Class D and Class E IP address characteristics. Cisco CCENT IPv4 Special Addresses Local Broadcast Address If an IP device wants to communicate with all devices on the local network, it sets the destination address to all 1s (255.255.255.255) and transmits the packet. For example, hosts that do not know their network number and are asking some server for it may use this address. The local broadcast is never routed. Local Loopback Address A local loopback address is used to let the system send a message to itself for testing. A typical local loopback IP address is 127.0.0.1. Multicast Address A special address, similar to a broadcast address, where one packet can be sent and received by multiple destinations. Receivers must subscribe to the IP multicast address to receive the multicast packets. Autoconfiguration IP Addresses When neither a statically nor a dynamically configured IP address is found on startup, hosts that support IPv4 link-local addresses (RFC 3927) will generate an address in the 169.254/16 prefix range. This address can be used only for local network connectivity and operates with many caveats, one of which is that it will not be routed. You will mostly see this address as a failure condition when a PC fails to obtain an IP address. Cisco CCENT IPv4 Address Ranges The number of possible hosts in a Class A address is much greater than the number of possible hosts in a Class C address. Fortunately, with the use of Variable Length Subnet Masks (VLSM), classful boundaries can be removed to make better use of address space. In order to properly route IP packets utilizing VLSM, a classless routing protocol like OSPF, EIGRP or RIP version 2 must be used. Routing protocols such as RIP version 1 or IGRP do not recognize VLSM, as they are classful routing protocols. The designers of the IP address scheme said that the first bit of the first byte in a Class A network address must always be off, or 0. This means a Class A address must be between 0 and 126 (127 is reserved for the loopback address). In a Class B network, the RFCs state that the first bit of the first byte must always be turned on, but the second bit must always be turned off.
If you turn the other six bits all off and then all on, you will find the range for a Class B network: 10000000 = 128 10111111 = 191 For Class C networks, the RFCs define the first two bits of the first octet as always turned on, but the third bit can never be on. Following the same process as with the previous classes, convert from binary to decimal to find the range. Here's the range for a Class C network: 11000000 = 192 11011111 = 223 So, if you see an IP address that starts between 192 and 223, you'll know it is a Class C IP address. Cisco CCENT IP Address Classes The designers of the Internet decided to create classes of networks based on network size. For the small number of networks possessing a very large number of nodes, they created the Class A network. At the other extreme is the Class C network, which is reserved for the numerous networks with a small number of nodes. The class distinction for networks between very large and very small is predictably called the Class B network. Subdividing an IP address into a network and node address is determined by the class designation of one's network. With the advent of Variable Length Subnet Masks (VLSM), the distinctions between the different classes of IP addresses are not as important as they used to be. Cisco CCENT IP Addressing An IP address consists of 32 bits of information. These bits are divided into four sections, referred to as octets or bytes, each containing 1 byte (8 bits). The address is logically separated into a network portion and a host portion. The subnet mask defines where the network portion ends and the host portion begins. Cisco CCENT IPv4 Special Addresses An IP address that has binary 0s in all host bit positions is reserved for the network address. Therefore, as a Class A network example, 10.0.0.0 is the IP address of the network containing the host 10.1.2.3. As a Class B network example, the IP address 172.16.0.0 is a network address, while 192.168.100.0 would be a Class C network address. A router uses the network IP address when it searches its IP route table for the destination network location. The decimal numbers that fill the first two octets in a Class B network address are assigned. The last two octets contain 0s because those 16 bits are for host numbers and are used for devices that are attached to the network. In the IP address 172.16.0.0, the first two octets are reserved for the network address; it is never used as an address for any device that is attached to it. An example of an IP address for a device on the 172.16.0.0 network would be 172.16.16.1. In this example, 172.16 is the network address portion and 16.1 is the host address portion. Directed Broadcast Address To send data to all the devices on a network, a broadcast address is used. Broadcast IP addresses end with binary 1s in the entire host part of the address (the host field). For the network in the example (172.16.0.0), in which the last 16 bits make up the host field (or host part of the address), the broadcast that would be sent out to all devices on that network would include a destination address of 172.16.255.255. The directed broadcast is capable of being routed. However, for some versions of the Cisco IOS operating system, routing directed broadcasts is not the default behavior. Cisco CCENT Private Address Space The people who sat around and created the IP addressing scheme also created what we call private IP addresses. These addresses can be used on a private network, but they're not routable through the Internet.
This is designed for the purpose of creating a measure of well-needed security, but it also conveniently saves valuable IP address space. Again, now shown in binary: Class A: 00001010 (10.0.0.0/8) Class B: 10101100.00010000 through 10101100.00011111 (172.16.0.0/12) Class C: 11000000.10101000 (192.168.0.0/16) Cisco CCENT Private IP Question The address ranges above fall under RFC 1918 and are not routed across the public Internet; all other unicast addresses can be routed across the public Internet. Cisco CCENT Addressing without Subnets Without creating subnetworks, all hosts would be on one large network. Not good… really not good. This type of network creates one large broadcast domain. It is not scalable. Routers are used to break up broadcast domains and allow for communication between different IP subnets. Cisco CCENT Addressing with Subnets There are loads of reasons in favor of subnetting. Some of the benefits include: Reduced network traffic Optimized network performance Facilitated spanning of large geographical distances Cisco CCENT How do you determine the mask to use? 1. Determine the number of required network IDs: One for each subnet One for each wide area network connection 2. Determine the number of required host IDs per subnet: One for each TCP/IP host One for each router interface For example, if you are provided a Class C address and need to carve it up, with requirements of one subnet to support 120 hosts, one subnet to support 50 hosts and four subnets to support 10 hosts each, you can carve it up as follows: 192.168.0.0/25 – supports 128 addresses (126 addressable hosts) 192.168.0.128/26 – supports 64 addresses (62 addressable hosts) 192.168.0.192/28 – supports 16 addresses (14 addressable hosts) 192.168.0.208/28 – supports 16 addresses (14 addressable hosts) 192.168.0.224/28 – supports 16 addresses (14 addressable hosts) 192.168.0.240/28 – supports 16 addresses (14 addressable hosts) Cisco CCENT After You Choose a Possible Mask This slide shows how to determine whether a certain subnet mask will meet the business requirements of your internetwork. It lists questions to ask when determining how to allocate IP addresses and subnets. Remember to account for growth. Cisco CCENT Once you find your mask… This slide describes the questions you need to ask about a mask to determine the subnets, broadcast addresses and valid host ranges of each subnet. Cisco CCENT Now, here is how to get Six Answers! This slide shows you how to achieve the answers to the six important subnetting questions: How many subnets? 2^x = number of subnets, where x is the number of masked bits, or the 1s. For example, in 11000000, the number of ones gives us 2^2 subnets. In this example, there are 4 subnets. How many hosts per subnet? 2^x – 2 = number of hosts per subnet, where x is the number of unmasked bits, or the 0s. For example, in 11000000, the number of zeros gives us 2^6 – 2 hosts. In this example, there are 62 hosts per subnet. What are the valid subnets? 256 – subnet mask = block size, or base number. For example, 256 – 192 = 64. 64 is the first subnet. The next subnet would be the base number added to itself, or 64 + 64 = 128 (the second subnet). You keep adding the base number to itself until you reach the value of the subnet mask, which is not a valid subnet because all subnet bits would be turned on (1s). What's the broadcast address for each subnet? The broadcast address is all host bits turned on, which is the number immediately preceding the next subnet. What are the valid hosts?
Valid hosts are the numbers between the subnets, minus all 0s and all 1s. Cisco CCENT Classless Inter-Domain Routing Another term you need to familiarize yourself with is Classless Inter-Domain Routing (CIDR). It is really just the method that ISPs (Internet Service Providers) use to allocate a block of addresses to a customer, whether a company or a home. They provide addresses in a certain block size, something we will be going into in greater detail later in this chapter. So when you receive a block of addresses from an ISP, what you'll get will look something like this: 192.168.10.32/28. What this is telling you is what your subnet mask is. The slash notation (/) means how many bits are turned on (1s). Obviously, the maximum could only be /32 because a byte is 8 bits and there are four bytes in an IP address (4 × 8 = 32). In the example, 192.168.10.32/28 means the address range provided, including subnet and broadcast, is 192.168.10.32 – 192.168.10.47. But keep in mind that the largest subnet mask available (regardless of the class of address) can only be a /30, because you've got to keep at least two bits for host bits. Cisco CCENT IP Subnet-Zero The ip subnet-zero command provides the ability to configure and route to subnet 0 subnets. Subnetting with a subnet address of 0 is discouraged because of the confusion inherent in having a network and a subnet with indistinguishable addresses. It has a major benefit, however, in that it utilizes address space more efficiently. Cisco CCENT IPv4 Subnet Calculation The slide indicates one way to determine the network address, given an address and a mask, by utilizing a logical AND. The network address has all 0s in the host portion of the address, and the broadcast has all 1s in the host portion of the address. Cisco CCENT Binary to Decimal We discussed this in chapter 1, but it is important enough to review at this point: It's pretty simple really. The digits used are limited to either a 1 (one) or a 0 (zero), with each digit being called one bit. Typically, you count either four or eight bits together, with these being referred to as a nibble or a byte, respectively. What interests us in binary numbering is the value represented in a decimal format—the typical decimal format being the base ten number scheme we've all used since kindergarten. The binary numbers are placed in value spots, starting at the right and moving left, with each spot having double the value of the previous spot: 128 64 32 16 8 4 2 1 So, explaining a couple of examples from the slide: 85 equals 01010101, which equates to 64 + 16 + 4 + 1 = 85 131 equals 10000011, which equates to 128 + 2 + 1 = 131 Simple, really. Cisco CCENT Binary (Cont.) Here is a binary chart that is best just to memorize. You will need to know this off the top of your head as we delve deeper into subnetting and when you take the CCNA test. Notice that it is no more than taking the information just learned on the previous slide and performing a little addition. Remember the bit values were as follows: 128 64 32 16 8 4 2 1 So 11000000 equates to 128 + 64 = 192 11100000 equates to 128 + 64 + 32 = 224 11110000 equates to 128 + 64 + 32 + 16 = 240 11111000 equates to 128 + 64 + 32 + 16 + 8 = 248 11111100 equates to 128 + 64 + 32 + 16 + 8 + 4 = 252 11111110 equates to 128 + 64 + 32 + 16 + 8 + 4 + 2 = 254 11111111 equates to 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255
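If you want to check this arithmetic with code, Python's standard ipaddress module answers the key subnetting questions directly, and int() with base 2 reproduces the binary chart. A minimal sketch using the chapter's CIDR example:

# Verify the chapter's subnetting math with the standard library.
import ipaddress

net = ipaddress.ip_network("192.168.10.32/28")
print("Network address:  ", net.network_address)    # 192.168.10.32
print("Broadcast address:", net.broadcast_address)  # 192.168.10.47
hosts = list(net.hosts())
print("Valid hosts:", hosts[0], "-", hosts[-1], "(", len(hosts), "hosts )")

# And the binary chart, straight from base-2 conversion:
for bits in ["11000000", "11100000", "11110000", "11111111"]:
    print(bits, "=", int(bits, 2))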
Thanks to crypto ransomware, criminals seem to be having an open season in a world driven by internet-based communication. According to Kaspersky Lab, between 2015 and 2016 the number of internet users who encountered one form or another of crypto ransomware increased from over 1.9 million to 2.3 million. The hardest hit countries included the United States, Germany, and Italy. Crypto ransomware has indeed become an epidemic that everyone should not only be wary of but also learn how to avoid. What is Crypto Ransomware? Crypto ransomware is one of the recent forms of malware that attacks a computer by restricting the user's access to files stored on the computer. The malware displays an on-screen alert advising the user to pay a given amount of money through anonymous methods, such as Bitcoin, to regain access to his or her files. There are many variants of crypto ransomware, commonly known as CryptoDefense, CryptoWall and CryptoLocker, which are spread through emails, instant messaging applications, and drive-by downloads. As soon as your computer is infected, the crypto ransomware takes control of all your files, locks up everything with unbreakable encryption, and asks for a ransom of up to $500 in cryptocurrency, threatening to destroy all your files otherwise. Crypto ransomware uses social engineering techniques to lure computer users into running the malware. For instance, the victim will receive an email with a password-protected zip file attachment, allegedly from a close friend or a reputable company. Once the victim opens the file, the ransomware infection takes over, effectively restricting access to all files. Typical Stages of Crypto Ransomware A crypto ransomware attack follows a typical 5-stage process, namely: - Installation through social engineering techniques. Once the user's computer is infected, the malware installs itself and sets its own keys in the Windows Registry to start automatically and take over every time the computer boots up. - Contacting the author's server. Before the ransomware attacks, it contacts a server operated by the criminal gang. - Keys and handshake. The ransomware server and client – in this case your computer – identify each other in an intricately designed handshake. The server then generates a pair of cryptographic keys. One key is saved on your computer and the other one is kept on the criminal's server. - Encryption stage, where the ransomware encrypts all the files on your computer. - Extortion stage, where the ransomware finally hijacks your computer and displays a message demanding a given amount of money within a given time frame before the criminals destroy all your files. The ransom must be paid via untraceable electronic payments such as Bitcoin. How to Stay Safe from Crypto Ransomware Crypto ransomware is spread through emails and other social engineering techniques such as instant messaging. Drive-by downloads are also known to spread many forms of malware, including ransomware. But there are ways to protect yourself, both as a personal computer user and as a corporate entity. Here's how: Security Tips for Consumers Here are a few ways consumers can protect themselves from ransomware attacks: - Always have a reliable anti-virus or security solution on your devices. Never turn off advanced security features that can detect and prevent ransomware attacks. - Keep all the software installed on your computer updated.
Operating systems and other commonly used applications such as Java, Firefox, Chrome, and Microsoft Office have automatic update features that should be kept on at all times. Most of these updates provide advanced security features. - Avoid downloading files from unknown sources. Scan all downloads before you open them. - Create a cloud backup for your important files and data. Security Tips for Businesses and Corporate Entities - Back up all important files and data - Have a strict write-permission restriction policy for all your file servers - Have advanced endpoint protection that can detect malware and malicious traffic - Block access to suspicious websites with web and email protection - Educate staff and all system users about the signs of potential security threats - If you suspect an attack or infection, disconnect from all networks at once Importance of data backup According to Panda Security, it's important to have a backup system in place for all your files to mitigate damage caused by ransomware, hardware problems, and other potentially harmful incidents. Storing critical data only on your computer or local server can result in massive losses in the unfortunate event of a ransomware attack. Folderit provides a secure and efficient cloud document management system for both small and medium businesses. With a secure and easy-to-use Folderit cloud DMS, you'll be safe from the adverse effects of a crypto ransomware attack.
Learn The Basics Of SMTP Email: What SMTP Is Understand why SMTP Email is a protocol widely used for communication between servers. What Is SMTP? To understand SMTP Email, it is first necessary to know what SMTP stands for. SMTP, or Simple Mail Transfer Protocol, is the protocol or system for sending emails online. SMTP is a set of commands that ensures your email client sends the message to the right server and that the receiving server, in turn, uses SMTP to ensure accurate delivery of the message to the end recipient. In other words, SMTP is a connection-oriented, text-based protocol which enables a mail sender to communicate with a receiver using command strings and supplying the necessary data. This is done over a reliable, ordered data stream channel, usually a Transmission Control Protocol (TCP) connection. What Is Meant By SMTP Email? Emails sent via the Simple Mail Transfer Protocol are called SMTP Email. SMTP facilitates the reliable delivery of email messages: not only does it connect to the server of the recipient and establish communication, it also returns the message to the sender in case the message isn't delivered to the receiver because of some communication gap between the involved outbound SMTP servers. Understanding The Components Of SMTP Email The working mechanism of the Simple Mail Transfer Protocol takes place as a sequence of commands, and it is best understood step by step. Since SMTP is a technical term not familiar to the masses, a step-wise description of how it works helps to show what makes SMTP a reliable option for your personal or professional pursuits. The working chain: SMTP Client: The first commands are sent from the SMTP client (a term used for the one who initiates the communication, that is, the sender or transmitter). SMTP Server: The next series of commands come from the corresponding party, the SMTP server (a term used for the one who acts as the listening agent or receiver). Transactions: This exchange of commands completes the session, and such an SMTP session can include zero or more SMTP transactions – it varies depending upon the type of communication. Components Of An SMTP Transaction Having seen the working chain of SMTP email, let us now look at the elements that make SMTP transactions convenient and adaptable. SMTP comprises three command/reply sequences: - MAIL command: The MAIL command is used to establish the return address, or return-path. This means that in case an email doesn't successfully reach its end recipient, it automatically returns to the initiator or sender. In other words, the MAIL command establishes a reverse path. - RCPT command: The RCPT command is short for "recipient" and establishes a recipient of the message. The command is issued once per recipient, so it can be repeated for multiple recipients in one go. - DATA command: The DATA command is used to signal the beginning and content of the message text, the content of the message. It is different from the MAIL and RCPT commands, as they form part of the envelope, whereas the DATA command is associated with the contents of the envelope. The DATA Command In Brief The DATA command has two parts – the message header and the message body, both of which are separated by an empty line. It is a group of commands. The SMTP server replies twice here – first to the DATA command itself, as notification of its availability to receive the text.
Then the server responds after the end-of-data sequence, either to accept or to reject the entire message. What Is My SMTP Server? Now with the concepts understood, one might wonder which SMTP server he/she is subscribed to or should subscribe to. Generally, the SMTP settings are set to a person's local Internet Service Provider's SMTP settings (i.e., "smtp.yourisp.com"), while the incoming mail server (IMAP or POP3) is set to his/her email account's server (i.e., hotmail.com), which might differ from the SMTP server. But in case the question "What is my SMTP server?" rings a bell in your head, here's what you can do to find the answer: - Go to Internet Explorer. - Click on "Tools". - Click on "Internet Options". - Click on "Programs" - Note the email program you are using. - Check the website of the email manufacturer. - Locate the SMTP server by opening a DOS window (click on the "Start" button -> choose "Run" from the menu -> type "CMD" in the box -> click "OK"). - Type either "ping smtp.mysite.com" or "ping mail.mysite.com" in the DOS window. - Wait for the server to respond to the request. - Note the name of the server. - Click on "Tools." - Click on "Accounts." - Click on "Mail." - Select the "Default" account. - Choose "Properties" from the menu. - Choose the "Server" tab. - Choose "Outgoing Mail." Here you will see the name of your SMTP server. Should You Go For An SMTP Server – Free Or A Paid Service? Choosing a good SMTP Email server is vital, but it becomes difficult to decide when you need to pay money to get access to a server. Those who are on a budget but want to use an SMTP server free of cost to make communication safer and sounder can use the free server options available in the market. While they may not be as robust as the paid ones, they certainly serve the purpose well. Most of the free SMTP server options have a preset limit on the maximum number of emails that can be sent in a month; an organization can make a selection after a comparative analysis of all these options. Two SMTP Example Models Used By Organizations SMTP comes in two models, namely the end-to-end model and the store-and-forward model. The end-to-end model is used for communications between different organizations, while the store-and-forward model is used for connections within an organization. Choose The Right SMTP Server Example! SMTP examples and SMTP server examples are all over the place, but choosing the right alternative among all the countless available options can make all the difference for your business communication and marketing campaigns. Which SMTP Ports To Use For An Uninterrupted Service? You might now be wondering which SMTP port is the most suitable for you or which port you are subscribed to. Here is a list of the commonly used ports that can prove beneficial to you: Port 25: This is the oldest and the most widely used port. It acts as the default port for relaying email across the Internet using SMTP. Port 465: This port began to be used for SMTPS, an encryption and authentication "wrapper" over SMTP. Its purpose is to send emails securely using the Secure Sockets Layer (SSL). Port 587: This port can be used as a default SMTP submission port, as all mail servers support it. When operated together with TLS encryption, Port 587 ensures that emails are securely submitted in line with IETF guidelines. Port 2525: This port is accepted as a replacement for Port 587 by every ESP. Consumer ISPs and cloud hosting providers also support it.
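The MAIL, RCPT, and DATA exchange described earlier is exactly what Python's standard smtplib drives behind the scenes, which makes it a convenient way to see the protocol in action. In the minimal sketch below, the server name and credentials are placeholders to replace with your own provider's details; it uses port 587 with STARTTLS, in line with the port guidance above.

# Minimal sketch: send one message over SMTP submission port 587 + STARTTLS.
# smtp.example.com and the credentials are placeholders, not a real service.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "SMTP test"
msg.set_content("Hello over SMTP.")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                                  # upgrade to an encrypted channel
    server.login("sender@example.com", "app-password")
    server.send_message(msg)                           # issues MAIL, RCPT, DATA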
SMTP email is undoubtedly the harbinger of uninterrupted communication between people and businesses. Its advantages are many, and investing in an SMTP server is sure to pay off, because it ensures that your business or organization doesn't come across as unresponsive or passive to potential clients. However, there are several SMTP service providers, and choosing the best one among them is the real deal. Always keep in mind that an SMTP email server is only worth adopting when it promises to be effective, reliable, affordable, and secure. Join the thousands of organizations that use DuoCircle. Find out how affordable it is for your organization today and be pleasantly surprised.
Software-defined technology refers to using software to control elements of a system. One of the earlier—and simpler—iterations of software-defined technology is an engine control application for improving a car’s performance. Before people started using software to control things like the turbo boost, fuel efficiency, and traction control, when you bought a car, you just got in and drove, hoping it would serve your needs. Software-defined networking (SDN) and software-defined wide-area networking (SD-WAN) give users the ability to “tune” or manipulate how the network behaves, creating virtually unlimited possibilities for enhanced performance and customization. But what is the difference between these two technologies? This article will explore what differentiates the two solutions, what they have in common, and how they can enhance your network’s performance. What is SDN? Software-defined networking is an approach to network architecture that gives users the power to intelligently control the network using software. Operators can both centrally control the network and customize its performance to suit the organization’s unique needs. To program the network, users employ application programming interfaces (APIs) instead of relying on controls physically located on individual pieces of hardware. The Origins of SDN Using software to control technology has its roots in the control of telephone networks. Rather than relying on a switch operator to pull and plug complicated combinations of patch cables, engineers began using preprogrammed controls to manage phone calls. Fast-forwarding to the 21st century, Stanford University’s computer sciences department launched the Ethane project, which spawned OpenFlow. This was an early iteration of an SDN that employed a clear split between the tech being controlled and the software-powered interface used to manage it. Typical SDN Use Cases Because SDN enables users to customize the behavior of any given technology, its potential applications are seemingly endless. Here are some of the more common uses: - Scaling data center operations: Amazon and Google have used SDN to form scalable data centers, meaning engineers can engage more or fewer resources to efficiently manage the storage and use of data. - Deploying applications: Managers can use SDN to release and manage applications across a network—all from a centralized location. - Securing Internet-of-Things (IoT) architecture: While IoT devices offer convenience and introduce possibilities, they also open multiple access points for hackers and other data thieves. With SDN, IoT engineers can provide a centrally located, customizable layer of protection to help make the process more secure. - Easing the burden on edge components: A properly programmed SDN can sense an overload condition in each of its connected components and, in the case of a network that incorporates edge computing, route traffic away from the edge devices and prevent potentially harmful latency. It can also reduce latency by giving the edge device a boost in bandwidth or processing power. - Enabling intent-based networking (IBN): With IBN, a network administrator has the ability to tell the network what to do in line with the organization’s specific objectives. In the past, an administrator would have to hope the devices chosen fit into the larger business plan. An administrator empowered with SDN can custom design the operation of different components in a way that syncs with big-picture objectives. 
Implementing SDN opens the door for enhanced connectivity because it allows more resources to be controlled and made available. Because each device is programmable and can be made to interact with others, administrators have nearly unlimited possibilities at their fingertips. Devices can be programmed to work with each other, serving as extensions of existing network structures or even supporting each other's operation. In this way, multiple devices can work together like team members to accomplish important organizational objectives.

A user also has the option to supplement the function of one network element with another to create a previously impossible offering to clients. For example, an SDN-powered architecture can be used to give an organization or application access to multiple cloud computing environments at the same time. Like simultaneously increasing a car's turbo boost and adjusting the fuel map, this kind of access to hybrid cloud environments enables organizations to leverage the capabilities of multiple elements to improve application performance.

Typical Features of SDN

SDN presents several opportunities that would otherwise be unattainable:
- Programmable network behavior: Everything from when and how a network is used to the provisioning of resources and bandwidth can be controlled through programming.
- Convenient, centralized control: Network engineers can manage several elements of the network without leaving their desks. There is no need to travel to hardware-based interfaces again and again to tweak the performance of network equipment.
- Virtualization: Not only does virtualization make it possible to configure and control different elements of the network, it also opens the way for creative possibilities. Given the option to control a variety of network devices and their parameters, engineers can easily conceive creative solutions and quickly troubleshoot a range of issues.

What Do SDN and SD-WAN Have in Common?

Both of these technologies stem from the same central concept: controlling a network using software. Therefore, they have several things in common:
- The data plane and control plane are separated: In a traditional network, the data plane dictates where your data goes, but the control plane sits inside each router or switch, which makes it inconvenient for administrators to control the flow of data. Both solutions solve this problem by putting the control plane in a software environment. After an administrator connects a device, they can manage the flow of traffic across the network from a centralized location.
- Compatibility with commodity x86 hardware: Commodity computing helps administrators take advantage of a series of lower-cost computers in a parallel computing structure, and x86 hardware provides one of the leading commodity computing platforms. Both solutions are compatible with x86, making them easier to implement in a parallel computing setup.
- Virtualization: Virtualization is the fulcrum of both technologies, creating an abstraction of the physical network. With both solutions, the user can manage the network in this virtual environment, which puts previously separated controls at their fingertips.
- Possibility for virtual network functions (VNFs): Virtual network functions handle particular network tasks such as load balancing and firewalling. They can be strung together or combined to produce a fully virtual environment. Both solutions allow for the integration of VNFs, which can add another convenient layer of control for an administrator.
What is the Difference Between SDN and SD-WAN?

The primary difference between these technologies is that SD-WAN delivers a wide-area network (WAN) that connects multiple sites to each other, making it, in some ways, an SDN in the WAN. Whereas SDN is typically used to form networks that can be quickly changed according to an organization's needs, operating on a local-area network (LAN), SD-WAN is built to support WANs that are spread out over a sizable geographical area.

Another crucial difference is that SD-WAN is typically run by the vendor that provides it rather than by internal resources. This means SD-WAN may require less work from a network administrator because the vendor is providing the service. Managed arrangements like this improve the usability of both solutions.

SD-WAN can also integrate with a virtual private network (VPN). An organization with a VPN connecting several locations can, therefore, use SD-WAN to underpin its existing VPN.

SDN:
- Focuses inward on the LAN or service provider network
- Programmable and customizable
- Enabled by network functions virtualization (NFV)
- Designed by the user

SD-WAN:
- Focuses on geographically distributed locations
- Preprogrammed and less complex
- Routing can run virtually or via an SD-WAN device
- Configured by the vendor

How Fortinet Can Help

Regardless of the solution you choose, it's critical to ensure adequate network security. Flexibility can be a double-edged sword. While the ability to make quick, comprehensive changes may make core network administration easier, it can also result in security gaps. It's important to have a security framework built specifically for your solution, incorporating security in the data plane, control plane, and management plane.

Fortinet, named a Leader in Gartner's 2021 Magic Quadrant report for WAN Edge Infrastructure, offers a complete SD-WAN solution that can connect a central location with branch offices and teleworkers while utilizing multiple distributed clouds. An SD-WAN available in virtual versions can also be set up to enable an organization to offer Software-as-a-Service (SaaS) options to new or existing clients.
Basic definitions of malware and virus

Malware: the word comes from "malicious software," so it covers everything that runs on a computer, or other device, with bad intentions. Those bad intentions can be aimed at you or at your computer.

Virus: a program, or piece of code, that runs against your wishes and can replicate itself.

Looking at the definitions, we can see that a virus is a type of malware, but not all malware is a virus. Well-known other types of malware are ransomware, Trojans, and spyware. Besides malware, there is also adware, which most of the time qualifies as a potentially unwanted program (PUP) and is usually easy to remove.

I put emphasis on "can replicate itself" for a reason: the replication factor is central to the definition of a virus. All viruses are malware, but only malware that can replicate itself is considered a virus. We can distinguish between different forms of replication. Viruses can replace other files with a copy of themselves or attach their code to existing executables.

How do viruses spread?

This is not a complete list, but to demonstrate the variety, here are a few methods:
- Boot sector viruses: once copied from floppy disks to computers, these have become far less common, but the method has shifted to USB drives, so a few still spread this way.
- File infectors: these viruses attach themselves to, or replace, other executables, so they run instead of, or even along with, the intended program.
- Macro viruses: these hide in documents and execute when the document is opened. Such documents can be sent by mail as attachments or offered for download on websites.
- Viruses can also be delivered by exploit kits.

What does polymorphic virus mean?

You may have seen the term polymorphic virus. It indicates that the virus replicates, but the "replica" is not an exact copy of the original. The main routine carries the same payload, but the files differ in shape and size. This is a method used to avoid detection by anti-virus products that rely on file-based signature detection.

In the old days, when viruses had no other goal than to wreak havoc on a computer, they were much more common than today. The goal often was just to prove a point or demonstrate the skills of the writer. Today's more commercial viruses may be intended to weaken your defenses, to steal information, or to add the computer to a botnet. Otherwise they are quite rare, because there is no commercial interest in simply breaking your computer.

Does Malwarebytes detect viruses?

Yes, it does. Malwarebytes also deletes infected files, but it can't clean them if the virus is attached to the original file. Meaning, we detect the virus and remove the file, but we don't take the virus out of a file and leave the clean file behind.

Viruses come in many shapes and flavors, but not every piece of malware is a virus. The most important thing, however, is to be adequately protected against them and to be aware of the dangers.
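The point about polymorphism and signature detection above is easy to demonstrate. The sketch below is a simplified illustration, not how any real anti-virus engine works: it treats a file's SHA-256 digest as its "signature," and shows that changing even one byte of a file produces a completely different digest, which is why a scanner that only matches known-file signatures misses a polymorphic variant.

```python
import hashlib

def signature(data: bytes) -> str:
    """A toy 'signature': the SHA-256 digest of the file contents."""
    return hashlib.sha256(data).hexdigest()

# A known-bad sample and a database of known signatures.
known_bad = b"PAYLOAD-v1: do something nasty"
signature_db = {signature(known_bad)}

def is_detected(sample: bytes) -> bool:
    return signature(sample) in signature_db

# The exact copy is caught...
print(is_detected(known_bad))                      # True

# ...but a "polymorphic" variant with one byte changed is not,
# even though its behavior would be identical.
variant = known_bad.replace(b"v1", b"v2")
print(is_detected(variant))                        # False
```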
Predicting the future is a famously unprofitable endeavor, even when armed with the latest and greatest artificial intelligence algorithms. So far, the best scientists have done is have an AI system predict what someone will do a few minutes into the future, although that was when they were following a recipe for making a salad, which seems not a whole lot different from predicting which numbers come after six and seven. And even then, the machine was right only about 40 percent of the time.

In fairness, the researchers behind the salad gambit, from the University of Bonn in Germany, were mainly demonstrating techniques for training machine learning algorithms using video, an active area of AI research that the Army Research Laboratory, for example, is also pursuing with projects such as Deep TAMER. But it does show the limits of trying to predict what lies ahead. Even the Bonn researchers say a long-term prediction of "anything more than a few seconds" is still just an educated guess.

But that doesn't mean there's no place for predictive AI, which is used in everything from policing to medicine to disaster forecasting. Predicting the future may still be a mug's game, but there are many advantages to having a good idea of what's likely to happen soon. One promising area where AI's power to crunch incoming data could offer a look into the future is predictive maintenance, which essentially takes the old idea of preventive maintenance and eliminates the guesswork.

Where the Rubber Meets the Road

Most people are familiar with preventive maintenance from their cars — change the oil every few thousand miles, check the fluids regularly so they don't run out, and maybe invest in some new tires before the ones on the car go completely bald. Cars' computer systems help with "Maintenance Required" or "Check Engine" lights that come up on the dashboard, but those lights aren't very specific and often cause more angst than good.

Predictive maintenance, on the other hand, ties machine learning software to sensors in a vehicle that measure the wear and tear on individual components, and draws on historical data concerning those parts to conclude, with high probability, when something is about to go wrong. As a result, organizations can avoid unexpected breakdowns and unscheduled maintenance and, perhaps most importantly, have their vehicles ready for deployment when needed, whether that need is for package deliveries, patient transport, emergency response or military operations.

The Pentagon, for instance, is taking a hard look at predictive maintenance. The Defense Innovation Unit recently gave a startup called Uptake $1 million to test its Asset Performance Management software, intended to monitor components and predict failures, on 32 Bradley Fighting Vehicles, thus avoiding unanticipated and potentially catastrophic failures on the battlefield. A successful test could lead to widespread deployment of Uptake's application or similar software, as the Bradley is among the most common vehicles in the military, with about 6,700 in use around the world.

Predictive maintenance is getting test drives with some other notable military vehicles as well. Last year, DIU (which then had an "x" for Experimental in its name, since discarded) gave 3C IoT a multiyear deal to develop a cloud-based predictive maintenance system to cover a variety of aircraft, starting with the E-3 Sentry airborne warning and control system plane and the F-16 fighter.
The system will run in the Amazon Web Services GovCloud region, combining massive amounts of structured and unstructured data to make predictions of component failures. Not only would it reduce maintenance costs, but it could help extend the lifespan of the F-16s, which the Air Force apparently will need for 20 years longer than originally planned.

The Internet of More Reliable Things

Predictive maintenance is extending far beyond military vehicles, of course. Commercial vehicle fleets, power and utility systems, factories, public transportation and the nation's crumbling infrastructure also stand to benefit from the approach, particularly as the growth of the internet of things makes it more viable. Among its other advantages, predictive maintenance can also improve the bottom line, with the McKinsey Global Institute predicting it will save manufacturers alone up to $630 billion in 2025, and Deloitte forecasting millions in savings for individual sectors.

As game-changing technologies go, AI has caused plenty of disruption, sometimes in unintended ways. New possibilities in facial recognition, for instance, are drawing opposition over concerns about privacy violations and other potential misuses. AI has been lauded for making great strides in medicine (and recently spotted a patient's rare form of leukemia that doctors had missed), but has also caught flak for making unsafe recommendations. Predictive policing also faces serious criticism. For all of AI's accomplishments, it often seems like there's another shoe ready to drop.

But AI may have found something of a sweet spot in predictive maintenance, which can cut maintenance costs, extend the lives of vehicles and other machines, and doesn't even seem to threaten anyone's job. AI may not be able to predict the future, but it has a future under the hood in predictive analytics.
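To make the concept concrete, here is a deliberately simple sketch of the kind of logic a predictive maintenance system builds on: compare a component's live sensor trend against failure thresholds learned from historical data, and flag the part before it breaks. Real systems like those described above use far richer models; the threshold, sensor values, and component behavior here are invented for illustration.

```python
from statistics import mean

# Hypothetical numbers: vibration readings (mm/s) from a vehicle component,
# sampled over recent operating hours. Rising vibration often precedes failure.
readings = [2.1, 2.2, 2.4, 2.9, 3.3, 3.8, 4.5, 4.9]

# A threshold that, per (invented) historical data, precedes most failures.
FAILURE_THRESHOLD = 4.0
WINDOW = 3  # smooth over the last few samples to ignore one-off spikes

def needs_maintenance(samples: list[float]) -> bool:
    """Flag a component when its smoothed recent trend crosses the threshold."""
    recent = samples[-WINDOW:]
    return mean(recent) >= FAILURE_THRESHOLD

if needs_maintenance(readings):
    print("Schedule maintenance before the next deployment.")
else:
    print("Component within normal operating range.")
```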
The word "ambient" means the immediate surroundings of something. It originated from the French word "ambiant," which in turn derives from the Latin "ambientem." The root verb means "to go about." When the word was paired with technology, the result was "ambient intelligence."

Everybody is aware that technology has become a ubiquitous part of our lives. Flourishing virtual personal assistants and Bluetooth speakers have become a subtle part of the background. Human lives have certainly improved with these technological innovations, and they have paved the way for a dramatic shift in daily life known as ambient intelligence.

Ambient intelligence, then, can be described as a multi-disciplinary approach that seeks to enhance the interaction between humans and their surroundings. It aims to make the environment more beneficial to people through quick, responsive technology. The best example is a smart home.

Ambient intelligence, or AmI, can draw a conclusion and act upon it on a person's behalf, taking preferences into account based on the data obtained from all the nearby connected sensors and systems surrounding the user. AmI acts intelligently, pervasively, and intuitively: it is designed to understand users rather than interrogate them, and it takes actions according to human preferences without making its presence felt. Ambient intelligence can be termed an emerging technology set to radically change the way humans interact with the machines and devices around them.

Working of ambient intelligence

Ambient intelligence works in a multi-disciplinary way and is an amalgamation of multiple technologies, such as the Internet of Things (IoT), artificial intelligence (AI), big data, human-computer interaction (HCI), pervasive/ubiquitous computing, and networks.

An AmI system senses the environment and the user's context through intelligent digital devices present at home or in the workplace (such as Alexa or Siri) and through various IoT sensors and devices. After collecting data from all these different devices, the AmI system processes and analyzes it to interpret the user's proximity, state, intent, and behavior. The system interprets this data based on prior learning, current information, and pattern identification. The final step is deciding on the best action and responding to the user through the intuitively designed, natural interface of an intelligent device.

Application of ambient intelligence

There are multiple ways in which ambient intelligence can make human life easier and better. Whether in a workplace, a kitchen, or a complete house, AmI can prove helpful in all these scenarios. Ambient intelligence technology serves the purpose of actual machine assistance rather than spending on a human assistant.

To get more details on the concept, let us consider the example of a smart office building. Equipping an office building with now-common devices such as thermostats, smoke detectors, and lighting controlled by motion sensors is a given. What is different with the current generation of ambient intelligence is that these ambient operations are supported by strong artificial intelligence and have moved toward extreme personalization.
Suppose a person wants the office temperature adjusted as soon as he enters the building; automatic sensors handle it. Similarly, lights turn on and off as soon as one enters a room, and adjust to color and intensity preferences as an add-on. Philips Hue lighting has taken this approach and is now showing up in many offices. The communication is carried out through a networked wearable device such as a smartphone, with sensors reacting to the smartphone's presence. For example, lights will turn on when a person comes into range, so that he does not fumble around in the dark for a light switch or need a voice command to change the lights.

Philips has implemented a live example of this concept in a new building in Amsterdam named The Edge. Workers there can personalize the lighting and temperature at their desks using a smartphone app. Once connected and set up, the app controls the light and temperature for workers the entire time they move through the building. This is possible through a network of lights and sensors installed by Philips called a "connected smart ceiling."

Anyone can come up with further examples showing that implementing AmI technology can ease human lives in multiple ways, as the sketch after this section also illustrates.

Privacy – the only concern

Introducing ambient intelligence into everyday life looks interesting and exciting; the one serious concern is privacy. AmI systems hold detailed information about the lives of the people they track. If that information falls into the wrong hands, it could lead to misuse of data or intrusion into a person's privacy. Concerns about data usage, privacy, and overall security need to be addressed before AmI systems are put into action.

Ambient intelligence has a long way to go

The above examples show AmI amplifying human life by offering comfort and ease. AmI-equipped homes with human-centric technology will help perform daily chores more efficiently. In the near future, AmI will surely find its way beyond homes into different industrial applications, including retail, healthcare, manufacturing, smart cities, and more. It could even keep a closer eye on human health, especially for the elderly and toddlers.

With ambient intelligence, the human-machine relationship continues to evolve to keep up with changing times. Moreover, such technology offers a way to enhance daily life in this COVID-19 era, where "no touching" and social distancing are the new normal.

To learn more, visit our latest whitepapers about ambient intelligence and the latest technology.
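The sense-process-decide-respond loop described in "Working of ambient intelligence" can be sketched in a few lines of code. This is a toy illustration of the control flow only; the device names, preference values, and sensor event are all made up.

```python
# Toy sketch of an ambient intelligence loop: sense -> interpret -> act.
user_prefs = {"alice": {"light_level": 70, "temp_c": 21.5}}  # learned preferences

def sense(event):
    """A presence sensor reports who entered which room (simulated)."""
    return event["user"], event["room"]

def decide(user):
    """Interpret context against prior learning (here: a preference table)."""
    return user_prefs.get(user, {"light_level": 50, "temp_c": 20.0})

def act(room, settings):
    """Push decisions to actuators without the user asking for anything."""
    print(f"{room}: lights -> {settings['light_level']}%, "
          f"thermostat -> {settings['temp_c']} C")

# One pass through the loop, triggered by a (simulated) sensor event.
user, room = sense({"user": "alice", "room": "office-3"})
act(room, decide(user))
```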
Computers are now capable of shaping the chips of their successors themselves. For decades, people have drawn the maps of chips, but now even here the computer takes over. Researchers report this in a new study published June 9 in the scientific journal Nature.

The floor plan (or "layout") of the billions of parts that make up such a chip is crucial: the smaller, the better and more efficient. It's a matter of square millimeters. While it takes people at Google and the University of California months to complete this layout process, artificial intelligence (AI) takes only a few hours.

Computers designing an improved version of themselves: it sounds like the scenario of a blockbuster science fiction movie, but it is very real. Google is already using this new method when designing the TPU chips that will be used in its next generation of AI computers.

The development plays on the primal fear of the apologists of the AI revolution. Mathematician Irving John Good, who worked with Alan Turing to crack German codes during World War II, hinted at this as early as 1965, when he said that the first ultra-intelligent machine would also be man's last invention. After all, a machine that is smarter than humans in all areas can make better machines. The result, Good argued, is an "intelligence explosion" that rapidly surpasses human intelligence.

But we're not there yet. The self-learning system, fed with the designs of some ten thousand previous chip maps, is capable of only one task: creating floor plans for specific chips. But it does that extremely well. And fast. As a result, according to co-author Andrew Kahng, Moore's law will last at least a while longer. This law states that the number of components per chip doubles every two years.

The advantages of this new step are great. The layout process for chips takes much less time, so it becomes easy to design a specific chip for each task (a heart rate monitor, for example). At the moment, software controls the various tasks of a generic chip, and that is inefficient. It's estimated that the new generation of task-specific chips could soon be a factor of a hundred more economical than their programmable counterparts.

It might be only a matter of time before your phone says: "I'm sorry Dave, I'm afraid I can't do that."
About the authors

Allan Liska is a Consulting Systems Engineer at FireEye, and Geoffrey Stowe is an Engineering Lead at Palantir Technologies.

Inside DNS Security: Defending the Domain Name System

DNS security is a topic that rarely comes up, and when it does, it's usually after an attack or breach disruptive enough to merit a mention in the news. Last year's DDoS attack against US-based DNS provider Dyn was one of those, but it isn't covered in this tome, which was released a few months before the attack. Nevertheless, the attack sparked an increase of interest in DNS security, and the world at large finally understood the Internet's, and therefore its own, dependency on this system.

As could be expected, the authors first explain what the Domain Name System (DNS) is, provide a short history of its creation and development, and give a concise overview of how it's used and what needs to be secured. Next, they offer a brief history of DNS security breaches (both successful and unsuccessful) and a summary of common DNS security problems that someone attempting to secure a DNS infrastructure in an enterprise can be faced with.

DNS security events can be the result of both external attacks and internal mistakes, and the authors provide some very good advice on how to keep on top of things, as well as instructions on how to develop a solid DNS security plan for one's company. The next two chapters deal with common DNS configuration errors and external DNS exploits.

Many companies don't have an in-house expert to deal with the DNS infrastructure, and often outsource DNS tasks. Chapter 9 addresses the things that companies have to think about and decide on when going for that option (this includes thinking about how much to outsource, and DDoS protection). The authors provide good pointers on the questions companies need to ask prospective domain registrars, and tips on how to work securely with a DNS provider.

The book contains information about DNS reconnaissance strategies employed by attackers (and how to thwart their efforts), DNS network security, Windows DNS security, and the security of BIND, the most widely used DNS software package on the Internet. Readers will also learn enough about the DNSSEC protocol to implement it (on Windows or Linux) and operate it, and to make an informed decision on whether to use it at all. And, finally, they will get a peek at some real-world examples of complex DNS configurations.

A lot of material can be found and read online about DNS and DNS security, but if you want to take a systematic approach and not miss anything, this book is a good place to start. Even if you're not tasked with DNS security in your day-to-day job, you should pick it up, as it's an easy, enjoyable read and – I would argue – it's a good idea to know something about technologies that our daily lives depend on.
The Defense Advanced Research Projects Agency's Robotics Challenge will test several designs from contestants in simulated disaster areas.

As part of the Defense Advanced Research Projects Agency's Robotics Challenge, teams from several countries will showcase their robot designs and capabilities in multiple events for a chance to win $3.5 million in prizes. The objective of the contest is to test robotic solutions that can be applied during natural disasters to assist in response.

Beginning in 2013, teams competed in trials in Florida, putting their robotic prototypes to the test in eight tasks that examined mobility, manipulation, dexterity, perception and operator-control mechanisms. The final challenge will test similar capabilities in simulated disaster zones at Southern California's Fairplex.

In this final round, 25 robots will be given an hour to perform a circuit of physical tasks, with degraded communications between the robots and their operators. The intermittent communication will force operators to give concise and precise directions to their robots instead of step-by-step instructions. The robots in turn must successfully relay what they are experiencing in the simulated disaster zone despite limited and interrupted communication.

Each team will feature a different solution and model, which DARPA's Robotics Challenge program manager Gill Pratt believes will profile several potential approaches to disaster relief assistance for the future. "The teams all have different hardware approaches, different software approaches and different approaches for the user interface," Pratt said. "[S]o I think we'll see a whole range of different ways that technology will be applied to this problem."

In addition to communication deficiencies, teams will face other challenges that disaster response teams might encounter in real-world situations. Robots will have to navigate through a series of obstacles while transitioning to and from a vehicle that will transport them from a safe zone to the disaster zone. They must also carry their own power supplies.

DARPA believes that the technology solutions featured during these challenges will transform the robotics field and "catapult forward development of robots featuring task-level autonomy that can operate in the hazardous, degraded conditions common in disaster zones."
Generate Rows Tool

One Tool Example: Generate Rows has a One Tool Example. Visit Sample Workflows to learn how to access this and many other examples directly in Alteryx Designer.

Use Generate Rows to create new rows of data at the record level. Use this tool to create a sequence of numbers, transactions, or dates.

The Generate Rows tool follows a process to generate rows of data. That process consists of an initialization expression (applied to record 1), then a loop expression (such as an increment) that builds subsequent rows based on a condition (true or false); rows are built until the condition is false, at which point the loop terminates.

Connect an Input

An input connection to this tool is optional.

Configure the Tool

- Choose to update an existing field or create a new field.
  - Update Existing Field: Assesses the rows coming in and adds new records accordingly. One example is an input that contains a start and an end value, where you would like to generate a row for each value in between.
  - Create New Field: The tool is configured as an input. If this method is chosen, specify the new Field Name and appropriate Type and Size.
- Specify the Initialization Expression to start the creation of rows. You can enter a value, create an expression, or select the ellipses button to open the Expression Editor.
- Specify the Condition Expression, where the condition is either true or false. If the condition is true, additional rows are generated until the condition equals false. Select the ellipses button to open the Expression Editor. Example: LOWRANGE <= [HIRANGE]
- Specify the Loop Expression (Usually Increment): This is typically expressed as an increment that generates subsequent rows until the false condition is met. Select the ellipses button to open the Expression Editor. Example: LOWRANGE + 1

Configure the Condition Expression so that row generation ends, to preserve hard drive space. If the Loop Expression never modifies the field, Designer reports an error:
- GenerateRows (5): The value did not change after the Loop Expression. Adjust your Loop Expression to contain the Field being modified.

Because this tool includes an expression editor, an additional input anchor displays when the tool is used in an app or macro workflow. Use the Interface tools to connect to a Question anchor. Visit Interface Tools for more information.
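The initialization/condition/loop mechanics above map directly onto an ordinary while-loop. The sketch below reproduces the documented example expressions (initialize a low value, loop while LOWRANGE <= [HIRANGE], increment by 1) in plain Python to show what the tool computes; it is an illustration of the semantics, not Alteryx code.

```python
def generate_rows(init_value, hirange):
    """Mimic Generate Rows: init -> test condition -> emit row -> apply loop expr."""
    rows = []
    lowrange = init_value          # Initialization Expression (row 1)
    while lowrange <= hirange:     # Condition Expression: LOWRANGE <= [HIRANGE]
        rows.append(lowrange)      # a new row is produced for each iteration
        lowrange = lowrange + 1    # Loop Expression: LOWRANGE + 1
    return rows                    # loop ends once the condition is false

print(generate_rows(1, 5))   # [1, 2, 3, 4, 5]
# If the Loop Expression never modified LOWRANGE, this loop would never
# terminate -- the situation Designer reports as error "GenerateRows (5)".
```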
A New Criminal Business Model

We all know that a computer virus is bad and that we need anti-virus software to protect our computers against them. Unfortunately, this is no longer enough. Cyber security has become a lot more complicated since the turn of the century, and even more so in the last few years.

Recently, the news has been filled with reports regarding cyber security, or more accurately, cyber security failings. Information has been stolen from large corporations, but often this isn't the obvious credit card information but rather contact details and passwords. Whatever the information, it all has value and often ends up for sale on the dark web alongside purpose-built, easy-to-use "hacking" tools. Now even criminals with very little computer skill are able to get a list of e-mail addresses and an app that allows them to start their own cyber-crime business. They can gain access to our systems and take our money, either by restricting access to files and holding them to ransom or by stealing information with the intent to extort.

Ransomware has become very commonplace in recent years. Usually, if the recipient of an infected e-mail opens the attachment, their computer, or files on the computer and network, will become compromised and encrypted. Afterwards the criminal will offer to decrypt the files, at a price. If you pay, sometimes these files will be decrypted, but often this isn't the case. Your policy should always be not to pay. Even if you do get your files back, once these criminals know they can hold data to ransom they will continue this business model and come up with new and more sophisticated attacks.

Extortionware is a lot more difficult to predict and protect against. Extortionware attacks are usually highly targeted and are more about the retrieval of data than its destruction or encryption. Once cyber criminals have gained access to your system and taken sensitive information, demands are made, usually for money, followed by a threat. For example, criminals may send your company's intellectual property to competitors or distribute your data online unless they're paid.

However, money isn't always the motivation behind this sort of attack. The 2015 information leak from the website Ashley Madison was carried out only after hackers gave the company a chance to change its operating policies. The policies weren't changed, and as a result around 36 million user details were released in a highly publicised leak.

The main concern with this sort of attack is that a backup can't be your get-out-of-jail-free card. Once the criminals have your data, there is nothing you can do. Because of this, prevention is not only advised but imperative. Ashley Madison, TalkTalk and Yahoo all failed to protect their systems from attack, but every company large and small should learn from this and ensure their systems are as secure as possible.

Protection is better than Reaction

To avoid becoming prey to the cyber-criminals, you must train your staff on how to spot fake e-mails, increase your network perimeter security with a security firewall, add anti-malware software to your computers and always have regular backups taken throughout the day. It isn't possible to block 100% of attacks, so having a backup is very important, as it can often be your last line of defence against criminal file encryption. On top of this, we recommend proper controls on server shares to add another layer of protection to sensitive information and block ransomware encryption where permission is denied.
These points, along with other policies and practices, help to make up the Government-backed Cyber Essentials qualification. With the rise of more complex and sophisticated cyber-attacks, including ransomware and extortionware, we believe this should be the minimum operating policy of any company, no matter the size, industry or customer base. For more information please contact LSA.
So many big, expensive cyber attacks have taken place in the last few years that it's hard to remember them all – when will we learn our lesson?

Cyber attacks are common these days. There was the Chase Bank breach of 2014, which exposed the financial information of 76 million Chase customers. This attack was set to target 10 major financial institutions in total, but only one other company reported that data had been stolen. This company was Fidelity Investments. Though the attack caused serious repercussions for Chase Bank, the damage could have been much worse. Four hackers (two from Israel) were eventually arrested.

Hacking Isn't Just About Stealing Data

In the Sony Pictures data breach of 2014, over 100 terabytes of data were stolen by North Korea. This attack was about more than just getting the personal information of consumers. The attack occurred because of a movie that Sony Pictures was set to release called "The Interview". The movie, starring Seth Rogen and James Franco, was a fictional story about two journalists who go to North Korea to interview Kim Jong Un. The two men actually work for the CIA and are planning to assassinate the well-known but unpopular leader. It is believed that North Korea's leader ordered the cyber attack on Sony Pictures to show his displeasure and disapproval of the film. In addition to the personal information of Sony executives and other employees, hundreds of photos and emails were released to the public. These highly personal items caused a massive amount of embarrassment to Sony's top executives.

No One Is Safe from Hackers

Proving that no one is immune from cyber hackers, Equifax, one of the nation's largest credit reporting agencies, was infiltrated by hackers in mid-2017. The company estimated that approximately 143 million Americans were affected. In addition, an unknown number of consumers from Canada and the UK were affected by this breach.

Were there any signs that an enormous data breach like this might occur? A report issued in October of 2017 by Motherboard found that Equifax had certain vulnerabilities due to an online portal created for employees. Researchers discovered that the Equifax website was highly susceptible to a basic forced-browsing bug. A researcher from Motherboard said that he didn't even have to do anything special to infiltrate the system. It was far too easy to get in. "All you had to do was put in a search term and get millions of results, just instantly—in cleartext, through a web app," the researcher said.

In spite of this information being available to Equifax, it took them six months to close the portal and shut down these vulnerabilities. In this day and age, it's unthinkable that organizations as sophisticated as Equifax might be so lax in their data security.

Target Stores lost millions of dollars when they had to reimburse customers for their losses after their 2013 data breach. In addition to that, a class action lawsuit was settled for roughly $10 million. As if that wasn't enough, 20-30 percent of Target shoppers said they were worried about shopping online at Target stores after the breach.

Are We More Vulnerable Than We Believe?

Many data security experts believe that cyber weaknesses like this are far more common than the public believes. In an era when everyone should be fully aware and taking every precaution to prevent a data breach, numerous large corporations remain at risk.
After all is said and done, most people would expect any organization that has experienced a cyber theft to drastically improve its cybersecurity. Large, expensive data breaches leave an organization open to legal action, plus they're embarrassing. Consumers say that they are less likely to do business with any company that has been the victim of a cyber breach. But has that really happened?

A new study performed by CyberArk reveals that 46 percent of all companies who have experienced a cyber breach have not substantially updated their security policies. This failure to learn from past mistakes has the public truly baffled. In some cases, IT professionals have been interviewed and asked why they haven't greatly improved their cybersecurity. Over 30 percent of these pros said that they did not believe it was possible to prevent all cyber-attacks. This indicates that even security experts aren't sure what to do to stop future attacks from occurring. But should we simply decide not to do anything at all?

New Report Sheds Light on the Problem

A 2018 report from CyberArk, the "Global Advanced Threat Landscape Report", indicates that at least half of all businesses and organizations have taken only the basic security measures required by law. Though their public relations departments may say they are taking every precaution to protect customer data, this is probably not true. In addition, 36 percent of respondents in the report said that administrative credentials were being stored in Excel or Word documents. These documents would be easy to obtain by any hacker with average skills.

The Global Advanced Threat Landscape Report also reveals that the number of users with administrative privileges has jumped from 62 percent to 87 percent over the past few years. This points to the fact that many companies are opting for employee convenience over data security best practices. This is an alarming statistic given the soaring cost of cyber breaches.

Moving Into the Future with Better Cyber Security

The new AT&T Global State of Cybersecurity report highlights many of the critical gaps that remain in our cybersecurity strategies. IT infrastructure and critical data must be fully protected, including credentials and security answer keys. In most organizations, those in higher positions are given greater access and authority over online data, and this equates to a heightened risk of a cyber breach.

According to Alex Thurber, Senior Vice President and General Manager of Mobility Solutions, "If 2017 has taught us anything, it is that every device needs to be secured because any vulnerability will be found and exploited". The company is set to sign a deal with Punkt Tronics to install better security on smartphones, BlackBerry devices, and other electronic devices. With consumers spending more and more time browsing on their cell phones, all mobile carriers are searching for ways to better protect their customers from hacking.

What Consumers Can Do

A great increase in the sale of anti-virus software and password managers demonstrates a strong resolve by consumers to incorporate stronger security measures into their everyday lives. Innovative technology is producing a new generation of security software that combines threat defense techniques with more conventional means of cybersecurity. Though some of these techniques are having an impact, experts believe there's much more to be done. As our society becomes more aware and more prepared, even stronger security for IT systems will be developed.
Until then, security experts urge the public to be more cautious about clicking on links. Employees at any company need regularly scheduled security meetings where they are educated and reminded to use best practices on smartphones and computers. All programs should be updated regularly with software updates and fixes to known bugs. Create difficult passwords and change them every 90 days. These are just a few of the ways that consumers can stay safe on the internet.

Published On: 28th March 2018 by Ernie Sherman.
One-Way Data Obfuscation and Authentication

Hashing is a difficult-to-reverse data masking technique that converts a variable-length "message" (e.g., someone's password) into an obfuscated, fixed-length, alphanumeric string. The message digest, or "hash value," can serve as an indexed look-up for the message. Sometimes there is more than one message per index (a "collision"). Because hashing is not as strong as encryption, and is not reliably reversible, it is only sometimes suitable for masking on its own. More commonly, hashing is used together with encryption. IRI supplies SHA1 and SHA2 hashing algorithms along with several encryption functions.

Hash functions are also used to generate checksums or Message Authentication Codes (MACs). These are created and sent along with messages like emails, EFTs, or passwords. When the message is received, its contents are run through the same hash function to create a new MAC. If the original and new MACs match, the message is authentic; if they do not, the message is likely to have been altered, and thus compromised.

Use the field-level hashing functions in IRI FieldShield in the IRI Data Protector suite, IRI CoSort in the IRI Data Manager suite, or the IRI Voracity platform to help mask PII. Or, create a MAC for one or more column values in each row. Include the MACs as an additional field or provide them in a separate file. Use the MAC to verify that the data in the record was undisturbed.
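Python's standard library makes the two ideas above easy to demonstrate: a one-way hash for masking a value, and a keyed MAC (here HMAC-SHA256, one common construction) for verifying that a record arrived unaltered. This is a generic illustration, not IRI FieldShield or CoSort syntax, and the key and record contents are invented.

```python
import hashlib
import hmac

# 1) One-way masking: the digest is fixed-length and hard to reverse.
password = b"correct horse battery staple"
masked = hashlib.sha256(password).hexdigest()
print(masked)  # 64 hex characters, regardless of input length

# 2) Authentication: a keyed MAC over one or more column values.
secret_key = b"shared-secret"          # known only to sender and receiver
record = b"acct=1001|amount=250.00"

mac_sent = hmac.new(secret_key, record, hashlib.sha256).hexdigest()

# The receiver recomputes the MAC over the received record and compares.
mac_recomputed = hmac.new(secret_key, record, hashlib.sha256).hexdigest()
print(hmac.compare_digest(mac_sent, mac_recomputed))  # True -> undisturbed

# Any tampering changes the MAC.
tampered = b"acct=1001|amount=950.00"
mac_tampered = hmac.new(secret_key, tampered, hashlib.sha256).hexdigest()
print(hmac.compare_digest(mac_sent, mac_tampered))    # False -> altered
```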
OSPF ROUTER TYPES

OSPF routers control the traffic that traverses an area boundary. OSPF routers are categorized based on the function they perform in the routing domain. There are four different types of OSPF routers:

Internal Router – A router that has all of its interfaces in the same area. All internal routers in an area have identical LSDBs.

Backbone Router – A router that resides in the backbone area. The backbone area is always area 0.

Area Border Router (ABR) – A router that has interfaces attached to two or more areas. It can route between areas. ABRs are exit points for the area, which means that routing information destined for another area can get there only via the ABR of the local area. ABRs can be configured to summarize the routing information from the LSDBs of their attached areas, and they distribute the routing information into the backbone. In a multi-area network, an area can have one or more ABRs.

Autonomous System Boundary Router (ASBR) – A router that has at least one interface attached to an external network (another autonomous system), such as a non-OSPF network. An ASBR can import non-OSPF network information into the OSPF network using route redistribution.

Notably, a router can perform more than one function at the same time. For example, if a router connects to area 0 and area 1 and, in addition, maintains routing information for an external network, it will be classified as more than one router type, namely:
- a backbone router,
- an ABR, and
- an ASBR.
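As a sketch of that last example, the following Cisco IOS-style configuration would make a single router all three things at once: a backbone router (an interface in area 0), an ABR (a second interface in area 1), and an ASBR (redistributing static routes from outside OSPF). The network statements and addresses are invented for illustration.

```
router ospf 1
 ! interface in 10.0.0.0/24 placed in area 0 -> backbone router
 network 10.0.0.0 0.0.0.255 area 0
 ! interface in 192.168.1.0/24 placed in area 1 -> with area 0, this makes it an ABR
 network 192.168.1.0 0.0.0.255 area 1
 ! importing non-OSPF (static) routes -> ASBR
 redistribute static subnets
```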
IBM to Recycle Silicon Wafers for Solar Cells

November 5, 2007
Timothy Prickett Morgan

IBM is probably best known for its enterprise-class servers and related systems software, but the company is obviously a pretty big player in the chip industry with its various Power and PowerPC chips. IBM's techies in Burlington, Vermont, where IBM still makes a lot of chips, have been scratching their heads about what to do with defective silicon wafers, which are the inevitable result of any chip-making process, and have come up with a novel idea: recycle them for use in solar cells.

An engineer named Eric White from the Burlington factory, which is located there because the Watson family that ran IBM for the better part of 60 years liked to ski, has created a means to polish off a wafer all of the semiconductor etchings that turn it into a microprocessor (or any other kind of chip, such as memory), turning it back into a raw piece of polysilicon. Once the wafer is cleaned, IBM can use it to calibrate its machinery during chip-making runs (this is called a monitor wafer), and after it gets a little worn out from use, IBM can sell it to the burgeoning solar power industry, which is eagerly looking for raw silicon from which to make solar cells. These wafers are usually cast off into landfills, which is something of a creepy thought, but this is the modern industrial economy's way of handling waste. The IBM method means this silicon resource, which is very expensive to produce, can be recycled for other uses.

IBM estimates that the global chip industry produces approximately 3 million scrap silicon wafers a year, which would be sufficient to create a solar farm that could generate 13.5 megawatts of electricity. That works out to 57 million kilowatt-hours of juice per year, which is enough to power 6,000 Western-style homes at 9,500 kilowatt-hours per home per year. The shocking thing, of course, is how few homes that amount of electricity powers. Why someone didn't think to do this 30 years ago is also a bit of a shame, but progress is progress and IBM is to be commended for coming up with the scheme to recycle its silicon.

The Semiconductor Industry Association says that around 250,000 silicon wafers are started each day in the world, and IBM estimates that about 3.3 percent of those wafers are scrapped. By recycling the scrapped wafers at the Burlington facility as monitor wafers, IBM saved more than $500,000 in 2006 and is projected to save $1.5 million in 2007 and that much each year going forward. IBM did not say how much money it could make selling the second-hand wafers. And by using recycled silicon, solar cell manufacturers can save between 30 percent and 90 percent of the energy they normally expend creating solar cells, thus lowering the carbon footprint of their products. Because Big Blue has now gone green, it plans to share the methodology behind the wafer scrubbing with the rest of the silicon industry.
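The homes figure checks out arithmetically, as this quick sanity check shows (all inputs taken from the article above); the comparison line also shows why 13.5 MW of capacity yields 57 million kWh rather than the round-the-clock maximum, since solar only produces part of the time.

```python
annual_kwh = 57_000_000        # kWh generated per year (from the article)
kwh_per_home = 9_500           # annual consumption of one Western-style home
print(annual_kwh / kwh_per_home)   # 6000.0 homes, matching the article

# For comparison, 13.5 MW running nonstop all year would yield far more:
print(13.5e3 * 8760)           # 118,260,000 kWh -- solar runs only part-time
```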
Several years ago, firewalls were fairly simple devices — at their core, all they really did was block ports on a network. If an organization didn't want users doing something like instant messaging, all administrators had to do was find out which ports were used for that activity and block them at the firewall. There were a few problems back then, such as when both allowed and blocked applications needed to use the same port. But for the most part, firewalls simply blocked traffic wholesale, without any real intelligence beyond what an administrator could program into the device.

The Current State of Firewalls

Today, the fact that organizations need to use thousands of applications from a variety of devices means that blocking a port would almost certainly cause valid applications to stop working. And hackers can simply attack the ports that they know will be open, such as those commonly used for email. Instead, firewalls have evolved to provide deep-packet inspection, intrusion detection and application identification. That means patterns are blocked, not just ports.

Most attack programs use a certain pattern, if not a template, to try to penetrate a network. This includes actions such as cloaking a program's true intentions, encrypting payloads and using the ports that other programs do, but for a different purpose. Other than when a hacker comes up with something new, these patterns are all fairly well known and can be blocked by a standard firewall, though a clever new attack can inflict a lot of damage before a firewall's pattern blockers are updated.

Intrusion prevention system (IPS) capabilities help next-generation firewalls combat new attacks and ongoing threats. They're able to identify individual applications and can find and block imposters trying to masquerade as valid programs. They also look for suspicious behaviors, such as a program trying to jump from an IPv4 to an IPv6 network, and restrict that activity. IPS devices can also be updated with new profiles and new scanning techniques.
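The difference between port blocking and pattern blocking is easy to see in miniature. The toy filter below is deliberately simplified, with made-up signatures, and bears no resemblance to how commercial next-generation firewalls are engineered; it simply shows that a payload-inspecting filter can drop an attack no matter which port it arrives on, while a port-only filter cannot.

```python
import re

# Made-up payload signatures standing in for a real pattern database.
SIGNATURES = [
    re.compile(rb"DROP TABLE", re.IGNORECASE),   # crude SQL-injection marker
    re.compile(rb"\x90{16,}"),                   # long NOP sled, a shellcode hint
]

def port_filter(packet: dict) -> bool:
    """Old model: allow or block purely by destination port."""
    return packet["dst_port"] not in {6667}      # e.g. block only IRC

def pattern_filter(packet: dict) -> bool:
    """Newer model: inspect the payload, whatever port it uses."""
    return not any(sig.search(packet["payload"]) for sig in SIGNATURES)

attack = {"dst_port": 443, "payload": b"GET /?q='; DROP TABLE users;--"}
print(port_filter(attack))     # True  -> the port filter waves it through
print(pattern_filter(attack))  # False -> the pattern filter blocks it
```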
VLAN ACLs are often also referred to as VLAN access-maps. When we want to filter traffic from one VLAN to another, and from one VLAN to the outside world, we can use ACLs and several other methods. But what if we want to filter traffic within the same VLAN? That is where VLAN ACLs come in handy and help us achieve this purpose.

Let us take the example topology below. In this topology, all the routers simulate end hosts and are in the same VLAN 10. The switch is acting as a simple Layer 2 switch, with all the ports connecting to the routers in VLAN 10. Initially we can ping from one router to another, as all are in the same VLAN.

Now let us say we want to filter traffic within the same VLAN so that R1 and R3 aren't able to reach R2. To achieve this, we configure a VACL on the switch (a reconstruction of the configuration is shown after this section).

Sequence number 10 will look for traffic that matches access-list 101. All traffic that is permitted in access-list 101 will match here; the action is to drop this traffic. Sequence number 20 doesn't have a match statement, so everything else will match, and the action is to forward that traffic. As a result, all traffic from any host to destination IP address 10.0.0.2 will be dropped, while everything else will be forwarded.

The last step is to apply this access-map to the VLAN in which you want to filter the traffic. As a result, the traffic from R1 and R3 to R2 will be blocked and dropped.
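The configuration block itself appears to have been lost in this copy of the article, so here is a reconstruction consistent with the description above, using standard Cisco IOS VACL syntax; the access-map name VACL-R2 is an invented placeholder.

```
access-list 101 permit ip any host 10.0.0.2

vlan access-map VACL-R2 10
 match ip address 101
 action drop
vlan access-map VACL-R2 20
 action forward

vlan filter VACL-R2 vlan-list 10
```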
Home networks: How do you keep your employees secure?

Your users are working from home on networks that aren't secure. A recent study of 127 popular routers found that every single one of them had critical vulnerabilities. These range from easily guessed login credentials (the username and password might be hardcoded as "admin") to devices that are seldom given security patches. One-third of the routers tested were running a version of Linux that was last updated in 2011.

It goes without saying that this is dangerous, particularly at a time when more people than ever are working remotely. The IT department can control the network in the office, but there's no way to check every remote worker's home network. If an employee's router is compromised, then all kinds of attacks are possible including, for example, redirecting users to websites that appear genuine but which are designed to steal credentials.

The dangers of WiFi

And home networks are not the only risk that employees are being exposed to in these times. With entire households home-working and home-schooling, it can be tempting to relocate to a nearby cafe and use the public WiFi there. Hackers can often compromise these networks, or simply create spoof networks with matching names and lure people onto those. Again, once they are in, they can compromise machines, hijack data and steal credentials.

Your regular remote workers might know all this. Perhaps they have to do training before they are issued with a company laptop. Unfortunately, in the rush to set up thousands of employees to work remotely during the pandemic, many people started remote working with little, if any, training on how to stay secure.

Many companies will circumvent the problem by requiring employees to access line-of-business applications via a virtual private network (VPN), but not enough are doing this. In our recent survey of remote working practices, just 29 per cent of respondents said they were using a VPN. Almost as many – 26 per cent – said they used an application installed locally on their machine.

Has everyone been trained?

This is risky, to say the least. You might feel that it's an acceptable risk, but we know that cybercriminals increased their activity during the pandemic. They knew, even if companies didn't, that the number of potential targets was about to increase massively.

Companies need to make sure that everyone is familiar with best-practice security measures when working remotely – and that applies to people who have been remote workers for a while. Beyond that, companies need as much visibility as possible of security issues across all their platforms. The kind of single-pane security visibility that Cloudhelix provides will help, for example.

Another solution is SD-WAN (software-defined wide area network) technology, which offers a new way to handle corporate networking by virtualising the network infrastructure instead of requiring proprietary hardware. It can be designed to prioritise cloud-based applications, whether an employee is connecting from home or the office, and can add security functionality without the need for more equipment.

The challenge of securing home workers won't go away.
Finding the best solution means partnering with a provider that has the experience and sector knowledge to determine what will work for your particular circumstances.

Got a question? Our experts have an answer.
As COVID-19 wreaks havoc on the world, ransomware attacks are also rapidly rising, and they are having a terrifying impact on hospitals and other care facilities.

The largest cyberattack in the history of US healthcare happened on Monday, September 29th. Universal Health Services, a for-profit corporation that runs 400 hospitals and clinics with 90,000 employees in 45 states, was attacked by hackers who infected its internal computer network with ransomware. The attack forced hospitals back into analog mode, as they had to pivot from working online to maintaining records with good old-fashioned pen and paper. Losing access to digital systems left patient records inaccessible and limited the ability to provide care, resulting in longer wait times and adding stress to an already strained healthcare system.

This is not an isolated incident: similar attacks in Europe have put other patients' lives in danger. Another serious incident happened at a German hospital, which suffered a similar cyberattack that cost a patient her life.

Hackers count on human mistakes

No matter how much money a business invests in cybersecurity, any network can be compromised by a single human error. Recent data shows human mistakes caused 27% of data breaches in the US this year alone. If you want to make a serious difference in your cybersecurity, make employees aware of potential risks and give them guidelines for maintaining best cybersecurity practices.

4 common human errors that cause breaches

#1 Lack of cybersecurity knowledge
Employees who don't know about cybersecurity are more likely to open infected files, click on phishing links, and rely on public Wi-Fi. They are vulnerable, and so is your network.

#2 Choosing weak passwords
Does your business have a password management policy in place? If not, employees may unknowingly put the business at risk. Poor password management habits include using weak passwords, keeping default credentials, and storing passwords in non-encrypted form. (See the sketch at the end of this piece.)

#3 Handling sensitive data carelessly
All employees are human and make mistakes, such as accidentally deleting sensitive files, sending emails to the wrong addresses, and failing to encrypt sensitive data. Any lack of awareness about potential security threats can have dire consequences in the workplace.

#4 Using ancient software
Old software is a hacker's best friend. When you use software downloaded from unauthorized sources, you often get malware and viruses. Outdated programs also lack security features, so stay away from programs offered by suspicious websites and sources.

It's 2020, be vigilant

Ransomware specialists are masters of crafting interesting emails that get people to click. And once you click, it's chaos. That's why companies must be vigilant with cybersecurity and educate employees. What exactly happens when an employee "clicks" on an email attachment? The unleashed ransomware payload searches for weak spots, locks up programs, and demands money for the keys to unlock them. Once the networks go down, electronic health records become unusable, and this can have tragic effects on patients and families.

Remember when your mother told you to never speak to strangers? Well, in 2020, hackers are strangers and they are sending you fun stuff to click on. Don't! A human mistake isn't just a mistake anymore; it can lead to catastrophic consequences.

Got questions? Irdeto offers modular cybersecurity solutions and services to smaller startups to help them scale up their cybersecurity capabilities.
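As flagged under #2 above, here is a minimal sketch of a basic password check, assuming a simple in-house policy. The thresholds and the tiny common-password list are invented for illustration; modern guidance (e.g., NIST SP 800-63B) favors length and breached-password screening over composition rules:

import re

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "admin"}

def password_problems(pw):
    """Return a list of reasons the password fails this toy policy."""
    problems = []
    if len(pw) < 12:
        problems.append("shorter than 12 characters")
    if pw.lower() in COMMON_PASSWORDS:
        problems.append("appears on a common-password list")
    if not re.search(r"[A-Za-z]", pw) or not re.search(r"\d", pw):
        problems.append("does not mix letters and digits")
    return problems

for candidate in ("admin", "Tr1cky", "long-and-random-enough-42"):
    issues = password_problems(candidate)
    print(candidate, "->", issues if issues else "passes this basic policy")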
Call us to see what we can do for your business, and get in touch with Irdeto's Connected Health team to learn more!
Managers of data centers around the world are scratching their heads to figure out how to make their facilities more sustainable. This is no surprise: a staggering 75% of data centers see sustainability as a competitive advantage. Such a notable change in perspective comes in response to the growing environmental problems caused by data centers and the IT industry. To achieve this lofty yet necessary goal, industry leaders must learn to reduce waste while increasing efficiency, keeping businesses running without harming the environment.

Meanwhile, by some estimates, data centers will soon consume one tenth of all the electricity we produce, emitting millions of tons of carbon per year in the process. The IT sector is outpacing the rest of the economy in both growth and the environmental problems that come with it.

The biggest contributor to data centers' environmental impact is their inefficient cooling systems. That's right: the energy wasted blowing cold air over servers has a devastating effect on the planet. That's why a growing number in the industry are excited about a new technology: single-phase liquid immersion cooling. This green innovation by GRC makes sustainability a snap. It provides superior data center cooling at a lower cost while protecting the environment. Sustainability shouldn't feel like a punishment: now you can increase your profits while making operations safe for everyone.

What is Single-Phase Immersion Cooling?

Immersion cooling refers to the practice of immersing servers in a bath of coolant, taking traditional air cooling out of the mix entirely. Currently, it comes in two varieties: single-phase and two-phase. Single-phase immersion cooling is the more economically viable of these options. As its name implies, this technique involves a single phase of matter for the coolant: liquid. Conversely, with two-phase cooling, the coolant alternates between liquid and gas. While this is an interesting concept, it hasn't panned out well in practice due to its cost, complexity, and environmental impact.

The advantages of single-phase immersion cooling include its extremely simple design, which results in lower CapEx and OpEx and higher reliability. The liquid pulls heat out of the servers efficiently while also serving as a protective barrier against corrosion and other hazards. This efficiency is measured by "power usage effectiveness" (PUE), where a score of 1 is ideal and a score of 2 indicates that the data center wastes half its electricity on cooling. Single-phase liquid immersion scores a near-perfect 1.02 to 1.03. Older cooling technologies fare far worse, scoring in the range of 1.6 to 2.0. In essence, single-phase immersion cooling is today's leading technology for cooling data center hardware.

Single-Phase Immersion Cooling CAN Enable Sustainability

In addition to its other advantages, such as larger cooling capacity and lower cost, single-phase immersion cooling offers a useful path to improving sustainability. GRC's single-phase cooling efficiently removes 100% of server heat. Even better, it consumes only a fraction of the energy, and emits only a fraction of the byproducts, of older cooling technologies, making single-phase the clear choice for sustainability-minded organizations. With single-phase immersion cooling, your data center can cut out 95% of its cooling energy use and 50% of its total maintenance and energy costs. That also translates into fewer carbon emissions. Needless to say, upgrading to single-phase immersion is the smartest choice you can make for sustainability.
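The PUE figures above are easy to sanity-check. Here is a tiny sketch of the arithmetic; the wattages are made-up examples, not measurements from any GRC deployment:

def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Legacy air-cooled room: 1,000 kW of IT load plus 800 kW of cooling overhead.
print(round(pue(1800, 1000), 2))   # 1.8 -- roughly 44% of the power never reaches a server
# Immersion-cooled room at the figures quoted above:
print(round(pue(1025, 1000), 3))   # 1.025 -- in the 1.02-1.03 range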
Due to the higher energy efficiency of liquid immersion, companies using GRC's products can win grants from governments and utilities. Data centers can also save on floor space while reducing electrical infrastructure.

Furthermore, single-phase immersion uses only green coolants, so it poses virtually zero global-warming threat; the coolant is safe for the planet. Two-phase immersion systems, on the other hand, use substances with much higher global-warming potential than even carbon dioxide and methane, and those substances are chemically reactive. Whether you compare single-phase immersion to two-phase or to any of the older cooling methods, single-phase comes out on top as the cooling solution for sustainability every time. It uses fewer resources to accomplish more cooling, enabling data centers to safely serve consumers long-term.

Data Centers With Single-Phase Have Even More Environmental Benefits

The implementation of single-phase immersion cooling produces widespread environmental benefits. Data centers using this technology decrease their e-waste: parts last longer and the cooling system is simpler, so there's less need to throw away electrical and electronic equipment. Submerging servers in the liquid coolant keeps debris, moisture, hot spots, and oxidation at bay. Immersion-cooled servers are less subject to problems with solder joints or electrostatic discharge. You also no longer need server fans, eliminating a source of vibration. All of these issues influence how long your hardware lasts; liquid immersion cooling keeps parts running longer.

Practically every resource in the data center stands to benefit from single-phase cooling. GRC's immersion cooling can decrease carbon footprints by 40%, real estate footprints by 30%, and water use by an astounding 8 million gallons per megawatt. This cooling technology will allow society to advance to 5G, IoT, AI, and other innovations without breaking the bank or harming the environment. It's a high-performance, eco-friendly solution that's come along at just the right time.

Additionally, single-phase immersion cooling enables server heat to be recycled for other purposes, such as district heating. This opens up possibilities like sustainable agriculture and is even starting to be mandated by regulations. It's almost too good to be true. Not surprisingly, leading data centers are already taking the plunge!

Go Green With GRC

Now's the time to go with GRC's environmentally sustainable single-phase liquid immersion solution. This technology eliminates much of a data center's demand for electricity, along with the byproducts that come with it. Unlike its counterparts, it uses materials chosen to mitigate the risks of global warming. And it even extends the longevity of IT equipment to trim down e-waste. GRC's single-phase cooling also lets you put waste heat to productive use. However you look at it, this technology represents a brilliant new era for sustainable data centers.

Immersion cooling protects the environment and your investment. Deploy more servers, save money, and make your data center and our planet greener. Talk to GRC today!
Which display device is the best for your classroom? A look at the pros and cons of projectors, TVs, and interactive whiteboards.

Gone are the days of chalkboards and plain old whiteboards – and while we can all agree that's good news, not everyone agrees on their replacements. To help, here's a look at the pros and cons of three of the most popular display options in today's classrooms: projectors, TVs, and interactive whiteboards.

Projectors

Pros: Projectors offer high visibility. If you have a large classroom, all you need to do is choose a large enough screen (or, even simpler, a white wall) and move the projector a certain distance away to get a huge picture that can be seen from every angle.

Cons: Because projectors aren't backlit, they require darker spaces to be clearly visible. Plus, the projector itself can be costly, from bulbs that burn out quickly to the expense of mounting the projector high enough.

TVs

Pros: Between their even, bright lighting and high resolution, TVs offer the best picture of the three classroom display options.

Cons: The issue is that, unlike with projectors, you'll need to buy a bigger TV to make things visible in a bigger room. This problem grows as the room grows, and once you reach a certain classroom size you simply can't buy a TV large enough to give every student a clear view.

Interactive Whiteboards

Pros: Unlike the other two display options, whiteboards offer tangible interactivity. You can write and display notes on them, and even download software that guides collaborative learning.

Cons: There are very few "cons" to interactive whiteboards. Some models have a short lifespan, or bulbs that dim quickly, but they are such a powerful tool for the teacher and such an excellent learning device for the students that the good they do in a classroom quickly outweighs the bad.
14 Mar

How to Avoid Using Common Security Question Passwords

Is your first pet's name putting your computer at risk? Security questions are a very valuable tool for protecting data and preventing unauthorized access to electronic devices, but they can also be a giant hole in an organization's security plans. To make sure your security questions are really protecting you, follow these tips:

- Avoid using easily verifiable information – In the age of Facebook and Twitter, it's easy to find out lots of information about just about anyone. Some of the most commonly used security question answers – "What is your mother's maiden name?" "What's your favorite sports team?" "What high school did you attend?" – can be figured out by spending five minutes looking at someone's social media profile.
- Make up an answer – One way to avoid using an easily researched piece of information as the answer to a security question is to make up an answer that isn't true. Since security questions are meant for your protection and personal use, there's no rule that you have to give a truthful answer. As long as your answer is something you'll remember, the answer you create for your security questions can be anything. (See the sketch at the end of this piece.)
- Don't reuse security question answers – Another common mistake is using the same security question (and answer) for multiple accounts and devices. While this does make things easier to remember, it also means that figuring out the answer for one account gives someone access to all of them.

CyberlinkASP specializes in desktop virtualization and remote desktop services for businesses. Contact us today at 972-262-5200.
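One way to follow the second tip is to treat security-question answers like passwords. This minimal sketch (our illustration, not CyberlinkASP's) generates a random, per-site answer; the plain dict stands in for what should really be an encrypted password manager entry:

import secrets
import string

def fake_answer(length=16):
    """Generate a random answer no social media profile can reveal."""
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

vault = {}  # site -> {question: answer}; stand-in for a password manager
for site in ("bank.example", "email.example"):
    vault[site] = {"What is your mother's maiden name?": fake_answer()}

for site, qa in vault.items():
    print(site, qa)

Because each site gets a different random answer, the reuse problem from the third tip disappears as well.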
Combining Social Engineering & Malware Implementation Techniques

Cybercriminals will often use a combination of social engineering methods and malware implementation techniques in order to maximize the chances of infecting users' computers:

- Social engineering methods – including phishing attacks – help to attract the potential victim's attention.
- Malware implementation techniques – increase the likelihood of the infected object penetrating the victim's computer.

One early example was among the first worms designed to steal personal data from users' online accounts. The worm was distributed as an email attachment, and the email contained text designed to attract the victim's attention. To launch a copy of the worm from the attached ZIP archive, the virus writers exploited a vulnerability in the Internet Explorer browser. When the file was opened, the worm created a copy of itself on the victim's disk and then launched itself, without any system warnings or the need for any additional action by the user.

In another case, a spam email – with the word 'Hello' in the subject line – stated 'Look what they say about you' and included a link to an infected website. The website contained a script that downloaded LdPinch, a Trojan designed to steal passwords from the user's computer by exploiting a vulnerability in the Internet Explorer browser.
The prevalence of automation is everywhere in our modern, tech-first culture and continuously on the rise – with good reason. Cybersecurity experts see vast amounts of data and countless attempted breaches and become overwhelmed by two specific challenges: (1) effectively finding attacks hidden among billions of daily security events, and (2) efficiently responding to those attacks in a timely manner.

These challenges are not being addressed; in most SOCs, decades-old tools are used to do only a partial job. These tools are simple, rules-based systems, fundamentally limited in capability. For those testing new techniques, automation is consistently used at the wrong times and in the wrong ways. This leads to a rise in breaches and millions of unfilled security analyst positions. More specifically, these tools limit security teams to a process that relies heavily – and unfortunately – on human bandwidth. For example:

1. SIEM systems collect security event data and generate alerts based on a fixed rule set. Rules are shallow and limited to known IOCs, so they generate too much noise while simultaneously missing most new, unknown threats.
2. Security analysts triage alerts by investigating as many as possible and creating incidents for ones that may need remediation, which is an unscalable process.
3. Senior security and IT operations analysts evaluate the incidents and determine the appropriate response.
4. If they have enough time, senior analysts cyberhunt for new threats and generate new rules.

In response to these inefficiencies, it's only natural to turn to automation as a way to improve performance. However, not all automation is created equal. There are levels of automation, ranging from cognitive at the high end to robotic process at the low end.

Cognitive vs. robotic automation

One of the most distinctive elements of advanced automation is that it's "intelligent," with key capabilities such as deep reasoning, domain knowledge encapsulation, decision making, and adaptability. Harvard Business Review categorizes the ways security work can be automated:

1. Robotic process automation (RPA) – the use of a machine (physical or digital) to replace a piece of repetitive work. This automation type can handle routine tasks that don't require decision making. Manufacturing robots, for example, know exactly where to tighten a bolt every time.
2. Cognitive automation – a machine that improves its ability to conduct a given task over time, such as virtual assistants, image recognition, and self-driving cars. It can handle decision making and exploratory tasks when faced with situations it hasn't seen before.

These terms are consistently underused and misunderstood among today's cybersecurity teams.

SecOps automation today

Technology layers in today's cybersecurity chain attempt different actions with varying levels of automation. Specifically:

SIEMs: designed to alert on "bad" events based on rules that human analysts create and maintain. Each rule represents a single snapshot of a negative event pattern out of a potential universe of billions. Most enterprises, even after creating only a few dozen rules, are overwhelmed by alerts, most of which are false positives. The system doesn't learn from its experience.

Alert triage automation: intended to help humans evaluate whether the torrent of alerts from SIEM systems are real threats. While some of the tasks are routine and require robotic automation (e.g., checking against threat intelligence systems and blacklists), the task of determining if an alert is a true positive requires some cognitive automation. Without machine learning-driven automation, the system must fall back on manual steps, with analysts examining the data to judge an alert's severity. With cognitive automation, however, we can automate more of the manual process, thereby truly automating alert triage.

Incident response automation: constructed to automate the steps taken in response to incidents deemed high severity, so human analysts don't have to do the same thing over and over, and over. These tasks are very routine (e.g., creating new firewall rules or a new ticket in a case management system) and only require robotic process automation. These systems do not learn from their experience.

Threat hunting automation: designed to automate the much more challenging task of finding new, unknown threats in the environment. This requires exploration, judgement, and intuition, combined with context and history of the organization's environment. This is clearly a task primarily suited for cognitive automation. It is the highest level of automation and requires the system to intelligently learn from the skilled security analysts driving it.

What should SecOps automation look like?

Cognitive automation is a critical component if SecOps automation is to catch up with the monumental task of catching thousands of threats among billions of events. Ranked by the level of automation required, the activities and automation types include:

- Incident response: primarily robotic process automation
- Alert triage: requires cognitive automation, plus robotic
- Threat hunting: primarily cognitive automation

In a cognitive system, it's critical that the expertise and context of the human analyst be easily captured and used to further enhance the system. To accomplish this, all steps in the SecOps automation process need built-in feedback loops that capture output results (in the form of ranked threats and patterns) and also input logic (in the form of a security analyst's expert review). Building in such feedback loops will ultimately result in far less leakage from the system at each step in the process and lead to improved security overall.

Intelligent security automation

You could achieve the very basics with a solution that only provides robotic automation. However, it will not be able to fully automate alert triage, nor even begin to tackle threat hunting. An "Intelligent Security Automation" solution is one encompassing both cognitive and robotic automation, which will get you much further in your SOC automation journey.
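To make the two layers concrete, here is a toy sketch of a triage pipeline: a robotic, rule-based pre-filter, a "cognitive" scoring step, and an analyst feedback loop that adjusts the scoring weights. The feature names, weights, and threshold are invented for illustration and are not from any product:

BLACKLISTED_IPS = {"203.0.113.9"}  # robotic step: fixed known-bad lookup

weights = {"failed_logins": 0.4, "off_hours": 0.3, "new_geo": 0.5}

def triage(alert):
    """Robotic rule first, then a learned-looking score for everything else."""
    if alert["src_ip"] in BLACKLISTED_IPS:
        return "incident"
    score = sum(weights[k] * alert[k] for k in weights)
    return "incident" if score >= 0.6 else "dismiss"

def analyst_feedback(alert, verdict_was_correct):
    """Feedback loop: nudge weights toward the analyst's judgement."""
    delta = 0.05 if verdict_was_correct else -0.05
    for k in weights:
        if alert[k]:  # only adjust features that fired on this alert
            weights[k] = max(0.0, weights[k] + delta)

alert = {"src_ip": "198.51.100.7", "failed_logins": 1, "off_hours": 0, "new_geo": 1}
print(triage(alert))                                # 'incident' (0.4 + 0.5 >= 0.6)
analyst_feedback(alert, verdict_was_correct=False)  # analyst marks it a false positive
print(weights)                                      # the triggering features now weigh less

A real cognitive layer would replace the hand-tuned weights with a trained model, but the shape is the same: output flows to the analyst, and the analyst's review flows back into the system.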
Intellectual property (IP) is a lot like taxes: you don't care about the legal niceties until they apply to you. IP (not to be confused with the IP in TCP/IP, which stands for Internet Protocol) is the fuel that runs our information economy, the wellspring from which a thousand products bloom once the underlying IP has been created, established, and licensed to all comers.

When an inventor creates a valuable technology and patents it, there are two ways to go:

- exploit the technology directly and try to profit from being the sole supplier, or
- license it to anyone on a "fair, reasonable, and non-discriminatory" (FRAND) basis and nurture an entire industry.

There are plenty of examples of both. IP licensing can enable not just one company, but an entire industry. The internal combustion engine is not one thing but a series of interrelated systems, each of which was developed experimentally during a period from 1791, when John Barber developed the gas turbine, to 1892, when Rudolf Diesel invented compression ignition. None of these inventors could have made a working automobile alone, but since their inventions were all patented and licensed, others were able to build on them until something highly useful arose. Today's cars all incorporate these fundamental patents.

Thomas Edison invented and patented many things – 1,093 things, in fact. Take one, the light bulb. Others had already managed to run electrical current through a filament to make it glow, but the problem was an engineering one: how to make the element stretched between the positive and negative electric poles last long enough to be practical. He experimented and found the right material (carbon), but he also had to adjust many other elements – the thickness of the copper wires, the current, resistance, and voltage of the system – and then figure out an inexpensive manufacturing process to make the whole thing commercially viable. Although the War of Currents was bitter, pitting market leaders of competing electrical technologies against each other, Edison licensed the light bulb to other manufacturers with the goal of getting the invention into the hands of the greatest number of people possible.

There have been instances of inventors keeping their inventions to themselves. Dean Kamen created the Segway, a two-wheeled, self-balancing, battery-powered vehicle. Kamen and his backers had high hopes for the vehicle, but one thing they never did was license the technology to anyone else. In fact, Segway sued the entire hoverboard industry for patent infringement when various manufacturers, mostly foreign, used similar technology in their products without licensing it. Instead of presiding over a growing transportation sector, Segway has been reduced to the role of sole supplier of a niche technology for specialty applications.

The contrast in business models is on stark display in the history of the PC business. Apple was an early leader, being the first company to put together a fully integrated system with a graphical user interface (GUI). In 1991, Apple still had the largest market share of any vendor in the PC market. By contrast, Microsoft was something of an imitator, copying Apple's key GUI features (which Apple had in turn cribbed from Xerox). But the main difference between the two was that Apple, which in the early 1980s had a clear technological and market advantage, chose to keep everything to itself. All Apple's inventions and improvements went straight into Apple products, which were, admittedly, pretty good. Microsoft, which didn't make PCs itself but rather created software that made PCs run, chose to license its operating system to all comers on a FRAND basis. By 2000, Apple had dropped out of the top 10 vendors with less than a 2% market share. Microsoft had all the rest. Now, to be fair, Apple's subsequent rise as one of the world's top technology companies did help its PC market position, but the company never regained the dominance it once had in PCs.

The mobile industry is another great example of how an open licensing policy has led to rapid market growth, the rise of many companies, and innovation on multiple fronts. The basic patents for cellular technology are held by a variety of companies, including Ericsson, Nokia, Qualcomm, and Google (this last in the form of the former Motorola assets). Relations in the mobile industry have not always been tranquil (see the smartphone patent wars), but in general, these companies agreed to act together for the benefit of their customers (the handset makers and communications carriers). Ericsson and Qualcomm used to make phones but no longer do. Nokia continues to make phones. Google, in competition with its own Android customers, makes something north of a reference phone.

What an industry arose from this open regime! The collective IP licensed by these companies and others has spawned a tremendous industry, with hundreds of manufacturers, tens of thousands of software providers, billions of customers, and trillions of dollars in annual revenue. On the basis of this business model, the mobile industry has gone from selling bricks that no one would carry 20 years ago to today's amazing smartphones. There is literally a mobile phone for every person on the planet (although some people have several and others none). Around the world, the entire culture has been changed by the mobile industry. Everyone is dependent on their cell phone. Any photograph taken in public today has someone looking at their phone. This is what an open licensing regime can do.
Information About 802.11v

The controller supports the 802.11v amendment for wireless networks, which describes numerous enhancements to wireless network management.

One such enhancement is Network Assisted Power Savings, which helps clients improve battery life by enabling them to sleep longer. Mobile devices typically rely on idle-period behavior to remain connected to access points, and therefore consume more power performing the tasks described below while on a wireless network.

Another enhancement is Network Assisted Roaming, which enables the WLAN to send requests to associated clients, advising them of better APs to associate with. This is useful both for load balancing and for directing poorly connected clients.

Enabling 802.11v Network Assisted Power Savings

Wireless devices consume battery power to maintain their connection to access points in several ways:

- By waking up at regular intervals to listen to the access point beacons containing a DTIM, which indicates buffered broadcast or multicast traffic that the access point delivers to the clients.
- By sending null frames to the access points, in the form of keepalive messages, to maintain the connection.
- By periodically listening to beacons (even in the absence of DTIM fields) to synchronize their clock to that of the corresponding access point.

All these processes consume battery power, and this consumption particularly impacts devices (such as Apple devices) that use a conservative session timeout estimate and therefore wake up often to send keepalive messages. The 802.11 standard, without 802.11v, does not include any mechanism for the controller or the access points to communicate the session timeout to the local client.

To reduce the battery power clients spend on these tasks, the following features of the 802.11v standard are used:

- Directed Multicast Service
- Basic Service Set (BSS) Max Idle Period

Directed Multicast Service

Using Directed Multicast Service (DMS), the client requests that the access point transmit required multicast packets as unicast frames. This allows the client to receive the multicast packets it ignored while in sleep mode and also ensures Layer 2 reliability. Furthermore, the unicast frame is transmitted to the client at a potentially higher wireless link rate, which enables the client to receive the packet quickly by keeping the radio on for a shorter duration, thus also saving battery power. Since the wireless client no longer has to wake up at each DTIM interval to receive multicast traffic, longer sleeping intervals are allowed.

BSS Max Idle Period

The BSS Max Idle period is the timeframe during which an access point (AP) does not disassociate a client due to nonreceipt of frames from the connected client. This helps ensure that the client device does not send keepalive messages frequently. The idle period timer value is transmitted using the association and reassociation response frame from the access point to the client. The idle time value indicates the maximum time that a client can remain idle without transmitting any frame to an access point. As a result, clients remain in sleep mode for a longer duration without transmitting keepalive messages as often. This in turn contributes to saving battery power.
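A back-of-the-envelope sketch shows why the advertised idle period matters for battery life; the timeout values and safety margin below are assumptions for illustration, not values from the 802.11v specification:

def keepalives_per_hour(idle_period_s, safety_margin=0.8):
    """A client sends a keepalive before the idle period expires, with margin."""
    return 3600 / (idle_period_s * safety_margin)

# A conservative client-side guess at the session timeout versus a longer,
# explicit BSS Max Idle period advertised in the (re)association response:
for idle_s in (30, 300):
    print(f"idle period {idle_s:>3}s -> {keepalives_per_hour(idle_s):.0f} keepalive wakeups/hour")

Each avoided wakeup is radio time the client never spends, which is exactly the saving the amendment is after.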
Guru: Global Variables in Modules

December 13, 2021, Ted Holt

When I first learned to program computers (RPG II, COBOL 74), the only kind of variables I knew of were global variables. Any statement within a program was able to use any variable. It was not until I started my computer science degree that I found out about local variables, which are known to only part of a program. Since that time, it has been my practice to use local variables as much as possible and global variables only when necessary. Ideally an RPG program, service program, module, or subprocedure would have no global variables at all, but I don't live in an ideal world.

Today I want to write about an appropriate use of global variables in a module. When you consider that the typical RPG program or service program in most shops is built from only one module, what I have to say applies to RPG programming in general, not just *MODULE objects.

For an illustration, I'll use the topic of inventory. Let's say that we work for an organization that stores items in warehouses. Each item has attributes, of course, among them weight and dimensions (length, width, height). Here's a simplified item table with the columns we need for this illustration.

create table items
  ( ID           char ( 6) primary key,
    Description  for Descr varchar (25),
    Weight       dec ( 9, 3),
    Height       dec ( 7, 3),
    Width        dec ( 7, 3),
    Length       dec ( 7, 3));

insert into items values ('AB-101', '#10 Widget', 2.1, 7, 6, 10.2);

Since I am American, I'll say that we store weights and measures in U.S. customary units, or as I usually call them, English units. However, just because the data is stored in English units does not mean that the user should have to deal with English measurements. One of the best features of a database management system (as opposed to data files, like the System/36 files we had to use in antiquity) is that we can perceive the data in ways that differ from how the data is stored. (This is the concept behind views.) Therefore, even though the weight and dimensions of an item are stored in English measurements, the user should be able to view them in metric units instead.

Let's use the same concept in our programming. Suppose we have a service program of subprocedures that retrieve and manipulate item data, similar to the inventory service program I described recently. Among the item-related subprocedures are some that return the weight and measurements of an item. For example, there's a WeightOf subprocedure that operates like a built-in function. Give it an item number and it returns the weight of the item. There are also LengthOf, WidthOf, and HeightOf subprocedures that return the various dimensions of the item. There is a subprocedure that returns all three dimensions in parameters.

Since I want these subprocedures to be able to return the data in either English or metric units, I would need to pass a parameter to each subprocedure to specify which system of measurements to use. There would be nothing wrong with that approach.

ItemWeight = WeightOf (SomeItem: 'KG');

However, an alternative is to tell the service program once which system to use and be done with it. For that, we can define a global variable in the module. All the weight-and-measure-related subprocedures can check the value of the global variable and behave accordingly.

Here's part of copybook INVITEMS.

**free
dcl-s MeasurementSystem_t char(1) template;
dcl-c AmericanSystem const('0');
dcl-c MetricSystem const('1');

dcl-s Weight_t packed(9:5) template;
dcl-s Dimension_t packed(7:3) template;
dcl-s ItemNumber_t char(6) template;

dcl-pr WeightOf like(Weight_t);
   inItemNumber like(ItemNumber_t) const;
end-pr;

dcl-pr HeightOf like(Dimension_t);
   inItemNumber like(ItemNumber_t) const;
end-pr;

dcl-pr SetMeasure;
   inMeasurementSystem like(MeasurementSystem_t) const;
end-pr;

dcl-pr GetMeasure like(MeasurementSystem_t);
end-pr;

First is an enumerated data type that lists the supported measurement systems. It's in this copybook because the callers need to reference the constants, and may need to reference the template.

Next are some templates for common data. As I wrote recently, these definitions are how the caller perceives the data, which is not necessarily how the data are stored in the database. In this case, the data definitions match the definitions in the database.

Last are enough procedure prototypes to illustrate the concepts. Notice WeightOf and HeightOf. They return data, but there is no parameter to specify which system of weights and measures to use. Instead, the caller calls the SetMeasure subprocedure to let the subprocedures know which system to use.

Now that we understand the interfaces, let's see the implementation. This is module INVITEMS.

**free
ctl-opt nomain option(*srcstmt: *nodebugio);

/include prototypes,invitems
/include prototypes,assert

dcl-s MeasurementSystem like(MeasurementSystem_t) inz(AmericanSystem);

dcl-c LBS_TO_KG_FACTOR const(0.4535924);
dcl-c INCHES_TO_CM_FACTOR const(2.54);
dcl-c C_SQLEOF const('02000');

dcl-proc WeightOf export;
   dcl-pi *n like(Weight_t);
      inItemNumber like(ItemNumber_t) const;
   end-pi;

   dcl-s ItemWeight like(Weight_t);

   exec sql
      select it.Weight into :ItemWeight
        from items as it
       where it.ID = :inItemNumber;

   select;
      when SqlState = C_SQLEOF;
         clear ItemWeight;
      when SqlState > C_SQLEOF;
         assert (*off: 'Error in Weight function.');
   endsl;

   if MeasurementSystem = MetricSystem;
      eval(h) ItemWeight *= LBS_TO_KG_FACTOR;
   endif;

   return ItemWeight;
end-proc WeightOf;

dcl-proc HeightOf export;
   dcl-pi *n like(Dimension_t);
      inItemNumber like(ItemNumber_t) const;
   end-pi;

   dcl-s ItemHeight like(Dimension_t);

   exec sql
      select it.Height into :ItemHeight
        from items as it
       where it.ID = :inItemNumber;

   select;
      when SqlState = C_SQLEOF;
         clear ItemHeight;
      when SqlState > C_SQLEOF;
         assert (*off: 'Error in Height function.');
   endsl;

   if MeasurementSystem = MetricSystem;
      eval(h) ItemHeight *= INCHES_TO_CM_FACTOR;
   endif;

   return ItemHeight;
end-proc HeightOf;

dcl-proc SetMeasure export;
   dcl-pi *n;
      inMeasurementSystem like(MeasurementSystem_t) const;
   end-pi;

   MeasurementSystem = inMeasurementSystem;
end-proc SetMeasure;

dcl-proc GetMeasure export;
   dcl-pi *n like(MeasurementSystem_t);
   end-pi;

   return MeasurementSystem;
end-proc GetMeasure;

The variable MeasurementSystem is declared before (i.e., outside of) the subprocedures. This means that the subprocedures can reference it.

There are two ways for a caller to change the value of the MeasurementSystem global variable. The first way, which I don't like, is to export the variable in the module and import it in the caller.

In the module:

dcl-s MeasurementSystem like(MeasurementSystem_t) export;

In the callers:

dcl-s MeasurementSystem like(MeasurementSystem_t) import;

With this method, the caller changes the MeasurementSystem variable as it would any other variable.

MeasurementSystem = MetricSystem;

Now that you've seen it, I recommend you forget it.

The second way, which I do like, is to use a "setter" routine. In this module, the setter is subprocedure SetMeasure. A caller passes a parameter to SetMeasure, which changes the value of the global variable. If a caller needs to know the value of a global variable, it uses a "getter". I didn't make this up. Getters and setters are common in object-oriented languages like Java and C++.

The assert subprocedure came from here, which is where I normally get it when I need to install it on another system.

Here's a short calling program that uses these routines.

**free
ctl-opt actgrp(*new) option(*srcstmt: *nodebugio) bnddir('SYSTEM');

dcl-f qsysprt printer(132);

/include prototypes,InvItems

dcl-s UOM like(MeasurementSystem_t);
dcl-s ItemWeight like(Weight_t);
dcl-s ItemHeight like(Dimension_t);

*inlr = *on;

UOM = GetMeasure ();
writeln ('1. UOM=/' + UOM + '/');

SetMeasure (AmericanSystem);
ItemWeight = WeightOf ('AB-101');
writeln ('2. Weight=/' + %char(ItemWeight) + '/');
ItemHeight = HeightOf ('AB-101');
writeln ('3. Height=/' + %char(ItemHeight) + '/');

SetMeasure (MetricSystem);
ItemWeight = WeightOf ('AB-101');
writeln ('4. Weight=/' + %char(ItemWeight) + '/');
ItemHeight = HeightOf ('AB-101');
writeln ('5. Height=/' + %char(ItemHeight) + '/');

UOM = GetMeasure ();
writeln ('6. UOM=/' + UOM + '/');
return;

dcl-proc writeln;
   dcl-pi *n;
      inString varchar(132) const;
   end-pi;

   dcl-ds ReportLine len(132) end-ds;

   ReportLine = inString;
   write qsysprt ReportLine;
end-proc writeln;

And here's the output.

1. UOM=/0/
2. Weight=/2.10000/
3. Height=/7.000/
4. Weight=/.95254/
5. Height=/17.780/
6. UOM=/1/

I'll leave it to you to validate that the code worked correctly.

I can't say enough bad about global variables. They have been the source of innumerable bugs and wasted so much of my time. To say that I hate them is to put it mildly. But I freely admit that they have their uses.

Editor's Note: Don't miss Ted's special note in this issue of The Four Hundred.
What is Single Sign On (SSO)?

Single sign-on (SSO) is a security mechanism that allows an organization to manage its users and their access to resources within multiple web applications. SSO grants users access to all of their workplace applications with one set of credentials, authenticating users across multiple devices as well. Access to each resource or page is tied to the user's single SSO identity, no matter which application the user reaches it through.

Single sign-on is becoming common practice in modern business because of how easy it makes user authentication and password management. It's a simple, quick way to handle passwords without having to enter them each time you try to access an account, reducing the workplace anxiety that comes from having to remember so many passwords. A recent study found that workers lose 32 days a year of workplace productivity toggling between apps, switching an average of 10 times an hour. SSO speeds up access to your business apps by granting access to all necessary applications with one set of credentials per user.

If you provide services to customers over multiple properties or have two corporate-owned sites, you can use SSO to create a seamless experience for your clients. For example, you can keep your customers logged in across different devices (mobile, laptop, smart device). It's annoying when clients have to log back into their accounts on another device, but with SSO they don't even need to type their password again.

If employees aren't constantly having to remember which login information goes where, they'll spend less time trying to recall usernames and passwords. Instead, they can focus on performing their jobs rather than wasting valuable time.

Security is always a priority in today's tech-savvy world, and single sign-on can help protect your business from hackers by creating a safer login experience. Without SSO, you're likely asking your clients to remember several different passwords across all of their applications. This not only leaves users vulnerable if they forget one, but it also increases the likelihood of a data breach. Additionally, companies like yours will have to store an overabundance of passwords as you manage users across your clientele base.

Reduce Password Fatigue

One of the most common complaints from end users is that they need to remember so many passwords. A Gartner survey states that up to 50% of all help desk calls are for password resets, while Forrester Research states that the average help desk labor cost for a single password reset is about $70.

Reinforce Separation Between Personal & Work Accounts

Another benefit of implementing SSO is its ability to reduce friction between work and personal applications. By requiring every employee to use SSO, your IT admins essentially gain control over access across all platforms used within the organization, including personal laptops or mobile devices owned by employees. Requiring end users to authenticate separately for work versus personal apps slows productivity by introducing yet another barrier. No matter what industry you operate in or how robust your security controls already are, allowing employees unrestricted access through multiple doors encourages bad behavior at best and invites dangerous mistakes at worst.
Implementing single sign-on (SSO) lets your employees use one login/password combination across all their business-related apps, alleviating some of their frustration and making them more productive. It may not sound like a big deal now, but as your team grows over time, managing those user accounts can become a huge headache. Thanks to product and platform advances, SSO is easier to implement and more accessible than ever before. With increased accessibility comes increased value, which often translates into cost savings. For example, one study estimates that companies using SSO experience an average 54% reduction in help desk tickets compared to companies without it.

With so many passwords to keep track of, it's inevitable that users will forget them, either on purpose or by accident. SSO reduces these errors because there is one username and password combination to remember. It also helps eliminate errors created by typos, for example, when entering a password into an email login field instead of an online banking field. By logging in once per day, users can avoid small mistakes that could lead to a breach in security or costly losses from fines and refunds.

These kinds of costs add up over time, leading some companies to invest in credential management systems that employ multifactor authentication, or MFA. These systems combine unique usernames and passwords with additional information like biometrics (fingerprints, facial recognition) or physical hardware (security keys). The more layers you have protecting your data, the less likely you are to experience serious damage due to minor slip-ups.

Additionally, centralized sign-on makes employee access auditing simpler, since admins can see how many times each person logs in each month. This gives them a better understanding of how often employees need additional training and what types of risk factors they might face during work hours.

Ultimately, single sign-on creates a system that is both secure and simple for employees to use, without complicated setup processes for new employees or painful training sessions about network permissions and usernames, passwords, PINs, codes, and badges. If every employee uses a single login no matter which device they're using or where they connect from, people won't get confused about which credentials to use where, meaning fewer breaches due to confusion around who has permission to do what. Using a single login also simplifies administration by centralizing identity management within one system. SSO can significantly boost operational efficiency while increasing user engagement and creating more effective ways of expanding the digital experience.
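The mechanics behind the "one credential, many apps" idea can be sketched in a few lines. This toy uses a shared-secret JWT for brevity; production SSO typically runs on SAML or OpenID Connect with asymmetric keys, and the secret, issuer, and claims here are invented:

import time
import jwt  # PyJWT

SECRET = "demo-only-shared-secret"  # stand-in for real key material

def issue_sso_token(user):
    """The identity provider signs one token after a single login."""
    claims = {"sub": user, "iss": "idp.example", "exp": int(time.time()) + 3600}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def app_accepts(token):
    """Each application verifies the same token instead of keeping its own password."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"], issuer="idp.example")
    return claims["sub"]

token = issue_sso_token("alice")
for app in ("mail", "crm", "wiki"):
    print(app, "logged in:", app_accepts(token))

The point of the sketch is the shape of the trust: the applications never see a password at all, only a token they can verify, which is what removes the per-app credential sprawl described above.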
Banks, credit unions, and lending companies are among the top institutions that need a robust cybersecurity system in order to survive. On top of financial investments, they hold copious amounts of data that can easily be exploited when left unprotected. With just a few clicks, cyberattackers can wipe out every penny. So how can finance firms keep the top data breaches from happening to their systems?

Depending on the root cause of the breach, several security measures can be applied to protect assets. Keeping software updated, patching application vulnerabilities, securing data storage, and encrypting the devices that contain sensitive information are some of the ways to prevent data breaches in financial institutions.

Dealing with Data Breaches

A data breach is a cybersecurity incident in which sensitive customer information becomes compromised. Data breaches expose personal and financial information, which is often sold on the black market and circulated among identity thieves. It's not easy to recover once data has been compromised, but breaches can be prevented by strengthening a firm's security measures. Studies show that 88% of data breaches have followed consistent patterns over the years. By deciphering these patterns, it's possible to minimize the risk of a data breach in a financial institution.

Top Data Breaches in Finance

Even big companies are vulnerable to data breaches, especially if their security measures are loose in some way. Here are the biggest data breaches in the world of finance and banking:

CitiFinancial (2005)

Affected Parties: 3.9 million customers

How it Happened: Tapes containing the names, addresses, Social Security numbers, account numbers, payment histories, and other personal information of CitiFinancial's 3.9 million customers were lost in transit by the United Parcel Service (UPS). While the company claims that the data were not stolen or compromised, the incident came amid a rise in data security failures reported by institutions that compile personal information (banks, data brokers, universities, and more).

How it Was Resolved: UPS conducted an investigation, while CitiFinancial informed both its 3.9 million customers and the Secret Service about the incident. It mailed letters to customers offering a 90-day free credit monitoring service. The incident also pushed the California government to strengthen the law requiring institutions – including private companies, government agencies, and nonprofit organizations – to inform customers when their data has been compromised.

How it Could Be Avoided: Since CitiFinancial claims there were no indications of theft, the only conclusion to draw is that human error played a big part in the loss of the tapes in transit. To prevent this, heightened security during the transit of sensitive information must be implemented. The tapes, or any other physical storage of the data, should also be encrypted to prevent unauthorized people from accessing the information.

Educational Credit Management Corp. (2010)

Affected Parties: 3.3 million customers

How it Happened: Educational Credit Management Corp. is a nonprofit organization that helps students deal with their loans. The data, which included names, addresses, birthdays, and Social Security numbers, was kept on portable media. The device was suspected to have been stolen in March of 2010. ECMC claimed that although personal information was among the data on the portable media, it did not contain financial information like bank account data or credit card numbers. The corporation also did not confirm whether or not the stolen device was encrypted.

How it Was Resolved: ECMC immediately notified law enforcement agencies to start an investigation to help recover the missing portable media. Meanwhile, ECMC offered free credit monitoring and protection services, in partnership with Experian, to all the affected borrowers.

How it Could Be Avoided: Since this data breach is suspected to have been caused by physical theft of a data-carrying device, it would help to keep all important information in the cloud. Having crucial information stored on portable devices like USB drives and hard drives is incredibly risky, especially when cloud-based encryption is a convenient option for even the smallest banks.

Data Processors International (2003)

Affected Parties: 8 million credit card numbers (including 2.2 million issued by MasterCard and 3.4 million by Visa)

How it Happened: In 2003, Data Processors International's security system was hacked and around 8 million credit card accounts were accessed. This included cards issued by MasterCard, Visa, American Express, and Discover Financial Services, among others. Both MasterCard and Visa notified the banks of the affected cards. Luckily, no fraudulent activity on the affected accounts was reported.

How it Was Resolved: DPI immediately sought the help of the Secret Service as well as the FBI to track down the computer hacker responsible for intruding into the company's system. However, the perpetrator was not caught. This incident also contributed to the eventual passing of a law in California that requires institutions to inform affected customers of a data breach.

How it Could Be Avoided: DPI's case is an instance of hacking, the most common cause of data breaches. Hacking can happen in several ways, but in mass-scale corporate attacks the most common gateways are vulnerable applications. This can be avoided by keeping software, hardware, and applications patched and up to date.

Korea Credit Bureau (2014)

Affected Parties: 20 million South Koreans

How it Happened: An employee of the Korea Credit Bureau secretly copied customer information to an external drive over the course of a year and a half. The information included identification numbers, names, addresses, and credit card numbers. The incident alarmed South Koreans, as the country has one of the highest rates of credit card use in the world. Eventually, the perpetrator was caught and the companies were fined.

How it Was Resolved: After the incident, a special task force was created to investigate the impact of the theft. The executives of the three affected credit card companies issued a public apology. The companies were suspended from issuing new credit cards for three months after the incident. They were also fined 6 million won (about $5,640), in addition to the compensation they had to pay for the customers' financial losses.

How it Could Be Avoided: This theft was carried out by an employee who had access to the data. It can be prevented by thoroughly checking the background of any worker you trust to hold the financial and personal information of customers.

Equifax, Inc. (2017)

Affected Parties: 143 million U.S. accounts

How it Happened: Equifax is a credit monitoring company that caters to Americans, including the high-profile accounts of politicians and celebrities. Using an unpatched Apache Struts vulnerability, the hackers were able to access sensitive information from May to July 2017. Equifax discovered the data breach on July 29, 2017, but did not publicize the issue until early September.

How it Was Resolved: Before notifying the public, some senior executives of Equifax, Inc. sold company shares worth $1.8 million. Equifax offered free credit monitoring to the affected customers and set up a special website to help customers check whether their personal information had been compromised. Congress was also called on to reform the data protection policies in effect.

How it Could Be Avoided: This massive theft of customers' Social Security numbers was caused by an unnoticed, faulty Apache Struts installation. This kind of vulnerability can be addressed by frequently updating applications and patching software to make sure there are no backdoors for a hacker to use.

How Much Will Data Breach Damages Cost You?

Data breach damages cost financial institutions anywhere from $1.25 million to $8.19 million. Banks have the second-largest spending when it comes to the total cost of a data breach, at $5.86 million. Each breached account can cost an average of $206.

What Can You Do When Data Has Been Compromised?

Even with so many preventive measures and added security layers to protect sensitive data, there is still a risk that a data breach might occur. In the unfortunate case that this happens, here's what you need to do to contain the issue and prevent it from progressing:

- Inform the customers and the law enforcement authorities, with a detailed explanation of what happened.
- Offer customers the right protection, such as credit monitoring.
- Upgrade the security system to prevent the same incident from happening again.
- Enforce policies in the office that will help strengthen security.

Protect Your Data with Abacus

Minimize the risk of a data breach with the multi-layered, comprehensive security plans we offer at Abacus. Aside from specializing in improved security measures to avoid compromised data, our experts at Abacus are also experienced in mitigating the damage after a data breach. For an all-around security measure that will protect your company, start your consultation with us by calling (856) 505 6860 or sending an email to email@example.com.
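The "encrypt the tape or drive" recommendation that recurs above is cheap to follow. Here is a minimal sketch using the symmetric Fernet scheme from Python's cryptography package; the sample record is fabricated, and key management (where the key lives, which is the hard part) is out of scope:

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: kept in a KMS/HSM, never alongside the data
f = Fernet(key)

record = b"ID=12345;SSN=000-00-0000;card=4111-xxxx"  # fabricated sample record
token = f.encrypt(record)    # this ciphertext is what sits on the tape or drive
print(token[:40])

# A stolen device without the key yields only ciphertext; with the key:
print(f.decrypt(token))

Had the CitiFinancial tapes or the ECMC drive carried only ciphertext like this, their loss would have been an inconvenience rather than a breach.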
The Beginnings of ERP and How It Has Evolved

The Year 2000 problem (also known as the Y2K problem, the Millennium bug, the Y2K bug, or simply Y2K) affected both digital (computer-related) and non-digital documentation and data storage, and resulted from the practice of abbreviating a four-digit year to two digits.

In 1997, the British Standards Institution (BSI) developed a standard (DISC PD2000-1) that identified two major problems present in many computer programs. First, representing the year with only two digits becomes problematic because logical errors arise on the "rollover" from x99 to x00. This caused some date-related processing to operate incorrectly for dates and times on and after 1 January 2000, and on other critical dates billed as "event horizons". Without corrective action, long-working systems would break down when the "...97, 98, 99, 00..." ascending-numbering assumption suddenly became invalid. Second, some programmers had misunderstood the Gregorian calendar's leap-year rule: years evenly divisible by 100 are not leap years unless they are also divisible by 400. Thus the year 2000 was a leap year, even though 1900 was not. A short sketch of both pitfalls follows.
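To make the two pitfalls concrete, here is a small sketch; it is not drawn from the BSI standard, and the pivot value of 70 is an illustrative assumption. It shows the correct Gregorian leap-year rule and the "windowing" repair many systems applied to two-digit years.

```python
# Sketch of the two Y2K pitfalls described above.

def is_leap_year(year: int) -> bool:
    """Gregorian rule: divisible by 4, except centuries,
    unless the century is also divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


def widen_two_digit_year(yy: int, pivot: int = 70) -> int:
    """A common post-Y2K repair ("windowing"): two-digit years at or
    above the pivot map to 19xx, those below it to 20xx.
    The pivot of 70 is an illustrative assumption."""
    return 1900 + yy if yy >= pivot else 2000 + yy


if __name__ == "__main__":
    assert is_leap_year(2000)      # divisible by 400 -> leap year
    assert not is_leap_year(1900)  # divisible by 100 but not 400 -> common year

    # The rollover bug: as bare two-digit years, 2000 ("00") sorts before 1999 ("99").
    assert sorted([99, 0]) == [0, 99]
    # Widening restores the real chronological order.
    assert sorted(widen_two_digit_year(y) for y in (99, 0)) == [1999, 2000]
```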
Companies and organizations worldwide checked, fixed, and upgraded their computer systems, and that scrutiny led to the rearrangement of many ERP systems. Information technology (IT) companies experienced rapid growth in the 1990s because the Year 2000 problem and the introduction of the euro disrupted legacy systems, and many companies took the opportunity to replace those systems with ERP. The rapid growth IT companies enjoyed from ERP implementations was followed by a slump in sales once these issues had been addressed.

ERP systems initially focused on automating back-office functions that did not directly affect customers or the general public. Front-office functions such as customer relationship management (CRM), which deal directly with customers, e-business systems such as e-commerce, e-government, e-telecom, and e-finance, and supplier relationship management (SRM) were integrated later, once the Internet simplified communication with external parties.

The term "ERP II" was coined in the early 2000s. It describes web-based software that gives both employees and partners (such as suppliers and customers) real-time access to the systems. The role of ERP II expands from the resource optimization and transaction processing of traditional ERP to leveraging information about those resources in the enterprise's efforts to collaborate with other enterprises, not just to conduct e-commerce buying and selling. Compared with first-generation ERP, ERP II is more flexible: rather than confining the system's capabilities within the organization, it is designed to go beyond the corporate walls and interact with other systems. "Enterprise application suite" is an alternate name for such systems.

As a trusted partner of SAP since 2005, Cornerstone continues to provide SAP Business One ERP business management software and related technical services to small and midsized companies in the wholesale distribution, manufacturing, and online retail industries. The flexibility of SAP Business One (SAP B1), combined with Cornerstone Consulting's customization expertise, allows a company to implement a fully integrated ERP business management solution designed to meet its specific needs. Now, instead of your employees adapting to the software, the system adapts to the needs of your business, with fully integrated accounting, customer relationship management (CRM), inventory tracking and control, and material requirements planning (MRP).

The Cornerstone Consulting staff understands that business owners do not have time to learn the intricacies of a complicated software program in order to operate their enterprises effectively and efficiently. If time is money, then wasted time is wasted money. SAP Business One offers many benefits, including instant, real-time access to information from the finance, manufacturing, and sales departments. Users can also make more informed decisions by generating accurate, up-to-the-minute analytics with the built-in Crystal Reports. The system's automatic workflow alerts warn users based on individualized business rules about their data, permitting immediate action in an emergency. Such an early-warning system lets businesses anticipate problems ahead of time, saving both time and money. Further, the platform is designed to be a mobile ERP solution and can run on iPads and iPhones.

To learn more about SAP Business One ERP and how it can help your business, please call Cornerstone Consulting at 813-321-1300 today.

The article from which the information about the origin of ERP was excerpted: http://tinyurl.com/79muh4l