Dataset column summary:
- text: string, 234 to 589k characters
- id: string, 47 characters
- dump: string, 62 classes
- url: string, 16 to 734 characters
- date: string, 20 characters
- file_path: string, 109 to 155 characters
- language: string, 1 class
- language_score: float64, 0.65 to 1
- token_count: int64, 57 to 124k
- score: float64, 2.52 to 4.91
- int_score: int64, 3 to 5
Lack of Symmetry in Qubits Can’t Fix Errors in Quantum Computing, Might Explain Matter/Antimatter (Phys.org) A team of quantum theorists at Los Alamos National Laboratory (LANL) seeking to cure a basic problem with quantum annealing computers—they have to run at a relatively slow pace to operate properly—found something intriguing instead. “Although our discovery did not cure the annealing time restriction, it brought a class of new physics problems that can now be studied with quantum annealers without requiring them to be slow,” said Nikolai Sinitsyn, a theoretical physicist at LANL. While probing how quantum annealers perform when operated faster than desired, the team unexpectedly discovered a new effect that may account for the imbalanced distribution of matter and antimatter in the universe, as well as a novel approach to separating isotopes. Significantly, this finding hints at how at least two famous scientific problems may be resolved in the future. The first is the apparent asymmetry between matter and antimatter in the universe. Another long-standing problem that could benefit from this effect is isotope separation: natural uranium, for instance, must often be separated into enriched and depleted isotopes so that the enriched uranium can be used for nuclear power or national security purposes.
<urn:uuid:a1336a63-7c67-4ac2-8e6b-72379be972ca>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/lack-of-symmetry-in-qubits-cant-fix-errors-in-quantum-computing-might-explain-matter-antimatter/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00519.warc.gz
en
0.895036
270
3.296875
3
Hackers can use AC power lines as a covert channel to extract data from air-gapped networks: malicious code controls a computer's power consumption by regulating CPU core utilization and modulates the data onto the resulting power fluctuations. The attacker places a probe that measures the conducted emission on the power lines, processes the signal, and then decodes it back into binary information. Security researchers from Ben-Gurion University of the Negev, Israel, presented this new type of covert channel, dubbed PowerHammer, which allows attackers to extract data from air-gapped computers through AC power lines. An air-gapped network refers to a secure computer or network that is isolated from unsecured networks and maintained under strict regulations to ensure maximum protection. Such networks are used in military and defense systems, critical infrastructure, the finance sector, and other industries. The researchers presented two versions of the attack, line-level power-hammering and phase-level power-hammering, which measure the emission conducted on the power cables. Air-gap covert channels are special covert channels that enable communication from air-gapped computers, mainly for the purpose of data exfiltration. They can be classified as electromagnetic, magnetic, acoustic, thermal, and optical; with this paper, the researchers introduced a covert channel based on electric current flow.
PowerHammer Attack Model
The targeted air-gapped computer first needs to be infected with the malware by means of social engineering, supply chain attacks, or malicious insiders. The receiver, a non-invasive probe, is then attached to the power line feeding the computer or to the main electrical panel to measure the modulated signals, decode them, and send the result to the attacker. With the probe in place, the malware starts retrieving data of interest to the attacker. "The data might be files, encryption keys, credential tokens, or passwords." In the exfiltration phase, the malware starts leaking the data by encoding it and transmitting it through signals injected into the power lines; the signals are generated based on the workload placed on the CPU cores.
Line-Level and Phase-Level PowerHammer Attacks
In the line-level attack, the attacker taps the in-home power lines that are directly attached to the electrical outlet. In the phase-level attack, the attacker taps the power lines at the phase level, in the main electrical service panel. The researchers evaluated the covert channel in different scenarios with three types of computers: a desktop PC, a server, and a low-power IoT device. The researchers said, "Our results show that binary data can be covertly exfiltrated from air-gapped computers through the power lines at bit rates of 1000 bit/sec for the line level power-hammering attack and 10 bit/sec for the phase level power-hammering attack." Last September, security researchers from Ben-Gurion University of the Negev (BGU) introduced another covert channel, named aIR-Jumper, which uses infrared light and surveillance cameras as a communication channel.
<urn:uuid:e33a5fd5-5e95-48d3-bd02-054cf470df33>
CC-MAIN-2022-40
https://gbhackers.com/powerhammer-air-gapped-computer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00519.warc.gz
en
0.903666
620
2.53125
3
Interoperability Challenges in IoT
The Internet of Things will be instrumental in shaping the future evolution of the Internet by enabling linkages between heterogeneous things/machines/smart objects not only amongst themselves but also with the Internet, resulting in the creation of value-added open and interoperable services/applications. The Institute of Electrical and Electronics Engineers (IEEE) defines interoperability as "the ability of two or more systems or components to exchange data and use information". McKinsey asserts that interoperability enables 40 percent of the total potential economic value from IoT. A risk of non-interoperability is loss of key information, which can be catastrophic for users/applications in the health and emergency domains. According to the European Commission, fostering a consistent, interoperable and accessible Internet of Things remains one of the biggest challenges. This is due to the following idiosyncrasies of IoT:
• Co-existence of multifarious systems (devices, sensors, equipment, etc.) that interchange location/time-dependent information in varied data formats, languages, data models/constructs, data quality and complex interrelationships.
• Multi-version systems designed by manufacturers over time for varied application domains, making formulation of global agreements and commonly accepted specifications very difficult.
• New "Things" that get introduced and which support new, unanticipated structures and protocols.
• Existence of low-powered devices which need to exchange data over "lossy" networks and may have minimal likelihood of, or accessibility to, a power recharge for months or years.
• Heterogeneous, multi-vendor and dispersed characteristics of IoT networks.
A recent study by the International Organization for Standardization (ISO) revealed 400+ standards related to IoT. This plethora of standards intensifies the constant dilemma faced by a CIO working with IoT implementations. A vendor's view is usually biased toward its offerings. Microcontroller vendors focus on device-level protocols, microprocessor vendors emphasize protocols at the router level, and cloud-offering vendors focus on higher-level application protocols. To assist the CIO, the section below provides a ready reckoner on the leading standards/protocols available at various layers of the Open System Interconnection model, i.e., at the Link Layer, Network Layer, Transport Layer and Application Layer.
1. Link Layer: Link Layer protocols determine how the data is physically sent over the network's physical layer or medium (e.g., copper wire, radio waves). Some of the relevant Link Layer standards are IEEE 802.3 (wired Ethernet), IEEE 802.11 (wireless LAN), IEEE 802.16 (wireless broadband), IEEE 802.15.4 (low-rate wireless networks for power-constrained devices) and 2G/3G/4G (mobile communication).
2. Network Layer: The network layer performs host addressing and packet routing. This is done using IP addressing schemes such as Internet Protocol version 4 (IPv4), Internet Protocol version 6 (IPv6) and IPv6 over low-power wireless personal area networks (6LoWPAN). IPv6 uses a 128-bit address scheme that allows 2^128 addresses, compared to IPv4, whose 32-bit address scheme allows 2^32 addresses.
3. Transport Layer: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are the two prevalent transport layer protocols. TCP ensures reliable/orderly transmission of packets, error detection and flow/congestion control capability. 
UDP is used for time-sensitive applications where packet dropping is preferable to delayed packets. UDP applications have no overhead of connection setups nor have requirements for message ordering, duplication elimination & congestion control. 4. Application Layer: Application layer protocols define how the applications interface with the lower layer protocols to send data over the network. Some of the IoT relevant application layer protocols are Hypertext Transfer Protocol (HTTP), Advanced Message Queuing Protocol (AMQP), Extensible Messaging and Presence Protocol (XMPP), Message Queuing Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and Data Distribution Service (DDS). MQTT is well suited for constrained environments where devices have limited processing/ memory resources and low network bandwidth. DDS is a device-to-device standard for real-time, high-performance data exchange. CoAP is another device-to-device standard for environments with constrained devices and low power/ lossy networks. While standardization should be driven by Standards Developing Organizations - collaboration is essential with Open Source communities, Special Interest Groups & Certification forums. We should also leverage learning from the mobile devices industry where interoperability was achieved not only by instituting global standards but also via the Global Certification forum comprising handset manufacturers, test equipment manufacturers, and network operators.
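To make the transport-layer contrast above concrete, here is a minimal sketch, using only Python's standard library, of sending the same payload over UDP (no connection setup, no delivery guarantee) and over TCP (handshake first, then a reliable, ordered stream). The host, port and payload are placeholders; this difference is also why application protocols such as MQTT typically run over TCP while CoAP targets UDP.

```python
import socket

payload = b'{"sensor": "temp-01", "value": 22.5}'
host, port = "192.0.2.10", 9000    # placeholder address and port

# UDP: a fire-and-forget datagram; no connection setup, ordering, or retransmission.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(payload, (host, port))
udp.close()

# TCP: a three-way handshake first, then a reliable, ordered byte stream.
# (connect() only succeeds if something is actually listening at host:port.)
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect((host, port))
tcp.sendall(payload)
tcp.close()
```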
<urn:uuid:4700f1fa-45a7-4906-97cc-d0cdbac1bd90>
CC-MAIN-2022-40
https://company-of-the-year.ciotechoutlook.com/cioviewpoint/interoperability-challenges-in-iot-nid-2451-cid-113.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00519.warc.gz
en
0.868762
968
2.84375
3
Risk Based Vulnerability Management (or RBVM) is a process by which one evaluates the business risk an organization carries as a result of its vulnerable digital assets, and helps the organization achieve an acceptable security posture by prioritizing remediation. Traditional compliance-driven approaches (e.g., ISO 27001:2013, NIST CSF) rely on manual audits, which are time-consuming and laborious. They are enough for an organization to achieve basic cyber hygiene, but not sufficient to counter cyber-attacks, which are ever changing. Vulnerability assessment, by contrast, is typically tool based: it identifies vulnerabilities in assets by looking them up in popular vulnerability databases such as the NVD (National Vulnerability Database). Another class of issues is misconfiguration. Though these are not classic vulnerabilities, they do render the asset vulnerable. These are checks performed against popularly accepted industry benchmarks such as CIS (Center for Internet Security). The IT infrastructure of an organization today is diverse, spread across the cloud, on-premise systems, and employees working from home, and the vulnerability assessment must cover assets in all of these scenarios. The findings from these assessments are quite technical in nature. Severity of the issues is based on CVSS (Common Vulnerability Scoring System), which is constant and does not take the organization's context into account. Reports differ for each asset type; given 8-10 asset types, there is no easy way to correlate and normalize them. The number of issues identified is large: with tens of assets, the identified issues can run into the thousands. Based on this, two reasons can be attributed to most incidents: i. The organization did not know it had the issue. ii. The organization did know it had the issue, but it got buried under tons of other issues. Every organization is constrained in the time and money it can invest in cybersecurity. When the number of issues identified is too large, prioritization becomes the challenge. It boils down to managing risk, and the first step is identifying the risks. Risk is the potential for loss, damage, or destruction of an asset as a result of a threat exploiting a vulnerability; it is the intersection of assets, threats, and vulnerabilities. The process is to identify vulnerabilities across the organization, model the risk for each identified weakness, prioritize them, and start remediating the top risks, so that for the same time and effort the organization de-risks itself optimally. Research has shown that organizations suffer 80% fewer breaches by adopting a Risk Based Vulnerability Management model. DeRisk Center follows a Risk Based Vulnerability Management model. It identifies an asset's inherent weaknesses by performing a combination of VA, misconfiguration checks, and PT (Penetration Testing); this is the inside-out view. Using threat intelligence, it identifies the threats from the outside (the outside-in view). It computes the likelihood of a breach for each identified weakness, models and scores the risk objects, builds a prioritized risk register, and suggests remediations.
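To illustrate the prioritization idea in a toy form, the sketch below combines a CVSS-style severity with a likelihood estimate and an asset-criticality weight to rank findings. This is not the scoring model of any particular product; the weighting scheme and every number in it are hypothetical.

```python
# Illustrative risk-ranking sketch; the formula and all values are hypothetical.
findings = [
    {"asset": "payroll-db",   "issue": "SQL injection",          "cvss": 9.8, "likelihood": 0.6, "criticality": 1.0},
    {"asset": "intranet-cms", "issue": "Outdated TLS config",    "cvss": 5.3, "likelihood": 0.4, "criticality": 0.5},
    {"asset": "dev-laptop",   "issue": "Missing disk encryption", "cvss": 4.6, "likelihood": 0.2, "criticality": 0.3},
]

def risk_score(finding):
    # Risk as the intersection of vulnerability severity, threat likelihood,
    # and how much the asset matters to the business.
    return finding["cvss"] * finding["likelihood"] * finding["criticality"]

# The prioritized risk register: highest modeled risk first.
register = sorted(findings, key=risk_score, reverse=True)
for rank, finding in enumerate(register, start=1):
    print(f"{rank}. {finding['asset']:12s} {finding['issue']:25s} risk={risk_score(finding):.2f}")
```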
<urn:uuid:9892b8fc-9930-470e-a342-41896ef150cd>
CC-MAIN-2022-40
https://seconize.co/blog/risk-based-vulnerability-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00519.warc.gz
en
0.941322
622
2.953125
3
Clustering Ship Data to Identify Port Boundaries There are many reasons that researchers analyze maritime traffic. For example, commodity traders analyze the flow of goods, port operators coordinate ground transportation and minimize vessel time in port, and government agencies track economic statistics. Regardless of the use case, accurately predicting and reporting when vessels enter and exit ports is paramount. The basis of this analysis is the World Port Index (WPI), which our Optix platform includes as a geographic port layer. We found one element of this data set lacking: ports are represented by a single point, which provides no information about the port’s effective geographic extent. This is especially relevant for determining what vessels are in port, which is useful data for tracking where those vessels have traveled as well as for building analytics related to the port itself. To improve upon the WPI’s lack of spatial resolution, we set out to apply a data-driven approach to port boundaries, applying several clustering techniques to a large corpus of historical maritime data that we had in our Optix platform. The positional data used for this initiative was from the Automatic Identification System (AIS), a global positional system used by vessels. Vessels use AIS when emitting their location and other navigational metadata to other vessels. We have real-time and historical AIS data from our partner exactEarth, which leverages satellites and terrestrial collectors to gather that data and create a global picture of all vessels. To trim down the data to a more manageable size, we filtered it down to only vessels with indicators that they were in port. This resulted in a data set of 812,593,583 observations. As the base for our investigation we used World Port Index data. This data is primarily used for navigation, and provides both the location of ports as well as an abundance of metadata. With the AIS data collected, we set off to create port boundaries from the data locations. We tried a couple of different clustering methods before settling on Density-based spatial clustering of applications with noise(DBSCAN). DBSCAN performs clustering in an unsupervised manner by aggregating points together into core points, which must be within a set distance of a set number of other points. Any point within that set distance of those core points is considered within the cluster. If a point isn’t within any core points, then it is marked as an outlier. This clustering method has a number of advantages. The density-based approach means that DBSCAN can capture clusters of any shape, and its tendency to ignore outliers is also useful. DBSCAN can also be easily implemented with haversine distance, which provides a more accurate spatial measure of the data (due to these points being located on a sphere). We ran DBSCAN over the filtered data, and after tuning hyper-parameters to optimally fit the WPI, we created clusters from the data we collected. Once we had our clusters, we constructed port polygons by calculating the convex hull for each cluster and then buffering it slightly. Once done, we could associate the polygons with the WPI port points and label the polygons. If multiple ports were within a polygon, we split up that polygon into individual ports. If there was no cluster close enough, we created a circular buffer around the WPI point as a generic boundary. 
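As an illustration of the clustering step, here is a minimal sketch using scikit-learn's DBSCAN with the haversine metric. The coordinates, eps and min_samples values are made-up stand-ins, not the tuned hyper-parameters mentioned above.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# A handful of AIS positions in degrees (lat, lon); values are invented.
points = np.array([
    [1.2644, 103.8400],   # near the Port of Singapore
    [1.2650, 103.8412],
    [1.2661, 103.8395],
    [51.9496, 4.1453],    # an isolated report, should become noise
])

coords_rad = np.radians(points)       # the haversine metric expects radians
eps_km = 2.0                          # hypothetical neighbourhood radius
earth_radius_km = 6371.0

db = DBSCAN(
    eps=eps_km / earth_radius_km,     # eps must also be expressed in radians
    min_samples=3,                    # hypothetical density threshold
    metric="haversine",
    algorithm="ball_tree",
).fit(coords_rad)

print(db.labels_)                     # e.g. [0 0 0 -1]; -1 marks outliers
```

From there, a geometry library such as shapely can produce the hull-and-buffer polygons for each labeled cluster (for example, MultiPoint(cluster_points).convex_hull.buffer(...)).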
Here’s what the results look like, shown with vessels that reported as moored: Starting with just a dataset of vessel locations, we were able to construct an accurate and comprehensive worldwide picture of all active ports contained within the WPI. This enriched layer will provide a good basis for port analytics going forward and help expand our maritime capabilities, allowing for more in-depth analytics and insights. We are excited to see what we’ll be able to build going forward with this data as it fuels other initiatives such as ETA prediction for vessels, route analysis, and port capacity evaluation.
<urn:uuid:323c731e-2dce-4f15-86e1-6c09681d51d3>
CC-MAIN-2022-40
https://www.ga-ccri.com/clustering-ship-data-to-identify-port-boundaries
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00519.warc.gz
en
0.943751
823
2.8125
3
Drones are increasingly popular across the federal government, with the military services and agencies such as the Interior Department and the Forest Service all using or testing unmanned aerial vehicles, or UAVs. Shawn McCarthy, research director for the Government Insights group at IDC, notes that the ease of data collection enabled by drones can present both benefits and challenges. “There is a substantial geographic information system element in drone data collection, along with video,” McCarthy says. “It’s not that either of those elements is new, but drones help access data in an efficient and inexpensive manner, which leads to a greater amount of data to deal with.” Processing Drone Data Takes Time and Computing Horsepower Robert Wells, a research hydraulic engineer with ARS, flew a drone over fields in Minnesota and Iowa last summer, collecting data to help farmers better manage soil erosion. Using the unmanned aerial vehicle, Wells gathered images that provided 3.5 billion data points from a single field within three hours. But even with the help of a 24-core workstation, it took him an entire week to process the data from each field and 18 weeks to process the data for the entire project. “It’s amazing how much data was generated in that short period of time. I have multiterabyte hard drives, and I started filling them up quickly,” Wells says. “If I weren’t working solo — if multiple people were doing this every single day — the storage requirements would be devastating.” Alisa Coffin, who uses a DJI drone to collect data for her USDA research, has likewise upgraded her storage capacity to handle the extra information. Her office made investments to handle the new workload triggered by the use of the DJI drone: a rack-mounted PC solution with 256 gigabytes of RAM dedicated to processing drone images, including components such as a Dell Precision Rack 7910, a Dual Intel Xeon 6 Core Processor E5-2643 v4 and an NVIDIA Quadro K6000 12GB video card. The solution provides more than 7TB of storage capacity, but Coffin expects to create 2TB to 3TB of new data, largely images, this summer alone. “It’s starting to fill up,” she says. She finds drone-gathered data so helpful that she has also acquired a Matrice M210 drone to gather thermal infrared imagery and an inexpensive consumer drone to collect additional photos. However, she always “strongly cautions” others who are interested in dabbling with drones. “They look so simple, and the results look so promising,” Coffin says. “But if you haven’t thought through all of the components of a UAV program — the software, the hardware, the people — you can end up wasting a lot of time and money and not have the quality of data you need to do the research.” The Cloud May Ease Data Storage Concerns In the fall of 2017, U.S. Customs and Border Protection tested UAVs to determine whether they could assist with the agency’s mission. While drones show promise for gathering data at the border, the agency faces challenges in storing, processing and securing that data, says Tom Mills, chief systems engineer in the department’s Office of Information and Technology. “I think we’re still in that stage of assessing,” Mills says. “We face an issue with the logistics of transmission and where it’s stored. We also have to figure out how to store the sheer amount of data and how to process that data.” For the latter issue, the cloud may provide an answer. Mills notes that the price of data storage is falling and that cloud solutions provide flexibility. 
“The good thing about the cloud is, it’s elastic,” he says. “As soon as we don’t need it, we’re not paying for it.” “It’s performing well for us,” says Capt. Craig Wieschhorster, the vessel’s commanding officer. “It gives us a tactical advantage.” The drone helps the crew conduct fisheries enforcement, flying over closed areas to see if anyone is fishing illegally. It also assists in counterdrug operations, allowing the Coast Guard to gather intelligence and assess situations before intercepting vessels suspected of trafficking drugs. “We’re able to see everything these guys are doing, without them seeing us,” Wieschhorster says. When it comes to one of the Coast Guard’s classic missions — search and rescue — pilots can look at images collected by the drone “before even getting in the helicopter,” he says. That lets them reduce the amount of potentially dangerous time surveying conditions from the air. More Drones Leads to More Data for Navy The Navy is currently preparing to move on from smaller UAVs to a larger, unmanned surveillance aircraft currently under development. Christopher Page, command information officer for the Office of Naval Intelligence, says that the service’s drone program — coupled with data from a growing array of other sensors and sources — is leading to “significant, near-term increases in the volume, variety and velocity of data.” To accommodate the influx, Page says, the Navy will increasingly treat data storage and processing as an ongoing operational expense, rather than as a one-time capital outlay. “The Navy is not going to generate the necessary capabilities and capacities through the traditional approach of making large, capital-intensive investments in on-premises hardware, software and support,” Page says. “It is, instead, going to generate what it needs by embracing an operations-intensive, cloud-first approach that emphasizes taking full and effective advantage of commercial cloud services.” On a much smaller scale, USDA’s Wells is looking for better solutions to store and process the data he collects in the fields of the Midwest. He is hoping that cloud resources might provide those functions, and also allow him to quickly share large data sets with his colleagues across the country. “The technology itself is absolutely glorious,” Wells says. “I intend to do a great deal more of this, but I’m trying to find an easier path forward.”
<urn:uuid:a09c20bd-771b-47af-8f7a-c981ce8f0770>
CC-MAIN-2022-40
https://fedtechmagazine.com/article/2018/07/assets-air-provide-high-value-data-feds
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00719.warc.gz
en
0.941152
1,330
2.78125
3
How to Do Data Modeling the Right Way
Data modeling supports collaboration among business stakeholders – with different job roles and skills – to coordinate with business objectives. Data resides everywhere in a business, on-premise and in private or public clouds. And it exists across these hybrid architectures in different formats: big, unstructured data and traditional structured business data may physically sit in different places. What’s desperately needed is a way to understand the relationships and interconnections between so many entities in data sets in detail. Visualizing data from anywhere, defined by its context and definition in a central model repository, as well as the rules for governing the use of those data elements, unifies enterprise data management. A single source of data truth helps companies begin to leverage data as a strategic asset. What, then, should users look for in a data modeling product to support their governance/intelligence requirements in the data-driven enterprise?
Nine Steps to Data Modeling
- Provide metadata and schema visualization regardless of where data is stored. Data modeling solutions need to account for metadata and schema visualization to mitigate complexity and increase collaboration and literacy across a broad range of data stakeholders. They should automatically generate data models, providing a simple, graphical display to visualize a wide range of enterprise data sources based on a common repository of standard data assets through a single interface.
- Have a process and mechanism to capture, document and integrate business and semantic metadata for data sources. As the best way to view metadata to support data governance and intelligence, data models can depict the metadata content for a data catalog. A data modeling solution should make it possible for business and semantic metadata to be created to augment physical data for ingestion into a data catalog, which provides a mechanism for IT and business users to make use of the metadata or data structures underpinning source systems. High-functioning data catalogs will provide a technical view of information flow as well as deeper insights into semantic lineage – that is, how the data asset metadata maps to corresponding business usage tables. Data stewards can associate business glossary terms, data element definitions, data models and other semantic details with different mappings, drawing upon visualizations that demonstrate where business terms are in use, how they are mapped to different data elements in different systems and the relationships among these different usage points.
- Create database designs from visual models. Time is saved and errors are reduced when visual data models are available for use in translating the high-quality data sources that populate them into new relational and non-relational database design, standardization, deployment and maintenance.
- Reverse engineer databases into data models. Ideally a solution will let users create a logical and physical data model by adroitly extracting information from an existing data source – ERP, CRM or other enterprise application – and choosing the objects to use in the model. This can be employed to translate the technical formats of the major database platforms into detailed physical entity-relationship models, rich in business and semantic metadata, that visualize and diagram the complex database objects. 
Database code reverse-engineering, integrated development environment connections and model exchange will ensure efficiency, effectiveness and consistency in the design, standardization, documentation and deployment of data structures for comprehensive enterprise database management. It also helps if the offline reverse-engineering process is automated so that modelers can focus on other high-value tasks. (A minimal schema-reflection sketch appears at the end of this article.)
- Harness model reusability and design standards. When data modelers can take advantage of intuitive graphical interfaces, they’ll have an easier time viewing data from anywhere in its context and meaning, and relationships support artifact reuse for large-scale data integration, master data management, big data and business intelligence/analytics initiatives. It’s typically the case that modelers will want to create models containing reusable objects such as modeling templates, entities, tables, domains, automation macros, naming and database standards, formatting options, and so on. The ability to modify the way data types are mapped for specific DBMS data types and to create reusable design standards across the business should be fostered through customizable functionality. Reuse serves to help lower the costs of development and maintenance and ensure data quality for governance requirements. Additionally, templates should be available to help enable standardization and reuse while accelerating the development and maintenance of models. Standardization and reuse of models across data management environments will be possible when there is support for model exchange. Consistency and reuse are more efficient when model development and assets are centralized. That makes it easier to publish models across various stakeholders and incorporate comments and changes from them as necessary.
- Enable user configuration and point-and-click report interfaces. A key part of data modeling is to create text-based reports for diagrams and metadata in a number of formats – HTML, PDF and CSV. By taking the approach of using point-and-click interfaces, a solution can make it easier to create detailed metadata reports of models and drill down into granular graphical views of reports that are inclusive of object types – tables, UDPs and more. The process is made even simpler when users can take advantage of out-of-the-box reports that are pertinent to their needs as well as create them for individual models or across multiple models. When generic ODBC interfaces are included, options grow for querying metadata, regardless of where it is sourced, from a variety of tools and interfaces.
- Support an all-inclusive environment of collaboration. When solutions focus on model management in a centralized repository, modular and bidirectional collaboration services are empowered across all data generators – human or machine – and stewards and consumers across the enterprise. Data siloes, of course, are the enemies of data governance. They make it difficult to have a clear understanding of where information resides and how data is commonly defined. It’s far better to centralize and manage access to ordered assets – whether by particular internal staff roles or by business partners granted role-based and read-only access – to maintain security. Such an approach supports coordinated version control, model change management and conflict resolution and seeds cross-model impact analysis across stakeholders. Modeler productivity and independence can be enhanced, too. 
- Promote data literacy. Stakeholder collaboration, in fact, depends on and is optimized by data literacy, the key to creating an organization that is fluent in the language of data. Everyone in the enterprise – from data scientists to ETL developers to compliance officers to C-level executives – ought to be assured of having a dynamic view of high-quality data pipelines operating on common and standardized terms. So, it is critical that solutions focus on making the pipeline data available and discoverable in such a way that it reflects different user roles. When consumers can view data relevant to their roles and understand its definition within the business context in which they operate, their ability to produce accurate, actionable insights and to collaborate across the enterprise to enact them for the desired outcomes is enhanced. Data literacy built on business glossaries, which enable the collaborative definition of enterprise data in business terms and rules for built-in accountability and workflow, promotes adherence to governance requirements.
- Embed data governance constructs within data models. Data governance should be integrated throughout the data modeling process. It manifests in a solution’s ability to adroitly discover and document any data from anywhere for consistency, clarity and artifact reuse across large-scale data integration, master data management, metadata management and big data requirements. Data catalogs and business glossaries with properly defined data definitions in a controlled central repository are the result of ingesting metadata from data models for business intelligence and analytics initiatives.
You Don’t Know What You’ve Got
Bottom line: without centralized data models and a metadata hub, there is no efficient means to comply with industry regulations and business standards regarding security and privacy; set permissions for access controls; and consolidate information in easy-to-understand reports for business analysts. Participating in data modeling to classify the data that is most important to the business, in terms that are meaningful to the business, and having a breakdown of complex data organization scenarios supports critical business reporting, intelligence and analytics tasks. That’s a clear need, as organizations today analyze and use less than 0.5 percent of the information they take in – a huge loss of potential value in the age of data-driven business. Without illustrative data models, businesses may not even realize that they already have the data needed for a new report, and time is lost and costs increase as data is gathered and interfaces are rebuilt.
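To make the reverse-engineering idea in the fourth step concrete, here is a minimal sketch using SQLAlchemy's schema reflection to pull table and column metadata out of an existing database. The connection string is a placeholder, and this is only a starting point for a physical model, not a data modeling product.

```python
# Minimal reverse-engineering sketch: reflect an existing database schema
# into table/column metadata (hypothetical connection string).
from sqlalchemy import create_engine, MetaData

engine = create_engine("sqlite:///crm.db")   # placeholder source database
meta = MetaData()
meta.reflect(bind=engine)                    # read tables, columns and keys

for table in meta.sorted_tables:
    print(f"Entity: {table.name}")
    for column in table.columns:
        flags = []
        if column.primary_key:
            flags.append("PK")
        if column.foreign_keys:
            flags.append("FK -> " + ", ".join(fk.target_fullname for fk in column.foreign_keys))
        print(f"  {column.name}: {column.type} {' '.join(flags)}")
```

The reflected metadata could then be rendered as an entity-relationship diagram or fed into whatever modeling repository an organization uses.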
<urn:uuid:3bf49128-d262-435f-a8b1-c6ba017ebfdc>
CC-MAIN-2022-40
https://blog.erwin.com/blog/how-to-do-data-modeling-the-right-way/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00719.warc.gz
en
0.898348
1,739
2.765625
3
WHAT IS AN APPLICATION VULNERABILITY?
Application vulnerabilities are flaws or weaknesses in an application that can lead to exploitation or a security breach. With the enormous global reach of the Internet, web applications are particularly susceptible to attack, and attacks can come from many different locations across many attack vectors. Application vulnerability management and application security testing are critical components of a web application security program. Application security standards are established by leading industry research and standards bodies to help organizations identify and remove application security vulnerabilities in complex software systems. Web application security deals specifically with the security surrounding websites, web applications, and web services such as APIs. The ten most commonly seen application vulnerabilities are detailed in the OWASP Top 10 list, which is highly regarded and updated frequently as the security landscape morphs and changes.
<urn:uuid:0f488ab8-8350-47a9-b4ed-074e1a35f9e6>
CC-MAIN-2022-40
https://www.contrastsecurity.com/glossary/application-vulnerability?hsLang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00719.warc.gz
en
0.935271
159
3.078125
3
In the last few years, we’ve seen workers connect to the office from home, creating a rise in security breaches across many industries. If you are a business owner looking to ensure your business is secure, you likely want to stay ahead of the curve and discover how cyber-attacks are set to evolve in the coming years. Maybe you’re just curious. In any case, here are the significant upcoming changes to the world of cyber security in the years to come. Artificial intelligence poses a massive threat to all users of the internet. Simply put, artificial intelligence enables hackers to invade your network and to get better at it as they do so. Instead of trying to figure out your passwords, AI bots can simply brute-force their way into many people’s private data; by trying millions of known passwords, they can get into many accounts. A recent study by scientists using 43 million LinkedIn accounts found that AI programs could guess a quarter of the passwords. Using unique and often arduously long and complex passwords will be necessary if this threat becomes as dangerous as experts are predicting. You should use two-factor authentication when possible as well. More and more companies are collecting your biometric data. Governments worldwide are collecting biometric data from their citizens without having the proper security to keep that information safe. Biometric data collected in developing nations is especially at risk due to weaker security infrastructure. Believe us, a lot of companies and governments have started to collect biometric data. While these stockpiles are usually well protected, sometimes it isn’t enough. Tips for protecting your network and keeping your biometric data safe are included in the list below. If the last few paragraphs have made you run off to Google to research artificial intelligence or biometric data, we don’t blame you. We encourage you to dive deeper into these subjects. Only through knowledge and preparedness can we hope to combat the imminent threats to your data. Always remember what devices you are using and how they can be attacked. For a single individual, it may be enough to do the following:
- Use strong passwords—the more complicated, the better. Do not under any circumstances use Batman.
- Check cookie agreements for websites you visit. Websites are collecting a lot of your data.
- Two-factor authentication is your friend.
- Use secure wifi when possible. When not possible, a VPN is vital.
- Consider getting a cover for your webcams.
- Don’t permit apps to do anything that isn’t completely necessary.
- Ask for help from experts.
As network security is managed across more devices and users, the opportunity for invasive users only climbs. That is why it is essential to take personal responsibility. If you own a business, ensure you and your employees follow these rules. You should also speak to a professional about getting suitable network security. At Netcotech, we can help you with all your IT solutions to keep your business and your staff secure. For a free risk assessment, please contact us.
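As a small illustration of the "strong passwords" advice in the list above, here is a minimal sketch using Python's standard secrets module to generate a long random password. The length and character set are arbitrary choices, and in practice a password manager is usually the more convenient way to follow this advice.

```python
# Minimal strong-password generator using the standard library's CSPRNG.
import secrets
import string

alphabet = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    # 24 characters drawn from ~90 symbols gives far more entropy than any
    # guessable dictionary word (and certainly more than "Batman").
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```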
<urn:uuid:8ee9989b-38eb-4347-8527-c2be7fdbe1e4>
CC-MAIN-2022-40
https://www.netcotech.com/blogs/upcoming-cyber-security-threats-of-the-future
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00119.warc.gz
en
0.9436
623
2.703125
3
In a post on its AI blog penned last week, Google explained the technology behind the radar sensors included in last year’s Pixel 4 smartphone. Though the hardware behind what Google calls Project Soli — a 60GHz radar and antenna receivers capable of covering a combined 180-degree field of view — was revealed during the release of the Pixel 4 last year, the AI models and algorithms that power the motion gesture system had yet to be discussed in any detail by the company’s engineers until now. Soli’s AI models are trained to detect and recognize motion gestures with low latency, with Google acknowledging that though Soli is in its early days — with the Pixel 4 and Pixel 4XL being the first and only consumer devices to feature it thus far — it could lead to newer forms of context and gesture awareness on a variety of devices, and potentially make way for a better experience accommodating users with disabilities. Soli’s radar and antenna receivers record the positional information — things like range and velocity — of an object by measuring the electromagnetic waves reflected back to the antennas. This data is then fed into Soli’s machine learning models for what Google refers to as “sub-millimeter” gesture classification, where subtle shifts in an object’s position are measured and compared to distinguish various motion patterns between objects. Developing the machine learning models presented Google with a number of challenges to overcome. For starters, even simple gestures like swipes are performed in a number of different ways by users. Second, over the course of a day there may be a number of extraneous motions within the sensor’s range that could appear similar to gestures. And finally, whenever the phone is moved, from the point of view of the sensor the whole world appears to be moving. To solve these challenges, Google’s engineers designed custom machine learning algorithms that are optimized for low-latency detection of in-air gestures from the radar signals. The machine learning models consist of neural networks that were trained using millions of gestures recorded from thousands of Google volunteers, which were then mixed with hundreds of hours of background and radar recordings from other volunteers of generic motions made within range of Soli’s sensors. “Remarkably, we developed algorithms that specifically do not require forming a well-defined image of a target’s spatial structure, in contrast to an optical imaging sensor, for example. Therefore, no distinguishable images of a person’s body or face are generated or used for Motion Sense presence or gesture detection,” wrote engineers Jaime Lien and Nicholas Gillian in the post. “We are excited to continue researching and developing Soli to enable new radar-based sensing and perception capabilities,” they added later. In addition to its use for Motion Sense, Soli’s technology is also used to alert and prepare the phone when a user is about to use the Face Unlock feature of biometric authentication, which was also debuted on the Pixel 4. Soli debuted on the Pixel 4 last fall with a few supported gestures as the technology behind the Pixel’s Motion Sense features, including a swipe to change songs or silence an alarm or call, and the ability to wake the screen when you reach for your phone. As Soli and Motion Sense can be updated via software, Google also recently added the ability to pause music with a new gesture, and presumably more updates may be coming in the future.
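Google has not released the Soli models themselves, but the general pattern described above, turning reflected-signal measurements into range and velocity features over time and feeding them to a trained classifier, can be sketched in toy form. Everything below (the feature layout, the labels, and the synthetic data) is invented purely for illustration and bears no relation to the actual Soli pipeline.

```python
# Toy gesture-classification sketch (not Google's Soli models): classify short
# sequences of synthetic range/velocity features with a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
frames = 20                              # time steps per sample (hypothetical)

def synth(label, n=50):
    # "swipe" samples get a rising velocity profile; "idle" samples do not.
    base = np.linspace(0, 1, frames) if label == "swipe" else np.zeros(frames)
    rng_channel = 0.3 + 0.05 * rng.standard_normal((n, frames))   # range
    vel_channel = base + 0.05 * rng.standard_normal((n, frames))  # velocity
    features = np.stack([rng_channel, vel_channel], axis=2).reshape(n, frames * 2)
    return features, [label] * n

X_swipe, y_swipe = synth("swipe")
X_idle, y_idle = synth("idle")
X, y = np.vstack([X_swipe, X_idle]), y_swipe + y_idle

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print(clf.predict(X[:3]))                # should mostly report "swipe"
```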
<urn:uuid:88870b18-f217-414a-9dad-b8e3cbb8c0cc>
CC-MAIN-2022-40
https://mobileidworld.com/google-blog-post-details-how-soli-radar-tech-works-031602/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00119.warc.gz
en
0.941277
709
2.96875
3
As computers begin to take the roles traditionally held by us mere mortals, it may be time to ask: How far is too far? Even if a machine can do a person’s work faster, can it do a better job? These are questions that educators are starting to raise as new technologies take over some of the tasks that used to be handled by teachers. In educational assessment, for example, we’ve been relying on machines to read bubble sheets for decades. Exams like the SATs, ACTS and GREs, as well as dozens of other federal- and state-mandated tests, are now taken online and scored by computers. But not all tests have one answer. Can we trust a computer to grade an essay? Computer software grades essays – instantly One nonprofit – founded by Harvard and the Massachusetts Institute of Technology – has developed software that uses artificial intelligence to evaluate essay and short-answer responses. EdX is planning to make its software available to schools on its website, and says its program will give teachers more time to focus on other responsibilities. EdX said the technology also benefits students. When they submit essays through the computer software, they can receive instant feedback and a grade for their work. Teachers could then allow students to use this feedback to rewrite their essay and improve their grade. “There is a huge value in learning with instant feedback,” said EdX’s president Anant Agarwal. “Students are telling us they learn much better with instant feedback.” Are computers smart enough? While some studies have found that software can grade essays accurately and effectively, many educators and researchers remain unconvinced. One organization, the Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment, is focused on preventing this kind of software from becoming adopted. Its petition has already collected thousands of signatures. “Computers cannot ‘read,'” explained the organization on its site. “They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others.” While this technology is a great tool, it should be used to enhance teaching rather than to replace teachers. Instructors could use it to help students work on their writing skills. For example, teachers could have students write an essay at home and then use the program to evaluate it the next day in class. As a group, the teacher and students could talk about writing problems identified, ways to fix them and whether the software did a good job of grading the essay. If teachers could get their students thinking critically not only about their own writing, but the way it’s graded, this would be a win for everyone involved. Do you think schools are relying too heavily on technology? Please share your thoughts on our Facebook page!
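EdX has not published its implementation here, but the general idea behind machine essay scoring can be sketched in a toy form: represent essays numerically (for example with TF-IDF features) and fit a model to scores assigned by human graders, then use it to return instant feedback on new submissions. The essays and grades below are invented placeholders, and a real system would need far more data and far richer features.

```python
# Toy automated-essay-scoring sketch (not EdX's system): TF-IDF features
# plus ridge regression fitted to hypothetical human-assigned scores.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

essays = [
    "The experiment shows a clear causal link supported by three cited studies.",
    "I think it is good because it is good and everyone says so.",
    "The author's argument is coherent but omits counter-evidence from 2019.",
    "stuff happened and then more stuff happened the end",
]
human_scores = [5.0, 2.0, 4.0, 1.0]      # invented grades from human raters

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(essays, human_scores)

new_essay = ["The evidence presented is consistent, although one source is dated."]
print(round(float(model.predict(new_essay)[0]), 2))   # instant (toy) feedback score
```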
<urn:uuid:296efd7c-eeab-4215-b5bc-9e8c3889b1e4>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/what-makes-a-great-essay-ask-your-computer
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00119.warc.gz
en
0.964602
597
3.578125
4
Within reverse zones, you will find what we call PTR records, or Pointer Records. PTR records are responsible for defining the reverse DNS for each host on the network. In other words, a PTR record is used to map a network interface (or IP) to a hostname. These records are most often used for reverse DNS. For instance, suppose an A record for mail.example.com points to the IP address 192.0.2.10 (a placeholder address). In the reverse database, the PTR record for this address is stored under the domain name 10.2.0.192.in-addr.arpa, pointing back to its designated hostname "mail.example.com". When a mail server receives an email, a three-step check takes place to verify the sending server: Forward DNS Check > Reverse DNS Check > FQDN Check. During this process, the forward DNS must match the reverse DNS as defined in in-addr.arpa, which must match the fully qualified domain name in the message header. When this check passes, the email is delivered to the client's Inbox without issue. If the check fails, the mail is either rejected outright or delivered to the client's spam folder. Properly configured reverse DNS can help prevent your email from ending up in the recipient's spam folder. Reverse DNS requires a special reverse DNS domain ending with .in-addr.arpa, which is formed by reversing the octets of the IP address and appending in-addr.arpa.
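A minimal sketch of this naming and lookup, using only Python's standard library, is shown below. The address is a documentation-range placeholder, and the reverse lookup will only succeed for an address that actually has a PTR record published.

```python
import socket

ip = "192.0.2.10"   # placeholder address from the documentation range

# The reverse-zone name a PTR record for this address would live under:
reverse_name = ".".join(reversed(ip.split("."))) + ".in-addr.arpa"
print(reverse_name)                        # -> 10.2.0.192.in-addr.arpa

# Reverse lookup: ask the resolver for the PTR record of the address.
try:
    hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
    print("PTR resolves to:", hostname)
except OSError:
    print("No PTR record published for", ip)
```

For the forward-confirmed check described above, the hostname returned by the PTR lookup should itself resolve back to the same IP address.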
<urn:uuid:ac6c372d-6a12-4201-ac01-918e95a941ac>
CC-MAIN-2022-40
https://support.constellix.com/support/solutions/articles/47001109971-reverse-dns
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00119.warc.gz
en
0.863224
319
3.640625
4
Running a successful business involves many different factors and processes that must be handled correctly. Therefore, issues like mishandled documentation, mistakes in following policies, and security breaches could wreak havoc on the inner workings of an organization. A big enough problem could even lead to a business’s downfall. To protect against these issues, business leaders use specialized GRC processes and tools. GRC refers to governance, risk, and compliance, and is a strategic approach that organizations take to manage their essential documentation and processes for optimal performance. Read on to learn more about the importance of GRC, how GRC software systems help businesses, and the best practices for GRC in your business. Read more: What Is the 5-Step Risk Management Process?
The Importance of GRC
Governance, risk, and compliance are essential to keep up, as they can protect organizations from several critical risk factors. GRC tools simplify business management processes through automated solutions and streamlined workflows. These software systems run different actions that enhance the productivity and protection of businesses by addressing vulnerabilities, managing policy procedures, and ensuring organizational compliance. GRC strategies and tools can be used by organizations of all sizes, private or public. By managing governance, risk, and compliance factors, any business can be more prepared for challenges that could impede its success and growth.
How GRC Tools Can Help
GRC tools and software solutions are designed to help organizations with three main tasks:
- Ensuring that business rules and practices are correctly followed
- Protecting the organization from threats and vulnerabilities
- Maintaining legal and ethical compliance with organizational policies and procedures
Essentially, GRC tools perform actions to help optimize the business for success. There are many ways that organizations can utilize GRC strategy in their workplaces, many of which can be achieved with the help of GRC solutions.
GRC Best Practices for Organizations
GRC tools can provide many valuable benefits to organizations through their automation and streamlining capabilities. Here are a few important concepts to consider when deciding on a GRC strategy for your organization. An essential part of any effective governance, risk, and compliance strategy is risk management. GRC tools help with this, as they manage an organization’s safety by evaluating the network for IT risks. This process usually involves capabilities like running tests and scans for potential vulnerabilities within an organization. Often GRC tools will perform risk scoring, where they provide a report on risk posture and assign scores based on the measured level of risk. These reports can be helpful to management and IT staff, as they can more easily identify weak spots and factors that could cause security issues and prevent them ahead of time. Software may also contain functions to remedy security risks and follow up on incidents. Additionally, some GRC software will protect organizations from third-party risk factors by performing evaluations or taking measures to ensure endpoint protection. 
Although IT security software solutions exist independently, implementing a GRC tool can provide these features to your business as part of a unified software package.
Document Organization Features
GRC tools perform actions that assist with document organization and facilitate easy resource access. To stay on top of essential data and information, your organization could likely benefit from one of these solutions. Software with preconfigured or custom integration features can allow you to access your organization’s data from different systems and use it in your GRC software. Third-party integration with GRC products can also make it easy to locate and access important documents, spreadsheets, reports, and other pieces of information in one place through system tools like dashboards. These document organization features can be especially helpful for HR and administrative processes, as the ability to access this information from a single dashboard could help those teams make more informed decisions. Read more: Why Healthcare Risk Management Is Important
Operational Management Features
Streamlined operational management processes mean your organization is more protected against potential mistakes. Automated management workflows help to simplify the jobs of your staff and save them time by performing repetitive processes for them. Automation can also remove the chance of human error and help to improve consistency when duplicating these actions. GRC software also has solutions to ensure correctness in business processes. Policy management is a critical feature of any suitable GRC tool. Managing compliance initiatives with robust policies means your organization is more protected from accidental deviations in procedures. Audit management features are helpful as well, as they support internal auditors by performing tasks that can help companies detect suspicious activity and maintain data accuracy. When choosing a GRC tool, it is best to pick one that is flexible and scalable. Your software should be able to continue supporting you throughout your business’s growth and appropriately adapt to changes in regulations, policies, and procedures. Read next: Best Risk Management Software for 2022
<urn:uuid:15c968e2-16b6-4b70-ae15-2033d4536eb1>
CC-MAIN-2022-40
https://www.cioinsight.com/it-strategy/grc/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00119.warc.gz
en
0.944915
1,030
2.609375
3
What is a data scientist? Data scientists are analytical data experts who use data science to discover insights from massive amounts of structured and unstructured data to help shape or meet specific business needs and goals. Data scientists are becoming increasingly important in business, as organizations rely more heavily on data analytics to drive decision-making and lean on automation and machine learning as core components of their IT strategies. Data scientist job description A data scientist’s main objective is to organize and analyze data, often using software specifically designed for the task. The final results of a data scientist’s analysis must be easy enough for all invested stakeholders to understand — especially those working outside of IT. A data scientist’s approach to data analysis depends on their industry and the specific needs of the business or department they are working for. Before a data scientist can find meaning in structured or unstructured data, business leaders and department managers must communicate what they’re looking for. As such, a data scientist must have enough business domain expertise to translate company or departmental goals into data-based deliverables such as prediction engines, pattern detection analysis, optimization algorithms, and the like. For more on data scientist job descriptions from a hiring perspective, see “Data scientist job description: Tips for landing top talent.” Data scientist vs. data analyst Data scientists often work with data analysts, but their roles differ considerably. Data scientists are often engaged in long-term research and prediction, while data analysts seek to support business leaders in making tactical decisions through reporting and ad hoc queries aimed at describing the current state of reality for their organizations based on present and historical data. Thus, the difference between the work of data analysts and that of data scientists often comes down to timescale. A data analyst might help an organization better understand how its customers use its product in the present moment, whereas a data scientist might use insights generated from that data analysis to help design a new product that anticipates future customer needs. Data scientist salary Data science is a fast growing field, with the BLS predicting job growth of 22% from 2020 to 2030. Data scientist is also proving to be a satisfying long-term career path, with Glassdoor’s 50 Best Jobs in America rank data scientist the third-best job in the US. According to data from Robert Half’s 2021 Technology and IT Salary Guide, the average salary for data scientists, based on experience, breaks down as follows: - 25th percentile: $109,000 - 50th percentile: $129,000 - 75th percentile: $156,500 - 95th percentile: $185,750 Data scientist responsibilities A data scientist’s chief responsibility is data analysis, which begins with data collection and ends with business decisions based on analytic results. The data that data scientists analyze draws from many sources, including structured, unstructured, or semi-structured data. The more high-quality data available to data scientists, the more parameters they can include in a given model, and the more data they will have on hand for training their models. Structured data is organized, typically by categories that make it easy for computers to sort, read, and organize automatically. This includes data collected by services, products, and electronic devices, but rarely data collected from human input. 
Website traffic data, sales figures, bank accounts, or GPS coordinates collected by your smartphone — these are structured forms of data. Unstructured data, the fastest-growing form of data, comes more likely from human input — customer reviews, emails, videos, social media posts, etc. This data is more difficult to sort through and less efficient to manage with technology, thus requiring a bigger investment to maintain and analyze. Businesses typically rely on keywords to make sense of unstructured data to pull out relevant data using searchable terms. Semi-structured data falls between the two. It doesn’t conform to a data model but does have associated metadata that can be used to group it. Examples include emails, binary executables, zipped files, websites, etc. Typically, businesses employ data scientists to handle unstructured data and semi-structured data, whereas other IT personnel manage and maintain structured data. Yes, data scientists do deal with lots of structured data, but businesses increasingly seek to leverage unstructured data in service of revenue goal, making approaches to unstructured data key to the data scientist role. For further insight into the working lives of data scientists, see “What does a data scientist do? 7 of these in-demand professionals offer their insights.” Data scientist requirements Each industry has its own data profile for data scientists to analyze. Here are some common forms of analysis data scientists are likely to perform in a variety of industries, according to the BLS. Business: Data analysis of business data can inform decisions around efficiency, inventory, production errors, customer loyalty, and more. E-commerce: Now that websites collect more than purchase data, data scientists help e-commerce businesses improve customer service, find trends, and develop services or products. Finance: Data on accounts, credit and debit transactions, and similar financial data are vital to a functioning business. But for data scientists in the finance industry, security and compliance, including fraud detection, are also major concerns. Government: Big data helps governments form decisions, support constituents, and monitor overall satisfaction. As in the finance sector, security and compliance are paramount concerns for data scientists. Science: Thanks to recent IT advances, scientists today can better collect, share, and analyze data from experiments. Data scientists can help with this process. Social networking: Social networking data can inform targeted advertising, improve customer satisfaction, establish trends in location data, and enhance features and services. Healthcare: Electronic medical records require a dedication to big data, security, and compliance. Here, data scientists can help improve health services and uncover trends that might go unnoticed otherwise. Data scientist skills According to William Chen, Data Science Manager at Quora, the top five skills for data scientists include a mix of hard and soft skills: - Programming: The “most fundamental of a data scientist’s skill set,” programming improves your statistics skills, helps you “analyze large datasets,” and gives you the ability to create your own tools, Chen says. - Quantitative analysis: Quantitative analysis improves your ability to run experimental analysis, scale your data strategy, and help you implement machine learning. 
- Product intuition: Understanding products will help you perform quantitative analysis and better predict system behavior, establish metrics, and improve debugging skills. - Communication: Possibly the most important soft skills across every industry, strong communication skills will help you “leverage all of the previous skills listed,” says Chen. - Teamwork: Much like communication, teamwork is vital to a successful data science career. It requires being selfless, embracing feedback, and sharing knowledge with your team, says Chen. Ronald Van Loon, CEO of Intelligent World, adds business acumen to the list. Van Loon says strong business acumen is the best way to channel the technical skills of a data scientist. It is necessary to discern the problems and potential challenges that need to be solved for an organization to grow. For a deeper look at what it takes to excel as a data scientist, see “Essential skills and traits of elite data scientists.” Data scientist education and training There are plenty of ways to become a data scientist, but the most traditional route is by obtaining a bachelor’s degree. Most data scientists hold a master’s degree or higher, according to BLS data, but not every data scientist does, and there are other ways to develop data science skills. Before jumping into a higher-education program, you’ll want to know what industry you’ll be working in to figure out the most important skills, tools, and software. Because data science requires some business domain expertise, the role varies by industry, and if you’re working in a highly technical industry, you might need further training. For example, if you’re working in healthcare, government, or science, you’ll need a different skillset than if you work in marketing, business, or education. If you want to develop certain skillsets to meet specific industry needs, there are online classes, boot camps, and professional development courses that can help hone your skills. For those considering grad school, there are a number of high-quality data science master’s programs, including the following: - Master of Science in Statistics: Data Science at Stanford University - Master of Information and Data Science: Berkeley School of Information - Master of Computational Data Science: Carnegie Mellon University - Master of Science in Data Science: Harvard University John A. Paulson School of Engineering and Applied Sciences - Master of Science in Data Science: University of Washington - Master of Science in Data Science: John Hopkins University Whiting School of Engineering - MSc in Analytics: University of Chicago Graham School Data science certifications In addition to boot camps and professional development courses, there are plenty of valuable big data certifications and data science certifications that can boost your resume and your salary. 
- Certified Analytics Professional (CAP) - Cloudera Data Platform Generalist Certification - Data Science Council of America (DASCA) Senior Data Scientist (SDS) - Data Science Council of America (DASCA) Principal Data Scientist (PDS) - IBM Data Science Professional Certificate - Microsoft Certified: Azure AI Fundamentals - Microsoft Certified: Azure Data Scientist Associate - Open Certified Data Scientist (Open CDS) - SAS Certified AI and Machine Learning Professional - SAS Certified Advanced Analytics Professional using SAS 9 - SAS Certified Data Scientist - Tensorflow Developer Certificate Other data science jobs Data scientist is just one job title in the expanding field of data science, and not every company that makes use of data science is hiring for data scientists per se. Here are some of the most popular job titles related to data science and the average salary for each position, according to data from PayScale: - Analytics manager – $100,099 - Business intelligence analyst – $70,868 - Data analyst – $62,723 - Data architect – $122,882 - Data engineer – $93,145 - Research analyst – $57,615 - Research scientist – $82,957 - Statistician – $77,545
<urn:uuid:5a3c5d8b-6765-4224-9e25-9f5ce6513f93>
CC-MAIN-2022-40
https://www.cio.com/article/230532/what-is-a-data-scientist-a-key-data-analytics-role-and-a-lucrative-career.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00119.warc.gz
en
0.913354
2,190
3.3125
3
The pandemic has resulted in a shift to online learning services, and this is here to stay. While the move was sudden and schools and households were unprepared to deal with it, hackers were ready to exploit it as best they could. In this article, we discuss the dangers for children associated with the Internet and we show how the FlashStart web content filter can help you grant a safe Internet experience for your Internet users, both the inexperienced and experienced ones. 1. Children and screen time: an overview 1.1 Before the pandemic In February 2020, when the first news about Covid 19 was spreading, the American Academy of Child and Adolescent Psychiatry published an article titled “Screen Time and Children”. The article reported that, back then, American children aged 8-12 were already spending an average of four to six hours per day in front of a screen, be it on a laptop, smartphone, gaming console or TV. Teenagers reported a higher number, an average of 9 hours of screen time per day. On top of these stats, the article also called the attention of parents and schools to a much wider issue, hence the type of content children might come across online, contents that are usually unsuitable for children and teenagers and that include, but are not limited to: » Violence, challenges, weapons and connected behaviors » Porn and sexual contents » Negative stereotypes and cyberbullying » Drugs and substance use » Targeted advertising for children » Fraud websites with misleading or inaccurate information The article then went on to describe the psychological dangers associated with too much exposure to these threats, as well as to a prolonged screen time. They include mood, weight and sleep problems, but are also connected with lower grades in school, body image issues and FOMO, that is, the fear of missing out, which will keep your children even more connected and exposed to unsuitable contents. 1.2 Effect of the pandemic At the end of April 2020, so just two months after the publishing of the article by the American Academy of Child and Adolescent Psychiatry, Unicef issued a warning on the online risks for children associated with the pandemic’s breakout. The article, which came out of Unicef’s experience in Bhutan, said that “Millions of children around the world including children in Bhutan are at increased risk of harm as their lives move increasingly online due to the COVID-19 pandemic”. Other warnings and articles appeared about the issues connected with longer online presence for children. Indeed, the pandemic resulted in classes moving online, and so in a whole new range of Internet users, often young and unaware of the cyber risks they might face online. While schools were unprepared to move online all of a sudden, hackers were ready to play their part well. USA Today reported that online video conferencing services were targeted by hackers, who for example showed videos of child abuse during online meetings for schoolboards. Also, according to the article, between August and September 2020, more than half of the ransomware attacks in the US targeted schools up to the 12th grade. The discussion up to now points in one, and only one direction: since online learning is here to stay, even if to a lesser degree than that of the pandemic’s height, parents and schools need to take action to protect the online experience of their children and pupils and grant their safety from cyber threats. 
>> FlashStart protects your children from a wide range of threats and prevents access to malicious websites → Start your free trial now and ensure your children navigate safely online. 2. Web content filter: the solution for a secure Internet experience The problem with undesired contents is a widespread one, which has prompted several websites and applications to develop their own way of blocking what is generally known as “explicit contents”. To find out more about the in-built options provided by Google, YouTube, Spotify and Discord, take a look at our dedicated article. Also, some operating systems are now offering options for a safer Internet navigation and one among these is the Ubuntu Internet filter. 2.1 Ubuntu Internet filter The Ubuntu Internet filter is part of the services offered by Ubuntu, the Linux-based operating system that was born in 2004. The name Ubuntu comes from a south African language that means “humanity towards others” and the idea behind it is to give relevance to the users’ community by making them participate in the development of the operating system and the other services offered. Within the package, what is known as the Ubuntu Internet filter actually takes the name of Web Content Control. Activating the Web Content Control in Ubuntu’s settings means you’ll be blocking access to unwanted or unsuitable contents, such as adult websites, violence, online gaming and other disturbing websites. The Ubuntu Internet filter is a valid parental control tool for all Linux users. We do however believe that it would be best to combine it with a different tool in order to grant the level of safety you wish for children and pupils. The Internet now pullulates with offers of web filtering services, but we recommendFlashStart since cyber security has always been its target. >> FlashStart is completely cloud-based and can be activated quickly on your devices with no need to buy any additional hardware → Start your free trial now! 3. The FlashStart web content filter FlashStart is a European company that has more than twenty years of experience in the cyber security business. FlashStart was one of the first European players to move from traditional, hardware-based, cyber security into the market for web-based cyber security services. If you wish to know more about the advantages of cloud-based services compared to traditional ones you can read our dedicated article. FlashStart product is the choice of customers all around the world since it is: » Continuously updated; » Fully personalizable; 3.1 FlashStart: your Internet protection, always up to date FlashStart is always updated so as to identify and block the latest cyber threats. The FlashStart web content filter exploits artificial intelligence algorithms to continuously scan the Internet for new threats. Also, it uses AI for learning purposes, to understand the “behavior” and changes in the threats that already exist and identify the new ones coming up. Being cloud-based, all the new updates are directly implemented in the cloud and, from there, on your devices, which will hence be granted up-to-date protection from all the identified threats. In this way, users won’t need to download anything and neither will they have to proceed with lengthy reboots of their devices: they will enjoy the updated FlashStart protection automatically, with no need for action on their side. 
3.2 FlashStart: the flexible protection for all your devices FlashStart is a flexible tool that can be activated: » At the router level: you can activate the FlashStart web content filter directly at the router level in order to grant safety of navigation to all the devices connected to the network. This will give you the tranquility to know that all the Internet traffic going in and out of your router gets checked by the filter. » At the end-point level: you can choose to activate the FlashStart DNS filter at the end-point level through the ClientShield app. This allows you to grant Internet safety also to devices that connect to the web outside your household or educational institution and to mobile phones. 3.3 FlashStart: the Internet protection you can tailor to your needs FlashStart can be fully tailored to your needs thanks to its granular protection, which allows you to block access to a complete domain or to specific pages of it. Also, you can integrate FlashStart with the Microsoft Active Directory and apply different permissions based on different user groups. For example, within a school you may wish to prevent students from accessing some website categories, like online shopping websites, while you may see no need to block them for teachers, staff and the management. Also, you can decide to schedule the blocks. Indeed, FlashStart allows you to set times and dates for the blocks and, for example, prevent students from accessing social networks during school hours and allowing it during breaks. 3.4 FlashStart grants almost zero latency FlashStart is fast, extremely fast. The system exploits a network of anycast data centers that are spread all around the world and can thus grant an ultra-rapid response, both for router-based and end-point protection, at any time and for requests coming from everywhere. This means that latency during navigation is close to zero and most users don’t even notice that there are checks going on while they carry out their online searches or chat and play with their online friends. As a consequence, they will not want to deactivate the protection since it is not slowing them down. Nevertheless, should they try to do so, you can rest assured that FlashStart provides only the network administrator with the key needed to lift Internet protection and so children and students will not be able to bypass it independently. You can activate the FlashStart® Cloud protection on any sort of Router and Firewall to secure desktop and mobile devices and IoT devices on local networks.
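As an illustration only (FlashStart's actual implementation is proprietary and not described here), the core idea of a DNS-level content filter with per-group policies and scheduled blocks can be sketched in a few lines of Python. The domain names, categories and group names below are invented placeholders.

```python
from datetime import datetime, time as dtime

# Toy category feed standing in for a real, continuously updated classification service.
DOMAIN_CATEGORIES = {
    "shop.example.com": "shopping",
    "social.example.net": "social",
    "bad.example.biz": "malware",
}

# Per-group policy: category -> None (always blocked) or a (start, end) school-hours window.
POLICIES = {
    "students": {"malware": None, "social": (dtime(8, 0), dtime(13, 0))},
    "teachers": {"malware": None},
}

def should_block(domain, group, now=None):
    """Decide whether a DNS request for `domain` should be refused for `group`."""
    now = now or datetime.now()
    category = DOMAIN_CATEGORIES.get(domain.lower())
    policy = POLICIES.get(group, {})
    if category not in policy:
        return False                      # unknown or permitted category: resolve normally
    window = policy[category]
    if window is None:
        return True                       # blocked at all times (e.g. malware)
    start, end = window
    return start <= now.time() <= end     # blocked only during the scheduled window

print(should_block("bad.example.biz", "teachers"))      # True
print(should_block("social.example.net", "students"))   # depends on the time of day
```

A real resolver would, of course, answer a blocked request with a block page or a refused/NXDOMAIN response rather than simply returning a boolean.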
<urn:uuid:0143ffd7-a17e-4183-8d74-b89c5b56cb26>
CC-MAIN-2022-40
https://flashstart.com/internet-content-filtering-protecting-your-children-online/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00119.warc.gz
en
0.94908
1,907
3.328125
3
All versions of Apache Tomcat are affected by a vulnerability dubbed Ghostcat that could be exploited by attackers to read configuration files or install web shells on vulnerable servers. The Apache JServ Protocol (AJP) is a binary protocol that can proxy inbound requests from a web server through to an application server that sits behind the web server. A Tomcat Connector allows Tomcat to connect to the outside: it enables Catalina to receive requests, pass them to the corresponding web application for processing, and return the response. By default, Tomcat uses two Connectors, the HTTP Connector and the AJP Connector, the latter of which listens on TCP port 8009. The Ghostcat vulnerability in the AJP Connector can be exploited to either read or write files on a Tomcat server: an attacker could trigger the flaw to access configuration files and steal passwords or API tokens. It can also allow attackers to write files to a server, including malware or web shells. “By exploiting the Ghostcat vulnerability, an attacker can read the contents of configuration files and source code files of all webapps deployed on Tomcat.” “In addition, if the website application allows users to upload files, an attacker can first upload a file containing malicious JSP script code to the server (the uploaded file itself can be any type of file, such as pictures, plain text files etc.), and then include the uploaded file by exploiting the Ghostcat vulnerability, which finally can result in remote code execution.” Ghostcat affects the 6.x, 7.x, 8.x, and 9.x branches of Tomcat; Apache addressed it in releases 7.0.100, 8.5.51, and 9.0.31. Chaitin experts discovered the vulnerability in early January, then helped maintainers of the Apache Tomcat project to address the issue.
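The practical mitigations are to upgrade Tomcat and to disable or restrict the AJP Connector (port 8009 by default) in conf/server.xml if it is not needed. As a rough, illustrative aid only, the short Python sketch below checks whether a host still exposes the default AJP port; the hostnames are placeholders, and a closed port alone does not prove a server is patched.

```python
import socket

def ajp_port_open(host, port=8009, timeout=2.0):
    """Return True if a TCP connection to the (default) AJP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["tomcat.internal.example", "127.0.0.1"]:   # hypothetical hosts
    status = "exposed" if ajp_port_open(host) else "closed or filtered"
    print(f"{host}: AJP port 8009 {status}")
```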
<urn:uuid:715b729d-a64d-4fa9-902c-978026cd6325>
CC-MAIN-2022-40
https://securityaffairs.co/wordpress/98654/hacking/ghostcat-vulnerability.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00119.warc.gz
en
0.901019
339
2.8125
3
Question by zafar khan, posted 20 Aug 2005: Cable questions. How many wire sequences are there? I know about straight-through, rollover and crossover cables. What are the wire sequences for straight-through, rollover and crossover wiring? And when is each used, I mean when connecting to the same type of component (for example a LAN connection between similar devices) or to a different one, like a hub or a router? I have been searching hard for these answers on the net but have not found them yet; I hope this site will help me learn this. Answer by Joseph Golan, posted 21 Aug 2005: Dear Zafar, there are many different types of wiring schemes, and your question is too broad for me to answer fully here. In addition to sequences there are standards and, depending on what part of the world you live in, there may be additional ones. Basically, here in the USA there are the voice jacks, which have variations and are listed under the "Universal Service Order Code" or USOC. The most popular one is the RJ-11, which is for a single-line telephone (1-pair) and is wired on a 6-position jack, but only the two center pins are active. Then, for the data side, the two most popular types of jacks both utilize an 8-pin modular jack (commonly, and incorrectly, called an RJ-45) with all 8 conductors active, wired to the T568A or T568B standard. Joseph Golan, RCDD
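For reference, since the answer names the termination standards but not the sequences themselves, the pin-by-pin colour orders for T568A and T568B, and the conventional uses of straight-through, crossover and rollover cables, are sketched below. This is general wiring knowledge rendered as Python lists purely for compactness; it is not part of the original answer.

```python
# Pin 1..8 colour sequences for the two TIA/EIA-568 termination standards.
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

# Straight-through: the same standard on both ends; traditionally used between
# unlike devices (e.g. PC to hub or switch).
straight_through = (T568B, T568B)

# Crossover: T568A on one end and T568B on the other; traditionally used between
# like devices (PC to PC, switch to switch). Modern auto-MDI-X ports make this optional.
crossover = (T568A, T568B)

# Rollover (console) cable: the pin order is completely reversed on the far end.
rollover = (T568A, list(reversed(T568A)))

for name, (end_a, end_b) in [("straight-through", straight_through),
                             ("crossover", crossover),
                             ("rollover", rollover)]:
    print(name)
    for pin, (a, b) in enumerate(zip(end_a, end_b), start=1):
        print(f"  pin {pin}: {a:<14} <-> {b}")
```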
<urn:uuid:fe17777a-9308-481e-a7b1-f6bba58e74e8>
CC-MAIN-2022-40
http://www.cabling-design.com/helpdesk/answers/1269.shtml
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00320.warc.gz
en
0.924927
322
2.734375
3
According to a new poll conducted on behalf of the National Mining Association, more than 75% of Americans are concerned that new EPA regulations - specifically targeting coal power plants - will lead to higher electricity prices on the consumer end. The new regulations are a part of Obama's climate change campaign and, in short, are meant to curb carbon emissions from coal plants and force energy companies to find cleaner methods of production. If they are passed as planned, the regulations will likely force some of the nation's 600 coal-fired plants to shut down completely or, at the very least, force them to operate at a much higher cost because of the measures that will need to be taken to adhere to the new emission guidelines. According to the NMA, these regulations would push more than 20% of the current coal-derived electricity out of grid circulation by 2020, drastically decreasing America's largest source of power generation. "Americans are rightfully concerned about higher electricity prices. If EPA continues to push forward with unrealistic standards for coal-based power plants, consumers' fears will become locked-in for the foreseeable future," Hal Quinn, National Mining Association president and CEO, said. "The leap in electricity bills consumers saw this winter is as much the result of EPA's policies as it is the cold weather." Unsurprisingly, a large percentage of those concerned about the potential price hikes were elderly Americans and others living on a fixed income. This winter, with its polar vortices and seemingly never-ending stay, was one of the harshest in recent history, which also translated into being a very expensive winter in terms of energy. Many poll respondents reported that their lives were noticeably impacted by the high energy bills this winter, with almost one fifth stating that the higher electricity costs limited their ability to buy other necessities. Ultimately, the regulations and the people pushing them forward are well intentioned, even if it doesn't always feel like it from the energy industry perspective. Once you strip away the potential political fuel, the EPA and administration's goals are to preserve our environment and encourage the development of alternative energy sources by the only method that they feel works - force. However, forcing out this much coal energy production will undoubtedly have a drastic effect on the diversity of American energy sources and these widespread fears could become a reality. After a bitter winter full of energy troubles, 70% of respondents fear that removing coal power from the energy mix could lead to blackouts and brownouts in the hot summer months. Will these regulations mean "running out" of power the way many fear? It's too soon to know. But it does seem clear that prices will rise, just as in any market where demand begins to outweigh supply. It also seems clear that in the push for energy diversity, we aren't just pushing toward "clean" sources. All energy sources yet known come with some environmental or health risk, perhaps none more notorious than natural gas obtained via fracking. While fracking is too new to have the same public consensus on its environmental impact that the public generally has with coal, it's clear that fracking does pose some environmental risk. We're replacing one environmental problem with another, and this new one is less understood. 
While the American people are a complicated bunch, I'd be willing to bet that nine times out of ten, they'd pick the devil they know over the devil they don't. So let's not kill coal; let's rehabilitate it. Let's incentivize making coal cleaner not just at the end of the process, but throughout the process. Let's make it easier to learn how to make coal cleaner and make it financially feasible for companies to actually do it. Forcing such an abrupt change is very likely to provoke a large number of plants to shut down, and with the emissions also go the jobs and economic activity that branch out from any energy plant in a region. Shouldn't we be striving to keep any and all jobs? Let's put people to work researching better ways to solve the problem and then more to work actually solving it.
<urn:uuid:00e491f0-9085-4842-8cfd-a5009502a30b>
CC-MAIN-2022-40
https://www.mbtmag.com/global/blog/13209841/americans-agree-epa-regulations-go-too-far
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00320.warc.gz
en
0.973898
814
2.671875
3
As cyber security companies work to step up their game to prevent cyber attacks and data breaches, hackers also continue to adapt their strategies, seeking new and innovative ways to scam victims out of thousands or millions of dollars. One way they do this is by using spear phishing attacks. What is a Spear Phishing Attack? Spear phishing vs phishing — you may wonder what the difference is between different types of phishing. Both are examples of online attacks that are performed for the express purpose of acquiring confidential information or conning organizations out of money. However, there is a significant difference between the two — how generic vs. targeted they are. Unlike regular phishing, which aims to hook anyone willing to bite (think: Nigerian Prince), spear phishing attacks target specific individuals or organizations for a “long con.” TechTarget offers the following spear phishing attack definition: “Spear phishing is an email-spoofing attack that targets a specific organization or individual, seeking unauthorized access to sensitive information. Spear-phishing attempts are not typically initiated by random hackers, but are more likely to be conducted by perpetrators out for financial gain, trade secrets or military information.” Spear phishing attacks are far more successful than the untargeted efforts of generic phishing emails. According to a report from FireEye, “spear phishing emails had an open rate of 70 percent... Further, 50 percent of recipients who open spear phishing emails also click on enclosed links, which is 10 times the rate for mass mailings.” Why are targeted phishing attacks so successful? 1. Each Spear Phishing Email Looks Authentic Hackers spend a lot of time and effort planning their spear phishing attacks. They design their fake emails to look as accurate and authentic as possible to convince the intended victims that they are from a legitimate source. This means using imagery/graphics, design, language, and even email addresses that can pass as real without a thorough inspection. Because they don’t share a lot of the similarities of traditional phishing emails, these messages are often missed by spam filters and other email protections. 2. Spear Phishing Messages Target Each Intended Victim Spear phishing emails are highly personalized and use specific information to lure victims into believing they are legitimate. Sometimes, these messages are tailored to look like they are sent by a manager or even a high-level executive. They also can be customized to look like they come from a trusted vendor with whom your company conducts business. For example, a spear phisher posed as a legitimate Taiwanese electronics manufacturer, Quanta Computer. Over two years, the phisher conned two of the company’s major technology clients, Facebook and Google, out of more than $200 million combined for false invoices. 3. Spear Phishing Attacks Happen Over Time Rather than trying to accomplish everything at once, spear phishers are patient with their targeted phishing attacks. They often use multi-stage attacks that involve malware downloads and data exfiltration which can be set up over weeks or even months. According to CSO, spear phishing attacks can be broken down into three main steps: - Infiltration — This can be done by directing users to click on a malicious link that downloads and installs malware or leads them to a fraudulent website disguised as a real one that requests vital information. 
Either way, the phisher can use the information or access they gain to log in to the user’s account. - Reconnaissance — The phisher uses this opportunity to monitor and read emails to learn about the organization and identify key targets and opportunities. - Extract Value — Using the information and knowledge they gain over time, or even using the compromised email account itself (à la an account takeover, or ATO) the attacker can launch spear phishing attacks. 4. Spear Phishing Leverages Zero-Day Exploits When conducting spear phishing attacks, some hackers exploit zero-day vulnerabilities in browsers, desktop applications, and plug-ins. They use these methods to compromise the intended victims’ computer system to gain administrative access to the network and other resources, including personal and financial data. 5. Corporate Victims Often Lack the Right Tools Many companies are not as good as they could be about keeping their cybersecurity protections — email filters, firewalls, and network-level protections — up to date. This creates gaping holes in their cyber defenses that hackers and inside threats (such as unhappy former employees or contractors with a grudge) can walk through. This leaves businesses vulnerable to all types of threats, including spear phishing attacks. 6. Companies Lack or Don’t Enforce Computer Use Policies Computer Use or Acceptable Use policies should be things that every business has in place. However, that’s often not the case, and these rules are only effective when they are: - Kept up to date, - Followed by employees, and - Enforced by the company. Organizations that fail to educate employees about these policies or enforce them leave themselves vulnerable when their equipment is used for prohibited purposes. 7. Employees Are Uneducated/Ignorant of Phishing Risks Many employees are ignorant of the threat that a spear phishing attack poses to businesses. Every day, companies around the world trust the safety and security of their business and customers to employees who don’t know how to recognize a targeted phishing attack — or, if they do, may not pay attention and click on a bad email anyway. 8. Companies Lack Anti Phishing Platforms Designed for Spear Phishing According to a survey from The Ponemon Institute and Valimail, “Eighty percent of respondents are very concerned about the state of their companies’ ability to reduce email-based threats, but only 29 percent of respondents are taking significant steps to prevent phishing attacks and email impersonation.” Only 69% of the 650 surveyed IT and IT security experts report using anti-spam or anti phishing filters, with only 63% saying they use them to prevent impersonation attacks. However, many of these types of filters are ineffective for spear phishing attacks because they are created to identify generic phishing tactics. This is why companies need to invest in anti phishing platform that is designed to identify spear phishing. Why You Should Invest in Spear Phishing/Anti Phishing Services As you can see, there are many reasons to invest in a targeted anti phishing service. Another equally (if not more) important reason, however, is that phishing itself is a compliance issue for any company that falls victim to a spear phishing attack. In addition to costing them potentially millions of dollars in financial losses, corporations that don’t step up their internal controls to prevent phishing fraud can face additional costs in securities violations. 
According to a report from the Securities and Exchange Commission (SEC): “While the cyber-related threats posed to issuers’ assets are relatively new, the expectation that issuers will have sufficient internal accounting controls and that those controls will be reviewed and updated as circumstances warrant is not.” Clearedin is an anti-phishing service that protects users and organizations against these targeted spear phishing attacks. Our platform identifies spear phishing emails using an individualized Trust Graph of your organization’s chat and email communications platforms (Gmail, Slack, and Office 365) to catch these malicious emails before they hook your employees.
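Clearedin does not publish the internals of its Trust Graph, but one small, hypothetical building block of spoofing detection can be sketched as follows: flag messages whose display name matches a protected executive while the sending domain is not one of the organization's own. The names and domains below are invented for illustration.

```python
# Hypothetical lists; a real system would derive these from directory and DNS data.
PROTECTED_NAMES = {"jane doe", "john smith"}
TRUSTED_DOMAINS = {"example.com"}

def looks_like_impersonation(display_name, from_address):
    """Very naive check for executive display-name spoofing from an outside domain."""
    domain = from_address.rsplit("@", 1)[-1].lower()
    return display_name.strip().lower() in PROTECTED_NAMES and domain not in TRUSTED_DOMAINS

print(looks_like_impersonation("Jane Doe", "jane.doe@examp1e.com"))  # True: lookalike domain
print(looks_like_impersonation("Jane Doe", "jane.doe@example.com"))  # False: trusted domain
```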
<urn:uuid:67e3ca83-7196-4b09-a543-44a037bc535b>
CC-MAIN-2022-40
https://www.clearedin.com/blog/spear-phishing-attack-success
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00320.warc.gz
en
0.945967
1,544
2.984375
3
In today’s day and age, fields of inquiry are becoming more and more interdisciplinary. It’s no surprise that this principle extends to neuroscience and data science. In the past few years, the two fields have come to inform one another. Recent advances in neuroscience, including international research into brain mapping and the decoding of brain signals, can be tied to neuroscientists’ increasing use of data science-based methodology to understand their field better. Here is how machine learning helps us better understand how the brain works and what this kind of knowledge may lead to in the future. For the past decade, neuroscientists have taken on the ambitious project of mapping neural connections in the brain. This project is called The Connectome and aims to map the brain to understand the living brain’s functionalities better. A recent study conducted by researchers at the Okinawa Institute of Science and Technology (OIST) Graduate University using machine learning techniques has led to a breakthrough in the Connectome Project. The method: “Magnetic Resonance Imaging (MRI)-based fiber tracking.” This type of tracking uses the diffusion of water molecules in the brain to map neural connections. The diffusion of water molecules creates a trail, which allows neuroscientists to trace connections in the brain. Previously, researchers tracked nerve cell fibers in animal experiments using marmosets. This method involves injecting a fluorescent tracer into multiple brain locations to create an image of where various nerve fibers are located. However, this was a practice that could not ethically be used on humans, as it involved complete dissection of the brain–and hundreds of brain slices, at that. Neuroscientists speculate that diffusion MRI-based fiber tracking can be used to map the whole human brain and pinpoint the differences between a healthy and diseased brain. This may lead to a better understanding of how to treat disorders such as Parkinson’s and Alzheimer’s. In addition to the ethical concerns of tracking nerve cell fibers, there were issues of convenience and efficiency. The results of this now outdated method were not wholly accurate and could not necessarily detect neurons that extend to the farthest corners of the brain. So, researchers had to set specific parameters. Instead of manually setting parameter combinations, researchers can now use machine intelligence to work for them. Using the fluorescent tracer and MRI data from ten different marmoset brains, the OIST researchers were able to test their algorithms against machine-learned algorithms. The researchers found that the machine-generated algorithm had the optimized parameters for their data sets, and they were able to generate a more accurate connectome in the marmoset brain. The reason for this accuracy is that machine learning can recognize patterns hidden in complex data, which is neuroscientists’ primary concern. Even when they can gather data sets, interpreting the results of studies can be difficult, almost impossible, without algorithms such as the ones derived in data science. Using the algorithms described above, researchers can construct hierarchical models. A hierarchical model in data science is a model that builds on itself in increasingly sophisticated layers. Though they belong to data science, hierarchical models apply to neuroscience because they conceptualize the hierarchy of brain cognition. 
This means that they can be implemented in the study of the human cortex and may be observable through functional Magnetic Resonance Imaging (fMRI). One type of hierarchical model that researchers have hypothesized to be useful is the Gaussian Filter Model, a non-uniform low-pass filter used to reduce noise in imaging. It essentially creates a blur that eliminates unnecessary detail in a scanned image. By reducing this noise, neuroscientists can see the picture and identify only the essential and relevant data. Neuroscientists can decode brain signals with a technology called the Brain-Computer Interface, or BCI. A BCI is a direct communication pathway between the brain and an external device such as a wheelchair. Neuroscientists using BCIs face three main challenges: lengthy calibration, low classification accuracy, and poor generalization. In the past few years, deep learning techniques have been combatting these. Calibration is a challenge because of the signal-to-noise ratio (SNR) discussed in the Gaussian Filter Model. Often, there is a level of noise that inevitably obscures the signals researchers are testing for. This leads to inaccuracies and loss of important information when there is an attempt to filter out the noise. Noise is any brain signal corrupted by either spontaneous features of the person being studied (blinks, concentration level, etc.) or the environment (temperature, environmental noise, etc.). Researchers can use deep learning to distinguish genuine features from this spontaneous and involuntary noise. As for poor generalization, a machine is only as useful as the human knowledge behind it. You could decode signals all day, but the data is useless if you can’t draw conclusions from the decoded data. On the other hand, deep learning can detect patterns and draw conclusions that would be nigh impossible to find manually. Though it isn’t perfect, deep learning allows neuroscientists to reach conclusions they likely could not come to otherwise. Deep learning is the most accurate method researchers currently have for studying brain signals and has been applied to signals such as the electroencephalogram (EEG), event-related potentials (ERPs), and fMRI. To summarize, deep learning helps BCI research on all three fronts: it reduces the calibration burden, improves classification accuracy, and generalizes better across users and sessions. The brain is often referred to as the “final frontier” for a good reason, and we are only now just beginning to decode its mysteries. The brain may be the key to understanding our environment in the most sophisticated way; the key to unlocking that information? Data science and deep learning.
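To make the Gaussian smoothing idea mentioned above concrete, here is a minimal, self-contained Python sketch (using NumPy and SciPy) that applies a Gaussian low-pass filter to a noisy synthetic image standing in for one slice of an imaging volume. It illustrates the general technique only, not the specific model used in any of the studies mentioned.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic 64x64 "activation map": a smooth blob plus random noise.
y, x = np.mgrid[0:64, 0:64]
signal = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)
noisy = signal + 0.5 * rng.standard_normal((64, 64))

# Gaussian (low-pass) smoothing suppresses high-frequency noise while keeping
# the broad spatial structure of the underlying signal.
smoothed = gaussian_filter(noisy, sigma=2.0)

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

print(f"correlation with true signal, raw:      {corr(noisy, signal):.2f}")
print(f"correlation with true signal, smoothed: {corr(smoothed, signal):.2f}")
```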
<urn:uuid:e88fbde7-fb70-49e1-8cea-f2fcbf99a80f>
CC-MAIN-2022-40
https://plat.ai/blog/how-neuroscientists-are-using-machine-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00320.warc.gz
en
0.935971
1,163
3.421875
3
IBM pledges to help direct the equivalent of up to $200 million for up to five climate-related projects judged to offer the greatest potential impact, and will then broadly share the experiments’ results. IBM is inviting members of the global science community to propose research projects that could benefit from World Community Grid, an IBM Citizenship initiative that provides researchers with enormous amounts of free computing power to conduct large-scale environmental and health-related investigations. This resource is powered by the millions of devices of more than 730,000 worldwide volunteers who sign up to support scientific research. World Community Grid volunteers download an app to their computers and Android devices, and, whenever they are otherwise not in full use, the computers automatically perform virtual experiments, with the aim of dramatically accelerating foundational scientific research. Scientists who submit proposals for climate-related experiments may also apply to receive free IBM cloud storage resources, so that they can work with their experiment data in a secure, responsive, and convenient manner. They may also apply to receive free access to data about historical, current, and forecasted meteorological conditions around the globe from The Weather Company, an IBM Business. The in-kind, donated resources offered by IBM can support many potential areas of inquiry. These might include gauging the impacts on watersheds and fresh water resources; tracking and predicting human or animal migration patterns based on changing weather conditions; analyzing weather that affects pollution or clean-up efforts; analyzing and improving crop or livestock resilience and yields in regions with extreme weather conditions, and more. IBM’s World Community Grid has previously hosted numerous environment-related projects led by scientists around the world. For example, Harvard University identified 36,000 carbon-based compounds with the potential to perform at approximately double the efficiency of most organic solar cells currently in production. “World Community Grid enabled us to find new possibilities for solar cells on a timescale that matters to humanity–in other words, in a few years instead of decades,” said Dr. Alán Aspuru-Guzik, Professor of Chemistry and Chemical Biology, Harvard University. “Usually, computational chemists who try to do this type of thing are studying 10 or 20 molecules at a time. World Community Grid allowed us to screen about 25,000 molecules every day. We had to start thinking in terms of millions of molecules and formulate new ideas based on this massive scale.” Other environmental initiatives hosted on IBM’s World Community Grid have included a project led by Tsinghua University in China, which uncovered a phenomenon that could lead to more efficient water filtration using nanotechnology. Scientists have also used IBM’s World Community Grid to better understand crop resiliency to extreme weather, and to model the impact of water management practices on sensitive watersheds. IBM has a long history of environmental leadership. Just last week, IBM announced that it achieved two major commitments four years ahead of schedule in its effort to help combat climate change. Earlier this month, IBM also reaffirmed its support for the Paris Climate Agreement and signed on to the #WeAreStillIn pledge, expressing its commitment to help continue leading the global fight against climate change. 
“Computational research is a powerful tool for advancing research on climate change and related environmental challenges,” said Jennifer Ryan Crozier, Vice President of IBM Corporate Citizenship and President of the IBM International Foundation. “IBM is proud to help advance essential efforts to combat climate change by providing scientists with free access to massive computing power, cloud resources, and weather data.” IBM will select up to five projects to receive support. Proposals will be evaluated for scientific merit, potential to contribute to the global community’s understanding of specific climate and environmental challenges or development of effective strategies to mitigate them, and the capacity of the research team to manage a sustained research project. Resources provided are valued at up to $40 million per project, for a total of approximately USD $200 million. IBM will accept applications on a rolling basis, with a first-round deadline of September 15, 2017. Scientists from around the world are encouraged to apply. Up to five winning research teams will be announced beginning in Fall 2017. Since its founding in 2004, World Community Grid has supported 28 research projects on cancer, HIV/AIDS, Zika, clean water, renewable energy and other humanitarian challenges. To date, World Community Grid, hosted in IBM’s Cloud, has connected researchers to one half-billion U.S. dollars’ worth of free supercomputing power. More than 730,000 individuals and 430 institutions from 80 countries have donated more than one million years of computing time from more than three-million desktops, laptops and Android devices. Volunteer participation has helped researchers to identify potential treatments for childhood cancer, more efficient solar cells and more efficient water filtration.
<urn:uuid:0705cf7f-bbfb-4edc-92e8-69223857427c>
CC-MAIN-2022-40
https://www.e-channelnews.com/ibm-and-citizen-scientists-poised-to-contribute-equivalent-of-up-to-200-million-for-climate-environmental-research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00320.warc.gz
en
0.937184
994
3.171875
3
Study shows decision-making in real time. It takes just a few seconds to choose a cookie over an apple and wreck your diet for the day. But what is happening during those few seconds while you make the decision? In a new study, researchers watched in real time as people’s hands revealed the struggle they were under to choose the long-term goal over short-term temptation. The work represents a new approach to studying self-control. In one key experiment, participants viewed pictures of a healthy and an unhealthy food choice on opposite sides of the top of a computer screen and moved a cursor from the center bottom to select one of the foods. People who moved the cursor closer to the unhealthy treat (even when they ultimately made the healthy choice) later showed less self-control than did those who made a more direct path to the healthy snack. “Our hand movements reveal the process of exercising self-control,” said Paul Stillman, co-author of the study and postdoctoral researcher in psychology at The Ohio State University. “You can see the struggle as it happens. For those with low self-control, the temptation is actually drawing their hand closer to the less-healthy choice.” The results may shed light on a scholarly debate about what’s happening in the brain when humans harness willpower. Stillman conducted the study with Melissa Ferguson, professor of psychology, and Danila Medvedev, a former undergraduate student, both from Cornell University. Their research will appear in the journal Psychological Science. The study involved several experiments. In one, 81 college students made 100 decisions involving healthy versus unhealthy food choices. In each trial, they clicked a “Start” button at the bottom of the screen. As soon as they did, two images appeared in the upper-left and upper-right corners of the screen, one a healthy food (such as Brussels sprouts) and the other an unhealthy one (such as a brownie). They were told to choose as quickly as possible which of the two foods would most help them meet their health and fitness goals. So there was a “correct” answer, even if they were tempted by a less healthy treat. Before the experiment began, the participants were told that after they finished they would be given one of the foods they chose in the experiment. At the end, however, they could freely choose whether they wanted an apple or a candy bar. The results showed that those who chose the candy bar at the end of the experiment – those with lower self-control – had tended to veer closer to the unhealthy foods on the screen. “The more they were pulled toward the temptation on the computer screen, the more they actually chose the temptations and failed at self-control,” Stillman said. But for those with higher levels of self-control, the path to the healthy food was more direct, indicating that they experienced less conflict. In two other studies, similar results occurred in a completely different scenario, in which college students could decide whether they would rather accept $25 today or $45 in 180 days. Those with lower levels of self-control had mouse trajectories that were clearly different from those with higher self-control, suggesting differences in how they were dealing with the decisions. “This mouse-tracking metric could be a powerful new tool to investigate real-time conflict when people have to make decisions related to self-control,” he said. The findings also offer new evidence in a debate about how decision-making in self-control situations unfolds, Stillman said. 
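The article does not spell out how the mouse-tracking metric is computed; a common and simple way to quantify how strongly a cursor path is "pulled" toward the alternative is the maximum perpendicular deviation from the straight start-to-end line. The short sketch below is a generic illustration of that idea, not the authors' exact analysis.

```python
import numpy as np

def max_deviation(traj):
    """Largest perpendicular distance of a 2-D trajectory (N x 2 array) from the
    straight line joining its first and last points; larger means more curved."""
    traj = np.asarray(traj, dtype=float)
    start, end = traj[0], traj[-1]
    line = end - start
    length = np.linalg.norm(line)
    if length == 0:
        return 0.0
    rel = traj - start
    # 2-D cross product magnitude divided by the line length gives point-to-line distance.
    dists = np.abs(rel[:, 0] * line[1] - rel[:, 1] * line[0]) / length
    return float(dists.max())

t = np.linspace(0, 1, 50)
direct = np.column_stack([t, t])                              # straight path
curved = np.column_stack([t + 0.3 * np.sin(np.pi * t), t])    # path bowed toward one side

print(f"direct path deviation: {max_deviation(direct):.3f}")  # ~0.0
print(f"curved path deviation: {max_deviation(curved):.3f}")  # ~0.2
```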
When the researchers mapped the trajectories people took with the cursor in the first experiment, they observed that most participants did not automatically start directly toward the unhealthy treat before abruptly switching course back to the healthy food. Rather, the trajectories appear curved, as if both the temptation and goal were competing from the beginning. Why is that important? Some researchers have argued that there are two systems in our brain that are involved in a self-control decision: one that’s impulsive and a second that overcomes the impulses to exert willpower. But if that were the case, the trajectories seen in this study should look different than they do, Stillman said. If dual systems underlie these choices, there should be a relatively straight line toward the unhealthy food while people are under the influence of the impulsive first system and then an abrupt change in direction toward the healthy food as the system in charge of self-control kicks in. “That’s not what we found,” Stillman said. “Our results suggest a more dynamical process in which the healthy and unhealthy choices are competing from the very beginning in our brains and there isn’t an abrupt change in thinking. That’s why we get these curved trajectories.” Stillman said these results should help lead to a more accurate view of how our cognitive processes unfold to allow us to resist temptation. Source: Paul Stillman – Ohio State University Image Source: NeuroscienceNews.com image is credited to Jeff Grabmeier. Original Research: The study will appear in Psychological Science.
<urn:uuid:b784f3a8-459b-4b6d-99ae-39466d27c3e8>
CC-MAIN-2022-40
https://debuglies.com/2017/07/08/your-hands-may-reveal-the-struggle-to-maintain-self-control/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00320.warc.gz
en
0.967955
1,071
3.234375
3
Security is one of the most crucial parts of managing a data center. Our world revolves around connectivity, data, and digital telecommunications. This is the reason why data center security should be important to everyone. There have been many innovations when it comes to data center security, but one of the most attention-grabbing modernizations has been the announcement of data center security robots. Will security robots be the standard for data centers in the future? As businesses grow, the need to store their important data in a separate facility becomes more essential. Companies that are looking for an off-site data center trust that all of their critical information will be protected from any outside threats, data breaches, or cyberattacks. Data centers are some of the most secure operations for many reasons. Data centers have layered security measures. Any potential threats would need to go through several layers of security before successfully attaining any important information. Even if one layer of security is breached, the other layers would likely prevent a further breach. Video surveillance also plays a large role in data center security. Most data centers will have closed-circuit television cameras or CCTVs that keep an eye on every aspect of the data center. These CCTVs are normally digitally backed up and archived offsite to protect against corruption. Data center operations usually have physical security staff on-premises to respond to potential threats. Security guards routinely patrol the premises to ensure that all operations are running smoothly and are secure, although this has been harder for data center operations during the current pandemic. Data centers are quite the operation. Background checks are completed on everyone working within the data center, from the security personnel to technicians. Data centers need to know who is coming in and out, and all current employees need multi-factor authentication to move from one area to another, presenting two different forms of identification each time. It may seem as though data center security uses old standards and procedures, but it's more highly technical than it seems. Data centers also use some of the most state-of-the-art technology currently available to secure their operations. Biometric technology has been one of the security innovations that data centers have incorporated in their operations. Biometrics are biological measurements or physical characteristics that can identify specific individuals. Whether you know it or not, you might already be using biometric technology in your everyday life. Some examples of biometrics are facial recognition, fingerprint scanning, and retina scanning. All of these biometrics can be used to identify a person and determine whether they are allowed access to certain parts of a data center and the data itself. Cybersecurity threats are a very real concern for data center operators and managers. Analyzing and preventing cyber-attacks requires specific expertise, and it is also very time-consuming. The best way to prevent cyber-attacks is through anticipation and early detection. This is where the power of artificial intelligence helps cybersecurity. The analytic capability of artificial intelligence and machine learning can ascertain the difference between normal network behavior and something unusual. These abnormalities in the system usually stem from certain security threats. Artificial intelligence has played an important role in securing many data center operations. 
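As a purely illustrative sketch of the anomaly-detection idea described above (not any particular vendor's system), an off-the-shelf isolation forest can be trained on "normal" telemetry and then flag readings that deviate from it. The metrics and numbers below are invented.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" telemetry: [requests/sec, avg response time (ms), failed logins/min]
normal = np.column_stack([
    rng.normal(200, 20, 500),
    rng.normal(50, 5, 500),
    rng.poisson(1, 500),
]).astype(float)

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[450.0, 120.0, 40.0]])   # traffic spike plus a burst of failed logins
typical = np.array([[195.0, 49.0, 1.0]])

for label, sample in [("suspicious", suspicious), ("typical", typical)]:
    verdict = "anomaly" if model.predict(sample)[0] == -1 else "normal"
    print(f"{label} sample -> {verdict}")
```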
Some of the best technologies are going into keeping the world's data secure. The use of biometric technology, artificial intelligence, and machine learning shows how critical security is for data centers. These technologies already make data centers seem futuristic—so what does the future of data center security look like? The data center company Switch is looking to deploy an autonomous robot to help with security efforts. Named SENTRY, this robot is a cutting-edge, fully autonomous security system that can navigate on its own. It can also remotely track, record, and assess the environment around it in real time. This data center security robot is a 250lb machine with 360-degree cameras and heat sensors that scan visitors' temperatures and check for Covid-19 symptoms. It is built to climb curbs and stairs, which allows it to monitor both the inside and the outside of the data center. It can also recognize a car's license plate to know who may be in the building. Although the robot is fully autonomous, it can also be controlled by human operators if needed. This data center security robot combines a lot of the best technology already being used by data center operations into an autonomous system. The use of biometrics, artificial intelligence, and autonomous robots shows that data centers are not only looking to use the best technologies in their operations, but are also looking to be more dependable and efficient. Data centers are also looking into the idea of an unmanned, fully automated operation. Data center automation is rapidly becoming more realistic. The amount of data the world is creating and consuming is astounding, and the energy that is used to create and store this data is also astonishing. An automated data center using the power of artificial intelligence and machine learning will improve how data centers use energy. It will also help with security. Another example of why data centers are trending towards automation is another AI-powered robot, Dac. This artificially intelligent robot can manage daily data center operations: it can recognize and repair problems, fix water leaks in the cooling system, repair the electrical system, and more. The data center industry is on the brink of a fully automated operation.
<urn:uuid:45879277-4b5e-49c6-b27e-a6f269e2ae36>
CC-MAIN-2022-40
https://www.colocationamerica.com/blog/robots-of-the-future
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00520.warc.gz
en
0.947326
1,176
2.65625
3
Anti-corrosion paint mainly refers to paint that contains corrosion-resistant pigments such as zinc chromate, lead chromate, or red lead, with linseed oil used as a binder. Anti-corrosion paint protects against corrosion by reducing the direct contact of air and water with the metal. The principle of anti-corrosion paint is to protect metal surfaces by acting as a barrier that blocks contact with chemical compounds or corrosive materials. Anti-corrosion paints protect industrial equipment against degradation due to exposure, oxidation, salt spray, and moisture from industrial chemicals. Anti-corrosive paints are among the most reliable corrosion prevention methods used by industry, as they offer continuous protective layers that shield equipment from the harsh effects of corrosion and abrasion. Anti-corrosion paints are mainly used to protect equipment in the marine, oil & gas, industrial, infrastructure, and power generation end-user industries. Desired characteristics of anti-corrosion paint include rust prevention, abrasion resistance, impact resistance, and water resistance. Anti-corrosion paint is inexpensive and provides long-lasting corrosion protection. Infrastructure upgrades, increasing losses owing to corrosion, and growing end-user industries such as infrastructure, oil & gas, and power generation are the prime driving factors of the market. Another factor driving the market is rising demand for anti-corrosion paint in the automotive industry, where the thickness and composition of protective layers must be correct to ensure effective rust protection for bodywork and chassis. However, environmental regulation might constrain the growth of anti-corrosion paint, as some raw materials used during its production are hazardous to the environment and human health. Developing regions and demand for high-efficiency anti-corrosion paint are projected to offer growth opportunities for manufacturers during the forecast period (2018-2025). Among all anti-corrosion paint types, powder-based paints held a major market share. Powder-based paints have a wide range of applications, such as automobiles, pipelines, domestic appliances, architecture, IT and telecoms. Powder-based anti-corrosion paint is anticipated to grow at the fastest rate, owing to its zero volatile organic compound (VOC) emissions and eco-friendly nature. In addition, it offers a variety of benefits such as resistance to chalking, scratch resistance, superior durability, and gloss retention, which make it suitable for use in the automotive, construction, and oil & gas industries. A wide range of construction companies are adopting powder coating to provide long-term exterior finishes for outdoor venues and public works projects. However, solvent-based anti-corrosion paints captured the maximum revenue share of the overall market and are expected to grow at a moderate rate during the forecast period. Demand for solvent-based paints is rising because of their superior block resistance, shorter drying time, and wide range of applications.
The market research study on “Anti-Corrosion Paints Market (By Type: Water-Based, Solvent-Based, Powder-Based; By Application: Marine, Oil & Gas, Industrial, Infrastructure, Power Generation, Others) - Global Industry Analysis, Market Size, Opportunities and Forecast, 2018 - 2025” offers detailed insights into the global anti-corrosion paints market and its different market segments. Market dynamics, including drivers, restraints, and opportunities along with their impact, are provided in the report. The report provides insights on the global anti-corrosion paints market, its types, applications, and major geographic regions. The report covers basic development policies and layouts of technology development processes. It also covers global anti-corrosion paints market size and volume, segments the market by type, application, and geography, and provides information on companies operating in the market. The anti-corrosion paints market analysis is provided for major regional markets including North America, Europe, Asia-Pacific, Central & South America, and Middle East & Africa, followed by major countries. For each region, the market size and volume for different segments are covered under the scope of the report. In terms of demand, Asia-Pacific is expected to dominate the global high-performance anti-corrosion paint market in the near future, due to growth in end-user industries such as oil & gas, marine, construction, and power generation. In Europe, demand for high-performance anti-corrosion paint is estimated to be steady due to stringent governmental regulations, coupled with a saturated end-user market in the North America region. Over the forecast period, the stable growth of high-performance anti-corrosion paint is likely to be supported by the power generation industry. Latin America is likely to witness slow growth, while Middle East & Africa is anticipated to be the fastest-growing market for high-performance anti-corrosion paints during the forecast period. The players profiled in the report include AkzoNobel, PPG, Sherwin-Williams, Henkel, Jotun A/S, RPM International Inc., Nippon Paint Holdings Co., Ltd., BASF SE, Chugoku Marine Paints, Ltd., Hempel, 3M, Axalta, Sika, and KCC Corporation, among others. The global anti-corrosion paints market is segmented by type, by application, and by geography.
<urn:uuid:a32bb9ea-478b-4e27-a664-7fd6547cb781>
CC-MAIN-2022-40
https://www.acumenresearchandconsulting.com/anti-corrosion-paints-market
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00520.warc.gz
en
0.943131
1,166
2.765625
3
What is DNS? DNS, or the Domain Name System, is a fundamental component of modern-day networking. DNS can be likened to a phone book containing information about systems on the internet, allowing those systems to find each other in order to communicate. Essentially, DNS translates human-readable names, such as website addresses, into IP addresses. Although DNS is often associated with websites specifically, it underpins almost every type of network request. What is the vulnerability of DNS? Due to its age, widespread use, simplicity and lack of authentication, DNS has been a target for attackers over recent years. Attacks such as DNS reconnaissance could allow an attacker to query the DNS server in order to extract information from the victim network, such as live hosts or the hostnames of high-value targets such as email or file shares. Additional attack vectors include DNS cache poisoning, also known as DNS spoofing, and Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks. With DNS cache poisoning, an attacker will attempt to enter false information into a DNS cache to redirect victims to an attacker-controlled website. If you like this blog post, find more content in our Glossary.
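As a small illustration of the phone-book analogy, the sketch below performs a forward lookup (name to addresses) and a reverse lookup (address to name) using only Python's standard library. The hostname is an arbitrary placeholder, not one taken from the article.

```python
# Minimal DNS illustration using only the standard library.
import socket

hostname = "example.com"  # arbitrary example host

# Forward lookup: translate a human-readable name into IP addresses
addresses = sorted({info[4][0] for info in socket.getaddrinfo(hostname, 80)})
print(f"{hostname} resolves to: {addresses}")

# Reverse lookup: ask which name is registered for a given address
ip = addresses[0]
try:
    name, _, _ = socket.gethostbyaddr(ip)
    print(f"{ip} maps back to: {name}")
except socket.herror:
    print(f"No reverse (PTR) record found for {ip}")
```

Because these lookups are unauthenticated by default, a poisoned cache or spoofed response would simply hand this code a different address, which is exactly the weakness the attacks above exploit.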
<urn:uuid:d5717e6a-eb69-4097-9109-8acc65524dea>
CC-MAIN-2022-40
https://www.covertswarm.com/post/what-is-dns
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00520.warc.gz
en
0.941624
241
3.484375
3
As smart buildings are getting increasingly intelligent, strict requirements for cost-efficiency and environmental sustainability highlight the need to make the right hardware choices for your building automation system. Fueled by rapid technological advancement and the rise of the Internet of Things (IoT), buildings are getting continuously and exponentially smarter. Whereas early Building Automation Systems (BAS) mainly served to control basic functions such as lighting, heat, ventilation, and air conditioning, today’s smart buildings play an integral role in: - Providing a more productive and comfortable work environment - Reducing and regulating energy consumption (and costs) - Limiting the environmental impact - Minimizing operational and maintenance costs - Improving overall cost-efficiency - Increasing safety and security - Collecting vast amounts of precise and actionable data Simply put, a modern building automation system uses advanced technology to collect and share information about the many important processes within the building in order to optimize the performance of the building itself and of its tenants. Connectivity is key The purpose of the modern BAS is to provide complete autonomous control of a building or facility. In particular, the automated system monitors and controls: - Climate and energy consumption - Safety and security - System performance and cybersecurity To do so, the BAS relies on a comprehensive network of sensors, controllers (computers) and output devices that collect and share information to and through a central hub or workstation. Generally, this network consists of several smaller sub-systems, each responsible for handling a specific task or group of tasks related to individual processes. Connectivity is key for building automation. All systems integrated into the building’s network infrastructure must communicate effectively to provide quick, reliable and safe processing of data. Gateways and communication protocols To ensure connectivity and efficient communication between systems, your hardware must satisfy certain criteria. The BAS is only as intelligent as its “dumbest” sensor, so every single device needs to be secure, connectable and ready to interact with other devices in the system. Connectivity is enabled through gateways. In building automation, a gateway is a device that transmits data between two or more data sources using communication protocols specific to each of them. Smart buildings may use a wide range of protocols - CAN, DeviceNet, PROFIBUS, BACnet, LonTalk, Ethernet/IP, Modbus TCP, POWERLINK, CC-Link, EtherCAT, SERCOS III, MB/RTU, RS422, RS485, MB ASCII RS232, FOUNDATION fieldbus, HART, C-Bus, Z-Wave, and Zigbee, to mention a few. Thanks to the IoT and the never-ending development of new and smarter devices, it is necessary to implement a solution that can enable communication across all protocols, ensuring interoperability. Hardware components in building automation There are three main types of hardware components in a building automation system: - Sensors: Devices that measure important values (e.g. temperature, humidity, and occupancy) and monitor and register events (e.g.
abnormal activity, security breaches, and fire outbreaks) - Controllers: Specialized computers that process collected data and initiate the appropriate response or action - Output devices: Relays, actuators and similar devices which react to and carry out commands issued by the controllers These devices communicate through pre-defined communication protocols and may be accessed and interacted with by the user through dashboards and user interfaces at the workstations. Workstations can be as simple as a small panel PC with a comprehensive HMI that communicates with the database through a gateway to extract and present the accumulated data collected by the sensors. As mentioned above, the BAS is divided into several smaller sub-systems, each assigned to its specific task. Security sensors may be connected to one controller, heating and air conditioning to another, and lighting and shutters to a third. These controllers are subsequently connected to the main hub via a secure web server. Network design and the number of sub-systems vary depending on building size, the complexity of tasks, the amount of data to be processed, and so on. Balancing costs and quality Identifying the right hardware for your automation system may seem like a challenging task, as each link in the value chain is highly cost-sensitive and all costs must be kept to a minimum. Additionally, energy savings and resource conservation have become top priorities when constructing new buildings, as stricter government guidelines and increasing public awareness highlight the need for sustainability. Nevertheless, certain hardware quality requirements must be met, and in order to identify the optimal solution for your needs, you must find the perfect balance between costs and quality. Edge, ARM and advanced gateways Cost-sensitive environments require cost-efficient solutions, and more often than not the best place to start is to consider your controller hardware needs. The key is to identify the hardware options with the appropriate input/output capabilities and processing power relative to the size and complexity of the task at hand. Naturally, a controller monitoring solar panels on the roof does not need the same processing power as a system tasked with controlling security, CCTV and alarms. Consider these options: - Edge computing: Edge computing has become increasingly popular in building automation. Edge computers may be installed inside (or in the immediate vicinity of) the sensor itself, reducing the volume of data traffic, lowering latency and reducing transmission costs. - ARM-based computers: In environments where Edge computing is not a viable option, or you only need a limited amount of processing power, your most cost-efficient option may be compact ARM-based computers. ARM-based computers are a license-free solution based on Reduced Instruction Set Computing (RISC) architectures and may run Android or Linux operating systems. These computers are designed to perform smaller tasks at higher speeds and have significantly lower operational costs than standard industrial computers. Gateways have input/output modules with multi-protocol implementation (including open source, proprietary and wireless), allowing for secure communication and interoperability between each component of the BAS network. Gateways bridge proprietary protocols to standards and allow communication between them.
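To make the protocol and gateway discussion more concrete, here is a minimal sketch of polling a Modbus TCP device, one of the protocols listed above. The device address, unit ID and register map are placeholders, and a real deployment would normally rely on a maintained protocol library or a hardware gateway rather than hand-built frames; the frame layout follows the published Modbus TCP specification.

```python
# Sketch: reading holding registers from a Modbus TCP device (e.g. an energy meter)
# using only raw sockets. Host, unit ID and register addresses are placeholders.
import socket
import struct

HOST, PORT = "192.168.1.50", 502   # assumed device address and standard Modbus port
UNIT_ID = 1                        # assumed Modbus unit/slave id
START_REGISTER, COUNT = 0, 2       # assumed register map

def read_holding_registers(host, port, unit_id, start, count):
    # PDU: function 0x03 (read holding registers), start address, register count
    pdu = struct.pack(">BHH", 0x03, start, count)
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=3) as sock:
        sock.sendall(mbap + pdu)
        response = sock.recv(256)
    # Skip the 7-byte MBAP header and the function code, read the byte count,
    # then unpack the 16-bit register values that follow
    byte_count = response[8]
    return struct.unpack(">" + "H" * (byte_count // 2), response[9:9 + byte_count])

print(read_holding_registers(HOST, PORT, UNIT_ID, START_REGISTER, COUNT))
```

In practice a gateway performs this kind of translation continuously, exposing the raw register values to the rest of the BAS over whatever protocol the central hub speaks.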
Choosing the right controller and gateway for each individual sub-system within the BAS network is one of the most effective ways to ensure a cost-effective smart building. Hatteland Technology understands your needs Hatteland Technology is one of the Nordics’ leading suppliers of hardware for building automation systems. Our knowledgeable consultants can: - Assist you in evaluating and identifying potential risks and weaknesses in your existing system - Provide hardware adaptations, converters and a range of other components to ensure connectivity with the infrastructure for all capable devices - Help you assemble a complete and cost-efficient hardware solution for your BAS communications infrastructure Unlike many others, we do not exclusively offer proprietary solutions from a single manufacturer. Instead, through our strong partnerships with different manufacturers, we aim to satisfy the needs of system integrators wishing to take advantage of the best solutions available.
<urn:uuid:7d158348-4395-4a94-93cd-819632fe7a66>
CC-MAIN-2022-40
https://www.hattelandtechnology.com/blog/understanding-the-hardware-needs-of-smart-buildings
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00520.warc.gz
en
0.908527
1,452
2.640625
3
During the past decade most enterprises started and advanced their digital transformation. It was the decade of technology acceleration and of the digital transformation of industrial enterprises. Nowadays, companies are prioritizing their sustainable development and their adaptation to climate change. In this direction, they leverage digital technologies to improve their sustainability and environmental performance. Hence, the current decade is characterized by the twin transformation of industrial enterprises, which includes digital and green aspects. In most cases, digital technologies are used to improve the economic performance and the eco-friendliness of business processes. In this context, enterprises are seeking ways to improve the environmental performance of their IT operations and integrate Green IT initiatives into their business processes. Green IT projects span hardware operations, such as energy-efficient data centers, and software operations, such as green AI (Artificial Intelligence) libraries and software modules with optimized Input/Output cycles. Green IT initiatives and related sustainability solutions are usually part of a wider strategy for optimizing enterprise sustainability. This strategy includes IT and non-IT green practices, which are continually monitored based on tangible KPIs (Key Performance Indicators) such as carbon emissions indicators. Green IT projects and other sustainability initiatives are driven by various factors. Many enterprises face tangible sustainability challenges, which put them under pressure to accelerate their green transformation. For instance, several manufacturers are striving to build a competitive advantage around the development of eco-friendly products, as there are many cases where consumers prefer products that are manufactured based on sustainability best practices. However, there is also a significant political push for improving enterprise sustainability, reflected in a range of policy measures. In their quest for exceptional environmental performance, modern enterprises take a holistic, integrated approach to pursuing ambitious sustainability targets. For example, they specify sustainability strategies that comprise dozens of sustainability activities, ranging from the use of recycling bins and incentives for economizing on water consumption to strategies for energy savings (e.g., smart lighting systems) and the provision of green incentives to their employees. As part of their holistic approach to sustainability optimization, modern enterprises strive to minimize the carbon emissions of their IT operations, and in this direction they employ a variety of Green IT initiatives. Alongside core Green IT practices, companies employ green technologies that enable Green IT operations. For instance, LED lighting is used in computer offices. LED lights provide substantial sustainability gains, as they obviate the need for incandescent lights like conventional light bulbs. Likewise, they also use programmable thermostats, which save money and energy based on scheduling and automated adjustment of the ambient temperature. Recently, there has also been a surge of interest in electric vehicles, which increasingly comprise computers and connected devices. Similarly, Green IT deployments are used in conjunction with Industrial Internet of Things (IIoT) deployments that reduce waste and improve the sustainability of production operations in settings like manufacturing shopfloors, energy plants and oil refineries.
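As a simple illustration of the carbon-emissions KPIs mentioned above, the short sketch below turns IT energy consumption into an annual emissions figure. All numbers, including the PUE and grid emission factor, are made-up placeholders; real values would come from the facility and the local electricity provider.

```python
# Illustrative sketch: turning IT energy use into a carbon-emissions KPI.
SERVER_COUNT = 200
AVG_POWER_W = 350             # assumed average draw per server, in watts
PUE = 1.6                     # assumed power usage effectiveness of the facility
GRID_FACTOR_KG_PER_KWH = 0.4  # assumed grid emission factor (kg CO2e per kWh)
HOURS_PER_YEAR = 24 * 365

it_energy_kwh = SERVER_COUNT * AVG_POWER_W / 1000 * HOURS_PER_YEAR
facility_energy_kwh = it_energy_kwh * PUE
emissions_tonnes = facility_energy_kwh * GRID_FACTOR_KG_PER_KWH / 1000

print(f"IT load:        {it_energy_kwh:,.0f} kWh/year")
print(f"Facility total: {facility_energy_kwh:,.0f} kWh/year")
print(f"Emissions KPI:  {emissions_tonnes:,.1f} t CO2e/year")
```

Tracking a figure like this over time is what makes initiatives such as server consolidation or more efficient cooling measurable rather than merely aspirational.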
Overall, modern CIOs must leverage their digital transformation as a primary vehicle for realizing the green transformation of their enterprise. In this context, they must deploy green IT products and related sustainable services (e.g., low energy servers, power efficient data centers), while at the same time engaging in sustainable supply chains. Twin transformation is set to ensure energy efficient operations, provide cost-savings, and improve a company’s brand image. Therefore, it is at the very top of the strategic agendas of modern enterprises and CIOs are obliged to deliver it.
<urn:uuid:cb1b1d59-fea8-4df5-8674-e03b9e45b480>
CC-MAIN-2022-40
https://www.itexchangeweb.com/blog/green-it-initiatives-for-the-twin-transformation-of-industrial-enterprises/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00520.warc.gz
en
0.922846
984
2.890625
3
Combine existing security concepts and best practices together and design more secure distributed applications. Part 1 of this two-part series discussed what services and microservices are, the role of APIs and API gateways in modern application architectures, the importance of user-level security context, and end-to-end (E2E) trust. Now part 2 covers authorization (authZ) and different ways of handling it across microservices, what authentication (authN) and authZ protocols to use, what to do when an API is invoked by applications and services outside its trust boundary, additional security policies to consider beyond authN and authZ, logging and monitoring considerations, and how group policies can help build a more secure API and microservices based application. Some exposure to and previous knowledge of APIs and microservices-based architectures will help you better grasp the security aspects discussed. However, it is not necessary. Take about 30 to 45 minutes to read both parts of the series. Part 2 should take about 15 to 20 minutes. The need for AuthZ Part 1 discussed authenticating the end-user and enforcing the required security policies across microservices of an application at the API gateway. In addition, each microservice may have specific authZ requirements. For example, in an online banking application, a microservice handling bank accounts should only allow read operations on a checking account for a user with a basic authentication context, but it should allow the user to perform both read and write operations with a 2FA authentication context. The security token service can evaluate the information about the service or resource requested (typically the HTTP URL location of the target service or resource), the scope or type of request, as well as the user’s security context, and issue a security token that represents the authZ granted, including valid claims that are made. The set of claims can include the end-user’s and issuer’s identities, the identities of specific consumers, expiration time, and more. Consider an architecture where a token-exchange service is used to obtain a security token for each request, as shown in the following flow diagram. The service can handle authZ aspects, such as whether a specific request type (for example, READ) under a given user-level security context is allowed for the requested service or resource, and it can add this information as valid claims and scopes in the token. Figure 1: A microservices application architecture with a token-exchange service The microservice receiving the security token along with the service request verifies the authenticity of the token to ensure it is issued by a trusted service, and then provides the functionality requested by the sender based on the valid authorized claims and scope. If there are additional microservices required to complete the invocation, the token-exchange service is used to obtain a new security token that can be used by the downstream microservice with the appropriate protocol, claims, and scope. If a single E2E trust token is issued at the API gateway for the entire journey across one or more microservices and no token-exchange service is used downstream, each of the microservices across the call chain must handle all authZ-related aspects. Then each microservice must be configured with the appropriate authZ policies and ensure they are enforced correctly.
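As a rough sketch of the token-exchange step described above, the snippet below shows a microservice asking a token-exchange endpoint for a narrower token before calling a downstream service. The endpoint URL, audience and scope values are placeholders, and the exact contract depends on the security token service in use; the grant type shown follows the OAuth 2.0 Token Exchange specification (RFC 8693), which is one common way to implement such a service.

```python
# Sketch of a microservice asking a token-exchange endpoint for a downstream token.
# The URL, audience and scope are placeholders; the grant type follows RFC 8693.
import requests

TOKEN_EXCHANGE_URL = "https://sts.example.internal/oauth2/token"  # placeholder

def exchange_token(incoming_token: str, target_audience: str, scope: str) -> str:
    response = requests.post(
        TOKEN_EXCHANGE_URL,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": incoming_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": target_audience,   # e.g. the accounts microservice
            "scope": scope,                # e.g. "accounts:read"
        },
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["access_token"]

# Example: narrow the gateway-issued token to a read-only token for accounts
# downstream_token = exchange_token(user_token, "accounts-service", "accounts:read")
```

The single end-to-end trust token approach described just above avoids these extra round trips to the token-exchange endpoint, which is the trade-off the next section weighs.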
Although this model removes the need to get a new security token each time, central handling of authZ and using a token-exchange end-point allow for more flexibility and better access control (as discussed in part 1). What authN and authZ protocols can I use? You can also use the SAML protocol for API security, and it supports both authN and authZ. SAML gained popularity with the rise in adoption of web services inside organizations and became the standard of choice for service-oriented architectures. However, due to its heavy nature and SOAP requirement, external RESTful HTTP based APIs opted for the lighter OAuth 2.0 and OIDC protocols. Invocation by external applications and services In some cases, to gain access to a specific function or resource (transparently provided to the user as part of the application that is accessed), an application might need to invoke a microservice provided by another application. For example, a mash-up microservice in an online banking application that presents a single view of all of the user’s account balances including chequing and mortgage accounts will need to obtain this information from separate core banking and mortgage systems respectively. The mortgage application’s microservice is invoked by the mash-up microservice (in this case the online banking application) that resides in another application trust-zone where end-user authentication has already been performed. Because the mortgage application might not use the same IdP (Identity Provider) as the online banking application (and the goal is to not require the user to go through “heavy” authentication again), there needs to be a mechanism to pass the user’s security context and valid claims in a way that is usable by the requested microservice within the mortgage application. In addition, the security token issued by the online banking application’s security token service proves that the user has successfully met the conditions required by the security policies of the online banking application. However, the mortgage application might have different security policy requirements that need to be satisfied. In this situation, some form of trust needs to be established between the two applications’ trust zones to allow verifying the authenticity of the claims as well as the user’s security context that is presented by the security token in the request. You can enable the verification of the token’s signature by the security token service of the called application, and by issuing a new token that can be verified by the microservices of that application. The requested microservice’s API gateway can also enforce any additional security policies that are required before the request is passed downstream. Without central handling of these activities by an API gateway and a security token service, each microservice would need to be able to have a trust relationship with external consumers, which would result in a very complex and unmaintainable architecture. Unless it is specifically indicated that a given security policy has already been satisfied by the trusted caller, all required security policies should be enforced by the API gateway of the called application. You cannot always trust that external applications have performed the required security checks, therefore a defense-in-depth strategy should be followed where security is applied at all layers. Should I care about other security issues? 
So far, this article focuses primarily on the enforcement of security policies related to authN and authZ concerns. In addition to acting as a central point of such enforcement, the API gateway should also apply other security policies. Consider the following options: - Rate-limiting prevents calling of an API more than the allowed number of times for a given period by a specific consumer. - JSON threat protection protects against malicious input in the JSON payload that results in an attack. - XML threat protection protects against malicious input in the XML payload that results in an attack. - Other custom policies centrally address applicable web application security threats. Rather than each development team creating separate policies, the teams should be able to configure and apply required security policies centrally at the API gateway as needed. This approach eases the development process and ensures consistent enforcement. You still might need to develop custom policies to meet specific needs in case they do not already exist. Don’t forget to log, monitor, and detect As with all security architectures, detective controls such as logging and monitoring play an important role as well. Each API needs to log all important events, including security-related ones, and send the data to a central system for further correlation, analysis, and detection of potential security concerns. It’s important to record security-related events, such as the success or failure of compliance with a specific policy, for further analysis and threat detection. Also, collecting information like how many times end-points are invoked and their response times can assist your team both in preventing denial of service and in better profiling the application and system. To help with fraud detection, APIs can act as instruments to provide data about the specific device and the location that accessed the API. This information allows detection of scenarios that match a specific fraudulent access pattern. Apply group policy
In cases where individual microservices have additional security policy requirements over and above the group policies, handle the custom policies either at the API gateway along with the security token service, at the token exchange service, or at the microservice itself. Modern API and microservices-based applications are often distributed and communicate over networks. But it is important that the same level of security assurance is present as in monolithic applications. API gateways help with consistent enforcement of security policies across the microservices of an individual application and can assist with handling authZ related aspects. It is also important to have user-level security context and E2E trust across the entire journey, in addition to service level trust among the microservices of an application. You can use protocols such as OpenID Connect, OAuth 2.0, and SAML to facilitate authN and authZ, and aid in designing a system that handles security at the right place and the right time and guarantees end-to-end trust across the entire journey. This article also covered why you should apply other security policies beyond authN and authZ (for example JSON threat protection and rate limiting), the importance of appropriate logging and monitoring, and use of policies at group level in order to build a more secure API and microservices based application. In summary, take the following key security concepts into consideration when you design and implement microservices and API-based applications or services: - Maintain user-level E2E trust across the entire journey. - Ensure authZ is enforced at the right place with the right level of granularity. - Group your APIs and use an API gateway to apply configurable security policies consistently. - Don’t forget to log, monitor, and detect. - Follow a defense-in-depth strategy, and add security at all layers. “OpenID Connect Core 1.0” by N. Sakimura, J. Bradley, M. Jones, B. de Medeiros, and C. Mortimore: “The OAuth 2.0 Authorization Framework” IETF RFC 6749, by D. Hardt (Ed.): “Assertions and Protocols for the OASIS Security Assertion Markup Language (SAML) V2.0”, Scott Cantor, John Kemp, Rob Philpott, Eve Maler, Eric Goodman
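To make the verification steps discussed in this article concrete, here is a hedged sketch of a microservice validating an incoming security token before serving a request. It uses the PyJWT library; the issuer, audience, scope name and key handling are illustrative assumptions rather than any particular product's configuration.

```python
# Sketch: a microservice verifying an incoming security token before serving a request.
# Issuer, audience, scope name and key handling are illustrative assumptions.
import jwt  # pip install pyjwt

EXPECTED_ISSUER = "https://sts.example.internal"   # placeholder trusted issuer
EXPECTED_AUDIENCE = "accounts-service"             # this microservice's identity

def verify_token(token: str, public_key_pem: str) -> dict:
    # Signature, expiry, issuer and audience are all checked in one call;
    # any exception here means the request must be rejected.
    claims = jwt.decode(
        token,
        public_key_pem,
        algorithms=["RS256"],
        issuer=EXPECTED_ISSUER,
        audience=EXPECTED_AUDIENCE,
    )
    # Application-level authZ: only proceed if the required scope was granted.
    # (A space-delimited "scope" claim is assumed here.)
    if "accounts:read" not in claims.get("scope", "").split():
        raise PermissionError("token lacks the required scope")
    return claims
```

A gateway or token-exchange service would perform the same checks on its side, which is what keeps the end-to-end trust chain intact across the call path.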
<urn:uuid:da9657aa-e3c8-4392-bf69-5b6b70dc636b>
CC-MAIN-2022-40
https://forwardsecurity.com/2020/05/07/securing-modern-api-and-microservices-based-apps-by-design-pt2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00520.warc.gz
en
0.920898
2,715
2.71875
3
Miniature organs grown in labs have been helpful test subjects in the past, but they don’t replicate how drugs affect other parts of the body. Now, a team of scientists has engineered micro hearts, lungs and livers that can potentially be used to test new drugs, combining the micro-organs into one monitored system they call a “body-on-a-chip”. Drug compounds are currently screened in the lab using human cells and then tested in animals, but neither of these methods adequately replicates how drugs affect human organs. The organ structures are made from cell types found in native human tissue using 3D printing and other methods. Hearts and livers were selected for the system because toxicity to these organs is a major reason for drug candidate failures and drug recalls. A nutrient-rich fluid circulates through the system, keeping the organoids alive and introducing potential drug therapies. The researchers first tested the organoids to ensure their similarity to human organs. The body-on-a-chip system “The data show a significant toxic response to the drug as well as mitigation by the treatment, accurately reflecting the responses seen in human patients,” said Aleks Skardal, Ph.D., assistant professor at the Wake Forest Institute for Regenerative Medicine. Importantly, the system captures not only how an individual organ responds to drugs, but how the body as a whole responds. In many cases during the testing of new drug candidates, drugs have unexpected toxic effects in tissues not directly targeted by the drugs themselves. “If you screen a drug in livers only, for example, you’re never going to see a potential side effect to other organs,” said Skardal. “By using a multi-tissue organ-on-a-chip system, you can hopefully identify toxic side effects early in the drug development process, which helps to save lives and millions of dollars.” The scientists conducted multiple scenarios to ensure that the body-on-a-chip system mimics a multi-organ response. One drug, known to cause scarring of the lungs, also unexpectedly affected the system’s heart. However, a control experiment using only the heart showed no response. The scientists are working to increase the speed of the system for large-scale screening and to add additional organs. This system has the potential for advanced drug screening and could also be used in personalized medicine. More information: [Scientific Reports]
<urn:uuid:93d16aae-b7f2-4c1d-bb86-4ffc3ddeed2e>
CC-MAIN-2022-40
https://areflect.com/2017/10/06/body-on-a-chip-how-the-engineered-human-micro-organs-responds-to-medications/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00520.warc.gz
en
0.923544
497
4.03125
4
Today’s educational industry, in both the public and private sector, faces a number of unique challenges when it comes to provisioning and securing data infrastructure. Educational institutions are continuously confronted with the same explosion of data and mounting demands for faster, more intuitive service offerings as other sectors of the economy. They are also operating with even tighter budgets and less in-house technical expertise. At the same time, regulatory burdens continue to highlight the conflict between maintaining privacy and fostering an open, equally distributed learning experience. For most organizations, three critical data security issues arise when provisioning and securing data infrastructure. Most modern educational programs rely on data to identify and promote effective teaching and learning strategies. But these programs are highly dependent upon secure infrastructure, at both the physical and virtual levels, to guard against breaches or misuse of data by legitimate users. At the same time, both educators and administrators require better training to ensure the integrity of systems and data, both of which are evolving at a rapid pace. Governance policies should encompass both privacy and transparency along the entire data lifecycle, from creation to collection, use, sharing, and archiving. This is the only viable way to build trust among students, parents, faculty, and other stakeholders that data is both accurate and protected, all while ensuring that it is being used to improve the educational experience. The enormous amount of data being generated these days is only part of the challenge. Equally important are the myriad systems that data traverses throughout its lifecycle. These range from student information systems, enterprise resource solutions, learning management platforms, and library systems to a wide range of vendor-managed tools. These tools and systems must all be hardened against intrusion and monitored for misuse. Educational policymakers play a key role in resolving the educational industry’s challenge of provisioning and securing its data infrastructure. For one thing, they need to recognize the numerous support functions and systems that foster the twin goals of making data systems usable and secure. They also need to recognize that adequate funding is necessary, not just for the various systems and tools but for proper IT staffing and training for the entire knowledge workforce. To accomplish these goals in an effective manner, it helps to concentrate on a few key elements. It should be noted that many of these issues can be addressed quickly and at less cost by converting legacy infrastructure to modern cloud resources and services. In the cloud, maintenance and upgrades are done by the provider, while security is often better than in most legacy deployments. At the same time, workloads can scale dynamically in the cloud, so you only pay for what you need. And with adequate mirroring and replication, backup data is better preserved even if primary systems are lost completely, as in a natural disaster. Education is one of the most important social functions within a modern society, but it is also one of the most expensive and complicated. The cloud can ease much of this burden, allowing schools to concentrate more fully on what they do best: teaching.
Learn how CBTS partnered with a private university to create a comprehensive plan for upgrading wireless and wired network access in residence halls, setting the stage for campus-wide WiFi connectivity. Learn more about the CBTS partnership with the Dayton Public School District here. Discover more about how CBTS delivers state-of-the-art technology for today’s schools and universities to keep up with the ever-increasing demands of students, parents, faculty members, administrators, and community stakeholders.
<urn:uuid:b904bde2-288b-4497-a14f-95b126e6b21b>
CC-MAIN-2022-40
https://www.cbts.com/blog/cloud-helps-schools-secure-data-infrastructure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00520.warc.gz
en
0.945126
709
2.578125
3
The U.S. Department of Transportation on Monday announced it would move forward with a plan that would make car-to-car communication mandatory among light vehicles, a measure that could lead to safer roads. Vehicle-to-vehicle, or V2V, communication allows cars to share data including speeds and brake applications with nearby cars. That data can then help warn drivers about possible collisions. For instance, if one car turns a corner and unexpectedly slams on the brakes, that car could communicate the action to nearby vehicles, which could then warn their drivers. The technology would be used only to share information affecting safety, the DoT emphasized. It would not be used to collect personal data or for location tracking. The technology could be especially useful in preventing common crashes such as rear-end or lane-change collisions, the DoT said. V2V connectivity has been tested extensively; a program in Ann Arbor, Mich., deployed close to 3,000 vehicles in the largest road test of the technology. The National Highway Traffic Safety Administration plans to release a report on its analysis of some of that testing and a year-long pilot program for V2V communication technology, which will provide more details about its technical feasibility, costs and safety benefits. The NHTSA will then begin the process of drafting a proposal for a regulation that would make it mandatory in new vehicles in a “future year.” Enabling the Connected Car Engineers have been developing connected car applications for years: V2V communications; vehicle-to-infrastructure (V2I) connectivity; and autonomous driving technology. They’re making huge strides in all of those areas, but V2V communications currently appears to be the most realistic application for the mainstream consumer, said Joyoung Lee, connected car researcher and assistant professor in the Department of Civil and Environmental Engineering at New Jersey Institute of Technology. “Google’s driverless car … relies on a very expensive radar-based detection system to capture the driving information of adjacent vehicles,” he told TechNewsWorld. “V2V communication-based detection is expected to produce the same performance at a much cheaper cost.” The DoT’s endorsement of car-to-car communication shows that the time is right to start putting it into the average driver’s car, said Panagiotis Tsiotras, professor and director of the Dynamics and Control Systems Laboratory at the Georgia Institute of Technology. “This technology has been under development for many years now,” he told TechNewsWorld. “The V2V connectivity can have a great positive impact on automotive transportation by reducing the risk of accidents by properly sending warning signals for an impending collision.” In addition, the connected car also could lead to greater fuel efficiency, said Bill Holloway, transportation policy analyst at the State Smart Transportation Initiative. “As this type of technology becomes more widespread, it could also generate major efficiency gains,” he told TechNewsWorld. “Cars have gotten much more efficient in recent decades, but a lot of the efficiency gains have gone towards increasing horsepower and adding safety features, while vehicle weight hasn’t changed too much,” Holloway said. 
“However, if smarter vehicles can help avoid crashes altogether, they would make lighter vehicles a much more attractive option for consumers,” he pointed out, “which would yield major fuel efficiency gains.” Waiting for Green Light Like any advancement in technology, the implementation of V2V communications likely will be gradual, said Tsiotras. “The main issues will be reliability of the technology and, most importantly, also the issue of mixing vehicles — having this technology with vehicles that lack the ability to communicate with other vehicles,” he explained. “It will take some time before all the vehicles will have this capability,” Tsiotras observed. “Till then, I think that the focus will be mainly on the use of the V2V technology for traffic regulation purposes. Collision avoidance will follow afterwards.”
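To illustrate the kind of data sharing described in this article, the sketch below broadcasts a simple braking event to nearby listeners. It is purely illustrative: real V2V systems use dedicated short-range radios or cellular V2X and standardized basic safety messages rather than UDP, and all field names and thresholds here are made up.

```python
# Illustration only: real V2V uses dedicated radios and standardized safety
# messages, not UDP, and these field names and thresholds are invented.
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 37020)  # placeholder port

def broadcast_brake_event(vehicle_id: str, speed_mps: float, decel_mps2: float):
    message = {
        "vehicle_id": vehicle_id,
        "timestamp": time.time(),
        "speed_mps": speed_mps,
        "hard_braking": decel_mps2 < -4.0,   # assumed threshold for a hard stop
    }
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(message).encode(), BROADCAST_ADDR)

# A nearby receiver would listen on the same port and warn its driver
# whenever a message with "hard_braking": true arrives.
broadcast_brake_event("demo-car-1", speed_mps=18.0, decel_mps2=-6.5)
```

The safety value comes entirely from nearby vehicles reacting to messages like this faster than a human driver could see the brake lights ahead.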
<urn:uuid:ee0c8dc5-8498-4ef7-9151-816f722a45bd>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/talking-cars-coming-down-the-pike-79914.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00720.warc.gz
en
0.950258
856
3.046875
3
Using Transport Layer Security The Transport Layer Security (TLS) protocol and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide security and data integrity for communications over TCP/IP networks. TLS is used in conjunction with Microsoft Certification Authority® to generate digital certificates. For organizations that use sensitive, confidential information or are subject to stringent security regulation, deploying TLS on the server and client is the best assurance against compromising communication integrity, specifically: - Peer identity can be authenticated using public key cryptography, allowing the safe exchange of encrypted information. - Message contents cannot be modified en route between TLS-negotiated hosts. Using TLS with Ivanti Device and Application Control affects: - Ivanti Device and Application Control client-Application Server communication - Ivanti Device and Application Control inter-Application Server communication
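As a generic illustration of what a TLS-negotiated connection involves, the sketch below opens a certificate-verified TLS session using only Python's standard library. It is not the product's own implementation, and the host name is a placeholder.

```python
# Generic illustration of a TLS-protected connection with certificate verification.
import socket
import ssl

HOST, PORT = "server.example.internal", 443  # placeholder application server

context = ssl.create_default_context()        # verifies the peer certificate chain
# context.load_verify_locations("ca.pem")    # e.g. trust an internal CA instead

with socket.create_connection((HOST, PORT), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())            # e.g. 'TLSv1.3'
        print("Peer identity:", tls_sock.getpeercert().get("subject"))
```

The two bullet points above map directly onto what this handshake provides: the certificate check authenticates the peer's identity, and the negotiated cipher protects message contents from modification in transit.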
<urn:uuid:6ee6302f-a170-4124-bb5c-b70562ac40f0>
CC-MAIN-2022-40
https://help.ivanti.com/ht/help/en_US/IDAC/latest/device-control/transport-layer-security.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00720.warc.gz
en
0.81722
169
3.375
3
1. Malware samples target Windows operating systems from the Windows Subsystem for Linux - Security researchers discovered a strain of malware samples developed to compromise the Windows Subsystem for Linux (WSL) and then move laterally to the native Windows environment. Threat actors created the malware samples using Python code. They run on Debian systems and have a low detection rate with traditional security controls. - The malware developers packaged the samples in an Executable and Linkable Format (ELF) binary. When the malware runs, it loads and executes a secondary payload, which is injected into an active Windows process using Windows API calls. - While security researchers claim that the malware samples are unique, experts had floated theories about similar malware attack techniques back in 2017. Expert Commentary: These malware samples appear to have a limited spread rate, targeting only France and Ecuador for now. Additionally, these malware samples have only been discovered on one publicly routable IP address, suggesting that they may not be as widespread as anticipated. Many security experts theorize that this malware sample is still in its development stages, and the threat actors are testing the malware's potency and execution. 2. Two-thirds of cloud-based attacks are preventable with proper infrastructure configuration - A recent study demonstrated that two-thirds of cloud-based security incidents could be prevented if users properly manage the configuration of software applications, databases, and security policies. - Properly managed configuration processes that could have prevented cyberattacks within cloud environments include robust system hardening, proactive implementation of security policies, and a robust system patching cadence. - The study also suggests that introducing unauthorized tools (shadow-IT) into a corporate environment can increase the probability and impact of a compromise. This is because most shadow-IT tools are not monitored or managed by a centralized IT team. Expert Commentary: IT teams are composed of human beings. Like anyone, they can forget, get tired, or even get lazy. Threat actors exploit these human weaknesses, which manifest as technical flaws such as improperly configured systems. However, an automated desired state configuration solution would allow IT teams to match evolving policies with required infrastructure configurations without thinking about it. This type of automation significantly reduces cyber incidents. 3. Former U.S. intelligence officers admit to hacking on behalf of a Middle Eastern company - Three former U.S. intelligence officers were fined $1.68 million by the U.S. Department of Justice (DOJ) for their involvement in multiple cyber-mercenary operations on behalf of a United Arab Emirates (UAE)-based company. The former National Security Agency (NSA) cyber intelligence officers were accused of providing offensive and defensive cyber weapons services to commit clandestine crimes. - These cyber weapons were developed using sophisticated spyware technology that requires zero clicks to execute payloads. The sophisticated zero-click exploit was used to illicitly gather credentials for online accounts owned and controlled by U.S.-based organizations. - According to the DOJ, the UAE government leveraged the cyberweapons to break into mobile devices owned by people deemed dissidents, i.e., journalists and activists. Expert Commentary: It appears that the U.S.
government is charging the accused individuals because they failed to register and obtain a license from the State Department's Directorate of Defense Trade Controls (DDTC), which oversees the flow of defense services in and out of the U.S. While cyber weapons and hackers for hire are becoming a hot commodity on the black market, the U.S. government is focused on tracking the misuse of cybersecurity knowledge and skills against American interests at home and abroad. Like any other lethal weapon, these dangerous cyber tools will now have a special designation within the U.S. government. 4. Australia, the U.K., and the U.S. announce security partnership - The United States, United Kingdom, and Australia recently announced a trilateral security and defense accord known as the AUKUS pact. This pact involves sharing emerging technologies, including artificial intelligence, cyber capabilities and quantum computing, as well as cooperation on critical defense industrial bases and supply chains. - Through the AUKUS pact, the U.S. and U.K. will give Australia the technology it needs to build nuclear-powered submarines to counter China's influence within and around the contested South China Sea. This is the first time the U.S. will be sharing its submarine technology in about 60 years. - With nuclear-powered submarines, sophisticated cyber weapons, and other emerging technologies flowing between the three nations, the Chinese government was not pleased with the announcement. It believes that Australia is simply turning itself into an adversary of China. Expert Commentary: Australia and China have been undergoing diplomatic friction, which has intensified from trade tariffs to full-on cyberattacks. As a result, Australia has been seeking offensive and defensive security support to fend off China's influence in both cyberspace and the physical world. This security and defense pact is a significant win for the Australian government, as it will now receive closely guarded U.S. cyber and marine technologies. 5. Past victims of REvil ransomware receive a master decryptor - A free master decryptor was released for victims whose systems were encrypted before the REvil ransomware gang went offline. It allows them to recover their files without paying any ransom demands. - A cybersecurity firm called Bitdefender developed the decryptor together with law enforcement agencies to lessen the financial burden on REvil victims. - According to Bitdefender, the decryption tool works against all previous REvil and Sodinokibi ransomware infections across the board. Expert Commentary: Security experts are now familiar with the attack tactics, techniques, and procedures of REvil. As a result, we expect more free decryptors to be developed to help us tackle ransomware. We also expect ransomware actors to use 'free ransomware decryptors' as a lure to trick unsuspecting victims into downloading poisoned code that would end up encrypting their systems.
<urn:uuid:e8b02012-95a1-4eeb-b7f1-5489abd7632e>
CC-MAIN-2022-40
https://www.meetaiden.com/blog/malware-samples-target-windows-os-from-linux-subsystem-new-findings-about-cloud-based-attacks-aukus-pact-u-s-hacking-on-behalf-of-uae-master-decryptor-for-revil-victims/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00720.warc.gz
en
0.944629
1,223
2.765625
3
Today In History May 16 1860 Chicago: Republican convention selects Abraham Lincoln as candidate In May 1860, the country's attention turned toward Chicago, where the Republicans were meeting to choose their presidential candidate. William H. Seward, the Republican leader from New York, sent his political team to Chicago to lock up his party's nomination. In the mid-nineteenth century, it was not viewed as proper for the candidate to attend the convention himself, so Seward sent his political manager, Thurlow Weed, along with his state's 70 delegates and 13 railroad cars of supporters. The residents of Chicago were delighted to have their city of 100,000 picked for the Republican party's second presidential convention. At a cost of about $6,000, Republicans there constructed a new assembly hall for the occasion. Nicknamed "The Wigwam," it had excellent acoustics and could seat more than 10,000, which reportedly would be the largest crowd yet assembled in the nation under one roof. Ballot three began. Lincoln continued to gain votes, four more from Kentucky and 15 from Ohio, while Seward lost votes. When the pencils stopped scratching, Lincoln had 231 and a half votes, one and a half shy of those needed for the nomination. A silence fell, and all eyes turned toward D. K. Cartter of Ohio, who stammered out: "I-I arise, Mr. Chairman, to a-announce the change of four votes, from Mr. Chase to Abraham Lincoln!" For a moment, the crowd was silent; then it erupted. The sound was so deafening that the only way people could tell the guns outside the Wigwam were being fired was by watching the smoke drift from the barrels. So Lincoln was nominated and would be elected the nation's sixteenth president. He appointed Seward secretary of state, Cameron secretary of war, Chase secretary of the Treasury, and Bates attorney general. 1866 US Congress authorizes the nickel 5 cent piece On May 16, 1866, Congress authorized the creation of a new American coin: the five-cent piece composed of copper and "not exceeding twenty-five per centum of nickel." In other words, they created the nickel, which celebrates its 150th birthday on Monday. The idea makes sense: five cents is a useful amount, especially in a world in which that amount could buy about 15 times more than it could today. Except the U.S. already had a five-cent coin in circulation: the so-called "half dime" had been around since the 1790s. The new law would leave the nation without five-cent fractional bills or enough silver half-dimes, and so nickels, which, not coincidentally, are set apart from other U.S. currency by being named after the metal they contain, came into being soon after. The first ones are known as "shield nickels" for the image on their face. Meanwhile, in the same act that authorized the nickel, Congress put an end to the printing of any fractional notes worth less than ten cents. 1874 1st recorded dam disaster in US Williamsburg, Massachusetts On May 16, 1874, the cheaply and shoddily built Mill River dam collapsed, killing 139 people and wiping out four towns in western Massachusetts within an hour. It was the first man-made dam disaster and one of the worst of the nineteenth century. Although mill owners and engineers were clearly responsible for the disaster, no one was held accountable for it. But at least the flood led to dam safety laws.
The Mill River is a 15-mile-long stream that drops 700 feet from the high slopes of the Berkshires to the Connecticut River. During the nineteenth century, makers manufactured a series of factories along the waterway to exploit the modest force. Manufacturing plants made silk string and woolen material, metal products, pounding haggles. By the 1860s, factory proprietors acknowledged they could manufacture supplies to keep up enough stream during the dry summer months. In 1864, 11 producers framed the Williamsburg Reservoir Company to dam the Mill River in Williamsburg. On May 16, the store was at that point full, and it came down in basins. At 7 a.m., George Cheney, the dam guardian, saw a 40-foot chunk of earth slide off the substance of the dam. The earthen bank started to disintegrate as surges of water began pouring through openings in the dam. The stone divider had been grouted ineffectively. Water from the store spilled through the splits and soaked the downstream earthen bank. The dam attendant comprehended what might occur straightaway. Without the help of the earth dike, the stonewall couldn’t withstand the weight of the store. 1st animal breeding society forms in NJ New Jersey ranchers were confronting a period that offered guarantee of new strategies for cultivating that would take care of issues, improve effectiveness and welcome a superior profit for their dollar. Augmentation authorities and agrarian operators executed an assortment of strategies and practices to connect with ranchers. One such model, which flourished in New Jersey and went into the archives of rural history in the U.S., is the improvement of planned impregnation in dairy cows. Enos J. Perry (1891-1983) served Rutgers University from 1923 to 1956 as an augmentation pro in dairy cultivation. His most prominent commitment to horticulture was the foundation of the main helpful fake rearing relationship for steers in New Jersey and the U.S. furthermore, the pragmatic use of the method of managed impregnation (AI) of livestock. Perry started the primary dairy cows AI Co-operation in the U.S. in 1938, and his book “The Artificial Insemination of Farm Animals,” distributed in 1945, was the standard instructional booklet regarding the matter, giving essential data on manual semen injection for a large number of laborers and understudies.
<urn:uuid:47bd2b02-c90c-4285-ab11-40a94a6497b8>
CC-MAIN-2022-40
https://areflect.com/2020/05/16/today-in-history-may-16/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00720.warc.gz
en
0.965906
1,292
3.59375
4
Today In History June 23
June 23 is the 174th day of the year (175th in leap years) in the Gregorian calendar. 23rd June is now destined for a glittering place within the calendar of contemporary British history, having been designated the day for a referendum on whether Britain should remain part of the European Union (EU). But even before David Cameron set the date, 23rd June had, over the centuries, acquired some significance. What might a number of those previous June 23s presage for the result of the referendum?
1314: The Scottish question
The Battle of Bannockburn began on 23rd June 1314. Unusually for a medieval battle, it didn't end on the same day. The Scots' victory over the English, which ensured that Robert the Bruce became a Scottish folk hero, didn't come until the following day. But 23rd June saw the opening of what's sometimes referred to as the First War of Scottish Independence. Nowadays, people are warning that the referendum of 23rd June 2016 might yet trigger another battle for Scottish independence. The Scottish National Party says that if the referendum decides that the U.K. should leave the EU, then there should be another referendum on Scottish independence.
1661: Marriage of convenience
On 23rd June 1661, after negotiations between England and Portugal, a pact was signed for the marriage of Catherine of Braganza to King Charles II. In the days before the EU, the marital arrangements of heads of state were a way to secure diplomatic relations. The wedding pact involved promises of territory for England in North Africa (Tangiers) and India (Bombay) and trading privileges for the British within the Portuguese territory of Brazil. In return, Portugal obtained the military support of the British against the Spanish. The diplomacy of 1661 failed to bear fruit in one important respect: Catherine bore Charles no heir, so the crowns of England, Scotland and Ireland passed to his brother James, whose brief reign of unflinching Catholicism prompted a revolution and forced parliament to turn to a Dutch protestant for a successor. Nowadays, there's much speculation about what diplomatic and trading relations Britain might put in place to supplant its membership of the EU in the event of a Brexit. In this day and age, royal marriages are unlikely to be so rewarding.
1795: Revolution time
In France, 23rd June might be remembered as a landmark in the country's constitutional history were it not for the peculiarities of the French Revolutionary calendar. For it was on 23rd June 1795 that the national convention published a new constitution that became known as the constitution of Year III, which was ratified on August 22, 1795. But due to the suppression of the old calendar, and therefore the renaming of the months of the year, the constitution is remembered as being published on 5 Messidor (from the Latin name for harvest) and approved on 5 Fructidor.
<urn:uuid:ed9ab4f6-2193-425b-972f-e792707a680a>
CC-MAIN-2022-40
https://areflect.com/2020/06/23/today-in-history-june-23/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00720.warc.gz
en
0.958698
630
3.734375
4
By 2025, it is estimated that there will be at least 75 billion connected devices in what is being called the “Internet of Things” (IoT). With advances in microprocessors, sensing devices, and software, pretty soon anything that can be connected will be connected. Here's What You Need to Remember: Seven years ago, the DoD created Comply to Connect (C2C) as a way to secure its growing array of network endpoints. The proliferation of devices on the Internet is becoming a tidal wave. In addition to your phone, computer, video game console, and television, the Internet now connects practically everything that has electronics and sensors: household appliances, heating, and air conditioning systems, cars, airplanes, ships, industrial robots, public utilities, home security systems, children’s toys, and medical devices.
<urn:uuid:e67b0902-1400-4f7e-b85a-b36d862008b8>
CC-MAIN-2022-40
https://training.nhlearninggroup.com/blog/tag/cisco
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00720.warc.gz
en
0.942657
174
2.78125
3
Greenpeace is pushing for a cleaner Internet with their new #clickclean campaign. The new promotion is pushing for the major Internet players—the Facebooks, Googles, and Amazons—to use renewable energy in their data center infrastructures. It takes very little energy of your own to open up your browser, type a URL and click go, but it actually takes a lot of energy to power the infrastructure behind what you just did. Data centers use up enough power to run 180,000 homes. That’s a lot of energy just to load up Facebook on your device, so Greenpeace is looking to replace the standard energy that is used by data centers and convert it to sustainable, renewable energy. Already, a few of the bigger tech giants have been working towards using green energy to power their data centers. Microsoft recently purchased a wind farm and will use it as an energy source for their data centers, Facebook has employed solar energy to run a few of their facilities, and Google has gotten into the renewable energy game by purchasing wind farms that will power their data center and sell energy back into the grid at a wholesale price. Estimates show that by 2017, half of the world’s population will be online, but that also comes with a 60% increase in energy consumption by 2020. So, something has to give. By those numbers, Greenpeace is pressuring the Internet giants to begin a movement towards clean energy–something that a few of the Internet’s biggest contributors seem to be ignoring. Amazon Web Services received the lowest grade on Greenpeace’s clean energy index—a 15% rating—which is concerning since Amazon is the infrastructure behind much of the Internet. And Amazon still hasn’t made many moves to fix the energy-sucking facilities behind AWS. Greenpeace also understands that of course it can’t happen overnight from a business, economical, or infrastructural standpoint, but at least addressing the problem is good enough for them at this moment. Their campaign hasn’t been as in-your-face and brow-beating as previous campaigns, and this time around they’ve employed comedian Reggie Watts to help spread the message in a series of viral videos. The movement towards clean energy needs to happen for data centers and the earth to continue to thrive. Hopefully Greenpeace’s #clickclean movement can be a catalyst towards change. For more information contact Albert Ahdoot
<urn:uuid:27bc5349-a683-4649-9da1-ffb951ebe7c8>
CC-MAIN-2022-40
https://www.colocationamerica.com/blog/greenpeace-green-data-centers
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00720.warc.gz
en
0.945424
511
2.53125
3
In this guest post, Chris Darvill, vice president of solutions engineering covering Europe, Middle East and Africa (EMEA) at cloud-native API platform provider Kong, talks about the environmental benefits of moving from monolith to microservices-based application architectures. The most popular transformation companies are making is the shift from monolith to microservices. With many business-critical processes still being powered by systems built in and before the 1970s, it’s a transformation that won’t let up any time soon and is one that can drive the next wave of sustainability. It’s easy to think that replacing systems built in the 1970s with modern technology delivers an immediate efficiency gain. Microservices are independently scalable and can be individually configured, resulting in less wasteful usage of resources. However, as we decompose the monolith into microservices, we go from a handful of in-app connections to an exponentially increasing number of microservices all talking to each other over various networks – creating a considerable increase in network traffic. We need to ensure this increase does not translate into a net increase in resource consumption. To prevent this, we should use the most appropriate transfer protocol for the traffic. Consider implementing services in gRPC rather than REST, which tests have shown is seven to 10 times faster due to the use of HTTP/2 and streaming, and the highly compressed Protobuf message format. Additionally, think about compressing large payloads before sending them over the wire. Service mesh: a network necessity With an increasing amount of network traffic, it becomes imperative to manage that traffic: eradicating unnecessary requests, shortening the distance travelled, and optimising the way messages are routed. This can be achieved with a service mesh. By managing all inbound and outbound traffic on behalf of every microservice, implementing load balancing, circuit breaking and reliability functions unnecessary requests can be minimised and visibility provided into the requests that do take place. In our digital world, consumers expect a real-time response after an interaction. This has seen a shift from batch processing to real-time processing over the last several years, to deliver the enhanced capabilities that people expect. Consider what real-time means: the way a system immediately reacts to something that has happened; an event, essentially. The way this is implemented with RESTful APIs is through polling. An API is configured to run every X seconds, to check if something has happened. If nothing has, then it waits for the next poll X seconds later to see if something’s happened then. If something has happened, then it takes that data and triggers downstream processing (for example, updating a customer’s direct debit details on their profile). However, 98.5% of API polls don’t return any new information. This means that most polls are a waste of energy. Event-driven architectures (EDA) only act when there is something that needs to be done – consuming energy when it’s actually needed. When an “event” occurs, such as a payment details update, downstream services can be invoked to do the relevant updating. Reusable APIs vs. point-to-point integration A key principle of green software engineering is to use fewer resources at higher utilisation, reducing the amount of energy wasted by resources sitting in an idle state. 
This correlates to integration patterns: the more we reuse APIs, the less time they’re idle and therefore the less energy they waste (assuming every API call is necessary). On the contrary, in a point-to-point approach, code is built for one specific purpose: to connect A to B. It cannot be reused for connecting B to A, sending data in the opposite direction. It cannot be reused for connecting A to C. Assuming the average company integrates over 400 data sources, this equates to an unmanageable 159,600 single-use connections when following a point-to-point approach. That’s 159,600 individual services, all deployed on infrastructure running somewhere, using energy from somewhere to power them to sit idle the vast majority of the time. What a waste. With this many connections, the overall architecture is complex. Pathways between systems are convoluted and unexpected, resulting in “spaghetti code.” There is a lot of wasted network traffic, trying to find the shortest path from A to B, and wasted traffic means wasted energy consumption. On the other hand, an API-first approach leads to much simpler architectures and highly reused services, particularly those sitting around back-end systems. This means more efficient message routing and load balancing, simpler code, and higher utilisation of deployed code. Playing our part Whether motivated by conscience or by the fact that more efficiency means higher profits for the business, we need to acknowledge and accept there is a problem. Not just that global warming exists but that IT is a large and growing part of that problem. We know we need to fly less – why don’t we think about how we build and use technology with the same guilt? We can make a difference. Consider following green engineering principles when building or versioning an API. When breaking down a monolith into microservices, minimise the microservice traffic. Remove unnecessary network hops. We need to work together to figure this out and right now we’re just at the beginning.
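As one concrete starting point, the polling-versus-events contrast described earlier can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (not Kong's implementation or any particular product): both consumers drain the same in-process work queue, but the polling consumer wakes up on a fixed interval whether or not anything has happened, while the event-driven consumer only does work when an event actually arrives.

import queue
import threading
import time

events = queue.Queue()
stop = threading.Event()

def polling_consumer() -> None:
    # Wakes up every second regardless of whether there is work,
    # so most cycles are wasted when events are rare.
    while not stop.is_set():
        try:
            event = events.get_nowait()
            print(f"[polling] handled {event}")
        except queue.Empty:
            pass  # nothing to do this cycle; the wake-up was wasted
        time.sleep(1.0)

def event_driven_consumer() -> None:
    # Blocks until an event arrives; consumes resources only when there is work.
    while not stop.is_set():
        try:
            event = events.get(timeout=0.5)
        except queue.Empty:
            continue
        print(f"[event-driven] handled {event}")

if __name__ == "__main__":
    worker = threading.Thread(target=event_driven_consumer)
    worker.start()
    for i in range(3):
        time.sleep(2)
        events.put(f"payment-details-updated-{i}")
    stop.set()
    worker.join()

The same contrast holds at system scale: a webhook or message-broker subscription replaces the scheduled poll, which is where the energy saving claimed above comes from.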
<urn:uuid:dee3cafc-6b37-469b-857f-fe38ff8d4e2c>
CC-MAIN-2022-40
https://www.computerweekly.com/blog/Green-Tech/The-environmental-impact-of-common-architecture-patterns
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00720.warc.gz
en
0.928311
1,102
2.609375
3
Most people are only familiar with the last stages of testing. But testing occurs throughout the software development process for nearly every type of software system. This includes websites, apps, and document processing systems, among many others. Testing Process: First Up Unit Testing Unit testing is usually the first test performed by software testing companies. Unit tests break down lines of code to verify that each line is working as intended. If the system doesn’t function correctly, then the program probably won’t perform as it should. You can’t wait until the end to conduct this type of testing, though. Unit tests must run during the development process. It’s also the foundation for other software tests. Next, developers and programmers need to verify that each line of code operates and seamlessly integrates with other lines of code. This determines whether the code will act as a single entity. Black Box/System Testing Similar to ensuring that the code works as a whole, black box or system testing ensures that individual software components function as a single unit. Commonly referred to as a Minimum Viable Product (MVP), the program should work, but it won’t have all the features and may still contain some bugs. If programmers find bugs or errors, they must perform regression testing to find and remove them. Note that code may need to be eliminated or changed to fix these flaws. At this point, the product is almost ready but still needs final testing. Alpha testing involves the internal team testing on their own servers. Conversely, beta testing can include consumers or outside test firms to conduct testing. This stage is critical because it verifies that the product is performing as well as possible. If you need assistance with testing, iBeta provides on-demand services with fast ramp-up times. Contact us to learn more.
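As a concrete illustration of the unit-testing stage described above, here is a small, generic Python example; the function and the values in it are invented for illustration and are not taken from any iBeta project. Each test exercises one behaviour of a single function, which is the level at which unit tests operate.

import unittest

def apply_discount(price: float, percent: float) -> float:
    # Return the price after applying a percentage discount.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_out_of_range_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

if __name__ == "__main__":
    unittest.main()

Because tests like these pin down the behaviour of one unit in isolation, they can run on every build and serve as the foundation for the integration, system, alpha and beta testing stages described above.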
<urn:uuid:19937844-09e0-4260-8093-4326c9bc4ba5>
CC-MAIN-2022-40
https://www.ibeta.com/the-testing-process-how-is-it-done/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00720.warc.gz
en
0.94684
378
2.984375
3
What are the Risks of Data Destruction? When it comes to destroying sensitive data, there is no such thing as being too safe. That’s why it’s important to carefully consider all your options for hard drive destruction before making a decision. Without proper hard drive destruction methods, there is a high degree of cybersecurity risk to companies. After all, over 60% of small to medium companies end up closing down after a significant cyberattack on their system. This makes it important to securely dispose of hard drives for their own and their client’s safety. In this article, we will take a closer look at the different hard corporate electronic asset destruction methods out there, the effectiveness of each method, and most importantly, the method’s impact on the environment. Corporate Electronic Asset Destruction – What Are Internal Data Destruction Risks? For organizations, a lot of sensitive data is stored on various electronic devices, such as computers, laptops, servers, and mobile phones. This data can include customer information, financial records, employee files, and more. If this confidential data falls into the wrong hands, it could be used for any malicious activity, such as fraud, identity theft, etc. That’s why it’s important for organizations to have a plan for destroying this data when the devices are no longer needed. Otherwise, the data could potentially be accessed and used by anyone – even if deleted. There are several risks associated with internal data destruction, such as: 1. Data Breach If data is not properly destroyed, it could be accessed by unauthorized individuals. This could lead to a data breach, which could result in the theft of sensitive information. 2. Environmental Risks Improperly disposed of electronic waste can release harmful toxins into the environment. These toxins can contaminate soil and water, and potentially harm wildlife. 3. Legal Risks If data is not destroyed properly, it could be used in a way that is illegal. For example, if customer information is accessed without permission, it could be used for identity theft or fraud. 4. Reputational Risks If an organization’s data is not destroyed properly, it could damage the organization’s reputation. This could lead to a loss of customers or investors, and potentially legal action. 5. Financial Risks If data is not destroyed properly, it could be accessed and used in a way that is unauthorized. This could lead to monetary loss for the organization, as well as legal action. How to Destroy a Hard Drive – The Different Methods of Hard Drive Destruction Now that we’ve looked at the risks associated with internal data destruction, let’s take a closer look at the different methods of hard drive destruction. 1. Physical Destruction Physical destruction is the most common method of hard drive destruction. This involves using a hammer or other physical force to break the hard drive into pieces. While this method may seem effective, it is not always reliable. If the hard drive is not destroyed completely, the data could still be accessed and potentially used maliciously. Pros & Cons of Physical Hard Drive Destruction Unfortunately, this method creates a lot of e-waste. It is a good idea to include on-site hard drive shedding on your own for better results. Degaussing is a method of hard drive destruction that uses a magnetic field to erase the data on the hard drive. This method is considered to be more reliable than physical destruction as it is less likely to leave data intact. 
However, degaussing is not always 100% effective, and it can be expensive. Pros & Cons of Degaussing 3. Data Erasure Data erasure is a method of hard drive destruction that uses software to overwrite the data on the hard drive. This method is considered more reliable than physical destruction or degaussing as it is less likely to leave data intact. However, data erasure can be time-consuming, and it may not be effective on all hard drive types. Pros & Cons of Data Erasure What Are Mobile Hard Drive Shredders? Mobile hard drive shredders are special machines that are designed to destroy hard drives. These machines are typically large and expensive, but they are considered the most reliable method of hard drive destruction. It is often hard drive destruction policy to include these shredders policies and should be designed to ensure that hard drives are destroyed securely and effectively. Physical Destruction vs. Data Erasure Physical destruction and data erasure are the two most common methods of hard drive destruction. Both methods have their pros and cons, and it is important to choose the method that is right for your needs. Physical destruction is generally less expensive than data erasure, but it is not always 100% effective. Data erasure is generally more expensive than physical destruction, but it is more likely to be effective. It is important to note that physical destruction, such as hard drive shredding, can be done on-site, but data erasure usually requires special software and may not be effective on all hard drive types. Hard drive shredding is considered the most reliable hard drive destruction method, but it can be expensive. It involves breaking down the hard drive first and then shredding everything down to minute pieces to completely make it irrecoverable. How Digital Data Destruction Services Can Help? Digital data destruction services, such as CompuCycle can help you choose the right method of hard drive destruction for your needs. CompuCycle can also provide on-site hard drive destruction services, so you do not have to worry about doing it yourself. CompuCycle offers expert digital data destruction services via our policies and knowledge gained by experience and digital data destruction certificates. CompuCycle ensure safety, reliability, and affordability above all. Contact CompuCycle today to learn more about our services and how we can help you protect your data! Let CompuCycle help you mitigate corporate electronic asset destruction risks properly!
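To illustrate what software-based data erasure means in practice, here is a simplified, hypothetical Python sketch; it is not CompuCycle's tooling and is no substitute for a certified erasure product. It overwrites a file's contents with random data several times before deleting it.

import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3, chunk_size: int = 1024 * 1024) -> None:
    # Overwrite the file with random bytes several times, then remove it.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                block = secrets.token_bytes(min(chunk_size, remaining))
                f.write(block)
                remaining -= len(block)
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

Note that on SSDs, copy-on-write file systems and drives with wear levelling, overwriting in place does not guarantee the original blocks are unrecoverable, which is exactly why degaussing or physical shredding remains the recommendation for truly sensitive media.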
<urn:uuid:c6daf8c9-da72-45b9-be92-94d3607955bb>
CC-MAIN-2022-40
https://compucycle.com/blog/what-are-the-risks-of-data-destruction/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00720.warc.gz
en
0.938434
1,241
2.859375
3
When IPv4 addresses were invented in the early 1980s, nobody suspected the world would ever run out of IP addresses. An IPv4 address is a 32-bit address (2^32) and allows for over 4 billion IP addresses. But then the world wide web happened, and now we have run out of IPv4 addresses. The solution to the problem is IPv6. An IPv6 address has 128 bits (2^128). With each bit the number of possible IP addresses doubles. So, 33 bits give you over 8 billion values, 34 bits over 16 billion and so forth. The number of IPv6 addresses is insanely large. An IPv4 address is made up of four eight-bit octets. If you are reading this then I am sure you easily recognise an IPv4 address:
$ dig example.com A +short
93.184.216.34
IPv6 addresses are made up of eight hextets separated by colons. Each hextet has four hexadecimal digits, and each digit uses four bits. So, each hextet has 16 bits (4 * 4). And because we have eight hextets, we get a 128-bit address (8 * 16 = 128). If you are not familiar with the base-16 system, my article about hexadecimal numbers goes over the details. Here is an example IPv6 address:
$ dig example.com AAAA +short
2606:2800:220:1:248:1893:25c8:1946
The address looks a little intimidating – in particular if you are not familiar with the base-16 system. Still, using hexadecimals makes IPv6 addresses relatively readable. It is possible to use octets instead, like we do with IPv4. However, an IPv6 address would have 16 octets. For instance, the above address would become this:
38.6.40.0.2.32.0.1.2.72.24.147.37.200.25.70
Clearly, IPv6 has a "readability problem". That is unavoidable – it is the price we pay for a near-infinite pool of IP addresses. However, there are some techniques to shorten IPv6 addresses. The first is suppressing zeros. You might have noticed that the third, fourth and fifth hextet in the example.com AAAA record have fewer than four digits. That is because leading zeros may be omitted. So, the third hextet is 0220, which can be written as 220. The fourth hextet is 0001, which can be shortened to just 1. You get the idea. Similarly, consecutive hextets that are 0 can be replaced with a double colon (::). This is known as compressing zeros. To illustrate, this is one of Cloudflare's public DNS servers:
2606:4700:4700::1111
The address is made up of three hextets followed by an extra colon and another hextet. We know that an IPv6 address has eight hextets and that the extra colon takes the place of consecutive hextets that are zero. So, the full address is:
2606:4700:4700:0000:0000:0000:0000:1111
The colon trick can be performed only once in an IPv6 address. For instance, this is not a valid IPv6 address:
2606::4700::1111
The problem with the address is that it is impossible to know how many "zero hextets" need to be added where. There are five hextets in the above address that are all zero, but you have no way of telling how many of them go before and after the 4700 hextet. Just as with IPv4, there are different types of IPv6 addresses. The most common type is the global unicast address. This is a public IPv6 address. The address is routable (i.e. you can connect to it from anywhere in the world) and has to be unique – two routable nodes can't have the same IP. A global unicast address is typically a /64 address. This means that the first 64 bits – that is, the first four hextets – are the network portion. The last four hextets are the host portion.
Network             | Nodes
2001:0DB8:85A3:08D3 | 1319:8A2E:0370:7348
The last hextet in the network portion can be used for subnets.
So, the network part is made up of a 48-bit global routing prefix and a 16-bit subnet identifier. And now that we are doing jargon bingo, the remaining 64 host bits are also known as the interface identifier. Subnets are not always /64 addresses. If you are interested in subnetting, you can learn more in my article about subnets in IPv6. The allocation of IP addresses is overseen by IANA. The body follows the RFC3513 specification. Among other things, the specification states that the prefix 2000::/3 is reserved for global unicast addresses. This means that the first hextet of an IPv6 address can tell you what type of address it is. I explain the 2000::/3 prefix in more detail in the above-mentioned article about subnets in IPv6. The second hextet is managed by the five Regional Internet Registries. An RIR manages the allocation of IP addresses for a region of the world. For instance, the RIR for Europe, the Middle East and parts of Central Asia is RIPE. The RIRs get huge blocks of IPv6 addresses from IANA and assign IPv6 ranges to Internet Service Providers in their region. The IANA website lists the global unicast address assignments to RIRs. RIPE has IPv6 prefixes such as 2001:0800::/22 and 2001:0c00::/23. ISPs enter the stage at the third hextet, and they in turn assign IPv6 addresses to their customers. They can then use the fourth hextet for subnets.
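If you want to experiment with these rules, Python's standard ipaddress module implements them. The short sketch below is not part of the original article; it reuses the addresses from the examples above and shows zero suppression, zero compression and the /64 network/host split.

import ipaddress

addr = ipaddress.IPv6Address("2606:2800:220:1:248:1893:25c8:1946")
print(addr.compressed)   # leading zeros suppressed: 2606:2800:220:1:248:1893:25c8:1946
print(addr.exploded)     # all eight full hextets: 2606:2800:0220:0001:0248:1893:25c8:1946

dns = ipaddress.IPv6Address("2606:4700:4700::1111")
print(dns.exploded)      # the :: expands back to the missing zero hextets

iface = ipaddress.IPv6Interface("2001:db8:85a3:8d3:1319:8a2e:370:7348/64")
print(iface.network)     # network portion: 2001:db8:85a3:8d3::/64

try:
    ipaddress.IPv6Address("2606::4700::1111")
except ipaddress.AddressValueError as err:
    print("rejected:", err)  # using :: twice is ambiguous, so the address is invalid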
<urn:uuid:dd3fbb84-d2ea-48a0-9b1d-0b9434e43f4c>
CC-MAIN-2022-40
https://www.catalyst2.com/knowledgebase/networking/introduction-to-ipv6/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00120.warc.gz
en
0.931113
1,299
3.25
3
A-Z Cloud glossary terms of commonly used words and phrases that puts everyone on the same page. At Macquarie Cloud Services, we’re big believers in making everyone technologically literate while remaining enthusiastically human. Here’s an A-Z Cloud glossary terms of commonly used words and phrases, with easy-to-understand definitions for each one. Something missing? Contact us to let us know. Australia colocation is the term for using an Australian data centre facility to rent space for equipment, servers, and other computing hardware. Colocation providers offer fitted racks, cooling, power, cages, cables, bandwidth and security for the customer’s servers and storage. Colocation allows businesses and organisations to benefit from economies of scale that would not be available to them with an in-house option.Australian Cloud Hosting Australian cloud hosting services provide hosting on virtual servers which pull their resource from networks and physical servers. It is available as a service and users can use the service as much as they need, depending on the business demands. Cloud hosting can manage high loads without difficulty and bandwidth issues since another cloud server can provide additional resources.Australian Data Centres Australian Data Centre is a facility composed of networks, computers, and storage that businesses or organisation use to manage, store, and disseminates a large amount of data. Data centres are equipped with servers, storage systems, networking switches, routers, firewalls, cabling and physical racks – along with backup generators, and centre cooling systems. A business relies heavily on applications, services, and data contained within a data center, making it important and critical asset for daily operations.Australian Signals Directorate (ASD) To be approved as a cloud computing provider to Australian Government agencies, the physical infrastructure allocated to government must be physically separate from other infrastructure. This means that data from private enterprises and Australian Government agencies can not be stored on the same physical server in the cloud.Australian Virtual Data Centres A virtual data centre is a collection of resources provided over the cloud specifically designed for business needs. It is a virtual representation of a physical data centre located in a virtual environment hosted in one or actual data centres across Australia. It can provide on-demand computing, storage, networking, and applications – which can integrate into an existing infrastructure. Virtual data centre allows organisations the option of adding capacity or installing new infrastructure without the need to buy or install new hardware.Azure Azure is Microsoft’s public cloud computing platform. It provides a range of cloud services, including compute, analytics, storage and networking.Azure Availability Zones Azure Availability Zone is a unique physical location within a region. Each zone is made up of one or more data centres equipped with independent power, cooling, and networking. BBaaS - Backup-As-A-Service BaaS (backup-as-a-service) is a solution provided by cloud and IT service businesses. It enables businesses to store their data on a private, public or hybrid cloud for the purpose of disaster recovery.Business Continuity Business continuity is the outcome of an effective business continuity and disaster recovery plan, where your company overcomes serious incidents or disasters to continue normal operations within a reasonably short period of time. 
A cloud architect is an IT professional who looks after a company's cloud computing. This involves all types of cloud from public to hybrid and private.Cloud Hosting Provides hosting for applications or websites on virtual servers, which gets their computing resource from networks of physical web servers. It is available as a service rather than a product. Cloud hosting providers will manage the setup, infrastructure, security, and maintenance – allowing clients to customise hardware and applications or scale servers online as needed.Cloud Migration Cloud migration is the process of moving data, software and applications from an on-premise environment to a cloud environment.Cloud Services Refers to services provided over the Internet and made available to users on demand. It covers a wide range of resources that enable users to deploy various types of cloud services. Cloud services include migration, deployment, customization, private and public cloud integration (hybrid cloud) and managed hosting. These services can scale to meet the requirements of its users and enable them to focus on their core businesses instead of allocating own resources.Cloud Services Gateway Cloud Services Gateway is private access for customers to connect their on premise environments to the cloud.Colocation Australia Macquarie Cloud Services built a state-of-the-art data centres in Australia that provide a colocation with scalability, flexibility, security, and reliability - backed with Australia’s best service support. Their data centres are designed with fitted racks, fitted power, fitted cables, and fitted cages – to match their customer’s needs.Colocation Data Centers Australia Macquarie Cloud Services has 3 highly certified data centres (Intellicentres) located in Australia, trusted by the Federal Government with 100% service level guarantee. Intellicentre 1 was their first data centre located in Sydney. Intellicentre 2 is the most certified data centre in Australia and the first to achieve up-time institute tier III certification. Intellicentre 4 was built and designed to support Federal Government’s gateway consolidation program.Colocation Servers Australia Macquarie Cloud Services offers colocation servers in Australia. Businesses and organisations can source and manage their infrastructure without the hassle of managing the actual facility. They provide certified and compliant data centres with flexible space to cater for high-scale and high-performance computing needs.Colo provider Colo providers allow business and organisations to rent space for servers and other computing hardware. They provide the space, cooling, power, bandwidth and physical security - taking the hassle of managing the actual facility away from the customer.Colocation A data centre facility located in Australia where equipment, servers, and space, are available for rental to organisations and businesses. Colocation provides fitted racks, cooling, power, cages, cables, bandwidth and security for the customer’s servers and storage. Colocation allows businesses and organisations to benefit from economies of scale that would not be available to them with an in-house option.Cross Connect A cross connect is a physical cable in a data centre that connects two different end points. It provide connectivity between your data centre & external environments. A data center is a facility containing a large group of networked computer servers used by the Government and organisations for remote storage, processing, or distribution of data. 
Data centres consist of a well-constructed facility that houses servers, storage devices, cables, and a connection to the Internet.Data Centre Migration The process of moving an existing data centre environment to another data centre environment. It also refers to moving to a cloud or managed data centre platforms instead of in-house facilities. Relocating data centres need to be planned carefully to ensure proper and smooth transition – these include compatibility, backup plan, risk reduction, and testing.Data Centre providers They provide space for servers and other computing hardware that businesses and organisations use to organise, process, store, and disseminate large amount of data. Data centres serve as physical or virtual infrastructure for business and organisations composed of house servers, storage devices, cables, and extensive backup power supply systems.Data Disaster Recovery Refers to a set of policies and procedures to allow organisations maintain and resume critical business functions following a natural or human-induced disaster. Businesses with disaster recovery should continue operating as possible when equipment fails. Two important measurements in disaster recovery and downtime are Recovery Point Objective (RPO) and Recovery Time Objective (RTO). RPO is the max age of files that must be recovered from backup storage to resume after a disaster, while RTO is the max amount of time for an organisation to recover files from backup storage and resume normal operations.Data Sovereignty Data sovereignty is the global acknowledgement that data collected by companies is governed by the laws of the nation that it is collected in.Dedicated Hosting Dedicated hosting is a hosting which a server is leased to the client exclusively. A dedicated server is needed for a single client or sole purpose only, such as a website that needs to handle large volumes of traffic each day. This is in contrast to shared hosting, in which a server acts as a host to multiple clients. Dedicated hosting is flexible as the user has full control over the server – choice of hardware, bandwidth, and operating system.Dedicated Server A dedicated server is a single server in a network entirely dedicated to an individual, organisation, or application. Dedicated servers can handle high traffic volume and manage resource intensive applications. In most cases, the hosting company manages and maintains the dedicated server such as operating system updates, application updates, monitoring of server, firewall, cyber security, data backup, and disaster recovery.DevOps A software development methodology that combines people, process and technology to deliver fast and continuous apps and service.Disaster Recovery Refers to the process of returning an organisation or business infrastructure to a state of normality in the event of a disaster. Different type of measures can be included in the disaster recovery plan (DRP) and classified in three types: preventive measures, detective measures, and corrective measures. A disaster recovery plan provides a structured approach for responding to unplanned incidents that can threaten an organisation or business’ infrastructure, networks, hardware, software, and people.Disaster Recovery Plan Documented processes and procedures to recover and protect business infrastructure in the event of a disaster. It specifies procedures that an organisation is to follow in the event of a natural or man-made disaster to minimise downtime and data loss. 
Minimising downtime and data loss is measured in two concepts: recovery time objective (RTO) and the recovery point objective (RPO). Failover is a backup operational mode that automatically switches to a standby component or network upon the failure or abnormal termination of the previous active application, hardware, infrastructure or network. Hybrid cloud is the combination of a public cloud provider with an on-premise, private cloud platform – designed for use by a single organisation or business. Private and public cloud operate independently of each other and communicate with an encrypted connection, which allows for the portability of data and applications. A hybrid cloud gives businesses greater flexibility and more data deployment options. IIaaS - Infrastructure-as-a-Service The base layer of cloud computing, outsourcing your virtual machines, firewalls, physical machines, IP addresses and network to a 3rd party specialist cloud provider. Kubernetes is an open source system to manage containers across private, public and hybrid cloud environments. It was originally developed by Google as a way of managing containerised applications in a clustered environment. A load balancer improves the distribution of workloads between numerous computing resources. Managed Cloud is partial or complete management of IT infrastructure and applications by a third party.Managed Colocation Managed colocation services are for businesses that do not want to deal with the hassle of managing their servers. It gives the desired level of control with the added benefit of a team of experienced engineers to proactively manage their servers, backups, disaster recovery, and security processes.Managed Hosting Managed Hosting is the leasing of Hardware, networking and storage from a provider which is used solely by the customer but housed in infrastructure and facilities owned by the provider.Managed Disaster Recovery Managed Disaster Recovery, powered by Zerto, allows customers to quickly recover entire sites and applications to a state seconds before an attack, with always-on replication and dynamic journaling technology.Multicloud (multi-cloud, multi cloud) Using multiple cloud services in a single architecture is an example of multicloud. Also known as multi-cloud or multi cloud, some businesses may find efficiencies in using different cloud providers for their IaaS and SaaS. Or multiple IaaS providers. Security is paramount for your cloud strategy, and this is the main concern for companies considering a multicloud strategy. NNAS Ransomware Defender NAS Ransomware Defender is a solution available to some private cloud customers that provide rapid recovery enabling an RTO of < 2 hours at a petabyte scale to get unstructured data back online. An auto airgap is implemented for your data in a 3rd availability zone.NV1 - Negative Vetting 1 Security Clearance NV1 is the abbreviation of negative vetting level 1, which is an Australian Government security clearance governed by the Department of Defence. An NV1 security clearance requires the applicant to provide at least 10 years of background information. OOn-premise vs cloud On-premise vs cloud; what does it mean in cloud computing? Cloud based software is hosted off-site, while on-premise software if hosting in internal data facilities. PPaaS - Platform-as-a-Service PaaS (platform-as-a-service) is the layer above IaaS (infrastructure-as-a-service), comprising of platforms such as databases & web servers. 
PaaS is especially popular for developers and development businesses, as they can concentrate on developing their apps and core business, while the management of the platform is left to a service provider.Physical Separation To be approved as a cloud computing provider to Australian Government agencies, the physical infrastructure allocated to government must be physically separate from other infrastructure. This means that data from private enterprises and Australian Government agencies can not be stored on the same physical server in the cloud.Private Cloud Refers to a model of cloud computing where cloud services are provided over private infrastructure for the dedicated use of a single organisation or business. A private cloud provides the same basic benefits of public cloud in addition to limited hosted service while minimising the security concerns.Public Cloud Public cloud is an IT model where on-demand computing services and infrastructure are managed by a third-party provider and shared with multiple organizations using the public Internet. Software as a Service is a software licensing and delivery model in which is licensed to the client. The software or application can be accessed over the Internet – usually referred as on-demand software. It has become a common delivery model for business applications including Payroll system, Customer relationship management, and Human resources management software. VVirtual Data Centre (VDC) A virtual data centre is a pool of cloud infrastructure resources (cpu, memory, storage) that you have the flexibility to use as needed, where needed. It’s a virtual representation of a physical data centre, providing on demand storage, networking and applications.Virtual Server A virtual server is a server that shares hardware and software resources with other operating systems. This is in contrast to dedicated servers, which is a server that isn't shared with any other operating systems or users.VMware cloud Cloud built fully on the VMware platform, allowing users or organisations to migrate to the cloud with ease and without having to switch hypervisors. It enables them to use the same tools they already know. vCloud Director enables the user to control the environment with simple technology. Powered with NSX, virtual networks can also extend directly to the cloud via layer 2 connections – maintaining existing network domain and security.
<urn:uuid:3af83b13-17b2-4829-ac97-e19616b528be>
CC-MAIN-2022-40
https://macquariecloudservices.com/glossary/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00120.warc.gz
en
0.921741
3,156
2.640625
3
Resources include do-it-yourself activities, live virtual chats with astronauts and officials, formal lesson plans, an augmented reality app and more. NASA recently launched a new “internet and social media special” that’s compiled of heaps of activities, research opportunities and media for people of many ages to binge while they’re stuck indoors to help slow the spread of COVID-19. “Kind of our overarching goal for NASA when we talk about what we do is to ‘go where people are,’ and right now people are at home,” NASA Spokesperson Allard Beutel told Nextgov Monday. “So we want to make it as easy as possible for people to get information there.” Beutel explained that the resource compilation that led to the eventual launch of the fully-loaded NASA at Home web special came together organically over the last several weeks, as people across the nation were beginning to shelter in place. “NASA is not a very huge organization in terms of the federal budget,” Beutel noted. “But we're lucky to have a lot of materials, a lot of very cool and unique things, just by the nature of what we do—our discoveries and research and exploration. So we started pulling that together in different groups and we started to coordinate that.” There’s now mandatory telework across the agency except for mission-critical work, such as people who are physically building the next Mars rover that’s getting ready for a launch this summer. But Beutel said NASA officials from its Office of STEM Engagement, Science Mission Directorate and beyond began compiling resources for kids and adults to access from home weeks ago—and that soon ballooned into something much larger. Over time, agency officials started using the hashtag #NASAatHome across social media sites to spread that information to the broader community, and they eventually launched the full site of assets Friday. “It’s a combination of things that have already existed, along with new material,” Beutel said. The NASA at Home offerings are spread across six categories: virtual tours and augmented reality, videos, e-books, podcasts, for kids and families and be a scientist. The latter offers a range of opportunities for people to participate in some of the agency’s real, ongoing projects. “If you're interested in being a citizen scientist, there's actual NASA research that you can do from your home, wherever you are, that can contribute to these efforts,” Beutel said. People can, for example, personally search for brown dwarfs, planets and other new objects at the edges of the solar system. They can also help hunt for undiscovered worlds and support climate research. And for those with telescopes, there’s also an opportunity to actively participate in NASA’s Juno Mission. The space agency’s effort also includes a range of virtual tours of NASA research facilities and sites, as well as guided and 3D tours of the International Space Station. People with at-home capabilities to access virtual and augmented reality experiences also have opportunities to do so, including via an AR app that can put them “in the pilot’s seat of a NASA aircraft.” For families and students in kindergarten and up, NASA offers a variety of projects and resources, such as formal at-home lesson plans, do-it-yourself activities, and other educational materials. Through NASA Television, the agency is also running NASA at Home-themed programming every weekday and other around-the-clock programming from across the universe to help keep people entertained. 
The special will also feature ongoing opportunities to interact and hear from agency experts—some of which will be on social media. For example, record-breaking astronaut Christina Koch reads children’s books on weekday afternoons via Instagram live, as part of educational and STEM activity for students. Koch recently returned to Earth after spending 328 days in space—the longest single spaceflight by a woman. “She wanted to do this on her own,” Beutel said. “She's like, you know, ‘I'd love to read some children's books, and be able to do that on a regular basis.’” Koch’s offer is not necessarily a new concept, as astronauts will often read to children live from the International Space Station. But Beutel said through some of the other fresh offerings, officials from across NASA’s many fields will participate in live virtual chats and interact directly with people in their homes. “The idea is ... knowing that we’re obviously going to be in this situation for a while, we wanted to coordinate this in such a way that you're at home, and you have a reason to come back day after day for something new,” he said. “We’re trying to be as engaging as possible.”
<urn:uuid:e058b61f-008a-4914-80bd-fc2c1ab267bf>
CC-MAIN-2022-40
https://www.nextgov.com/cxo-briefing/2020/04/space-agency-launches-nasa-home-engage-those-coronavirus-confinement/164291/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00120.warc.gz
en
0.956052
1,033
3.078125
3
What is HIPAA? The Health Insurance Portability and Accountability Act (HIPAA) was passed in 1996 to help US workers keep their health insurance when they changed or lost their jobs. HIPAA was expanded in 2009 by the Health Information Technology for Economic and Clinical Health (HITECH) Act. Together, HIPAA and HITECH established national standards for how healthcare organizations and their business associates use, share, and store the personal health information (PHI) of their patients or clients. Cloud Service Providers like Amazon Web Services (AWS) are not directly regulated by HIPAA and HITECH, however they do need to meet strict federal data-security standards that align with the HIPAA Security Rule. HIPAA Security Rule The HIPAA Security Rule applies to health plans, health care clearinghouses, and any health care provider that transmits health information in electronic form. Among other regulations, it creates three levels of safeguards related to the protection of electronic PHI (e-PHI). These include administrative safeguards, technical safeguards, and physical safeguards. - Security Management Process – Organizations must identify and analyze potential risks to e-PHI and implement security measures that reduce vulnerability to a reasonable level. - Security Personnel – Organizations must designate security officials responsible for developing and implementing security policies and procedures. - Information Access Management – Organizations must implement policies and procedures for authorizing access to e-PHI. - Workforce Training and Management – Organizations must provide all employees or staff that work with e-PHI training and supervision regarding security policies and procedures. They must also apply appropriate sanctions against workforce members who violate the policies and procedures. - Evaluation – Organizations must perform periodic assessments of how well their security policies meet the requirements of the Security Rule. - Access Control – Organizations must implement technical policies and procedures that allow only authorized persons to access e-PHI. - Audit Controls – Organizations must implement hardware, software, and/or procedural mechanisms to record and examine access and other activity in information systems that contain or use e-PHI. - Integrity Controls – Organizations must implement policies and procedures to confirm that e-PHI is not improperly altered or destroyed. - Transmission Security – Organizations must implement technical security measures that guard against unauthorized access to e-PHI being transmitted over an electronic network. - Facility Access and Control – Organizations must limit physical access to its facilities while ensuring that authorized access is allowed. - Workstation and Device Security – Organizations must implement policies and procedures to specify proper use of and access to workstations and electronic media. They must also ensure protection of the transfer, removal, disposal, and re-use of electronic media containing e-PHI. How can AWS help maintain HIPAA compliance? AWS follows the risk management standards determined by the Federal Risk and Authorization Management Program (FedRAMP), which align with the HIPAA Security Rule. Cloud Service Providers that work with the US government must demonstrate FedRAMP compliance. FedRAMP uses the National Institute of Standards and Technology (NIST) Special Publication 800. 
Among other things, NIST SP 800 requires cloud service providers to complete an independent, third-party security assessment to ensure that authorizations are compliant with the Federal Information Security Management Act (FISMA). AWS offers a wide range of tools and services to ensure HIPAA compliance with encryption, auditing, data back-up and disaster recovery requirements. HIPAA requires PHI to be encrypted while it is both in storage and being transmitted, according to guidance issued from the Secretary of Health and Human Services (HHS). AWS provides a variety of products and services like Key Management Service (AWS KMS) to help in the management and encryption of e-PHI. HIPAA eligible organizations must allow independent security analysts to audit their activity logs and records that track all access to PHI. This information must be stored for extended periods of time and be readily accessible during an audit. Amazon Elastic Computer Cloud (EC2) allows customers to store activity log files and detailed audits on their virtual servers. They can also keep track of IP traffic and save log files into Amazon Simple Storage Service (S3) for long-term reliable storage. Data Backup and Disaster Recovery HIPAA also requires organizations to keep and protect back up copies of e-PHI data in case of an emergency. Amazon Elastic Block Store (EBS) provides persistent storage for Amazon EC2 virtual server instances. Customers can store Amazon EBS files automatically in Amazon S3. When a file or image is saved, Amazon S3 automatically creates multiple redundant copies and stores them in separate data centers until intentionally deleted. InterVision’s Expertise with AWS InterVision has more than 25 years of experience helping IT teams solve their data-related problems. If you want to know more about how AWS can help your organization maintain HIPAA security compliance, the experts at InterVision are available to answer your questions. Visit this webpage to learn more about our AWS expertise related to HIPAA compliance. Please visit our website or call us at 844-622-5710 for a free consultation.
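As a rough illustration of how the encryption and storage services mentioned above fit together, here is a hypothetical boto3 (Python) sketch. The key alias, bucket and object names are placeholders, and a real deployment would also need appropriate IAM policies, key rotation and a Business Associate Agreement with AWS.

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Encrypt a small piece of e-PHI with a customer-managed KMS key (placeholder alias).
encrypted = kms.encrypt(
    KeyId="alias/phi-demo-key",
    Plaintext=b"patient-id=12345;diagnosis=redacted",
)["CiphertextBlob"]

# Store the ciphertext in S3 and request server-side encryption with KMS as well.
s3.put_object(
    Bucket="example-phi-archive",
    Key="records/patient-12345.bin",
    Body=encrypted,
    ServerSideEncryption="aws:kms",
)

# Only principals allowed to use the key can decrypt the record later.
plaintext = kms.decrypt(CiphertextBlob=encrypted)["Plaintext"]

CloudTrail can then record every call made to the key and the bucket, which is the kind of audit trail the Security Rule's audit controls call for.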
<urn:uuid:c4262f30-e04f-4f6b-817d-de3b24f45aeb>
CC-MAIN-2022-40
https://intervision.com/blog-how-do-you-maintain-hipaa-compliance-in-aws/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00120.warc.gz
en
0.907478
1,074
2.90625
3
In this blog, we will discuss the differences and benefits of Hadoop (the framework most associated with Big Data) and Oracle. Oracle is a database: a collection of data treated as a single unit. The main purpose of the database is to retrieve related information. Oracle was one of the first databases designed for enterprise computing and is a cost-effective way to manage huge amounts of data. Hadoop is an open-source software framework used for storing data and running applications on clusters of commodity hardware. Hadoop is a collection of open-source components built around HDFS (Hadoop Distributed File System, a distributed storage framework) and is used to manage very large data sets. The objective of Hadoop is to store, manage and deliver data sets for analytical purposes. At its core, Hadoop is not a database but a powerful distributed file system.
The 3 Vs and the Cloud
Hadoop has several advantages over Oracle, which are generally explained by the 3 Vs. They are as follows:
Volume: Hadoop has a distributed, MPP-style architecture, which makes it well suited to large data volumes. Multi-terabyte data sets are automatically partitioned among many servers and processed in parallel.
Variety: In Oracle, you are required to define the structure and type of the data you are loading; in Hadoop, this is not necessary. Loading data is as simple as copying it, and the data can be in any format. This makes Hadoop easy to manage, and storing and integrating data from other sources is stress-free. You can store XML documents or digital photos without any difficulty.
Velocity: Because of the MPP architecture and powerful in-memory tools like Spark, Kafka and Storm, Hadoop is a good fit for real-time and near-real-time streaming feeds that arrive at high velocity. This means it can be used to deliver analytics-based solutions; for instance, it can suggest options to a customer using predictive analysis.
The arrival of cloud computing has brought further advantages, chief among them the ability to provide on-demand scalability with cloud-based servers that handle unpredictable workloads. This means a whole network of machines can be spun up for large data-processing challenges while keeping hardware costs contained with a pay-as-you-go model. In some industries, such as financial services, where data is highly sensitive, the cloud may be viewed with suspicion; in that case, consider an "on-premises cloud" to secure your data.
So the first thing to be clear about is that Hadoop is not a database, whereas Oracle is.
Hadoop is Cheaper than Oracle
If we compare the cost of Hadoop and Oracle systems, Hadoop tends to be less expensive. Hadoop can also be hosted on inexpensive hardware, unlike a typical Oracle deployment. There is a scarcity of Hadoop skills, however, which may increase the cost. One study found that, taking account of licensing, personnel, support and hardware, storing 168 terabytes of data is roughly two hundred percent cheaper on Hadoop than on Oracle. That does not mean Oracle should not be used, as it has its own advantages. The choice between Hadoop and Oracle depends on the application, cost and network.
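To make the "Variety" point concrete, here is a small, hypothetical PySpark (Python) sketch; the HDFS paths and the "status" column are invented for illustration. No table structure is declared up front: Spark works out the schema when the data is read, which is the schema-on-read behaviour described above, in contrast to defining a table before loading data into Oracle.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

# Semi-structured JSON events: the schema is inferred from the files themselves.
events = spark.read.json("hdfs:///data/raw/events/")
events.printSchema()

# CSV logs: no CREATE TABLE needed, just point Spark at the directory.
logs = spark.read.option("inferSchema", True).csv("hdfs:///data/raw/logs/", header=True)
logs.groupBy("status").count().show()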
<urn:uuid:3b63f9a4-727e-4412-bcd8-78ae9813ecc1>
CC-MAIN-2022-40
https://ipwithease.com/hadoop-vs-oracle/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00120.warc.gz
en
0.940939
718
3.078125
3
There’s been a lot going on recently in the DDoS mitigation field, with two of the biggest DDoS attacks that ever happened (a 1.3 Tbps attack and a 1.7 Tbps attack) occurring only four days apart from each other. Previously, the Mirai botnet and the exploitation of IoT devices had spawned the infamous Dyn attack that ‘brought down half the internet in 2016’, making it the biggest recorded attack at the time. This time the type of attack is a reflection amplification DDoS attack that uses memcached servers to amplify attacks by a factor of 10,000 to 51,000. This type of attack doesn’t even need a botnet to be potentially more powerful than the Mirai attack. We wanted to touch base and explain what the actual repercussions of these attacks are, how they were launched and all the details behind the launch. The DDoS Attack Anatomy To start things off, let’s first dive into how a reflection amplification DDoS attack works. Reflection amplification attacks have always been one of the strongest weapons for malicious actors, allowing them to amplify the power of their botnets (a group of infected computers controlled as a group) that are sending requests by staggering factors, while also hiding the source of their attack behind a server they used to amplify the launched attack. The simple capability of turning small requests into much larger ones while also not being seen as the attacker has changed the DDoS landscape drastically. A good example of a reflection amplification attack (other than the ones we’ll mention below) is a DNS amplification attack. DNS reflection amplification attacks are asymmetrical DDoS attacks in which the attacker sends out a small look-up query with a spoofed target IP (using the UDP protocol), making the spoofed target the recipient of much larger DNS responses (also using the UDP protocol). With these attacks, the attacker’s goal is to saturate the network by continuously exhausting bandwidth capacity. Vulnerabilities in DNS servers are exploited to turn initially very small queries into much larger payloads. This, in turn, brings the victim’s servers down. The reflection is achieved by eliciting a response from a DNS resolver to a spoofed IP address. During the attack, the perpetrator sends a DNS query with a forged IP address to an open DNS resolver, prompting it to reply back to that address with a DNS response. Because numerous forged queries are being sent out, and because DNS resolvers reply simultaneously, the victim’s network is overwhelmed. That is what an attack using a DNS server as an amplifier looks like. Now let’s see what a reflection amplification attack that uses Memcached servers looks like. What is Memcached and who is using it Memcached (pronounced “mem-cash-dee”) is open source software that many organizations install on their Linux servers to cache data and ease the workload on heavier data stores (disk or databases). It works by caching data in system memory and is intended for use only behind a firewall and on enterprise LANs – and this is where the problem begins. A lot of organizations have hosted Memcached in such a way that it is easily reachable from the public internet, even though it is highly recommended practice not to do so, since it communicates using UDP (port 11211), which allows communication without any authentication. Now, all that attackers have to do is to search for these hosts, send a spoofed IP address of the intended victim, and then use them to direct high-volume DDoS traffic.
From various sources it has been reported that currently there are 50,000 Memcached servers available for exploit on the public internet. A map of the geographical distribution of Memcached servers that can be abused is maintained by Shadowserver.org. The true power of a DDoS attack that is using a Memcached server lies in the fact that a Memcached server amplifies the original request by a factor of up to 51,200. This is more than a hundred times larger than the amplification factor DNS servers have as reflectors in a reflection amplification attack. “With DNS amplification, for instance, an attacker might be able to generate a 50KB response to a 1KB request. But with a Memcached server, an attacker would be able to send a 100-byte request and get a 100MB or even 500MB response in return. In theory, at least, the amplification could be unlimited” – Karsten Desler (CTO of Link11) This is possible because attackers are able to influence the amplification factor of Memcached servers for any given node by inserting records into the open server and maliciously configuring the size limit on those records. This allows them to use larger objects when using Memcached servers as reflectors and reach amplification factors of never-before-seen sizes. There’s more – ransom notes! As reported by KrebsOnSecurity in a detailed article, there have been reports that attackers are adding short ransom notes and a payment address into the junk traffic they’re sending to Memcached services. Since Memcached can accept files and host them in its temporary memory, the attackers place a 1 MB file full of ransom requests onto the server and request that file thousands of times to be sent to the victim’s IP address. One observed ransom note requested 50 XMR (Monero virtual currency). The interesting thing about running DDoS attacks using Memcached servers is that it has always been possible – attackers just hadn’t considered this option until now, and now they’re using it intensively, as shown by the live running list of the latest targets that are getting attacked. How to stop the attacks There’s been a worldwide debate on how best to approach this problem and stop this type of attack once and for all. The first suggestion to anyone running a server which has Memcached installed is to block port 11211 to prevent hackers from accessing it (a short, hedged self-check sketch follows this article). However, it’s a bit much to expect every system admin in the world to be able to react in time – and this is why every website needs to be prepared for an attack of this magnitude. Luckily, GlobalDots can help you protect your business from DDoS attacks of any size and intensity. Even with the most complex and sophisticated setups, GlobalDots can provide you with the technology stack that ensures that the most important aspects of your site are always up & running: deliverability, speed, availability, failover and web security (including web application protection, bot protection, DDoS protection and mitigation). Customers like Lufthansa, Playtika, Trading View, Lamborghini, Bosch, Fiat, Rocket Internet, Benetton, Bulova and other leading brands and small-medium enterprises rely on GlobalDots services to keep their sites and applications fast & secure. Contact us today to help you out with your performance and security needs.
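As a practical follow-up to the advice above about blocking port 11211, here is a rough, hedged self-check sketch (not from the article): it probes a host you own to see whether memcached answers its text-protocol stats command from the outside. The IP address is a placeholder, and you should only run this against infrastructure you are authorized to test; disabling memcached's UDP listener and firewalling the port remain the actual fixes.

```python
# Rough self-check sketch: probe a host you own to see whether memcached answers
# on its default port from the public internet. Host below is a placeholder.
import socket

HOST = "203.0.113.10"   # placeholder public IP of your own server
PORT = 11211            # memcached default port

def memcached_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"stats\r\n")          # memcached text-protocol command
            reply = sock.recv(1024)
            return reply.startswith(b"STAT")    # a STAT line means it answered
    except OSError:
        return False

if __name__ == "__main__":
    if memcached_reachable(HOST, PORT):
        print("memcached is answering from the public internet - firewall it!")
    else:
        print("No memcached response - port appears closed or filtered.")
```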
<urn:uuid:ad57b6b5-f211-4d83-ba6c-2f41cb859edf>
CC-MAIN-2022-40
https://www.globaldots.com/resources/blog/memcached-servers-ddos-attacks-the-complete-analysis/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00120.warc.gz
en
0.940422
1,471
2.546875
3
How can technology inject trust and reliability in vaccine distribution? Blockchain and cloud offer government and healthcare leaders new visibility into a historic vaccine rollout Technology like blockchain and cloud can help streamline vaccine distribution and adoption. On December 30, for the second time in as many weeks, General Gustave Perna, head of the U.S. government’s Project Warp Speed vaccine program, sat inside a Pentagon briefing room and apologized—even for issues well beyond his command. While 20 million vaccines had been made available as promised, only a fraction had made it into people’s arms. “There’s two holidays, there’s been three major snowstorms,” General Perna, dressed fittingly in battle fatigues, explained. “There is everybody working through, you know, how to do the notification, how to make sure we’re administering it the right way, how to ensure that it stays in accordance with the cold chain. There’s numerous factors.” Weeks later and the situation has only grown more dire. While truly historic efforts were made in developing the COVID-19 vaccine, the federal and local governments and public and private healthcare providers are still relying on a medical system that has long been fragmented and inefficient. The strain of COVID-19 may only be fracturing things further, as the familiar lines grow and forms pile up. Add to that mistrust of the new vaccines over the course of last year—reaching 49 percent at one point, according to the Pew Research Center—while an alarming new strain first identified in the United Kingdom could be as much as 70 percent more contagious. Suddenly, the planet’s race for more shots versus more spread has become an all-out sprint. Looking to gain any advantage they can any way they can, many leaders and organizations are turning to technology for a leg up. While they can’t cure disruptions or distrust, services like blockchain, AI and cloud computing are proving as essential as nurses, needles and refrigerators for the successful adoption and distribution of vaccines around the world. The benefits start with vaccines to slow down the spread of COVID-19 but by no means will end there. As with so many things over the past year, the work won’t be easy, but it will be necessary. It’s a challenging moment, as well as a historic one, in almost every way. “We’re at a time with vaccines that’s a bit like hailing a cab a few years ago,” Chris Moose, a partner in the Healthcare and Life Sciences group at IBM Services, said in a recent interview with Industrious. “You used to just call a cab and know it was going to show up eventually. With all the apps, suddenly you knew how long it would be, who was coming, where from, you could communicate along the way. With COVID, that’s what we’ve needed, an execution engine.” Blockchain for a trustworthy supply chain Moose had been thinking about similar problems within the medical supply chain for years. He and a team at IBM had been looking at technologies, particularly blockchain, as tools to build up trust and transparency to combat issues as widespread as the counterfeiting, smuggling, false sale and mismanagement of drugs and therapeutics. Just as the parts of a car or smartphone may be shuttled between factories, cities, even countries during manufacturing, the same has been happening with drugs and medical devices. 
“The industry’s developed a very complex supply chain where the components of drugs, active ingredients or entire molecules, are being made all over the world, and we really don’t know where,” Mark Treshock, the blockchain solutions lead for Healthcare and Life Sciences at IBM, told Industrious. For all the benefits, each new link in that supply chain also introduced the opportunity for intellectual property to be copied or for medicines to be left in unstable conditions. Between June and December 2019, IBM, Merck, Walmart and KPMG completed a pilot sponsored by the US Food and Drug Administration that demonstrated blockchain could help overcome medical supply chain blind spots. By uploading origination, shipment and usage data to an immutable blockchain ledger, the companies demonstrated that they could connect disparate systems and organizations in order to record a common view of product traceability. As COVID-19 swept the globe early last year, the IBM team began to look for ways to aid deployment of the rush of vaccines already in research and production. “There’s trust issues, but there’s also a crush of demand for vaccines,” Treshock said. “How can you leverage technology to give people confidence that they’re receiving good vaccines?” Meetings with Project Warp Speed and other healthcare and government leaders ensued over the summer. With many still squarely focused on the historic pace of vaccine production, questions around its equally historic global distribution lingered. In December, IBM proposed an open-source vaccine management system built on the existing blockchain framework and supported by the Linux Foundation. As before, it can provide trust and transparency in the provenance of medicine—with the COVID-specific parameters now featured, such as whether the vaccines are kept at the right temperature or even if extra doses may be available. “A lot of healthcare business models are built upon hoarding information, and that’s what we’re trying to avoid by going open source,” Moose explains. “It’s become in everyone’s best interests to understand where this stuff is coming from and where it’s going, to keep it as affordable as possible.” COVID challenges trust everywhere This trust will only grow in importance as demand—or lack thereof—shifts from those who have rolled up their sleeves for the vaccine to those who remain wary of it. Dr. William Kassler, chief medical officer of Watson Health, outlined for Industrious how the issues have become as widespread as the virus itself: “We are challenged by not only the complexities of the cold chain, of multiple doses, of a mass vaccination campaign that has never been attempted. We are likewise challenged by an atmosphere of distrust and vaccine hesitancy, some of which has been building for years. “You add to that the polarization in our country that has caught up COVID-19, turning healthcare, public health and science into partisan issues. You add to that the fear of the unknown, with regards to these new vaccine technologies and the unprecedented speed of trials. “You add to that the mistrust of certain communities, like African-Americans, because of systemic racism, and because the health care system and the government have an unfortunately checkered history of not treating that community ethically. “And you add to that not every American has equal access to healthcare, and these are communities COVID-19 has really run rampant through. 
There’s transportation, mental health, housing or food-security issues that means they can’t prioritize or can’t get access to healthcare.” “We’ve got to get ahead of this thing” Overcoming so many factors, and more, means officials will need all the support and confidence they can get in the vaccines they have. The recent struggles with vaccine rollouts have caused delays, making the need to get back on track that much greater—for the health of not just individuals but businesses and economies, as well as the health of the flagging medical system itself. “This isn’t just an economic reopening story,” Moose said. “This is a ‘stamp this thing out before things get even worse’ story.” And, with a new, more contagious strain running rampant in many places, that could already be happening, undercutting any gains the current batch of vaccines is achieving. “We’ve got to get people vaccinated,” Carl Zimmer, the New York Times columnist and author of A Planet of Viruses, recently said. “We’ve got to get ahead of this thing, because otherwise, it’s going to really do us a lot of damage.” That’s one thing a blockchain-based vaccine management system can also help with. The transparency that drives trust can also give officials and leaders a greater handle on supplies, directing and distributing them with new speed and certainty. Older systems and processes are a big part of the holdup over the past month. The initial desire to reserve shots for the most essential workers and at-risk populations confronted the challenges of reaching and scheduling those groups. Consequently, access was throttled and some medicines even wound up being wasted precisely because it was harder to schedule and manage. Many states and eventually the CDC loosened their restrictions by January 11, which led to the opposite problem: the crunch many initially feared when drafting the tighter rules overwhelmed the fragile medical system. Hundreds are lining up at injection sites, sometimes without appointments. Others are spending hours online trying to get even that far. Data for equitable injections Signs of a straining system are everywhere. In the span of a few days, those eligible for vaccination in New York state jumped from fewer than 1 million to 7 million. Some Florida counties turned to sites typically used for selling concert and event tickets to schedule shots, calling it “the quickest, easiest, and most efficient way that we can think of to help the department of health solve this issue right now.” “What we’re seeing is a data visibility problem,” Treshock said. “If we knew where every vaccine was in a state, in the country, even in the whole world—and we can do this with the right data on the blockchain—we could understand how these vaccines are moving around and getting used, and that would alleviate at least some of what we’re seeing now.” Such visibility would also boost efforts at ensuring the equitable distribution of the vaccine that many are seeking. And as data grows on who is getting the vaccine, it can be fed back into the system. Using cloud-based AI analysis, organizations could then determine the communities that are either missing out on the vaccine or resisting it. “We can identify those populations where they live and what messages are likely to resonate with them,” Dr. Kassler said. “Do they trust doctors? Their peers? Their leaders and institutions? We can craft our outreach around that.” With the right approach, the data can continue to build, and continue to do good into the future. 
“If we learned our lesson, if we make investments in technology and procedures and changes in practice, it can leave us stronger,” Dr. Kassler said. “Not only for future emergencies and future pandemic, which will inevitably come, but in terms of really reinventing how we work on a day-to-day basis.”
<urn:uuid:be3f0129-8ced-4bbb-a142-86f415b1a6b1>
CC-MAIN-2022-40
https://www.ibm.com/blogs/industries/vaccination-management-ibm-blockchain-covid-19-vaccines/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00120.warc.gz
en
0.95917
2,293
2.546875
3
GraphQL is a query language used to transfer data from an API into the user’s application: it describes how clients request exactly the data they need. It was developed by Facebook before being moved to the GraphQL Foundation in 2018. The language gives a comprehensive view of an API’s data and allows clients to ask for what they are looking for in an easy and simple manner. - It returns exact and relevant results, as clients request specific fields and receive responses shaped like their queries - Furthermore, GraphQL can fetch a variety of related data through just one single query, so the client does not need to make a different request each time; by simply changing the query, it can retrieve different information - A centralized view of the data is provided, as the GraphQL schema is the source of truth for the related applications - It is less vulnerable to errors because it is strongly typed, with the schema written in SDL (Schema Definition Language) format - Security features include authentication and options for controlling how requests are routed - The web protocol typically used by GraphQL is HTTP (a hedged example query appears after this entry)
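To illustrate the points above, here is a small, hedged example of a GraphQL request sent over HTTP from Python. The endpoint, schema, and field names are invented for illustration; the response JSON mirrors the shape of the query, which is what lets a client fetch exactly the fields it needs in one round trip.

```python
# Minimal illustration (endpoint and schema are made up): a GraphQL client asks
# for exactly the fields it wants in a single HTTP POST and receives JSON that
# mirrors the shape of the query.
import requests

query = """
query GetUser($id: ID!) {
  user(id: $id) {
    name
    email
    orders(last: 3) {
      total
    }
  }
}
"""

response = requests.post(
    "https://api.example.com/graphql",          # hypothetical endpoint
    json={"query": query, "variables": {"id": "42"}},
    timeout=10,
)
response.raise_for_status()
print(response.json()["data"]["user"])
```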
<urn:uuid:efd0fcd0-9e32-47cc-82fa-f53cb5f6f21c>
CC-MAIN-2022-40
https://data443.com/data_security/graph-ql/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00120.warc.gz
en
0.956598
215
2.890625
3
Biometric authentication is a process of proving your identity using unique biological characteristics such as fingerprints, voice, retinal patterns, etc. This authentication technique is becoming more popular since Apple introduced a fingerprint scanner in the iPhone. In this type of authentication, there is no need to remember any details or carry around security keys. It's also highly secure, as it's difficult to break into a system that requires an identifier that cannot be copied or possessed. The authentication process is done in a few seconds and requires little to no training, as the users only need to touch a scanner or click a selfie. A biometric identifier is a parameter that can be measured to identify a person uniquely, and it serves as an access code in biometric authentication. They can be either physiological or behavioral identifiers. Fingerprint authentication compares a user's fingerprint to the stored fingerprint templates to validate the user's identity. Face recognition systems detect a face from a live camera source and compare it with the available database of known faces to find a match in order to complete authentication. In retinal authentication systems, the identifier is the unique blood vessel patterns of the retina. In this biometric, users are identified by the shape of their hand. Body odor is a new biometric identifier that is proving to be more effective than other emerging identifiers. This identifier is still under development and not yet in use. Voice recognition systems analyze a person's voice to validate their identity. A person's typing pattern is unique due to neuro-physiological factors. This can be used to identify a person. Similar to typing rhythm, the handwriting of a person can serve as an identifier, as it is distinct for each person. As simple and secure as it sounds, biometrics do come with their own cons. For instance, since skin elasticity decreases with age, older individuals may experience difficulty authenticating themselves using their fingerprints. Worse yet, leaked biometrics could lead to compromised identities. It's important to remember that biometrics are not 100 percent accurate. The biometric authentication system simply tries to find the best match to the given input identifier from the available collection of biometric data. To combat these issues, there are biometric systems with modifications. Adaptive biometric systems auto-update their biometric data with the changing environment and aging of the biometric identifiers. Biometric system in which authentication requires more than one biometric identifier is called a multimodal biometric system. This improves the accuracy and also provides alternatives. We already know why it's better to use biometrics in conjunction with other authentication techniques. Multi-factor authentication systems use multiple authentication methods to verify users identities. They generally include identifiers that involve: Even though biometrics are an easy and effective security solution, we don't see widespread use of it in IT enterprises because: ADSelfService Plus is an integrated Active Directory self-service password management and single sign-on solution that offers over 15 authentication methods for machine logon, application logon, and VPN logons. The biometric authentication methods supported by ADSelfService Plus include: The biometric data required for verification is not stored in a central database. 
When the fingerprint/Face ID has to be verified, ADSelfService Plus requests the mobile phone's OS to check if the given fingerprint/Face ID matches the stored one. There is no need to deploy and maintain a separate biometric authentication system, as ADSelfService Plus utilizes the fingerprint scanner and facial recognition system readily available in almost every smart phone. This eliminates the added costs of purchasing the required hardware, too. Enable users to reset forgotten passwords and unlock their accounts without involving the help desk, anytime, anywhere. Secure machine logon, application logon, and VPN logon with over 15 authentication methods that can be configured in minutes. Sync the Windows Active Directory user password across various platforms automatically, eliminating password fatigue. Ensure strong passwords that are equipped to fight dictionary attacks, brute-force attacks, and other password threats. Allow users to update personal information in Active Directory, freeing the help desk from this daunting and repetitive task. Implement single sign-on for over 300 major enterprise applications and custom applications from a single portal.
<urn:uuid:8358ce19-aa27-4094-b1d7-3ce5a14a16c6>
CC-MAIN-2022-40
https://www.manageengine.com/products/self-service-password/biometric-authentication.html?utm_source=adssp&utm_medium=webpage&utm_content=endpoint-mfa
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00320.warc.gz
en
0.896169
869
3.4375
3
A recent Associated Press poll indicates that most Americans think their personal information is vulnerable online. What’s more, 71% of Americans believe that individuals’ data privacy should be treated as a national security issue. In other words, the American people get it: data privacy and security are sadly lacking across the digital ecosystem and consumers are suffering the consequences. Information was first digitized in the 1950s, thus ushering in the dawn of data. Then, as now, software was used to create and process data, and like most new technology inventions, security was not inherently built in. Software developers didn’t feel the need to apply controls to the new data objects created. Anyone with access to the software and the rare, expensive computer on which to run it could open, read, modify, delete, or copy this data without limits. Data is just the geek word for information, right? If I were to provide information about the room in which I write this, I might say that it’s 10 feet by 8 feet with a 12-foot ceiling. You’d realize that it’s a comfortable but not overly large space. To put that information into a database, you would use software to enter each dimension into the appropriate cell and save it to your device’s hard drive. Although this description seems straightforward, the information I just conveyed to you and the corresponding data in a database differ in important respects. Without understanding the distinction, we will always struggle to think accurately about data ownership, privacy, and even cybersecurity.
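As a toy illustration of the passage above (not part of the original article), the snippet below stores the three room dimensions in a SQLite table. The table and column names are invented; the point is that the stored values carry none of the surrounding context ("a comfortable but not overly large space") unless that meaning is modeled explicitly.

```python
# Toy example tied to the passage above: the three numbers go into typed columns,
# but the meaning ("a comfortable writing room") is not carried by the data itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE rooms (name TEXT, length_ft REAL, width_ft REAL, height_ft REAL)"
)
conn.execute(
    "INSERT INTO rooms VALUES (?, ?, ?, ?)",
    ("writing room", 10.0, 8.0, 12.0),
)

for row in conn.execute("SELECT * FROM rooms"):
    print(row)   # ('writing room', 10.0, 8.0, 12.0) -- values without context

conn.close()
```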
<urn:uuid:dbd9ecac-7338-40f1-a306-08244a721f34>
CC-MAIN-2022-40
https://www.absio.com/tag/privacy-by-design/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00320.warc.gz
en
0.957018
315
2.796875
3
For those looking to purchase an energy saving computer to be more environmentally conscious, ultrathin laptops may (or may not) be the way to go. The Electronic Product Environmental Assessment Tool (EPEAT), a certification for environmentally-friendly electronics created by manufacturers and the U.S. Environmental Protection Agency, declared in October that five ultrathin notebooks passed their requirements and could be considered green devices, including the MacBook Pro with Retina Display. TreeHugger reported that the agency deemed the devices sufficiently eco-friendly and easy enough to disassemble for recycling purposes. However, some eco-savvy techies are crying foul over the latest EPEAT ruling, saying that the ultrathin laptops by their design are difficult to recycle and generally bad for the environment. Not only are the displays extremely difficult to repair, but the batteries are affixed to the machine using adhesive, which makes disassembly especially cumbersome. Technology repair website iFixit gave the initial incarnation of the Macbook Retina a score of one out of 10, with 10 meaning the computer is incredibly easy to take apart and repair TreeHugger reported on October 25. EPEAT defended its position, saying its guidelines do not specifically say that parts cannot be glued together in order for the machine to be certified. Also, it only certifies the eco-friendliness of a computer, as it makes no indication of how easy or difficult it is to repair, TreeHugger reported on October 19. “The standard also doesn’t forbid specific construction methods such as fasteners versus adhesives – it just requires products to be easy to disassemble for recycling,” EPEAT said in a statement, according to the October 19 article. “The test lab went through the disassembly process and reported that the products were all easy to disassemble with commonly available tools.” Do you think ultrathin laptops should be certified as eco-friendly? Is PC power management important to you when buying a new computer, or do other concerns take precedent? Leave your comments below to let us know what you think about this budding controversy!
<urn:uuid:235a9baa-a3b9-4647-946e-5f1f8dc258b5>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/ultrathin-laptops-deemed-green-but-some-cry-foul
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00320.warc.gz
en
0.958128
436
2.578125
3
The United States is experiencing a “steep and sustained” spike in sexually transmitted diseases, a new government analysis shows. Cases of gonorrhea, syphilis and chlamydia all increased in 2017, making it the fourth straight year in which STD infections continued to expand. “The United States continues to have the highest STD rates in the industrialized world,” said David Harvey, executive director of the National Coalition of STD Directors. “We are in the midst of an absolute STD public health crisis in this country. It’s a crisis that has been in the making for years.” CDC researchers discussed the new statistics — based on preliminary data from 2017— today (Aug. 28) at the National STD Prevention Conference. They found that doctors diagnosed nearly 2.3 million cases of chlamydia, gonorrhea and syphilis in the U.S. that year. That’s 200,000 more cases than were reported the year before. “We are sliding backward,” Dr. Jonathan Mermin, the director of the CDC’s National Center for HIV/AIDS, Viral Hepatitis, STD and TB Prevention, said in a statement. The most common STD in 2017 (and the one most commonly reported to the CDC in general) was chlamydia, with over 1.7 million cases identified in 2017, according to the report. This infection, which is caused by the bacterium Chlamydia trachomatis, can infect both men and woman who have unprotected vaginal, anal or oral sex, according to the CDC. Sexually active young people are particularly at risk of a chlamydia infection, the CDC says. Of the reported cases in 2017, 45 percent were among females between the ages of 15 and 24. This is also true of gonorrhea, according to the CDC. Gonorrhea is another bacterial infection, in this case, caused by the bacterium Neisseria gonorrhoeae. Like chlamydia, this STD can infect both men and women. Diagnoses of gonorrhea increased 67 percent from 2013 to 2017, with infection rates nearly doubling among men from 169,130 cases to 322,169 cases, according to the preliminary data. Both chlamydia and gonorrhea, if left untreated in women, can lead to a condition called pelvic inflammatory disease, which can damage the reproductive system and may lead to infertility. In men, though less likely to cause health problems, can sometimes spread to the tubes that carry sperm from the testicles and cause pain and fever, according to the CDC. Rarely, it can also lead to sterility. Syphilis infections have also increased, the preliminary data showed. This infection is caused by the bacterium Treponema pallidum, and the infection is divided into four stages, depending on the severity. Diagnoses for the first two stages — when the infection is most contagious — increased 76 percent from 2013 to 2017. Of the more 30,000 syphilis cases diagnosed in 2017, the majority (70 percent) occurred in gay and bisexual men and other men who have sex with men. People can get syphilis through direct contact with a syphilis sore during vaginal, anal or oral sex. All three infections can be treated with antibiotics, as of now. However, like with all bacterial infections, the STDs run the risk of becoming resistant to the antibiotics that treat them. In fact, the bacteria that cause gonorrhea have become resistant to every class of antibiotics used to treat the disease except for one. The last remaining shield, ceftriaxone, is now prescribed along with another oral antibiotic, called azithromycin, to help delay the resistance, according to the statement. 
Though treatment is still effective, laboratory testing has found that the gonorrhea bacteria are becoming resistant to azithromycin: 1 percent of samples tested in 2013 were resistant to the drug, and over 4 percent were resistant in 2017. Researchers are concerned this could eventually lead to a strain of gonorrhea that’s entirely antibiotic-resistant. “We expect gonorrhea will eventually wear down our last highly effective antibiotic, and additional treatment options are urgently needed,” Dr. Gail Bolan, the director of CDC’s Division of STD Prevention, said in the statement. “We can’t let our defenses down — we must continue reinforcing efforts to rapidly detect and prevent resistance as long as possible.” The risk of STD infections can decrease by using protection during sex. The CDC recommends STD screening and timely treatment. “Most cases go undiagnosed and untreated,” the organization wrote in the statement. This “can lead to severe adverse health effects,” such as infertility, ectopic pregnancy (in which a fertilized egg begins to grow outside the uterus), stillbirth and increased HIV risk.
<urn:uuid:8afb81cc-3b12-4b94-9d67-50f8c69c48f3>
CC-MAIN-2022-40
https://debuglies.com/2018/09/26/rates-of-sexually-transmitted-diseases-stds-in-the-u-s-continue-to-rise/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00320.warc.gz
en
0.953743
1,024
2.921875
3
A new and unorthodox approach to dealing with discriminatory bias in Artificial Intelligence is needed. As explored in detail below, the current literature is split between studies originating from two contrasting fields: philosophy and sociology on one side, and data science and programming on the other. SwissCognitive Guest Blogger: Lorenzo Belenguer It is suggested that what is needed instead is an integration of both academic approaches, one that is machine-centric rather than human-centric and applied with a deep understanding of societal and individual prejudices. This article develops a novel approach into a framework of action: a bias impact assessment to raise awareness of bias and why it occurs, a clear set of methodologies shown in a table alongside the four stages of pharmaceutical trials, and a summary flowchart. Finally, this study concludes that a transnational independent body with enough power to guarantee the implementation of those solutions is needed. Bias leading to discriminatory outcomes is gaining attention in the AI industry. The let's-drop-a-model-into-the-system-and-see-how-it-goes approach is no longer viable. AI has become so ubiquitous in our daily lives, and can, and does, have such dramatic effects on society, that an effective framework of actions to mitigate bias should be compulsory. The most disadvantaged groups tend to be the most affected. If we aim for a more equal and fairer society, we need to stop looking the other way and standardise a set of methodologies. As I explore in more detail in my paper published in the Springer Nature journal AI and Ethics, industries with a long history of applied ethics, such as the pharmaceutical industry, can greatly assist. The reader will grasp a better understanding by starting from the flowchart included in this article. The model is inspired by the four stages of trials that a pharmaceutical company conducts before launching a new medicine, and by its regulatory follow-up. Finally, the whole process is monitored by an independent body, like the FDA in the US, before the medicine is allowed to reach the market. Harm is minimised and, as soon as it is detected, removed. It includes a compensatory scheme if negligence is proven, as we are witnessing with the overprescription of opioid drugs in the US. Before we start, an awareness of individual and societal prejudices is paramount. I would add a good understanding of the protected groups' concept. Machines can be biased because we are, and we live in a society that is biased. This is one of the reasons why anthropology is gaining predominance in AI ethics, especially since historical data is one of the main sources of data used to feed ML models. The first phase consists of testing the system in a closed environment while checking the quality of the data used to train the models and how it has been collected, together with a first round of bias detection by specialised algorithms such as FairTest or AIF360 (a hedged sketch of one such check appears after this article). In the second phase, the system is tested in a secure open environment, and a second round of bias detection is conducted, again by specialised algorithms such as FairTest or AIF360. By the second stage, we are better positioned to unearth its possible flaws and discriminatory outcomes. Once the first and second phases are complete, we are ready to conduct a bias impact assessment in the third phase. Impact assessments are as old as humankind: a hunter would assess an environment to spot any risks and benefits.
They can be very helpful in clearly identifying the main stakeholders, their interests, their position of power to block or allow necessary changes, and the short- and long-term impacts. If we want to mitigate bias in an algorithmic model, the first step is to be aware of the biases and why they occur. The bias impact assessment does that, and hence its relevance. It is helpful to provide a list of essential values to facilitate a robust analysis to detect bias, as provided by the EU white paper on Trustworthy AI, 2019 p. 14. They are respect for human autonomy, prevention of harm, fairness and explicability. Those values are further explained in my paper (link provided in the first paragraph). Once the tests are passed, the AI system can be deployed and made fully accessible to its users, either a specific group of professionals, such as the HR department, or the general public, for example for credit rating when applying for a mortgage. Finally, the fourth phase is implemented by closely monitoring the four values, gathering rapid assessment feedback from its users, and providing a compensation scheme when harm caused by the system can be proven. At this stage, an independent body, at a transnational level if possible, is needed with enough power to guarantee the implementation of those safeguarding methodologies, and to enforce them when they are not implemented. The time for voluntary cooperation is over, and the time for action is now. About the Author: Lorenzo Belenguer is a visual artist and an AI Ethics researcher. Belenguer holds an MA in Artificial Intelligence & Philosophy, and a BA (Hons) in Economics and Business Science.
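Since the first and second phases above name AIF360 as one of the bias-detection tools, here is a hedged sketch of the kind of group-fairness check it supports. The pandas DataFrame, file name, and column names are hypothetical, and the exact API should be verified against the current AIF360 documentation.

```python
# Hedged sketch of a group-fairness check with AIF360 (column names and the
# input CSV are hypothetical; verify details against the AIF360 docs).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# df: historical decisions with a binary outcome and a protected attribute
df = pd.read_csv("hiring_decisions.csv")      # hypothetical file

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],                    # 1 = favourable outcome (assumed coding)
    protected_attribute_names=["gender"],     # 1 = privileged group (assumed coding)
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact close to 1.0 and statistical parity difference close to 0
# suggest less group-level bias in the historical data.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```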
<urn:uuid:573ab733-52c0-428e-98e1-8985f26094a6>
CC-MAIN-2022-40
https://swisscognitive.ch/2022/04/26/what-can-ai-learn-from-the-pharmaceutical-industry-to-solve-bias-five-solutions-that-might-help/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00320.warc.gz
en
0.948555
1,009
2.609375
3
Active Directory (AD) is Microsoft’s identity and access management (IAM) solution that allows IT teams to centrally manage user accounts and devices within an IT infrastructure. AD has become an increasingly integral component for many IT environments due to its benefits, such as single sign-on (SSO), enhanced security, and streamlined IT management. Therefore, understanding how Active Directory works should be a top priority for any IT administrator because nearly all cybersecurity attacks affect it. In this post, you’ll learn more about AD, how it works, and why you may want to consider migrating from an on-prem to fully optimized cloud-based IAM. Why Did Microsoft Release Active Directory? The history of AD dates back to 2000 when Microsoft officially released Windows 2000 Server operating system (OS) as a replacement for Windows NT-based user authentication. At the time, Windows NT-based platforms only provided a flat and non-extensible domain model for user authentication, which didn’t scale well for large enterprises. With AD, the company could now anchor user management and access control in IT infrastructures that were largely dominated by Windows OSs. Over the years, Microsoft strengthened AD capabilities, adding features such as federation services, rights management, and SSO. Today, AD is part of nearly every task that users perform on Windows-based networks, including Exchange Server, SharePoint, and Office Communications Server, among others. Users can also leverage the lightweight directory access protocol (LDAP) to add Unix and Linux-based machines under access controls in AD and other third-party applications. Today, most organizations predominantly use AD as an on-prem IAM solution. However, you can also synchronize AD with Azure AD to accomplish hybrid identity goals through the Azure AD Connect feature; however, you can only get this feature if you enroll in an Azure subscription. Understanding Active Directory Services The primary goal of AD is to allow IT administrators to manage permissions and control access to corporate resources. Active Directory Domain Services (AD DS) is the foundation of AD that allows it to provide these services. AD DS provides authentication and authorization measures to users, determining which corporate resources they can access. On Windows Server OSs, a domain controller (DC) is a server that responds to authentication and authorization requests within the domain. A DC can either be a physical host or a virtual machine (VM). AD DS uses a hierarchical layout structure comprising domains, trees, and forests to coordinate network resources. A domain is the smallest unit of the main tiers, while a forest is the largest. Various objects like users and devices that share a database form the domain. A tree is a collection of domains with hierarchical trust relationships. AD DS provides various types of trusts, including one-way, two-way, trusted, transitive, and intransitive, among others. On the other hand, a forest is a set of multiple trees. It consists of shared catalogs, application information, directory schemas, and domain configurations. A forest provides a security boundary in the entire Active Directory infrastructure. Besides the domain services, AD also provides essential services that expand on the solution’s directory management capabilities, detailed below. 
Active Directory Lightweight Directory Services (AD LDS) This is a directory service that uses LDAP to provide data storage and retrieval capabilities for directory-enabled applications. AD LDS can work without the dependencies associated with AD DS. For example, you can concurrently run multiple instances of AD LDS on a single machine with an independently managed schema for each instance. Active Directory Certificate Services (AD CS) This is a server role that allows users to create, manage, and share their encryption certificates. This allows them to exchange information over the internet securely. Active Directory Federation Services (AD FS) AD FS is a feature that provides SSO capabilities. It enables users to access applications and other resources while outside of the enterprise firewalls. Active Directory Rights Management Services (AD RMS) This is a set of security technologies that IT teams can use to manage and secure data. Such technologies include encryption, authentication, and certificates. How Does Active Directory Work in Modern IT Environments? AD remains the single point of identity management for many organizations that use Windows OSs. It’s the linchpin for authentication and authorization in most businesses, controlling access to critical resources even in an era where organizations use cloud-based services and support a mobile-first approach. Most companies have heterogeneous IT environments. For example, IT systems may consist of on-prem and cloud-based assets where users access them through various methods, including desktops, laptops, and smartphones. They may also include non-Windows systems, including macOS devices and Linux servers. To manage IAM across such environments, companies often rely on the Azure AD Connect tool to synchronize on-prem AD with Azure AD, as well as additional point solutions to accomplish critical tasks for non-Windows based resources and remote employees. This results in a complex and costly IT stack. Additionally, the security controls on Azure AD are different from those of on-prem AD deployments. For example, while Azure AD supports multi-factor authentication (MFA), on-prem AD doesn’t. As such, it’s not simple to seamlessly integrate MFA into IT resources with AD, even if you’re only using Windows-based systems. Meanwhile, you can’t just switch off the on-prem AD and transition to Azure AD because the two platforms are independent. For example, Azure AD lacks a DC and cannot provide the same capabilities you’ll find with on-prem AD. While IT teams can implement the federated SSO in on-prem AD environments to manage access controls, such a feature cannot work in hybrid environments. With the threat landscape increasing by the day, the need for MFA is a must for both on-prem and cloud-based systems. All these issues play into the need for AD modernization. Many companies that relied on the AD during the on-prem computing era and built their IT infrastructures around it are finding that its future is not guaranteed. As cloud-based services continue to expand, and with distributed workforces the new norm, managing user access and authorization is increasingly becoming an issue — for both IT departments and users. For example, IT teams have to create and manage multiple user accounts in both AD and numerous software-as-a-service (SaaS) applications. The same problem extends to users, who have to remember their login credentials across Windows-based networks and each SaaS application they connect to. 
Leverage JumpCloud Directory to Modernize AD JumpCloud is an all-in-one cloud directory platform that reimagines the role of AD. It allows IT teams to manage user identities similar to AD’s group policies (GPOs), as well as Windows, Mac, and Linux devices, files, networks, servers, and more. JumpCloud is ideal for small to mid-sized enterprises (SMEs) that want to centralize their IAM services, or build their IT stack from the ground up without the prohibitive cost and complexity of AD.
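The earlier section notes that LDAP can bring Unix and Linux machines and third-party applications under AD's access controls. As a hedged illustration of what such an integration looks like at the protocol level (not taken from the article), the sketch below uses Python's ldap3 library to look up an AD user; the domain controller, service account, and base DN are placeholders for your own domain.

```python
# Illustrative only: how a non-Windows system or application can look up an AD
# user over LDAP with the ldap3 library. Server, credentials, and base DN are
# placeholders.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldaps://dc01.example.local", get_info=ALL)   # placeholder DC
conn = Connection(
    server,
    user="EXAMPLE\\svc_ldap",        # placeholder service account
    password="change-me",
    auto_bind=True,
)

conn.search(
    search_base="DC=example,DC=local",
    search_filter="(&(objectClass=user)(sAMAccountName=jdoe))",
    search_scope=SUBTREE,
    attributes=["displayName", "mail", "memberOf"],
)

for entry in conn.entries:
    print(entry)

conn.unbind()
```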
<urn:uuid:e46f0a31-8d48-442d-9523-1342cf723582>
CC-MAIN-2022-40
https://jumpcloud.com/blog/how-active-directory-works
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00320.warc.gz
en
0.927479
1,487
3.046875
3
Let’s take a look at how to configure static NAT on a Cisco router. Here’s the topology I will use: Above you see 3 routers called Host, NAT and Web1. Imagine our host is on our LAN and the webserver is somewhere on the Internet. Our NAT router in the middle is our connection to the Internet. There’s a cool trick on our routers that we can use. It’s possible to disable “routing” on a router which turns it into a normal host that requires a default gateway. This is very convenient because it will save you the hassle of connecting real computers/laptops to GNS3. Host(config)#no ip routing Web1(config)#no ip routing Use no ip routing to disable the routing capabilities. The routing table is now gone, let me show you: Host#show ip route Default gateway is not set Host Gateway Last Use Total Uses Interface ICMP redirect cache is empty Web1#show ip route Default gateway is not set Host Gateway Last Use Total Uses Interface ICMP redirect cache is empty As you can see the routing table is gone. We’ll have to configure a default gateway on router Host and Web1 or they won’t be able to reach each other: Host(config)#ip default-gateway 192.168.12.2 Web1(config)#ip default-gateway 192.168.23.2 Both routers can use router NAT as their default gateway. Let’s see if they can reach each other: Host#ping 192.168.23.3 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 192.168.23.3, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 8/8/12 ms Reachability is no issue as you can see. Now let me show you a neat trick: Web1#debug ip packet IP packet debugging is on I can use debug ip packet to see the IP packets that I receive. DON’T do this on a production network or you’ll be overburdened with traffic! Now let’s send that ping again… Web1# IP: s=192.168.12.1 (FastEthernet0/0), d=192.168.23.3, len 100, rcvd 1 Above you see that our router has received an IP packet with source IP address 192.168.12.1 and destination IP address 192.168.23.3. IP: tableid=0, s=192.168.23.3 (local), d=192.168.12.1 (FastEthernet0/0), routed via RIB And it will reply with an IP packet that has source address 192.168.23.3 and destination address 192.168.12.1. Now let’s configure NAT so you can see the difference: NAT(config)#interface fastEthernet 1/0 NAT(config-if)#ip nat inside NAT(config)#interface fastEthernet 0/0 NAT(config-if)#ip nat outside First we’ll have to configure the inside and outside interfaces. Our host is the “LAN” side so it’s the inside. Our webserver is “on the Internet” so it’s the outside of our network. Now we can configure our static NAT rule:
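The article breaks off just before the rule itself. As a hedged completion, a typical static NAT entry for this topology, plus a quick verification, might look like the following; the inside-global address 192.168.23.100 is an assumption chosen for illustration, not taken from the article.

```
! Hedged sketch: 192.168.23.100 is an assumed inside-global (public-facing) address
NAT(config)#ip nat inside source static 192.168.12.1 192.168.23.100
NAT(config)#end
! Verify the mapping and watch translations happen:
NAT#show ip nat translations
NAT#debug ip nat
```

With a rule like this in place, a ping from Host to Web1 should appear in Web1's debug output with the translated source address rather than 192.168.12.1.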
<urn:uuid:203db0bb-72b8-4a63-a65d-0047f6b67a85>
CC-MAIN-2022-40
https://networklessons.com/cisco/ccie-enterprise-infrastructure/how-to-configure-static-nat-on-cisco-ios-router
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00320.warc.gz
en
0.785685
761
2.6875
3
Why is Continued Professional Development so Important? - March 17, 2021 - Posted by: Jade Scammells - Category: Career Advice Many people think that when they have completed their GCSEs, ‘A’ Levels or university degree, the hard work is done. They can then walk into their dream career and no longer have to worry about studying or learning. However, this is not the case. Continued professional development (or CPD for short) is essential for you to keep up to date with the latest knowledge and skills during the course of your professional career. In fact, according to Gallup, 87% of Millennials see growth and development opportunities as important to them in their role. 69% of Gen Xers and Boomers see CPD as essential. Why is training and development at work so necessary, and how can learning on the job enhance your career prospects? What is CPD? Continuous professional development is when staff undertake learning to develop and enhance their skill set. CPD can be formal or informal, structured or unstructured. Examples of learning include: - Reading an article or case study - Listening to a podcast or watching a video - Attending a seminar, webinar or workshop - Sharing knowledge with colleagues informally or formally - Being part of a Facebook or LinkedIn group that shares best industry practice - ‘On the job’ learning with other staff and secondments/placements - Studying for a professional qualification or accreditation This means that you may be carrying out CPD already, but you don’t know it. For example, reading articles about new technology in your chosen career path on your lunch break counts as continuous professional development! Why is CPD so important? You may be wondering why CPD is so important. After all, if you are experienced in your chosen career you must know all there is to know… right? Here are five reasons why continuous professional development is so critical. 1. It keeps you up to date with the latest developments The work landscape changes all the time. Brand new technologies come into play, new businesses launch and new ways how to carry out tasks are discussed. Being aware of these developments will not only put you ahead of your competitors but help your business save time and money too. 2. It helps customers put their trust in what you do CPD also helps keep you legally compliant. For example, when GDPR was implemented in 2018, you may have had training to learn what the new law was about, and how it would affect you in the workplace. Being legally compliant not only increases your confidence in your role, but means that third parties, other staff and clients can put their faith in what you do too. 3. It helps you prepare for your next role Sometimes we undertake continuous professional development for the role that we want, rather than the position we currently have. For example, if you are interested in becoming a manager, you may be keen to learn more about management or team leadership. You may even ask if you can become a mentor to new staff If you work in IT but want to specialise in cybersecurity, you may choose to complete a penetration testing course. Undertaking CPD shows your commitment to the business and demonstrates that you have the experience and skills to take the next step in your career. This means when a promotion comes up, you’re more likely to be chosen to take the role. 4. It keeps you interested in your job We all occasionally lose passion for the work we do. 
Sometimes if you do the same work day in, day out, it can make you feel frustrated and bored. This may cause you to start searching for another job. CPD makes you aware of new knowledge and trends and can make you more interested in your chosen career path. For example, if you work in digital marketing you may sign up for webinars about how to create videos or enrol on a short course to enhance your graphic design skills. Passion is critical for all jobs, and constant learning will help keep you invested. 5. It’s good for your well-being As well as helping you in your career, CPD can also help you in your personal life too. Learning new things helps keep your brain healthy, boosts your self-confidence and improves your mental health. You may also be able to apply what you learn to your home life! The benefits of CPD for businesses So far, we’ve looked at the benefits of CPD to you as an employee. However, it is critical for businesses as well. Different companies have different approaches to CPD. Some make it mandatory through personal development reviews, while with other companies it is something that is ‘nice to have’, but not essential. Benefits of CPD for business include: - Improved staff morale and engagement - Higher standards - More experienced staff - Higher staff retention rates - Recruitment of a higher calibre of employees A dedication to continued professional development can help bridge the skills gap too. Retraining staff can help companies avoid wasting time and money on filling new roles as well as reducing the number of staff made redundant. 70% of employees would take part in retraining if it were an option. This means that staff are ready and willing to learn. Looking to move forward in your chosen career? IT Online Learning is here to help In conclusion, CPD is something that everyone should be carrying out at work. It’s vital for keeping up with the latest trends and developments, as well as expanding your knowledge. If you want to grow your knowledge and keep up to speed with the latest developments in your profession of choice, we can support your needs. We provide a wide range of IT, project management, health & safety and leadership courses that will help you grow and develop in your chosen career. All our courses can be completed online, meaning that you can study at home or in the office.
<urn:uuid:d1399e7a-46e3-401d-9e60-8c104cef6d73>
CC-MAIN-2022-40
https://www.itonlinelearning.com/blog/why-is-continued-professional-development-so-important/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00320.warc.gz
en
0.961529
1,255
2.53125
3
Security Information and Event Management (SIEM) Security Information and Event Management (SIEM) refers to products that aggregate and analyze information from different sources to help an enterprise defend company resources. When SIEM hardware, software, and services discover abnormalities, they trigger the reporting and responses helpful for disrupting or mitigating cyberattacks. SIEM can be part of an enterprise’s on-premise infrastructure or delivered by managed service provider (MSP). The services typically use many nodes such as firewalls, anti-virus scans, intrusion detection, behavioral scanning, Active Directory, applications, routers, switches, and more to detect incidents above the normal state of an enterprise. Often these nodes and SIEM as a whole monitor traffic and scan for automated attacks such as credential stuffing and password guessing, as well monitoring devices for malicious software installations and these programs’ activity. SIEM was first coined by Gartner in 2005. Today it is part of the tools that large enterprises use for data loss prevention (DLP). "Our SIEM is picking up anomalous traffic to our web app in the form of rapid failed login requests. We're being targeted for credential stuffing. Please report it to the authorities and see if we can track its source, however I am sure it's proxied."
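As a toy illustration of the kind of rule a SIEM might apply to the credential-stuffing scenario quoted above (this is not from the source, and real correlation engines are far richer), the sketch below flags any source IP with an unusually high number of failed logins inside a short sliding window. The event format is invented.

```python
# Toy detection rule: flag any source IP with many failed logins in a short window.
from collections import defaultdict
from datetime import timedelta

THRESHOLD = 20                     # failed attempts ...
WINDOW = timedelta(minutes=5)      # ... within this sliding window

def detect_credential_stuffing(events):
    """events: iterable of dicts like
    {"ts": datetime, "src_ip": "198.51.100.7", "outcome": "failure"}"""
    failures = defaultdict(list)
    alerts = set()
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["outcome"] != "failure":
            continue
        bucket = failures[e["src_ip"]]
        bucket.append(e["ts"])
        # keep only attempts that are still inside the sliding window
        failures[e["src_ip"]] = [t for t in bucket if e["ts"] - t <= WINDOW]
        if len(failures[e["src_ip"]]) >= THRESHOLD:
            alerts.add(e["src_ip"])
    return alerts

# Example usage: feed in parsed web-app auth logs and report offending sources
# print(detect_credential_stuffing(parsed_events))
```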
<urn:uuid:37b9b969-dfe9-4ab7-a4b5-97dc56349946>
CC-MAIN-2022-40
https://www.hypr.com/security-encyclopedia/security-incident-event-management
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00520.warc.gz
en
0.940686
267
2.65625
3
Gaming fans may find that life in “SimCity” suddenly gets a lot more realistic this fall with the release of “SimCity Societies”, which has been designed to incorporate some of the harsh realities of global warming. Through a partnership between Electronic Arts (EA) and energy giant BP, the next-generation version of the bestselling game series combines city building with industry expertise on energy, electricity production and greenhouse gas emissions to highlight the impact of electricity generation on the carbon dioxide emissions linked with climate change, the companies said. The game is due Nov. 15 in Europe and North America. “Since their inception in 1989, ‘SimCity’ games have served as excellent creative and educational tools to convey complex subjects,” said Steve Seabolt, vice president of global brand development for the Sims label at EA. “With ‘SimCity Societies,’ we have the opportunity not only to demonstrate some of the causes and effects of global warming, but also to educate players how seemingly small choices can have a big global impact. “BP was one of the first major energy companies to publicly acknowledge the need to reduce carbon emissions and begin taking precautionary measures,” Seabolt added. “As such, they are the perfect partner to help educate people on this important social issue in ‘SimCity Societies.'” “SimCity Societies” will not force players to adopt one type of power or another for the cities they build; rather, they will be free to choose, just as in real life. Also like in real life, there are pros and cons associated with each option. The least expensive and most readily available buildings in “SimCity Societies” are also the biggest producers of carbon dioxide, for example, so players who choose to build cities dependent on them will see their carbon ratings rise. Once critical levels are reached, the game will issue alerts about the threat of droughts, heat waves and other natural disasters that may strike. Alternatively, players can take a greener approach by choosing from a variety of BP Alternative Energy low-carbon power options, which tend to keep citizens safer from disaster but also cost more and don’t produce as much power as the high-emissions options do. Informative real-world snippets about power production and conservation will also be available in-game, educating players about global warming both virtually and in reality. “The time was right for this partnership,” said Carol Battershell, vice president for BP Alternative Energy. “EA was developing the next iteration of the “SimCity” series at the same time that we were looking for opportunities to raise awareness about low-carbon power choices. “EA has a powerful reach to the next generation, and BP has a suite of low-carbon power alternatives,” Battershell added. “In our collaboration through this innovative game, we can provide education on the issues surrounding climate change, its association with carbon emissions and the ability to take early positive action through low-carbon power choices.” Gaming and virtual world technologies are increasingly being used to provide interactive tools for serious business applications such as training, collaboration and education, Michael Cai, director of broadband and gaming with Parks Associates, told TechNewsWorld. “This is a wider implementation of this whole phenomenon,” Cai said. “It’s also a lot more fun than reading text or looking at presentations,” he added. 
The Greater Good “I think this sounds great,” George Douglas, a spokesperson for the U.S. Department of Energy’s National Renewable Energy Laboratory, told TechNewsWorld. Increased awareness of the consequences of choices made in everyday life can ultimately change the choices people make, Douglas explained. Ultimately, he said, “the more people know about energy — how it’s made, how it’s used and the impacts of those possibilities — the better off we all will be.”
<urn:uuid:e585d8dd-45d9-457a-8b29-af9725213575>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/global-warming-strikes-simcity-59757.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00520.warc.gz
en
0.950777
853
2.578125
3
When you look at the origin of the word computer—“one who calculates”—you learn electronics aren’t necessarily a required component even though most of us would imagine the modern-day desktop or laptop when we hear the term. A computer is something that can handle data, and in this perspective, our brains are one of the most powerful computers that exist. There has been significant progress toward the creation of biological computers. Once they get perfected, it will change our world. What are biological computers? Biological computers are made from living cells. Instead of electrical wiring and signaling, biological computers use chemical inputs and other biologically derived molecules such as proteins and DNA. Just like a desktop computer, these organic computers can respond to data and process it, albeit in a rudimentary manner similar to the capabilities of computers circa 1920. While biological computers have a long way to go before they are as sophisticated as today’s personal computers, the fact that researchers have been able to get biological computers to complete a logic gate is a notable achievement. Potential of biological computers Once you’ve programmed a single biological cell, it’s extremely cost-effective to grow billions more with only the cost of the nutrient solutions and a lab tech’s time. It’s also anticipated that biocomputers might actually be more reliable than their electronic counterparts. To illustrate, think about how our bodies still survive even though millions of our cells die off, but a computer built from wires can stop functioning if one wire is severed. In addition, every cell has a mini-factory at its disposal, so once it’s been programmed, it can synthesize any biological chemical. Instead of what’s done today when bioengineers map genes and try to uncover their secrets, they can just program cells to do the job they need them to do — for example, program cells to fight cancer or deliver insulin to a diabetic’s bloodstream. Challenges of biocomputing Although biocomputing has similarities with biology and computer science, it doesn’t fit seamlessly with either one. In biology, the goal is to reverse engineer things that have already been built. Biocomputing aims to forward engineer biology. Experts in computer science are accustomed to machines executing programmed commands; when dealing with biological environments in what is known as a “wet lab,” organisms might react unpredictably. The culprit could be the cell’s programming, or it could easily be something external such as the environmental conditions, nutrition, or timing. Biological computing in use today While biological computers aren’t as prolific as personal computers, there are several companies working to advance this very young field. The founders of Synthego, a Silicon Valley startup, aren’t biologists. They are brothers and software engineers who used to work for SpaceX building rockets but thought there was potential in taking what they knew about agile design to gene-editing tools. The company creates customized CRISPR kits for scientists from a selection of approximately 5,000 organisms available in Synthego’s genome library. Ultimately, this can cut down the time it takes for scientists to do gene edits. Microsoft’s foray into biological computing is called Station B. The company partnered with Princeton University and two UK companies, Oxford BioMedica and Synthace, on the new research system that can analyze volumes of biomedical data with a set of integrated computer programs. 
This analysis is then used to guide scientists on the best way to proceed with research, such as editing DNA in a certain way. The hope is that this system will ultimately lower the cost of gene-therapy products to bring them to many more patients. Using CRISPR (DNA sequences found within e.g. bacteria), scientists were able to turn a cell into a biological computer. It was programmed to take in specific genetic codes and perform computations that would produce a particular protein. This milestone could eventually lead to having powerful computers in cells that could eventually detect and treat diseases. Imagine in the future that these cells could be programmed to scan for biomarkers that indicate the presence of disease. If all criteria are met, these same cells could mass-produce proteins that could help treat the disease. A microtissue might have billions of cells, all with their own “dual-core processor.” The computing power this would allow is on par with today’s digital supercomputer. The work in biocomputing thus far has focused on DNA-based systems because, at this point, genetic engineering is understood enough (even if all of its secrets aren’t known) to make progress possible. There are many more biological systems to tackle, such as those based on nerve cells. The future is expected to include using the knowledge gleaned from developing biocomputers for DNA-based systems and apply it to neurochemistry.
<urn:uuid:fd2cbdf5-b0b4-489c-b846-f64ad4e8d198>
CC-MAIN-2022-40
https://bernardmarr.com/what-is-biological-computing-and-how-it-will-change-our-world/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00520.warc.gz
en
0.951495
1,006
3.734375
4
Water use in data centers can be a murky issue. Traditionally, water has been used to provide necessary cooling services to data halls by using excess heat to evaporate the water – much like our bodies do when we sweat. However, this consumes large volumes of water (on the order of millions of gallons per month for modern data centers), making that water inaccessible for other human or natural needs. So, the decision to "burn water" instead of electricity is an important trade-off. Here is a deep dive into the rationale behind this decision, how it is becoming outdated, and why we have made a splash, diving into water-free cooling for our data centers. Providing cooling services is done at the expense of either electricity or water consumption. In the past, the production of electricity itself often consumed large amounts of water – typically fossil fuels were burned to evaporate water into steam, which was used to generate electricity. The water used in the generation of electricity is often referred to as "embodied water." Therefore, a data center cooled by electricity was still responsible for consuming a significant amount of water when that electricity was generated. Whether offsite at electrical plants or onsite at the data center, water consumption was about the same and "came out in the wash." The "Belly-Flop" – How this Assumption is Outdated In the past, this rationale was a reasonable assumption. However, as modern power plants become more water-efficient and our electrical grid transitions to renewable power sources (and leading companies make their own faster transitions), less water is consumed for that electrical generation. Electricity sources like solar and wind power are effectively water-free. Now that there is less embodied water in the electricity, the assumption that embodied water is roughly equal to the onsite consumption of water (for cooling) no longer holds. Ouch. Understanding that the choice between electricity and water for data center cooling makes a difference in the overall amount of water consumed, we at CyrusOne are leading the way on how to consider data centers' impact on the world's water. We are committed to building all of our new facilities without a reliance on water consumption-based cooling. By doing so we are making a splash in the industry; after all, we are making sure there is enough water left to do so. Stay tuned for our first case study, in which we took a facility that was using cooling water, transitioned it to water-free cooling, and measured the impact on total water consumed onsite and off.
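To make the trade-off concrete, the toy calculation below compares total water use (onsite cooling water plus water embodied in the electricity consumed) for an evaporatively cooled facility and a water-free one. Every figure in it is a placeholder assumption chosen only to show the mechanics of the comparison (none are CyrusOne or measured industry data), but the pattern matches the argument above: as the grid's water intensity falls, the embodied-water term shrinks and the evaporative design's onsite consumption dominates.

# All numbers below are illustrative placeholders, not measured CyrusOne or industry figures.
GRID_WATER_L_PER_KWH = {"fossil_heavy_grid": 1.9, "renewable_heavy_grid": 0.1}

def total_water_litres(it_energy_kwh, onsite_l_per_kwh, cooling_overhead, grid):
    """Onsite cooling water plus the water embodied in generating the electricity consumed."""
    electricity_kwh = it_energy_kwh * (1 + cooling_overhead)
    onsite = it_energy_kwh * onsite_l_per_kwh
    embodied = electricity_kwh * GRID_WATER_L_PER_KWH[grid]
    return onsite + embodied

for grid in GRID_WATER_L_PER_KWH:
    evaporative = total_water_litres(1000, onsite_l_per_kwh=1.8, cooling_overhead=0.1, grid=grid)
    water_free = total_water_litres(1000, onsite_l_per_kwh=0.0, cooling_overhead=0.3, grid=grid)
    print(f"{grid}: evaporative {evaporative:.0f} L vs water-free {water_free:.0f} L")

With the water-heavy grid the two designs land in the same rough ballpark; with the low-water grid, the evaporative facility uses more than ten times as much water, which is why the old "comes out in the wash" assumption no longer holds.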
<urn:uuid:d5f51383-4190-4a20-bb90-ab9247cd1f5f>
CC-MAIN-2022-40
https://cyrusone.com/blog-post/embodied-water/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00520.warc.gz
en
0.972725
535
3.265625
3
The EU’s highest court says your privacy must be respected – at almost all costs. The fine line between harnessing the power of data and the vast amounts of information we create with every mouse click, button press and keyboard tap on a given basis and keeping everyone in the world safe is a different one to traverse. Would-be terrorists are brought down through electronic surveillance on a near-daily basis worldwide. Without the ability to monitor communications, security agencies across the globe would be flying blind on what are the latest threats to their countries. Yet civil liberties are vital. The ability to say what we want, without fear of being arrested, is important – particularly in the digital age. A recent ruling by the EU’s highest court appears to set the standard for where a user’s privacy lies – in Europe at least. In early October, the European Court of Justice ruled that bulk data collection or retention regimes in the UK, France and Belgium was essentially illegal, drawing in too much data compared to EU-wide law. What are the current EU rules? At present, EU law applies every time a national government asks telecommunications providers to process data, up to and including when data is collected for national security reasons. The law has established rules and safeguards outlining the collection and retention of data, and countries who are in the EU must abide by those rules. However, the three countries in question were doing something different, the court judgment alleges. They were collecting data in bulk, and treating everyone as a potential suspect, whose data must have been hoovered up in the event that it needed to be analysed if they did something wrong at some point in the future. In the UK, for instance, security and intelligence agencies like GCHQ, MI5 and MI6 were gathering and processing data from telecommunications providers en masse. In France similar things were happening. Years of court action come to a conclusion The initial opposition to such mass data collection came in the mid-2010s, through campaign groups like Privacy International, who aim to protect end users’ privacy. They appealed the cases through the relevant countries’ courts, and ended up in the highest court in Europe, where the judge ruled in their favour. The “judgment reinforces the rule of law in the EU,” says Caroline Wilson Palow, Legal Director of Privacy International. “In these turbulent times, it serves as a reminder that no government should be above the law. Democratic societies must place limits and controls on the surveillance powers of our police and intelligence agencies.” The rules as they were sketched out by the countries in question were going too far, reckons Wilson Palow. “While the police and intelligence agencies play a very important role in keeping us safe, they must do so in line with certain safeguards to prevent abuses of their very considerable power,” she says. “They should focus on providing us with effective, targeted surveillance systems that protect both our security and our fundamental rights." What the ruling means for your rights The decision was a momentous one, called a “landmark” by Hugo Roy, who fought the French case. “We hope now that the French Conseil d’ État will finally apply European human rights law standards to the French State,” he says. That means that the bulk data collection and retention regimes used in both countries must be brought in line with EU law. That has a significant impact on the rights of end users in all those nations. 
It essentially means that your online conversations and activity are protected under EU law, unless there is a relevant and real requirement to analyse what you’re up to online. It brings those three countries back into the realm of reality when it comes to safety and ensuring your fundamental rights are kept whole in a world where we increasingly live digitally.
<urn:uuid:6b97896a-4e12-4242-8b76-569fddf86308>
CC-MAIN-2022-40
https://cybernews.com/privacy/what-is-the-eus-landmark-ruling-on-privacy-and-what-does-it-mean-for-you/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00520.warc.gz
en
0.961379
794
2.515625
3
The digital footprint of society is expanding the world over into fragmented mediums (blogs, tweets, reviews etc) and technologies (mobile, web, cloud/SaaS etc). Data generated from mobile devices and the internet of things are the main contributors to this data explosion. While this provides organizations with significant business opportunities, it also presents several challenges in harnessing these information sources. India’s digital landscape maybe evolving quickly but the overall penetration remains low, with only one in five Indians using the internet (as in July 2014). Enterprises and businesses do have access to a veritable wealth of information. While some larger organizations have made a start in harnessing the information – telecom providers, online travel agencies, online retail stores are some of the industries that are using big data analytics to engage customers to a certain extent – most Indian companies are still learning how to collect and store big data. To put it simply, big data analytics is still in its infancy in India. Most companies are just learning to store the data collected. There are several challenges when it comes to the collection of data sets themselves. Past and current data is required to make the application of big data analytics really useful but there is a scarcity of past data in public and private sectors in India. The lack of historical data can be traced to the following: Late and slow computerization Healthcare, economic and statistical data, in both private and public sectors in India, is yet to be fully computerized. The main reason for this is the late adoption of IT in India. Unlike in the West, most industries in India made the transition from manual records to computerized information systems only during the last decade. Over the years, the state and central ministries have made the move towards e-governance. Efforts to deliver public services and to make access to these services easier are being made as well. While this is still a work in progress, huge amounts of data across many government sectors are yet to be digitized. Poor quality inputs Not only quantity, the quality of data being used for crunching also influences the quality of insights. If the signal-to-noise ratio is high, the accuracy of results may vary for less than optimum data samples. Public social media information that is available for most individuals from India lacks quality information about the users. Random facts and figures in individual profiles, sharing of spam content and fake social media accounts that are created for bots are very common in India. Social media sites are becoming increasingly vulnerable to spam attacks. Time spent by a captive audience on social media sites opens up windows of opportunity for online threats and spammers. Again, social media spam contributes to the signal-to-noise-ratio that defines the quality of big data. This comes in the way of generating appropriate results. Cultural and social influences In most Western markets, insights generated through big data can be applied across a wide consumer base. But given the extensive cultural and linguistic variation across India, any insight generated for a consumer based, say, in Chandigarh will not be directly applicable to a consumer based in Chennai. This problem is made worse by the fact that a lot of local data lives in regional publications, in different languages and has limited online visibility. Unstructured sources of data Big data in India is not structured. 
Most transactional data in the healthcare and retail segments are stored purely for book-keeping purposes. In most developed countries, user data is rich enough to provide demographic or group level markers that can be used to generate customized insights while maintaining individual privacy. The absence of such standard identifiers in Indian consumer data is one of the biggest bottlenecks in mapping transactional and social records in India. Handsets and internet connectivity Even though smartphones are driving the new handset market in India, feature phones still dominate everyday usage. Most connections in India are pre-paid and fewer than 10 percent of users have access to 3G networks. To add to it, internet connection speed is among the lowest in Asia. As a result, consumer data, especially retail enterprise data is limited. As more people in India make the move to smartphones and internet connectivity improves, there will be an increase in the amount of usable data generated. That said, organizations need to make a huge effort to improve the quality of enterprise data. The good news is, the key contributors to the promise of big data analytics in India are steadily gaining ground. An increase in social media users, efforts by enterprises, both public and private, for optimum collection and storage of transactional enterprise data, will contribute to better quality data sets, leading to the improved application of big data analytics. This article originally appeared here. Republished with permission from the author.
<urn:uuid:270390ff-8d90-49b5-b810-2da3c1001c17>
CC-MAIN-2022-40
https://www.crayondata.com/three-ways-to-overcome-the-lack-of-right-data-in-india/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00520.warc.gz
en
0.9277
964
2.875
3
According to Article 17 of the European Union’s General Data Protection Regulation (GDPR), all personal data that is no longer necessary must be removed and deleted. This aspect of the law, also known as “the right to erasure,” grants any user or customer the right to request that an organization deletes all data related or associated to them without undue delay, within 30 days. Moreover, the regulation carries heavy fines if a business does not comply. These guidelines and rules have been in effect for the entire EU since May 25 but now the EU government is cracking down further on international enterprises, attempting to extend the EU’s right to erasure laws to all websites, regardless of where the traffic originates from. Companies are beginning to fight back on this ruling because they believe it would place an undue burden on them and would significantly alter the way these companies currently use and hold private data. There are a several reasons why a data subject may request their private information be erased, such as: the original purpose for which that data was obtained has been fulfilled and there is no need to hold onto it any longer, the data was collected unlawfully, or the data subject is withdrawing their consent to use of their private information. When a right to erasure request is received, organizations must fulfill the request in a timely manner, following these steps: 1. Locate the person’s information. 2. Identify all processors that have used the personal information. 3. Identify any third-party companies that may have the person’s data. 4. Remove the personal data from the environment. 5. Respond to the person and confirm that all their data was erased from their infrastructure. This five-step list may seem simple but in actuality is a major challenge for international companies with hundreds of thousands, if not millions, of customers around the world. Many companies suffer from an acute lack of infrastructure visibility, leaving them with a limited idea of where their data is located, making it extremely difficult to know where to start if they were asked to delete specific information. It is clear that US privacy legislation is coming sooner than later, given California’s newly enacted privacy law, which will take effect January 2020. The bill raced through the State Legislature without opposition. As new data privacy laws begin to pop up across the US, here are some best practices that companies can follow to prepare and several policies to leverage so businesses can provide transparency to data subjects. - The company should conduct a full environment configuration audit to see the true layout of their infrastructure. Knowing exactly where data resides is step one in ensuring IT and security professionals can comply with GDPR. - Organizations, should set up a formal procedure for company employees to follow to ensure all data is saved where it should be. Determining and completing a hefty amount of right to erasure requests is difficult, especially in 30 days. - Another major GDPR challenge comes in the form of third-parties. Businesses should consider keeping an updated list of all third-parties that receive customer data, and which data they have access to – these are subprocessors. Identifying a key individual at each partner company to serve as a contact to communicate erasure requests, and dump and discard any data that is no longer in use will be extremely helpful. 
- A good tip is to regularly purge information using proactive retention policies and procedures, since there is no reason to hold onto private information that is no longer necessary. Lastly, don't forget about your backups. Best practices dictate that organizations back up data and systems regularly in case they get destroyed or an outage occurs. Access to backups may be limited to administrators and key security individuals, but some organizations have easy access to the data they store in backup instances, even on a granular level. Per the right to erasure regulation, if your organization can easily delete individual subject data from backups without undue hardship, it will be required to do so to completely fulfill erasure requests. In other cases, where backup tapes are stored at an off-site location and are securely overwritten, organizations may have a difficult time complying with an erasure request – instead, they may ensure that access is tightly controlled, and that data will be destroyed in accordance with a documented data retention policy. Transparency within the organization and with customers is the cornerstone of right to erasure compliance. Every organization, and every piece of data, will continue to require a case-by-case assessment to determine exactly where the data is stored and how to fully erase the information. IT and security practitioners should focus on their organization's reasoning and validation posture if faced with audits. The organization should be able to appropriately justify that policies, procedures and efforts are in place to handle data erasure requests and personal data management as a whole.
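The five-step workflow described earlier (locate, identify processors, identify third parties, erase, confirm) lends itself to automation. The sketch below is a minimal, hypothetical illustration of how such a workflow could be orchestrated in code; the in-memory store and the subprocessor contact list are stand-ins for whatever systems an organization actually runs, not a reference to any specific product.

from datetime import datetime, timedelta

ERASURE_DEADLINE = timedelta(days=30)          # GDPR Article 17 response window

class InMemoryStore:
    """Toy stand-in for a real database, file share or SaaS system."""
    def __init__(self, name, rows):
        self.name, self.rows = name, rows
    def find_records(self, subject_id):
        return [r for r in self.rows if r["subject_id"] == subject_id]
    def erase(self, records):
        for r in records:
            self.rows.remove(r)

def handle_erasure_request(subject_id, data_stores, subprocessor_contacts, received_at):
    """Hypothetical orchestration of one right-to-erasure request."""
    report = {"subject_id": subject_id, "respond_by": received_at + ERASURE_DEADLINE,
              "stores": [], "subprocessors_notified": []}
    for store in data_stores:                              # steps 1, 2 and 4: locate and erase
        records = store.find_records(subject_id)
        store.erase(records)
        report["stores"].append((store.name, len(records)))
    for contact in subprocessor_contacts:                  # step 3: pass the request to third parties
        report["subprocessors_notified"].append(contact)   # real code would send an actual notification here
    report["completed_at"] = datetime.utcnow()             # step 5: confirm back to the data subject
    return report

crm = InMemoryStore("crm", [{"subject_id": "u-42", "email": "jane@example.com"}])
print(handle_erasure_request("u-42", [crm], ["analytics-partner@example.org"], datetime.utcnow()))

In practice, the hard part is step 1: maintaining an inventory complete enough that the lookup actually covers every system, which is why the configuration audit recommended above comes first.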
<urn:uuid:9869cca8-27b3-430c-aad5-de59bf56e222>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2018/10/03/right-to-erasure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00520.warc.gz
en
0.952666
988
2.671875
3
Performing non-administrative activities using an account that has admin privileges is not considered a good security practice. Normally, users are provisioned with multiple accounts: one for normal day-to-day tasks and one for administrative tasks. There are multiple reasons which drive organizations to monitor and protect the use of privileged (admin) accounts. Today we look at these two major terms and tool categories – PAM (Privileged Access Management) and PIM (Privileged Identity Management) – and understand their key differences and use cases. About PAM (Privileged Access Management) Privileged Access Management is a combination of tools and technology used to secure, control, and monitor access to an organization's critical information and resources. Privileged Access Management subcomponents are shared access password management, privileged session management, vendor privileged access management and application access management. PAM works on the principle of least privilege, which means access rights are restricted and users, programs, system endpoints and processes hold only the bare minimum permissions required for their daily operations. PAM Pros and Cons Pros: - Fine-grained access control - Multi-factor authentication - Single sign-on - Password vaulting - Auto discovery and customized reporting - Reduced malware infection and propagation - Session monitoring - Delivery of temporary credentials to specific groups and users Cons: - Over-provisioning of privileges - Lack of visibility and awareness of privileged users, accounts, assets and credentials - Hard-coded / embedded credentials - Lack of visibility into application and service account privileges - Siloed IAM tools and processes How does PAM work? About PIM (Privileged Identity Management) Privileged Identity Management (PIM) is the holistic monitoring and protection of admin / super-user accounts in the organization. A privileged account is an administrative account with authorization to change configuration settings and permissions, add users, download software, etc. Privileged Identity Management solutions secure privileged accounts. These are super admins who have elevated permissions to access critical information. How does PIM work?
PIM Pros and Cons Pros: - Provides just-in-time privileged access - Assigns time-bound access to resources - Enforces multi-factor authentication to activate roles - Controls authentication into privileged accounts - Scheduled and event-triggered password changes - Event and session logs are captured - Ability to record access to privileged accounts Cons: - Ongoing costs are high - Risk of failing to discover all privileged accounts
Comparison Table: PAM vs PIM
The table below summarizes the difference between the two:
| | PAM | PIM |
|---|---|---|
|Definition|A system used to protect, manage, monitor and control privileges|A system to manage, control, and monitor access to resources having admin/super-user access in the organization|
|Technology|LDAP (Lightweight Directory Access Protocol) and SAML|LDAP|
|Features|Isolation and scoping of privileges; just-in-time administration (minimum time to retain privileges); time-bound access to resources; enforced multi-factor authentication; approval or denial of privileges based on policy; secure administrative hosts; approval/justification for privilege activation|Identify and keep track of all privileged accounts; define how super-user accounts will be managed; set up procedures and tools for super-user account management; access reviews to ensure users still need roles; audit history for external/internal audits; approvals to activate privileges|
|Applications|One Identity, Foxpass, Hitachi ID, etc.|ManageEngine, Microsoft Azure, Okta Identity Cloud, Auth0, etc.|
Download the comparison table: PAM vs PIM
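Several of the capabilities in the table, such as just-in-time elevation, time-bound access, and an audit trail of activations, boil down to granting a role only for a bounded window and recording every grant. The sketch below is a generic, product-agnostic illustration of that logic in Python; it is not how any of the listed PAM/PIM products actually implement it.

from datetime import datetime, timedelta

class JustInTimeElevation:
    """Toy model of time-bound privileged-role activation with an audit trail."""
    def __init__(self, max_duration=timedelta(hours=1)):
        self.max_duration = max_duration
        self.active_grants = {}      # (user, role) -> expiry time
        self.audit_log = []

    def activate(self, user, role, justification, approved_by, now=None):
        now = now or datetime.utcnow()
        expiry = now + self.max_duration
        self.active_grants[(user, role)] = expiry
        self.audit_log.append({"event": "activation", "user": user, "role": role,
                               "justification": justification, "approved_by": approved_by,
                               "expires": expiry})
        return expiry

    def has_privilege(self, user, role, now=None):
        now = now or datetime.utcnow()
        expiry = self.active_grants.get((user, role))
        if expiry is None or now >= expiry:
            self.active_grants.pop((user, role), None)   # the grant lapses once the window closes
            return False
        return True

jit = JustInTimeElevation()
jit.activate("alice", "db-admin", justification="patch window CH-1042", approved_by="bob")
print(jit.has_privilege("alice", "db-admin"))   # True until the one-hour window expires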
<urn:uuid:ed237717-55dc-46db-83ab-d13f67511975>
CC-MAIN-2022-40
https://ipwithease.com/pam-vs-pim-detailed-comparison/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00720.warc.gz
en
0.854334
789
2.65625
3
Near Field Communication provides secure communication to all users. It promotes the transfer of data through safe channels as well as the encryption of sensitive information. In simple words, NFC is a method of wireless data transfer that lets devices in close proximity communicate without the need for an internet connection. The communication: - Works automatically - Is easy to use - Provides fast data transmission NFC is a standards-based technology for secure two-way interactions between electronic devices. NFC provides contactless communication over distances of a few centimeters. In this way, these communications are inherently more secure, because devices normally only come into contact, and hence communicate, when the user intends them to. NFC is a form of RFID, but it has a specific set of standards governing its operation, interface, etc. This means that NFC equipment and elements from a variety of manufacturers can be used together. NFC Usage – NFC technology is leveraged across a variety of applications, which are listed below: - Mobile phones and PDAs, etc. - Personal computers and laptops - POS machines - Parking meters - Vending machines - Applications in hotels, offices and homes, e.g. access doors NFC supports peer-to-peer communication at speeds of up to 424 Kbps. The diagram below shows where NFC sits on a data rate versus distance graph for peer-to-peer communication, alongside other competing wireless/radio technologies on the market. A brief comparison of NFC with other communication technologies on the market: - Bluetooth: Although both Bluetooth and NFC can be used to transfer data, Bluetooth has been designed to transfer data over much greater distances. NFC is designed for close proximity only. - Wi-Fi / IEEE 802.11: Wi-Fi is designed for local area networks, and is not a short-range peer-to-peer technology. - RFID: Although RFID is very similar to NFC in many respects, RFID is a much broader technology. NFC is a specific case which is defined by standards enabling it to be interoperable. BENEFITS OF NFC FOR BUSINESS AND PUBLIC – Employee communication and real-time updates. NFC is an extremely effective way of improving two-way communications between managers and their staff in the field. Whenever a field operative touches an NFC tag with their mobile phone, the operational management team receives an instant and verifiable confirmation of the location of field staff. Field staff can benefit from automated reporting, which saves time and improves accuracy. NFC technology lets businesses operate and respond in real time. When an NFC tag is touched by a mobile phone, notification is sent in seconds, so operations managers always know exactly where their field personnel are, and what they are doing. NFC allows businesses to streamline the way people work at, and report from, remote locations. It also allows field-based personnel to focus on what they do best, whilst managers benefit from greater visibility in field operations. Improves Customer Service Experience Taking the hassle out of paying at the store seems to be NFC's driving force. Creating faster, more efficient ways to get through the checkout line is a goal of any company, and NFC card readers offer this service to customers. In addition to payment systems, NFC can be used to help customers find information.
By placing NFC tags in product displays, a customer can wave his smartphone over it to learn more about a product or service that catches his interest. In addition to cutting down on wait times — something every customer appreciates — NFC would allow customers to pre-load coupons into their smartphone or collect store reward points automatically. Having everything in one place means a customer never misses an opportunity for savings because he forgot a coupon or his rewards card at home. Cashiers no longer have to scan separate coupons or type in complex discounts, thus cutting customer wait time down even further. Whether you work at a large corporation, run a small business, or fund a non-profit organization, NFC technology has several benefits that can help you with time management, employee tracking, and customer satisfaction. Data reporting by field operations staff becomes faster, more efficient and more accurate. The most well-known use of NFC technology is for contactless payment. Customers can swipe their smartphone over a card reader to make a purchase without fumbling through credit and debit cards or counting out cash. This technology allows the customer to load multiple cards and choose which one they wish to use for each transaction. Not only does this save time, but it also reduces the chances of losing a credit card that comes with carrying multiple cards around. From posters to museum displays to library books, an NFC tag can hold information that a user can then swipe their phone over to read.NFC tags are used to transmit information about famous artworks or display personalized student schedules and current event updates. NFC works with most contactless smart cards and readers, meaning it could easily be integrated into the public transit payment systems in cities that already use a smart card swipe .Swiping a smartphone not only allows the passenger access to the subway but also keeps track of the number of trips he has left. Passengers can come and go much faster and easily pay for extra trips. As advances in medicine and technology increase, the focus is on creating better healthcare systems. With NFC technology, hospitals can better track patient information and doctors’ notes in real-time.There will be an increase in the demand for data transfer between devices which are present outside the body (In Vitro) and inside the body (In Vivo). NFC becomes the natural choice for wireless communicating between two medical devices considering secure communication channel. Social networking is booming, and NFC tags are looking to get in on the action. From swiping a smartphone to check in at a location to bumping phones with a new friend to exchange contact information, NFC allows users to interact with each other and update their location and other info without any unnecessary log-ins or tapping through menu screens.
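Much of the tag-based functionality described above (smart posters, product displays, museum labels) relies on the NFC Data Exchange Format (NDEF), in which information is stored as small typed records. As a rough illustration of how compact these payloads are, the Python sketch below builds the byte layout of a single short NDEF text record; it follows the published NDEF text record structure, but real applications would normally use a platform NFC API or library rather than assembling bytes by hand.

def ndef_text_record(text, lang="en"):
    """Build one short NDEF 'T' (text) record as raw bytes (simplified: single record, UTF-8, short format)."""
    lang_bytes = lang.encode("ascii")
    text_bytes = text.encode("utf-8")
    payload = bytes([len(lang_bytes)]) + lang_bytes + text_bytes   # status byte + language code + text
    header = 0xD1        # MB=1, ME=1, SR=1 (short record), TNF=0x01 (NFC Forum well-known type)
    # short-record format assumes the payload fits in one length byte (fine for a short label)
    return bytes([header, 0x01, len(payload)]) + b"T" + payload    # type length, payload length, type 'T'

record = ndef_text_record("Exhibit 12: Early NFC readers")
print(len(record), record.hex())   # a whole smart-poster message fits in a few dozen bytes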
<urn:uuid:255e0e8a-56b9-44a6-a5cf-7889e46d491e>
CC-MAIN-2022-40
https://ipwithease.com/nfc-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00720.warc.gz
en
0.94283
1,245
3.203125
3
February 11, 2021 | Written by: Paul Nation and Blake Johnson Categorized: Quantum Computing Share this post: IBM Quantum is working to bring the full power of quantum computing into developers’ hands in the next two years via the introduction of dynamic circuits, as highlighted in our recently released Quantum Developer Roadmap. Dynamic circuits are those circuits that allow for a rich interplay between classical and quantum compute capabilities, all within the coherence time of the computation, and will be crucial for the development of error correction and thus fault tolerant quantum computation. However, there are many technical milestones along the way that track progress before we achieve this ultimate goal. Chief among these is the ability to measure and reset a qubit in the middle of a circuit execution, which we have now enabled across the fleet of IBM Quantum systems available via the IBM Cloud. Measurement is at the very heart of quantum computing. Although often overlooked, high-fidelity measurements allow for classical systems (including us humans) to faithfully extract information from the realm in which quantum computers operate. Measurements typically take place at the end of a quantum circuit, allowing, with repeated executions, one to gather information about the final state of a quantum system in the form of a discrete probability distribution in the computational basis. However, there are distinct computational advantages to being able to measure a qubit in the middle of a computation. Mid-circuit measurements play two primary roles in computations. First, they can be thought of as Boolean tests for a property of a quantum state before the final measurement takes place. For example, one can ask, mid-circuit, whether a register of qubits is in the plus or minus eigenstate of an operator formed by a tensor product of Pauli operators. Such “stabilizer” measurements form a core component of quantum error correction, signaling the presence of an error to be corrected. Likewise, mid-circuit measurements can be used to validate the state of a quantum computer in the presence of noise, allowing for post-selection of the final measurement outcomes based on the success of one or more sanity checks. Measurements performed while a computation is in flight can have some other surprising functions, too — like directly influencing the dynamics of the quantum system. If the system is initially prepared in a highly entangled state, then a judicious choice of local measurements can “steer” a computation in a desired direction. For example, we can produce a three-qubit GHZ state and transform it into a Bell-state via an x-basis measurement on one of the three qubits; this would otherwise yield a mixed state if measured in the computational basis. More complex examples include cluster state computation, where the entire computation is imprinted onto the qubit’s state via a sequence of measurements. Resetting a qubit Closely related to mid-circuit measurements is the ability to reset a qubit to its ground state at any point in a computation. Many critical applications, such as solving linear systems of equations, make use of auxiliary qubits as working space during a computation. A calculation requires significantly fewer qubits if, once used, we can return a qubit to the ground state with high-fidelity. 
With system sizes in the range of 100 qubits, space is at a premium in today’s nascent quantum systems, and on-demand reset is necessary for enabling complex applications on near-term hardware. In Figure 1, below, we highlight an example of the quality of the reset operations on IBM Quantum’s current generation of Falcon processors, on the Montreal system, by looking at the error associated with one or more reset operations applied to a random single-qubit initial state. Figure 1: we highlight an example of the quality of the reset operations on IBM Quantum’s current generation of Falcon_r4 processors by looking at the error associated with one or more reset operations applied to a random single-qubit initial state. Internally, these reset instructions are composed of a mid-circuit measurement followed by an x-gate conditioned on the outcome of the measurement. These conditional reset operations therefore represent one of IBM Quantum’s first forays into dynamic quantum circuits, alongside our recent results demonstrating an implementation of an iterative phase estimation algorithm. However, while the control techniques necessary for iterative phase estimation are still a research prototype, you can use mid-circuit measurement and conditional resets, today. We can incorporate both concepts illustrated here into simple examples. First, Figure 2 shows a circuit utilizing both mid-circuit measurements and conditional reset instructions for post-selection and qubit reuse. Figure 2: a circuit utilizing both mid-circuit measurements and conditional reset instructions for post-selection and qubit reuse. This circuit first initializes all of the qubits into the ground state, and then prepares qubit 0 (q0) into an unknown state via the application of a random SU(2) unitary. Next, it projects q0 into the x-basis with eigenvalues 0 or 1 imprinted on q1 indicating if the qubit is left in the |+> (0) or |-> (1) x-basis states. We measure q1, and store the result for later use as a flag qubit for identifying which output states correspond to each eigenvalue. Step 3 of the circuit resets the already-measured q1 to the ground state, and then generates an entangled Bell pair between the two qubits. The Bell pair is either |00>+|11> or |00>-|11> depending on if q0 is in the |+> or |-> state prior to the CNOT gate, respectively. Finally, in order to distinguish these states, we use Hadamard gates to transform the state |00>-|11> to |01>+|10> before measuring. Figure 3 shows the outcome of executing such a circuit on the seven-qubit IBM Quantum Casablanca system, where we see that that the measurement of the flag qubit value measured before (in bold) correctly tracks the expected Bell states generated at the output. Collecting marginal counts over the flag qubit value indicates the proportion of the initial random q0 state that was in the |+> or |-> state after the projection. For the example considered here, these values are ~16 percent and ~84 percent, respectively. The dominant source of the error in the result is dephasing due to the relatively long (~4㎲) duration of measurements on current generation systems. Future processor revisions will bring faster measurements, reducing the effect of this error. Next, we consider the computational advantages of using reset to reduce the number of qubits needed in a 12-qubit Bernstein-Vazirani problem (Fig. 3). 
As written, this circuit cannot be implemented directly on an IBM Quantum system, but rather requires the introduction of SWAP gates in order to satisfy the limited connectivity in systems such as our heavy-hex based Falcon and Hummingbird processors. Indeed, compiling this circuit with Qiskit yields a circuit that requires 42 CNOT gates on a heavy-hex lattice. The fidelity of executing this compiled circuit on the IBM Quantum Kolkata system yields a disappointing 0.007; the output is essentially noise. However, with the ability to measure and reset qubits mid-flight, we can transform any Bernstein-Vazirani circuit into a circuit over just two qubits requiring no additional SWAP gates. For the previous example, the transformed circuit uses only two qubits, and execution on the same system gives a vastly improved fidelity of 0.31; more than a 40x improvement over the standard implementation. This highlights how, with mid-circuit measurement and reset, it is possible to write compact algorithms with markedly higher fidelity than would otherwise be possible without these dynamic circuit building blocks. Mid-circuit measurement and conditional reset represent an important first step toward dynamic circuits — and one that you can begin using in your quantum circuits as we speak. We're excited to see what our users can do with this new functionality, while we continue to expand the variety of circuits that our devices can run. We hope you'll follow along as we implement our development roadmap; we're working to make the power of dynamic circuits a regular part of quantum computation in just a few years. Quantum starts here
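For readers who want to try these building blocks themselves, the short sketch below shows a mid-circuit measurement followed by a conditional reset (a measurement plus an X gate conditioned on the outcome), roughly as the post describes resets being composed internally. It is a generic illustration, not the exact circuits behind the figures above, and the syntax reflects the Qiskit interfaces available around the time of this post (c_if-style conditionals); newer releases expose richer dynamic-circuit control flow.

from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

q = QuantumRegister(2, "q")
c = ClassicalRegister(3, "c")
qc = QuantumCircuit(q, c)

qc.h(q[0])
qc.cx(q[0], q[1])                # entangle the two qubits
qc.measure(q[1], c[0])           # mid-circuit measurement of q1, kept for post-selection
qc.x(q[1]).c_if(c, 1)            # X conditioned on the outcome: measure-then-flip acts as a conditional reset
                                 # (qc.reset(q[1]) inserts the equivalent construct in one call)
qc.h(q[1])                       # the freed-up q1 is reused for further work
qc.cx(q[1], q[0])
qc.measure(q[0], c[1])
qc.measure(q[1], c[2])           # final measurements
print(qc.draw())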
<urn:uuid:863bae6d-f5e5-4e9f-8494-d924f8a55685>
CC-MAIN-2022-40
https://www.ibm.com/blogs/research/2021/02/quantum-mid-circuit-measurement/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00720.warc.gz
en
0.896335
1,734
2.625
3
The information infrastructure of any organization evolves in the same manner as the software development does. Any change in the infrastructure has to surpass a series of stages in a defined lifecycle, until it finally reaches production. A typical lifecycle considers stages like development, staging or production. In the Denodo ecosystem, each one of these stages is called an environment. In terms of composition, an environment is defined as a set of servers, of the same or different type, working together for a common purpose. For example, an environment can be composed by several Virtual DataPort servers, one Scheduler server and one database server working as data cache. In addition, an environment is also composed by all the resources and data sources that the servers depend on. Inside an environment, servers are organized in one or several clusters, with a load balancer per cluster. All the requests that enter the cluster are preprocessed by the load balancer, who decides which is the final server that will process the queries among the set that conforms the cluster. That is the way organizations guarantee high availability in their systems. Take into account that, for this to work, all the servers in the same environment have to share the same metadata, since they operate on the same resources and data sources. However, each environment manages its own set of data sources and resources. Hence, the server’s metadata must be different among environments. The Solution Manager allows you to promote changes from one environment to the next one in the lifecycle. Since every environment has different needs, in terms of consistency or service interruption, the Solution Manager implements several strategies for deploy changes. Therefore, each environment can configure its own deployment strategy. All the servers in the same environment should have the same Denodo Platform version installed. A cluster is a group of Denodo servers that belong to the same environment. To guarantee high availability, the production environments are organized in one or several clusters behind a load balancer that decides which is the final server that will process the incoming requests. Moreover, production environments should provide low latency. Organizations meet this requirement with several clusters geographically distributed. For example, they may have one cluster in North America, another one in Europe and a third one in China. A typical structure of a cluster includes several Virtual DataPort servers, one Scheduler server and one database server working as data cache. All the Virtual DataPort servers in the same environment share the same metadata. This means that, before promoting changes from another environment, you have to define, at environment level, the properties required to execute a deployment on the Virtual DataPort servers. On the other hand, the metadata of the Scheduler server is shared at cluster level, since it references servers or data sources local to the cluster. Therefore, before promoting changes from another environment, you have to define, on each cluster, the properties required to execute a deployment on the Scheduler servers. The Solution Manager can work in two modes: Standard: This is the intended mode to work with on premises cluster, you have to manually add the cluster resources. This mode is explained in Standard Mode. Automated: all the resources are managed by the Solution Manager, you only have to set the desired capacities. 
This mode is explained in Automated Cloud Mode.
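One practical consequence of the metadata rules above is that deployment properties live at two different levels: Virtual DataPort properties are defined once per environment, while Scheduler properties are defined per cluster. The snippet below is a purely conceptual Python sketch of that structure; it is not Solution Manager configuration syntax or any Denodo API, and the property names are hypothetical, but it may help visualise where each kind of property belongs.

from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    servers: list                       # e.g. ["vdp-1", "vdp-2", "scheduler-1", "cache-db"]
    scheduler_properties: dict          # cluster-level: references servers and data sources local to this cluster

@dataclass
class Environment:
    name: str                           # e.g. "development", "staging", "production"
    vdp_properties: dict                # environment-level: all Virtual DataPort servers here share the same metadata
    clusters: list = field(default_factory=list)

production = Environment(
    name="production",
    vdp_properties={"cache_db_uri": "jdbc:postgresql://prod-cache/denodo"},   # hypothetical property
    clusters=[
        Cluster("us-east", ["vdp-1", "vdp-2", "scheduler-1"], {"scheduler_target": "vdp-lb.us-east.local"}),
        Cluster("eu-west", ["vdp-3", "vdp-4", "scheduler-2"], {"scheduler_target": "vdp-lb.eu-west.local"}),
    ],
)

Before promoting changes into such an environment, the environment-level properties would be resolved once and the scheduler-level ones once per cluster, mirroring the description above.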
<urn:uuid:90e207f3-ce74-4999-9ce5-6f9fcf398fa1>
CC-MAIN-2022-40
https://community.denodo.com/docs/html/browse/8.0/en/solution_manager/administration/basic_concepts/basic_concepts
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00720.warc.gz
en
0.927812
689
2.640625
3
Today In History June 22 June is the 6thmonth of the year in the Gregorian calendar. In older versions of the ancient Roman calendar, June was the fourth month of the year. It became the sixth month when January and February were added to the calendar. It is the month that has the most amount of daylight hours of the year in the Northern Hemisphere and shortest amount of daylight hours in the Southern hemisphere. According to Gregory calendar, day number 365 in a year and if it is a leap year then the day number is 366. June 22 has its special significance in India and world history. There are 30 days in June and it does not start or end on the same day of the week as any other month. Another belief is that the month’s name comes from the Latin word ‘iuniores’ which means “younger ones”.Here you will find two important events that happened today in world history on June 22. SECOND ARMISTICE AT COMPIEGNÈ – JUNE 22, 1940 The Second Armistice at Compiègne was signed at 18:50 on 22nd June 1940 between Nazi Germany and France. Following the decision of German victory in the Battle of France (10 May–21 June 1940), it started a German occupation zone in Northern France that encompassed all English Channel and Atlantic Ocean ports and left the remainder “free” to be governed by the French. Adolf Hitler deliberately picks the Compiègne Forest as the site to sign the armistice because of its symbolic role as the site of the 1918 Armistice with Germany that signaled the end of World War I with Germany’s surrender. By 22 June, the German Armed Forces (Wehrmacht) had lost 27,000 dead, more than 111,000 wounded and 18,000 missings, against French losses of 92,000 dead and more than 200,000 wounded. The British Expeditionary Force had lost more than 68,000 men. Hitler decided to sign the armistice within the same rail carriage where the Germans had signed the primary armistice in 1918. In the same railway carriage during which the 1918 Armistice was signed (removed from a museum building and placed on the precise spot where it was located in 1918), Hitler sat in the same chair during which Marshal Ferdinand Foch had sat when he faced the defeated German representatives. After taking note of the reading of the preamble, Hitler during a calculated gesture of disdain to the French delegates – left the carriage, as Foch had done in 1918, leaving the negotiations to his High Command of the Armed Forces Chief, General Wilhelm Keitel. THE BATTLE OF OKINAWA ENDS – JUNE 22, 1945 The Battle of Okinawa, codenamed Operation Iceberg, was fought on the Ryukyu Islands of Okinawa and was the biggest amphibious operation in the Pacific War of World War II. The 82-day-long battle lasted from early April until 22nd June 1945. After an extended campaign of island hopping, the Allies were approaching Japan and planned to use Okinawa, a large island only 340 mi (550 km) far away from mainland Japan, as a base for air operations on the planned invasion of Japanese mainland (coded Operation Downfall). The battle has been mentioned as the “Typhoon of Steel” in English, and tetsu no ame (“rain of steel”) in Japanese. The nicknames refer to the ferocity of the fighting, the intensity of kamikaze attacks from the defenders of Japanese, and to the sheer numbers of Allied ships and armored vehicles that assaulted the island. The battle resulted in the highest number of casualties within the Pacific Theater during World War II. 
Japan lost over 100,000 troops killed or captured, and the Allies suffered more than 50,000 casualties. Simultaneously, 10,000 local civilians were killed, wounded, or committed suicide. The atomic bombings of Hiroshima and Nagasaki caused Japan to surrender just weeks after the end of the fighting at Okinawa.
<urn:uuid:1f473e86-2533-4229-9776-03ff4d6232f2>
CC-MAIN-2022-40
https://areflect.com/2020/06/22/today-in-history-june-22/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00720.warc.gz
en
0.971388
865
3.75
4
There are various reasons why organizations need to archive their emails nowadays. Emails can contain valuable intellectual property that needs to be protected against loss. Intellectual property, a set of ideas, inventions, and designs, is what gives your business value. For example, Google's intellectual property includes the secrets of its search algorithm. The term intellectual property can include intangible property such as patents, trademarks and copyrights. These are registered in government registries, where the government is responsible for recording such properties. If we take, for example, the case of a new sorting algorithm or a new chip design, those detailed design documents become a matter of public record where details of that invention will be noted so that someone cannot steal or copy it. In the case that these are stolen or copied, the rights holder can claim infringement. But trade secrets take on many different formats, such as emails and documents attached to emails. Regardless of what system you use for messages, whether Exchange or Zimbra, they contain the complete chronological history of the development of your product from conception, to its release, all the way to its revision. The importance of a reliable archive Technologies have changed a lot over the years. As a result of this, these documents have been stored in different repositories over the years. Originally, they were stored on shared drives. Following this, they were stored in Lotus Notes, then SharePoint. Data should be migrated as a company switches from one platform to another. However, there is the risk that the document or email you wrote 7 years ago and saved on a shared mount point on the LAN could, accidentally, go missing. Therein lies the importance of a reliable system. In addition to this, losing archived documents and their attachments could potentially subject the company to significant regulatory and legal risk. Legislation related to document retention The government has specific requirements for document retention. These requirements exist in the EU but are stricter in the US. As a consequence of the Enron bankruptcy, the Sarbanes-Oxley (SOX) Act was passed. This was so companies could document the accuracy of their financial statements. In terms of health care, reform came in the shape of the Health Insurance Portability and Accountability Act (HIPAA). As a result of the recent recession and the collapse of Lehman Brothers came Dodd-Frank, which is an update to Gramm-Leach-Bliley. The reasoning behind all of this legislation is to make it obligatory for companies to keep electronic records so that they can produce them in the case of litigation, accusations of fraud or whatever dispute a company has with stockholders, stakeholders, or regulators. If you happen to be accused of tampering with any electronic records, it is possible you could face jail time of up to 20 years. The SOX record retention requirement is 5 years; for HIPAA it is 6. However, to avoid falling foul of litigation requirements, it is best to keep a permanent archive. Protection of intellectual property You should not only protect the blueprint for a product that needs protection; you should also protect its evolution. In the case of your company bringing action against a competitor for patent infringement or copyright violations, you will require email to document the trail that led to the development of this product.
The emails between executives, customers and vendors will help the attorneys make the case that the competitor is profiting through another's intellectual property. From discovery to e-discovery E-discovery is the new phrase that has replaced what attorneys used to call discovery. An archive is becoming more and more crucial. Failure to maintain an archive could constitute a breach of regulations or even result in contempt of court. There are a number of different email archiving systems. One method is the copying of PST and NSF data files to long-term storage, then the importing of this data back online when you are looking for something from a few months or a few years ago. The drawback of this method is that it can prove inflexible and quite awkward. This method is comparable to exporting an Oracle database to archive format and then importing it back when you are looking for something that is offline. A superior method of archiving email is to store it in a manner that does not appear offline at all to the user. This is precisely what a cloud email archiving vendor does. The benefit of this kind of configuration is that it lets users search the archive and retrieve documents into the active email folders. Using a cloud email archiving system such as ArcTitan will automatically put you in compliance with the rules for off-site, secure, and tamperproof archives. Benefits to keeping a protected archive - Your company may need the documents kept in the archive in the case of lawsuits, for example unlawful dismissal, product liability, or criminal complaints. - The archive is also vital in the case of vendor or contract disputes and issues surrounding product warranties. These are almost always found in emails, e.g. invoices, scanned contracts, and agreements. - The archive is also important if your company were to lose the technical details of how to do something today that may have been done 5 years ago, when the employee who designed it was still part of the company. In summary, there is a wide array of reasons showing that organizations need to archive emails. Therefore, you should aim to reduce risk to your business by putting your email archive in the secure cloud with a company that focuses on that, such as ArcTitan.
<urn:uuid:d82fbcae-d9c5-4ae2-9a8a-bc52bb273ea0>
CC-MAIN-2022-40
https://www.arctitan.com/blog/reasons-organizations-archive-emails/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00720.warc.gz
en
0.95393
1,117
2.515625
3
Cyber-attackers are continuously on the lookout for sophisticated ways to penetrate an organization's systems. One effective way for attackers to appear legitimate in the face of security solutions is by exploiting logic flaws in the functionality of existing systems - also known as design vulnerabilities. By exploiting design vulnerabilities, attackers can gain access to highly secure systems, leading to data theft, disruption of critical infrastructure and more. Such flaws also cannot be easily detected, and because they are part of legitimate functionality, fixing them is all the more difficult. Download this whitepaper to learn more about: - How to identify the differences between design vulnerabilities and typical security vulnerabilities - Malware that exploits design vulnerabilities - Some of today's better-known design vulnerabilities
<urn:uuid:0ab7bb91-ac48-49b2-8e97-c40aea98217d>
CC-MAIN-2022-40
https://www.bankinfosecurity.com/whitepapers/vulnerable-by-design-destructive-exploits-keep-on-coming-w-2370
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00720.warc.gz
en
0.955508
143
2.859375
3
Every day we use cloud services without even being aware of it. All kinds of storage, streaming services, social networks, and portals are powered by cloud solutions. Cloud technologies are attractive because of their cost-effectiveness, performance, reliability, and scalability. Migration to a cloud platform addresses several issues related to resource capacity, compliance with legal requirements, and employee mobility. Cloud infrastructure consists of hardware and software elements. The main physical components are networking equipment, servers, and storage. It also includes a hardware abstraction layer, a hypervisor, which enables virtualization. There are three types of cloud deployment models: private, public, and hybrid clouds. In this article, we will look at how each type of cloud differs from the others, and which one is suitable in different business scenarios. The public cloud is the most common form of cloud deployment model, in which the provider offers access to resources over the Internet. In this case, you do not have to worry about the cost of hardware or keeping it up to date. With built-in tools, public cloud infrastructure is easy to manage; even non-IT professionals can cope with this task. You can create additional virtual machines, delete existing ones, configure isolated and routed networks, and more. Public does not mean shared data. Virtual machines from different customers are isolated from each other. Public cloud implies that your data may be physically stored on the same physical server as other companies' data, but those companies do not have access to it. One can never say for sure which physical hardware your virtual machines will run on, because when stored in a cluster, virtual machines are moved between servers for load balancing and better fault tolerance. What makes the cloud public is the allocation of resources from a common public pool, but user data is protected. Since cloud providers can guarantee your uptime across the globe, the public cloud is popular among geographically distributed companies. Also, with the pay-as-you-go billing model, the public cloud responds well to unpredictable usage and scalability needs. A private cloud is a model in which the cloud environment is dedicated to one tenant. It does not matter where the infrastructure is physically located. It is called private if the equipment is located on company premises or in a third-party data center. Cloud providers offer such a solution. For example, Cloud4U has a Private Cloud 2.0 solution. The advantages of the solution: - high level of security; - full isolation of infrastructure; - hardware control; - enterprise-class hardware (HP blade servers, NetApp storage systems); - rapid resource scaling; - 24x7 support. A hybrid cloud is a combination of private and third-party public cloud environments. Hybrid cloud allows deploying workloads in both environments and moving them between the two. When your own capacity is insufficient, you can use the external one. For example, on-premises storage can transfer a large amount of data to the public cloud for processing. Hybrid clouds enable you to increase capacity in case of peak loads. To summarize, each of these models is cloud-based. All the dedicated computing power is accessible via the Internet from any device. However, in a private cloud, you get equipment that belongs only to you, and in a public one, the resources are virtual.
Differences between the private cloud, hybrid cloud, and public cloud models You might think that all three types differ only in architecture, being almost identical in other parameters. However, that is not correct. Here are several variables to consider when making a choice. Elasticity and scalability. In terms of the potential to quickly allocate the capacity you need, the public cloud outperforms the private one, as you can have almost unlimited resources. If scalability of resources is important to you, choose the public cloud. Service availability and continuity. In a public cloud, even in the event of a failure, data will not be lost. In a private cloud, you need to set up backups and organize the distribution of data across two or three data centers, which is complicated and expensive. Cloud providers have all the hardware and software solutions to protect data and maintain the customer's service continuity. This is included in the service price. There are also additional features: load balancers, business continuity and disaster recovery services. They can be easily connected via the control panel. If avoiding data loss or service unavailability is critical for you, but you do not want to pay too much, choose a public cloud. Software and hardware. Public cloud providers offer customers up-to-date hardware and software. The latest technologies that make the infrastructure more user-friendly or increase its performance are what the cloud provider focuses on. With a private cloud infrastructure, this will be the responsibility of the customer. All of the above does not mean public clouds have no drawbacks. They create a dependency on an Internet connection, which must be stable and fast so you can use cloud resources at any time. Besides, virtualization slightly affects the configuration of resources. And, of course, it is necessary to pay a monthly fee for the consumed resources. Hybrid solutions are convenient because they allow you to leverage the benefits of both types of cloud platforms, distributing data across different cloud environments and reducing virtual infrastructure costs. However, there is another challenge: you have to "match" everything correctly and without compromising security. Cloud Deployment Models – which one to choose Clouds are convenient. But how much will it cost a company? The answer to this question depends on how you deploy your infrastructure. You can deploy a private cloud in two ways: building it on your own facilities and then maintaining it yourself, or renting part of a data center and equipment from a provider. This involves expenses for the equipment and for preparing a cloud platform. You can choose open-source solutions, but then you need experts skilled in working with this platform and able to modify it to meet your needs. A public cloud is easier to deploy. All you need to do is select the necessary service in the administration panel, and then you can migrate applications and data to it. This work can be handled by the in-house IT department or the provider's specialists. The cost of hybrid solutions is determined by the cost of the private infrastructure and the resources rented from the public cloud. Cloud4U cloud solutions Our platform allows you to create private, hybrid, and public clouds. We draw your attention to the Private Cloud 2.0 solution, which gives you the right level of security and performance at a reasonable cost. It is a cloud model that combines the hybrid and private models.
If you have any concerns regarding the choice of solution and selecting a pool of resources, our managers are ready to help and answer your questions. Migration challenges or any other technical issues can be solved with the help of Cloud4U technical support. Call +44 20 80 89 80 01 or use online chat on our website.
<urn:uuid:983e7be6-a86e-47d4-a9f7-df1416abb882>
CC-MAIN-2022-40
https://www.cloud4u.com/blog/cloud-models-public-private-hybrid/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00720.warc.gz
en
0.933944
1,417
2.59375
3
The children’s novel about a girl’s journey back home was published all the way back in 1900. But in 1963, a high school teacher challenged the plot’s simplicity. Could it be that the novel was actually aimed at economic policies? Watch how this teacher’s theory changed the way we view a classic novel. Ask me your digital question! Navigating the digital world can be intimidating and sometimes downright daunting. Let me help! Reach out today to ask your digital question. You might even be on my show!
<urn:uuid:0400d901-3d00-4855-b465-6921af844cd6>
CC-MAIN-2022-40
https://www.komando.com/video/komando-picks/what-s-the-wonderful-wizard-of-oz-really-about/676069/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00720.warc.gz
en
0.935609
115
2.71875
3
Ansible is an open-source automation tool used for IT tasks such as VM provisioning and application deployment. Automation through Ansible simplifies complex IT tasks, not just making developers' and system administrators' jobs more manageable but allowing them to focus on other tasks that add value to an organization. In other words, it frees up time and increases efficiency. Ansible uses playbooks, written in a simple YAML syntax. YAML is a human-readable data serialization language, and it is incredibly simple. Automating VM provisioning using Ansible on VMware: Ansible provides various modules to manage VMware infrastructure, covering datacenters, clusters, host systems and virtual machines. Using Ansible with VMware allows organizations to enable a simple self-service IT model across all environments. Out of the box, Ansible ships over fifty VMware modules supporting most use cases, including: - Managing vSphere guests (virtual machines) - VM template and snapshot management - vSwitches, DNS settings, firewall rules and NAT gateway rules Integration with Infoblox Traditionally, after a VM is provisioned by Ansible, multiple teams interact with each other to ensure that the newly provisioned VM has been assigned a correct IP address and that A and PTR DNS records have been created. This approach is mostly manual and error-prone. Integrating Infoblox with Ansible during VM provisioning streamlines this entire process. After the VM gets provisioned, Ansible makes a REST call (using the native uri module) to Infoblox for an IP address; after assigning this IP address to the VM, Ansible makes a second REST call (again using the uri module) to Infoblox to create the A and PTR records, as sketched below. This flow sums up the entire process of automating VM provisioning using Ansible and Infoblox. Integrating Infoblox with Ansible speeds up the process of commissioning a VM. It automates the VM provisioning process in terms of IP address management and DNS record creation.
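To make the two REST calls described above more concrete, here is a minimal standalone Python sketch of the same flow. In practice these calls would be issued from the Ansible playbook itself via the uri module; the WAPI version, endpoint paths, object fields, network reference, hostname, and credentials below are illustrative assumptions based on typical Infoblox WAPI usage rather than values from the original post.

```python
# Sketch: ask Infoblox for the next available IP, then create matching A and PTR records.
# All URLs, object names, and credentials here are illustrative assumptions.
import requests

WAPI = "https://infoblox.example.com/wapi/v2.10"  # hypothetical grid manager address
AUTH = ("admin", "changeme")                       # keep real credentials in a vault

def next_available_ip(network_ref: str) -> str:
    """Request the next free IP in a network (assumed WAPI function-call syntax)."""
    resp = requests.post(
        f"{WAPI}/{network_ref}",
        params={"_function": "next_available_ip"},
        json={"num": 1},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["ips"][0]

def create_dns_records(fqdn: str, ip: str) -> None:
    """Create the A and PTR records for the newly provisioned VM."""
    for obj, payload in (
        ("record:a", {"name": fqdn, "ipv4addr": ip}),
        ("record:ptr", {"ptrdname": fqdn, "ipv4addr": ip}),
    ):
        resp = requests.post(f"{WAPI}/{obj}", json=payload, auth=AUTH, timeout=30)
        resp.raise_for_status()

if __name__ == "__main__":
    # The network object reference would normally be looked up first; this one is made up.
    ip = next_available_ip("network/ZG5zLm5ldHdvcmskMTAuMC4wLjAvMjQvMA:10.0.0.0/24/default")
    create_dns_records("newvm01.example.com", ip)
    print(f"Assign {ip} to the VM in the provisioning playbook")
```

An equivalent playbook would simply wrap each of these calls in a uri task and register the responses for use in later steps.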
<urn:uuid:22f2dffc-8ed7-421d-9ced-99808b78e656>
CC-MAIN-2022-40
https://blogs.infoblox.com/community/using-infoblox-and-ansible-to-automate-vm-provisioning-on-vmware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00120.warc.gz
en
0.866742
432
2.59375
3
The State Department's six-year-old wiki-based knowledge-sharing tool is growing steadily, providing information to 60,000 department employees while costing next to nothing to run. After returning home to America in June 2011 from a foreign service assignment in Moscow, Eric Brassil filled out paperwork he thought would ensure he received a short-term allowance and temporary housing while he settled into a new job. But after a brief vacation, he went to work in the State Department’s Office of eDiplomacy only to discover that the application process for a service transfer allowance had changed, which sent him to the department’s intranet in search of answers. Thanks to Diplopedia, the State Department’s internal knowledge-sharing tool based on the open-source MediaWiki platform, Brassil found an article with the information he needed to learn the new processes, and then updated the article with his own findings to help other State Department employees. “I added step-by-step what I had learned and the actual process to get this allowance,” said Brassil, now a business practices adviser in the Office of eDiplomacy. “The material was there, but I sort of wrote it in plain language so whoever was coming back to Washington, D.C., would then know how to fill out the paperwork and go through the process,” he said. “I was able to help quite a lot of people.” In fact, Brassil’s updates have been read by close to 11,000 people on Diplopedia, which is run on the State Department’s unclassified intranet and is available to about 60,000 employees who have access to the network. Diplopedia was introduced in 2006 under the ownership of the Office of eDiplomacy, part of the Bureau of Information Resource Management, as a collaboration and knowledge-sharing tool. From humble beginnings — it began with just a handful of articles — Diplopedia has grown by leaps and bounds. As of November, it boasted nearly 6,000 editors, which means that one in 10 employees who are authorized to use Diplopedia have contributed to its content — and 18,000 articles on a wide range of topics, including biographies of foreign dignitaries and an acronym finder that contains 1,200 terms and their definitions. It is an encyclopedia of unclassified foreign affairs knowledge that increases efficiency at the State Department and reduces the time it takes for employees to find information quickly and make updates in real time, said Bruce Burton, senior adviser in the Office of eDiplomacy. “It has enabled people to collaborate across geographical and organizational boundaries,” Burton said. Diplopedia users can learn about the State Department’s many offices and bureaus, and the tool enhances the department’s internal enterprise search function. In fact, keyword searches often lead users to Diplopedia articles. Tiffany Smith, deputy chief of the Office of eDiplomacy’s Knowledge Leadership Division, said one of Diplopedia’s most common uses is as a how-to tool. It provides a single place where users can get answers to their questions without going through a chain of command or wasting time chasing outdated resources. Diplopedia was put to another beneficial use when it helped coordinate crisis management during the 2010 earthquake in Haiti, Smith said. Diplopedia is maintained by the equivalent of 1.5 full-time employees. Combined with the free open-source software that powers the wiki, that minimal support translates into a very low-cost tool that upwards of 60,000 people can use on any given day. 
By way of comparison, NASA’s Lessons Learned Information System, a knowledge management project launched in 1994, cost $782,000 to operate in 2011 and was criticized in a March 2012 Inspector General report as being of “diminishing and questionable value” because “NASA program and project managers rarely consult or contribute to LLIS.” FCW's sister publication GCN covered the report. “Our main goal is to help people,” Smith said. “We’re not just an IT resource but a resource for anyone dealing with problems.” A read-only version of Diplopedia is mirrored to an unclassified, closed interagency network that allows employees at other agencies to use Diplopedia as a resource but not make edits. A smaller version of Diplopedia called Diplopedia-S exists on a separate classified network and is available only to American government personnel with security clearances. Linda Green, new media adviser in the Office of eDiplomacy, said officials are planning to upgrade Diplopedia in the near future to enhance the user experience. The search capability will be improved, Diplopedia’s software will be upgraded, and the website will be redesigned to fit the capabilities of the tool. “It’s going to be even better in the future,” Green said. NEXT STORY: IT Dashboard hampered by budget stalemate
<urn:uuid:280711fb-85c0-40b7-9ab1-c3f571ec6b66>
CC-MAIN-2022-40
https://fcw.com/workforce/2012/12/diplopedia-low-cost-high-engagement/205562/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00120.warc.gz
en
0.954415
1,071
2.546875
3
The listen function places a socket in a state in which it is listening for an incoming connection.

int listen( _In_ SOCKET s, _In_ int backlog );

- s [in] - A descriptor identifying a bound, unconnected socket.
- backlog [in] - The maximum length of the queue of pending connections. If set to SOMAXCONN, the underlying service provider responsible for socket s will set the backlog to a maximum reasonable value. If set to SOMAXCONN_HINT(N) (where N is a number), the backlog value will be N, adjusted to be within the range (200, 65535). Note that SOMAXCONN_HINT can be used to set the backlog to a larger value than possible with SOMAXCONN.
- SOMAXCONN_HINT is only supported by the Microsoft TCP/IP service provider. There is no standard provision to obtain the actual backlog value.

If no error occurs, listen returns zero. Otherwise, a value of SOCKET_ERROR is returned, and a specific error code can be retrieved by calling WSAGetLastError.

|WSANOTINITIALISED||A successful WSAStartup call must occur before using this function.|
|WSAENETDOWN||The network subsystem has failed.|
|WSAEADDRINUSE||The socket's local address is already in use and the socket was not marked to allow address reuse with SO_REUSEADDR. This error usually occurs during execution of the bind function, but could be delayed until this function if the bind was to a partially wildcard address (involving ADDR_ANY) and if a specific address needs to be committed at the time of this function.|
|WSAEINPROGRESS||A blocking Windows Sockets 1.1 call is in progress, or the service provider is still processing a callback function.|
|WSAEINVAL||The socket has not been bound with bind.|
|WSAEISCONN||The socket is already connected.|
|WSAEMFILE||No more socket descriptors are available.|
|WSAENOBUFS||No buffer space is available.|
|WSAENOTSOCK||The descriptor is not a socket.|
|WSAEOPNOTSUPP||The referenced socket is not of a type that supports the listen operation.|
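As a quick illustration of where listen fits in the usual socket call sequence (create, bind, listen, accept), here is a minimal sketch in Python, whose socket module wraps the platform's native socket API (Winsock on Windows). The address, port, and backlog value are arbitrary choices for the example.

```python
# Minimal sketch of the create -> bind -> listen -> accept sequence.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 8080))   # the socket must be bound before listen(), or the call fails
srv.listen(5)                 # backlog of 5: queue at most 5 pending connections
print("listening on port 8080")

conn, addr = srv.accept()     # blocks until a client connects
print("accepted connection from", addr)
conn.close()
srv.close()
```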
<urn:uuid:7d3a9cac-e562-4ca3-90ff-96475e9f4101>
CC-MAIN-2022-40
https://www.aldeid.com/wiki/Listen
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00120.warc.gz
en
0.817227
532
2.71875
3
Scientific application developers have masses of computing power at their disposal with today's crop of high-end machines and clusters. The trick, however, is harnessing that power effectively. Earlier this year, Louisiana State University's Center for Computation & Technology (CCT) released its approach to the problem: an open-source runtime system implementation of the ParalleX execution model. ParalleX aims to replace, at least for some types of applications, the Communicating Sequential Processes (CSP) model and the well-established Message Passing Interface (MPI), a programming model for high-performance computing. The runtime system, dubbed High Performance ParalleX (HPX), is a library of C++ functions that targets parallel computing architectures. Hartmut Kaiser — lead of CCT's Systems Technology, Emergent Parallelism, and Algorithm Research (STE||AR) group and adjunct associate research professor in the Department of Computer Science at LSU — recently discussed ParalleX with Intelligence in Software. Q: The HPX announcement says that HPX seeks to address scalability for "dynamic adaptive and irregular computational problems." What are some examples of those problems? Hartmut Kaiser: If you look around today, you see that there's a whole class of parallel applications — big simulations running on supercomputers — which are what I call "scaling-impaired." Those applications can scale up to a couple of thousand nodes, but the scientists who wrote those applications usually need much more compute power. The simulations they have today have to run for months in order to produce the proper results. One very prominent example is the analysis of gamma ray bursts, an astrophysics problem. Physicists try to examine what happens when two neutron stars collide or two black holes collide. During the collision, they merge. During that merge process, a huge energy eruption happens, which is a particle beam sent out along the axis of rotation of the resulting star or, most often, a black hole. These gamma ray beams are the brightest energy source we have in the universe, and physicists are very interested in analyzing them. The types of applications physicists have today only cover a small part of the physics they want to see, and the simulations have to run for weeks or months. And the reason for that is those applications don't scale. You can throw more compute resources at them, but they can't run faster. If you compare the number of nodes these applications can use efficiently — on the order of a thousand — with the available compute power on high-end machines today — nodes numbering in the hundreds of thousands — you can see the frustration of the physicists. At the end of this decade, we expect to have machines providing millions of cores and billion-way parallelism. The problem is an imbalance of the data distributed over the computer. Some parts of a simulation work on a little data and other parts work on a huge amount of data. Another example: graph-related applications, where certain government agencies are very interested in analyzing graph data based on social networks. They want to analyze certain behavioral patterns expressed in the social networks and in the interdependencies of the nodes in the graph. The graph is so huge it doesn't fit in the memory of a single node anymore. They are imbalanced: some regions of the graph are highly connected, and some graph regions are almost disconnected from each other.
The irregularly distributed graph data structure creates an imbalance. A lot of simulation programs are facing that problem. Q: So where specifically do CSP and MPI run into problems? H.K.: Let’s try to do an analogy as to why these applications are scaling-impaired. What are the reasons for them to not be able to scale out? The reason, I believe, can be found in the “four horsemen”: Starvation, Latency, Overhead, and Waiting for contention resolution — slow. Those four factors are the ones that limit the scalability of our applications today. If you look at classical MPI applications, they are written for timestep-based simulation. You repeat the timestep evolution over and over again until you are close to the solution you are looking for. It’s an iterative method for solving differential equations. When you distribute the data onto several nodes, you cut the data apart into small chunks, and each node works on part of the data. After each timestep, you have to exchange information on the boundary between the neighboring data chunks — as distributed over the nodes — to make the solution stable. The code that is running on the different nodes is kind of in lockstep. All the nodes do the timestep computation at the same time, and then the data exchange between the nodes happens at the same time. And then it goes to computation and back to communication again. You create an implicit barrier after each timestep, when each node has to wait for all other nodes to join the communication phase. That works fairly well if all the nodes have roughly the same amount of work to do. If certain nodes in your system have a lot more work to do than the others — 10 times or 100 times more work — what happens is 90 percent of the nodes have to wait for 10 percent of the nodes that have to do more work. That is exactly where these imbalances play their role. The heavier the imbalance in data distribution, the more wait time you insert in the simulation. That is the reason that MPI usually doesn’t work well with very irregular programs, more concretely — you will have to invest a lot more effort into the development of those programs — a task not seldom beyond the abilities of the domain scientists and outside the constraints of a particular project. You are very seldom able to evenly distribute data over the system so that each node has the same amount of work, or it is just not practical to do so because you have dynamic, structural changes in your simulation. I don’t want to convey the idea that MPI is bad or something not useful. It has been used for more than 15 years now, with high success for a certain class of simulations and a certain class of applications. And it will be used in 10 years for a certain class of applications. But it is not well-fitted for the type of irregular problems we are looking at. ParalleX and its implementation in HPX rely on a couple of very old ideas, some of them published in the 1970s, in addition to some new ideas which, in combination, allow us to address the challenges we have to address to utilize today’s and tomorrow’s high-end computing systems: energy, resiliency, efficiency and — certainly — application scalability. ParalleX is defining a new model of execution, a new approach to how our programs function. 
ParalleX improves efficiency by exposing new forms of — preferably fine-grain — parallelism, by reducing average synchronization and scheduling overhead, by increasing system utilization through full asynchrony of workflow, and employing adaptive scheduling and routing to mitigate contention. It relies on data-directed, message-driven computation, and it exploits the implicit parallelism of dynamic graphs as encoded in their intrinsic metadata. ParalleX prefers methods that allow it to hide latencies — not methods for latency avoidance. It prefers “moving work to the data” over “moving data to the work,” and it eliminates global barriers, replacing them with constraint-based, fine-grain synchronization techniques. Q: How did you get involved with ParalleX? H.K.: The initial conceptual ideas and a lot of the theoretical work have been done by Thomas Sterling. He is the intellectual spearhead behind ParalleX. He was at LSU for five or six years, and he left only last summer for Indiana University. While he was at LSU, I just got interested in what he was doing and we started to collaborate on developing HPX. Now that he’s left for Indiana, Sterling is building his own group there. But we still tightly collaborate on projects and on the ideas of ParalleX, and he is still very interested in our implementation of it. Q: I realize HPX is still quite new, but what kind of reception has it had thus far? Have people started developing applications with it? H.K.: What we are doing with HPX is clearly experimental. The implementation of the runtime system itself is very much a moving target. It is still evolving. ParalleX — and the runtime system — is something completely new, which means it’s not the first-choice target for application developers. On the other hand, we have at least three groups that are very interested in the work we are doing. Indiana University is working on the development of certain physics and astrophysics community applications. And we are collaborating with our astrophysicists here at LSU. They face the same problem: They have to run simulations for months, and they want to find a way out of that dilemma. And there’s a group in Paris that works on providing tools for people who write code in MATLAB, a high-level toolkit widely used by physicists to write simulations. But it’s not very fast, so the Paris group is writing a tool to covert MATLAB to C++, so the same simulations can run a lot faster. They want to integrate HPX in their tool. ParalleX and HPX don’t have the visibility of the MPI community yet, but the interest is clearly increasing. We have some national funding from DARPA and NSF. We hope to get funding from the Department of Energy in the future; we just submitted a proposal. We expect many more people will gain interest once we can present more results in the future.
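To make the contrast Kaiser draws a little more concrete, here is a toy Python sketch using the standard concurrent.futures module. It only illustrates the difference between advancing all work items in lockstep behind a per-step global barrier and scheduling each item to run ahead independently once its inputs are available; HPX itself is a C++ library, and real ParalleX-style programs express neighbour-exchange dependencies as fine-grained futures rather than dropping them, so treat this strictly as an analogy rather than HPX code.

```python
# Toy contrast: per-step global barriers vs. letting independent work run ahead.
# Analogy only; HPX expresses real data dependencies with C++ futures and dataflow.
from concurrent.futures import ThreadPoolExecutor

def advance(value, steps):
    """Stand-in for the work one data chunk does over one or more timesteps."""
    for _ in range(steps):
        value = 0.5 * value + 1.0
    return value

chunks = [float(i) for i in range(8)]
STEPS = 100

with ThreadPoolExecutor() as pool:
    # Barrier style: every chunk advances one step, then all chunks wait for the
    # slowest one before any of them may begin the next step.
    barrier_result = list(chunks)
    for _ in range(STEPS):
        barrier_result = list(pool.map(advance, barrier_result, [1] * len(barrier_result)))

    # Future style: each chunk is scheduled once and runs through all of its steps;
    # nothing re-synchronizes at a global barrier after every step.
    futures = [pool.submit(advance, c, STEPS) for c in chunks]
    future_result = [f.result() for f in futures]

print(barrier_result == future_result)  # same answer, very different scheduling
```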
<urn:uuid:b267ed6f-0dcd-4792-8ebd-f7a5aef225f0>
CC-MAIN-2022-40
https://intelligenceinsoftware.com/IsParalleXThisYear%E2%80%99sModel/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00120.warc.gz
en
0.945843
2,049
3.1875
3
As technology advances, the risk of facing cyberattacks increases. Hackers have become more sophisticated in their attack methods, and organizations have had to take extra steps to mitigate the risk of experiencing a cyber incident. One of the most common types of cyberattacks is phishing. It's a social engineering tactic designed to trick human victims into sharing revealing information or downloading malicious malware to their devices. Being able to identify phishing scams helps all types of organizations stay vigilant in their cybersecurity efforts. Let's explore the ins and outs of common phishing scams and how you can identify them. Most Common Phishing Attacks and How to Identify Them Below are the most widespread examples of phishing scams organizations will experience. It can be challenging to identify phishing scams but distinguishing between them is a skill someone in every industry can benefit from. Phishing is a broad category, and the types of phishing listed below are subsets of this overarching term. Below are some of the subsets of phishing and how you can identify them. 1. Mass-Marketing Email Phishing Likely the most common type of phishing, mass-marketing emails are sent out to millions of users worldwide. Someone tries to send an email where they pose as another person and trick the recipient into performing a malicious activity, such as logging into a fraudulent website or opening an attachment ridden with malware. These types of phishing attacks typically include an email with a subject line to ensure users can trust the source who sent the email. Any emails you receive and open should be from someone you know, such as a coworker or manager, as other emails could contain malware. Be sure to scan through your emails carefully, look for suspicious subject lines and never open any attachments from suspicious emails. Keep in mind that not all phishing scams rely on email, while some phishing emails are specifically targeted at one individual or organization. This is what's called spear phishing. The term spear-phishing extends the fishing analogy because attackers aim their attack directly at one individual in an organization. One way attackers will use spear phishing is by sending emails to recipients who recently attended a conference within their industry, for example. The attacker will make it seem like they represent the organization that ran the conference and send malicious emails to those in attendance. Because these emails may seem legitimate, it's crucial to check exactly who sent the email and ensure they are from a reputable organization. Vishing, also known as "voice phishing," is a tactic very similar to spear phishing. One notable attack was on Emma Watson, a British entrepreneur, where she lost £100,000 due to vishing. In this case, Watson received a call from someone she believed represented a worker from her financial institution. The caller persuaded her to move money into another account by giving her a false sense of security. If the vishing target truly believes the person on the other end of the call, it's easy for hackers to trick them into sharing passwords or additional sensitive information. It's always recommended that you only accept calls from known sources. Be aware of the questions a bank would never ask you — if they ask you strange questions, such as your password or username, don't turn it over. Whaling is also similar to spear phishing, but they target high-level members of an organization. 
C-suite executives and top management need to watch out for whaling emails, as they are most likely to be targeted. Upper management is more susceptible to whaling scams because their credentials typically give more access to company resources than an average employee. Whaling scams are also known as CEO or CFO fraud. Some attackers will pose as lower-level or entry-level employees and send emails with a sense of urgency, asking for passwords to various software or company resources, like HR data. Upper management needs to be extra vigilant in avoiding these types of scams. 5. Business Email Compromise Last but not least, a business email compromise (BEC) targets specific employees in an organization's financial or accounting departments. They will pose as CEOs or other top management or executives and request information from these employees. Attackers will gain access to an executive's email account and send fraudulent emails to members of an organization with access to critical assets and payment information. Employees working with money in an organization must never provide information or wire money to unauthorized accounts. It's good practice to use authentication methods to ensure money transfers are going to legitimate employees or clients. Be on the lookout for these types of phishing scams, as they're becoming more common and sophisticated. All kinds of employees should have basic cybersecurity training to help them identify these scams and avoid compromising an organization's assets. Identify and Avoid Different Types of Phishing Scams In today's digital world, no business is immune to the various phishing attacks listed above. When you can identify them, the risk of falling victim to these attacks is mitigated. Review these types of scams with your team to protect your organization from being digitally attacked.
<urn:uuid:892f0ee8-fc41-4fd5-b137-67e55c043181>
CC-MAIN-2022-40
https://www.drchaos.com/post/how-to-identify-the-most-common-phishing-attacks
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00120.warc.gz
en
0.960638
1,047
3.109375
3
A Windows 10 file search lets you search for information on your Windows computer using the built-in search functionality of the Windows 10 operating system. Finding your files in Windows 10 is quick, and there are several ways you can do it. Our guide will detail everything you need to know. What Is a Windows 10 File Search Anyway? A Windows 10 search is a way for users to quickly search through their computer for a specific file or anything else they need to find, such as an app or hard-to-find setting. You’ll be able to carry out narrow searches to find particular files and even search across the web if required. It’s a simple yet powerful tool that can save a lot of time, so it’s worth using. We’ve all been in a situation where an important file—typically one created some time ago—is needed again, but we can’t find it, whether it’s stuck inside a folder or has got moved to someplace else. Windows 10 has a dedicated feature for just that. How Windows 10 File Search Works Windows 10 allows you to search for content using the search tool on the taskbar. In a sense, it’s like searching for information when you use a web browser, except that it’s targeted at all the files on your computer instead. It’s possible to search for files directly using File Explorer too. You type a keyword into the search field, and File Explorer suggests files for you based on the search term you used. The File Explorer part of the search feature was added via an update, and almost all Windows 10 users can use it right away. Both ways of searching are helpful, but File Explorer was explicitly designed to search for files. The image below shows what a general search looks like using the taskbar at the bottom of the Windows 10 OS: Alternatively, this is what using File Explorer looks like: You’ll be able to view frequently used folders and recently used files to help narrow down your search using the File Explorer route. You can also search using the top right search box to narrow things down further if required. Example #1: You Need To Find a Specific File Let’s say you’re looking for a specific file on your computer but haven’t got the first idea of where to find it. You could try aimlessly opening up your folders and attempting to find it, true, but most of us don’t have the luxury of spending hours doing that. By using the search functionality on Windows 10, you’ll be able to quickly write in a precise search term and find what you’re looking for in a matter of minutes. While searching for files this way won’t always bring you the results you want—it’s not a 100% guarantee—it gives you a much greater chance of finding the elusive file or at least narrowing down where you should look. Best of all, it’s built straight into the Windows 10 OS, so there’s nothing else you need to download or install. Example #2: You Want To Check if You Moved Files and Folders to Another Device Most of us don’t just have a single device for our files these days. Whether it’s a second computer or something more portable like an external hard drive (HDD), files get moved around all the time and then forgotten about, and down the line, that can catch us out when we need them. For instance, you might have moved some of your files that were taking up space on your central computer to another device with more room—it happens all the time. Using File Explorer will give you a good indication of whether your files are on the device you’re using or not, and by typing in your terms, you’ll be able to narrow down your search. 
Employing it can save a lot of time—the trick is using the search functionality the best way you can, which we’ll look at in the next section. How To Get Started With Windows 10 File Searching It’s all very well just searching, but there are some key steps you can take to maximize your chances of finding what you need. Step 1: Understand the Difference Between Searching From the Taskbar and Searching with File Explorer We mentioned earlier the two main ways of searching for your files using the Windows 10 OS, and it’s important to understand how to do both. To start and perform a regular search from the taskbar, you need to click on the bar at the bottom of your screen on the left-hand side where the magnifying glass icon is, like so: Keep in mind that this search bar sometimes gets hidden, and only the magnifying glass icon will show up—in that case, you need to click the icon itself. After doing that, type in the name of a document or a few keywords you think might help locate the file. Under where it says best match, you’ll see the most relevant results for documents across both your PC and in cloud storage via OneDrive. That’s it for the first method. The second way of file searching is via File Explorer. To do so, open up File Explorer from the taskbar—click the folder icon—or right-click on the Start menu, as shown below: Either way, you’ll end up looking at the File Explorer, which will list your most recent files, like so: Both ways to search have their advantages—for example, the File Explorer method is frankly a bit easier to learn, but the taskbar method allows you to search more thoroughly and customize your searches, as we’ll see in the next step. Step 2: Learn How To Search by Categories The menu that appears using the first search method features several categories, from Email and Web to Apps and Documents. There’s even a drop-down menu that includes Folders, People, Settings, and more. By clicking on a category, you’re helping filter down the results and make the search process more manageable. Here’s what it looks like on the Windows 10 OS: In this case, as we’re using the search feature to find files, we want to click where it says Documents. From there, the search window shows you the direct results in two different panes. The first pane shows the documents found, and the second on the right shows you more details about a particular document, including the last time you modified it, the author, and the file’s exact location. Here, we can click on the document to go straight to the document’s location or copy its path. You can also speed things up further by typing the category into the search box yourself. To do so, just type the category name, followed by a colon and some chosen keywords, like so: documents: business invoice for October. This is a powerful and flexible way of searching for specific apps, settings, and emails, too, so it has multiple uses. Step 3: Learn How To Change Search Settings and Control Your History Sometimes we need to control our search to ensure it has the best chance of finding what we need. The good news is that you can actually adjust the search settings yourself, and it’s easier to do than you might think. Using the taskbar search method again, we need to click on the search box and then the three-dot icon in the top-right corner. From there, we need to click where it says Search settings. 
Doing so will take you to a new page that looks something like this: We can customize the search results to include or exclude adult content on this settings page and choose from strict, moderate, or no filtering options. We can also adjust our Cloud content search, deciding whether we want to include content from Outlook and OneDrive during our searches or not. Perhaps most significant of all is the ability to adjust our search history. You can choose whether Microsoft collects certain information related to searches or not and can fully disable Windows from viewing your device and search history altogether. As an extra, you can also view and clear any search history you have with Bing. Finally, under where it says Searching Windows, we are able to exclude specific folders from our search. Step 4: Know Your Windows 10 File Search Best Practices By now, you’ll have a good understanding of Windows 10’s file search functionality, the two primary methods to carry it out, and just how useful it can be. With that said, we’ve listed some best practices to help you get that little bit more out of your searches: - Keep in mind that libraries won’t show up in File Explorer unless you want them to. Select the View tab > Navigation pane > Show libraries to add libraries to the left pane. Libraries are groups of stored content and do not replace your folders—they are handy to have around. Quite a few of the more helpful search options get hidden away and stay out of sight, so spend some time exploring to improve your searches on Windows 10. - Most of the time, you won’t know the exact file name of what you’re searching for but might know part of it—that’s to be expected. Something worth using here is what’s called a “wildcard syntax.” A wildcard syntax is a symbol, typically a * or ?, that takes the place of an unknown character or set of characters. For example, “c?mp” matches both “camp” and “comp.” - Using File Explorer, you can resize the search box if you find it’s too small—by default, it will be for most users. To do so, move your cursor to the box’s left edge until it turns into an arrow with two heads on either side, then click-and-hold while dragging to the left or right. - When using File Explorer, you can type in any of these date-related parameters in the search bar before a query: date, datemodified, dateaccessed, datecreated, and datetaken. Be sure to include the colon at the start, or the search command won’t work. - If your content gets indexed in advance, your computer can return your search results much faster, regardless of how you search. You can adjust indexing options under the Searching Windows tab under Search settings. The first time Windows runs the indexing process, it can take a few hours to complete, depending on the amount of data, so we don’t recommend doing it until you have a quiet moment and time to spare. - When you perform a File Explorer search, the Search tab appears on the ribbon, providing access to numerous search tools that get grouped in different sections. These tools include the ability to search subfolders, by date modified, see recent searches, save your search, and even search by file size alone. For the latter, you can click on any of the values in the drop-down menu to choose a size range to search by—this goes up to 4GB and beyond. - It’s best to use descriptive file names to help out your future searches whenever possible. 
Windows 10 can support file names up to 260 characters long, which is far lengthier than previous versions could, so make good use of it. Tagging your documents also helps, and adding category and subject metadata tags improves the results of a search. - If you’ve had a lot of trouble finding a particular file or folder, it’s best not to leave it like that, as you may struggle again in the future. Instead, move and rename the file after finding it—you may want to move it to a related folder for easier access. Having a hard drive or USB memory stick dedicated to specific files and folders can help keep things organized.
<urn:uuid:a79c958b-7220-4fd4-850a-74e41e5d98fc>
CC-MAIN-2022-40
https://nira.com/windows-10-file-search/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00120.warc.gz
en
0.896518
2,498
2.921875
3
A lot of us use a mobile device (a phone or tablet) so frequently that we consider it to be our primary computer. In a lot of ways, mobile device security is the same as workstation security, except for the fact that the device stays with us wherever we go. The convenience of taking our data everywhere with us, though, can come at the cost of security. Here are three simple ideas for how to keep your mobile devices—which often carry data more personal than the rest of our devices—secure: 1) keep your OS and apps up to date; 2) keep your device physically secure from theft and damage; and 3) manage passwords and sensitive information well. But first… the bad news. Mobile devices are prone to attacks by state actors, intelligence agencies, and other very capable attackers. A lot of the security vulnerabilities that we hear about are very effective and stay unknown for a long time. In other words, if state agencies are after you, there’s little to be done to stop them from accessing data that they want. Both hardware and software vulnerabilities tend to be found after they’ve been exploited, but once they are made public there’s typically something to do about them: update your device’s software. This should be done automatically for most mobile devices running Android or iOS, usually when WiFi connected and charging. If you have changed this setting for any reason, it’s time to turn it back on. The updates to your apps and operating system are the only way to know that you are using a secure version of that software; any known vulnerabilities will get an update when it’s possible for the developers to fix it. It may seem obvious, but keeping your phone safe starts with keeping it physically safe. This may be obvious as a way of stopping yourself from accidentally breaking or losing it, but it may not be so obvious for how it keeps your data safe. Personal data breaches can occur when you need the device repaired and have to turn it over to a manufacturer or other hardware repair company. Even a company that prides itself on its security may use contractors who may not always be as reputable. A more common problem is losing a device or having it stolen. If you use the “Find my device” feature on Android or the “Find My iPhone” feature on iOS, you can rest a little more easily when it happens, since they’ll help you find a lost device or disable a stolen one. Most platforms encourage you to do automatic backups of your data to an iCloud or Google account, which will help you feel better if you have to put a phone in “lost mode.” Passwords and Authentication In case a malicious actor ends up with your phone, you want to make sure that your user authentication (screen unlocking) is strong—be it a fingerprint or an unlock code. If your device is lost or stolen, a reasonably strong unlock code can stand in the way of an unsophisticated attacker simply opening your phone or tablet and gaining access to everything. Under no circumstances should a mobile device—at least, one that is logged into websites or apps as you—go without a lock screen code. Depending on how you set up the rest of your passwords and logins, getting past the lock screen could grant access to all of the rest of your personal information. A lot of devices also use the unlock code to encrypt all of the data on the device, meaning that the code is used to securely store all of the pictures and documents on it. Typically, this helps prevent someone from being able to plug your phone into a computer to break into its files. 
It's one extra step of protection, at least when it's done correctly. If you aren't currently using encryption on your device, consider enabling it the next time your phone needs the operating system installed.
Written by Derek Jeppsen on behalf of Sean Goss and the Crown Computers team.
<urn:uuid:5a5bdf5f-d08d-4964-85a7-9d121bd9097b>
CC-MAIN-2022-40
https://www.crowncomputers.com/item/98-staying-one-step-ahead-with-mobile-device-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00120.warc.gz
en
0.939789
824
2.640625
3
A data warehouse is a modern solution that allows you to collect, store, and analyze all data types quickly and easily. With a data warehouse, you'll have everything you need to make informed decisions about your business. Please read this post to get more information on it. A data warehouse is a data repository structured for reporting and analysis. It usually contains historical data that has been cleansed and transformed to meet the needs of the business. A data warehouse is often used in conjunction with Business Intelligence (BI) tools to allow users to perform complex data analyses. BI tools can include reporting tools, OLAP cubes, and dashboards. A few important aspects - The data warehouse is a system that stores "big data." - It's used to store and manage large amounts of information about business operations. - The goal of a data warehouse is to provide fast access to the most relevant information for decision-making. - A data warehouse can be created using a relational database management system (RDBMS) or an online analytical processing (OLAP) tool. - Examples of RDBMSs include MySQL, Oracle Database, and Microsoft SQL Server; examples of OLAP tools include Cognos TM1 and Hyperion Essbase. - Data warehouses are sometimes called enterprise data warehouses because they help companies make decisions at all levels of the organization. Where is it used? A data warehouse is a core component of business intelligence. It is also called an enterprise data warehouse (EDW). It is used for reporting and analysis. It stores historical data and also uses real-time data to generate business reports. Below are the familiar sectors where the data warehouse is used. - Public sector: The data warehouse collects intelligence for government offices in this area. It is also used to monitor and analyze individuals' health records and tax records in government offices. - Banking sector: It helps the banking sector control and investigate the resources available to it. - Hospitality industries such as hotels and restaurants: In this sector, the data warehouse helps businesses promote themselves and attract target customers. - Health care: In this area, the warehouse helps to generate patient treatment reports. - Airlines: Here, the warehouse is used for analyzing the work assigned to airline crews. - Insurance: In this sector, the warehouse helps trace market fluctuations. How can a data warehouse benefit an organization? A specific business purpose can be analyzed with the data collected here. Suppose the business wants to understand machine downtime and how it can be reduced. In that case, data can be collected from the data warehouse to understand the various times or situations during which the machines stopped working, the reasons behind them, and how downtime can be reduced. Data from different sources are integrated to provide a consolidated view. For instance, if a company wants to do budgeting for the next quarter, a data warehouse will have all the information required. The entire data set is available in one source, from incurred costs to depreciation costs. The company utilizes the historical data stored in the system to extract relevant reports and understand the overall organization's health. But data such as the employee database, which includes addresses and phone numbers, should not be included, as that data is subject to change. Once data is entered into the warehouse, it remains the same. Therefore, the firm must ensure that information is highly protected and that there is no alteration.
If any modifications are made, it will affect the reports and analysis. Improved data quality: It helps to improve data quality by providing consistent, accurate data and fixing incomplete data. Disadvantages of data warehouse Cost vs. benefit A data warehouse is an IT project, and it consumes many person-hours and a lot of money from the budget. Moreover, its implementation and maintenance are costly. Hence the cost-to-benefit ratio can be poor. If the organization is small or medium-sized, this cost may affect its revenue. We know that data warehouses are software applications provided as a service, and their primary concern is the security of data. You have to be sure that the people who handle and analyze the customer data are employees that your company trusts, because leaking customer data within the organization may cause problems for executives and affect the relationship between the company and the customer. The data imported into the data warehouse is often a static data set with little flexibility, and it offers limited ability to generate a particular solution. Warehouses are also subjected to ad hoc queries, which are difficult to handle because processing and query speed for them is limited. Miscalculation of ETL processing time A large part of the data warehouse development process, the extraction, cleaning, and loading of consolidated data into the warehouse, takes considerable time. Organizations usually underestimate the time required for the ETL process, and as a result it leads to a backlog of work (an illustrative sketch of this ETL flow follows the list of warehouse types below). Levels of data warehouse architecture It comprises several levels. A few of them are mentioned below: - Data Source Layer - Data Extraction Layer - Staging Area - ETL Layer - Data Storage Layer - Data Logic Layer - Data Presentation Layer - Metadata Layer - System Operations Layer Types of data warehouse architecture There are three types of architecture. Single-tier architecture: It is a rarely used architecture. It reduces the amount of data stored by avoiding repetition. In this type of architecture, only the source layer is available. Thus, the single tier consists of the source, data warehouse, and analysis layers. The two-tier architecture adds a data staging area or ETL (extraction, transformation, and loading) layer to the source layer. This layer helps to merge diversified data into one standard schema. This type of architecture consists of the source layer, data staging layer, data warehouse layer, and analysis layer. The three-tier architecture adds a reconciled layer to the data staging and source layers. The source layer contains multiple sources in this architecture, and the data warehouse layer has data warehouses and data marts. The role of the reconciled layer is to generate a standard data model for the entire enterprise. The reconciled layer can also be used for some operational work, such as reporting. This architecture consists of the source, data staging, reconciled, data warehouse, and analysis layers. Types of data warehouse The following three are the main types of data warehouses. 1. Enterprise Data Warehouse (EDW): It helps to provide decision support services throughout the enterprise and also helps to classify data according to subject. 2. Operational Data Store: It helps to store records such as employee records. 3. Data Mart: It helps to collect data directly from sources.
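Before moving on to tools, here is the promised toy sketch of the extract, transform, and load (ETL) flow, using only the Python standard library, with SQLite standing in for the warehouse. The file name, column names, and table layout are invented for the example and are not tied to any particular warehouse product.

```python
# Toy ETL sketch: extract rows from a CSV export, clean them, and load them into a
# warehouse table. SQLite and all names here are illustrative stand-ins.
import csv
import sqlite3

def extract(path):
    """Extract: stream raw rows out of a source system's CSV export."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    """Transform: drop incomplete records and normalise the remaining fields."""
    for row in rows:
        if not row.get("order_id") or not row.get("customer") or not row.get("amount"):
            continue
        yield (row["order_id"], row["customer"].strip().title(), float(row["amount"]))

def load(rows, db_path="warehouse.db"):
    """Load: append the cleaned rows to a fact table in the warehouse."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS sales (order_id TEXT, customer TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("sales_export.csv")))
```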
Data warehouse tools Following are a few popular tools for data warehousing: - Amazon Redshift - Microsoft Azure - CData Sync - SAP HANA - Amazon RDS - Amazon S3 - MariaDB Difference between Database (DB) and Data Warehouse (DW) |Database (DB)|Data Warehouse (DW)| |Collects data for multiple transactions|Transfers and stores accumulated data for analytical purposes| |Developed for write or read access|Developed for the accumulation and retrieval of large data sets| |Made for quick recording and retrieval of data|Made for more straightforward analysis of data collected and stored from multiple databases| Data warehouse history In the 1950s, the American government and businesses started using punch cards to store computer-generated data; they remained in use until the 1980s. In the 1960s, disk storage systems slowly came into the picture, and by 1964 magnetic storage systems for data had become popular. IBM was the first company to design and use the floppy disk drive, and later the hard disk drive. In 1966, IBM designed its DBMS (database management system), called the Information Management System. It contained the following features: - Ability to find the exact location of data - Ability to solve the problem of locating more than one unit of data in the same place - Ability to delete data - Ability to access data rapidly - Ability to allocate space when the stored data cannot fit in the specified location In 1970, online applications came into the picture, and people realized that data could be directly accessed and shared between computers. After that, people started using personal computers, which changed the way work was done. At the same time, 4GL technology was invented. The combination of personal computers and 4GL technology gave complete freedom to the end user, allowing end users to access their data efficiently and rapidly by controlling the computer system themselves. But they found the following problems: - They were misled by incorrect data. - Old data was not at all useful. - There was confusion because of duplicated data. As a solution to these problems, the relational database came into use in the 1980s, with SQL (Structured Query Language) as its language. Businesses started assigning personal computers to employees and made wide use of office applications (MS Word, MS Excel, and the rest of MS Office). In the 1990s, a significant change took place: the internet became very popular, and competition intensified because of globalization, computerization, and networking. By 2000, businesses needed good integration between systems and consistent data to get the accurate business information required for proper decision-making. Because of expanding databases and application systems, getting consistent data became difficult. Hence, businesses developed the data warehouse. The data warehouse is a term that has been around for more than two decades. It is one of the most essential and powerful tools in modern business intelligence, but many people don’t know what it is or how to use it. Please read this blog post to learn about data warehousing and its benefits!
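As a small footnote to the database-versus-warehouse comparison above, the following sketch shows the two access patterns side by side: a transactional lookup of a single record versus an analytical query that summarizes history. The in-memory table and rows are toy data invented for the illustration.

import sqlite3

# Build a tiny in-memory example so the queries below can actually run.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales_fact (order_id TEXT, order_date TEXT, region TEXT, amount REAL)")
con.executemany("INSERT INTO sales_fact VALUES (?, ?, ?, ?)", [
    ("1001", "2022-01-05", "North", 120.0),
    ("1002", "2022-01-19", "South", 80.0),
    ("1003", "2022-02-02", "North", 200.0),
])

# Database-style access: fetch one record for a single transaction.
print(con.execute("SELECT * FROM sales_fact WHERE order_id = '1002'").fetchone())

# Warehouse-style access: summarize history for analysis and reporting.
query = """
    SELECT region, substr(order_date, 1, 7) AS month, SUM(amount) AS revenue
    FROM sales_fact
    GROUP BY region, month
    ORDER BY region, month
"""
for row in con.execute(query):
    print(row)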
<urn:uuid:b6b7b325-ef70-4f07-8625-50ab52540fb1>
CC-MAIN-2022-40
https://www.erp-information.com/data-warehouse.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00120.warc.gz
en
0.925454
2,119
3.171875
3
Spear phishing is a form of phishing attack that is targeted at an individual or a small group. Unlike broader phishing attacks that use pretexts that apply to many people (such as issues with online accounts or failed delivery notifications), spear phishing emails are based on in-depth research into a particular target. For example, a spear-phishing email may be designed to imitate a legitimate unpaid invoice from an organization’s supplier. By sending a realistic email to the right person and including the attacker’s payment details instead of the supplier’s, the phisher has a much higher probability that the target will fall for the phish and send money to the attacker. The Spear Phishing Threat Spear phishing campaigns pose a major threat to companies because they are growing increasingly common and sophisticated. Business Email Compromise (BEC) attacks are a form of spear phishing in which an attacker masquerades as senior management and instructs an employee to send a payment to a particular vendor. BEC attacks alone cost an estimated $1.8 billion in 2020 of the estimated $4.1 billion in cybercrime-related losses. Why is it Important to Protect from Spear Phishing? Phishing attacks are a commonly used attack vector because they are simple and effective to perform. A phishing attack is designed to trick a human into doing the attacker’s job for them rather than attempting to gain access and execute malware by exploiting a vulnerability in an organization’s cyber defenses. According to Verizon’s 2021 Data Breach Investigation Report (DBIR), phishing attacks are involved in over a third (36%) of data breaches. BEC and phishing attacks are the costliest causes of data breaches with average price tags of $5.01 and $4.65 million respectively. Phishing emails are also one of the most common delivery vectors for malware. Spear phishing attacks are effective and extremely expensive for companies, and many employees simply cannot detect a sophisticated phishing attack. Protecting against the spear phishing threat requires companies to deploy security solutions that identify and block phishing attacks before they reach employees’ inboxes. How to Protect Against Spear Phishing Spear phishing attacks are tailored to their target, making them more difficult to detect than general phishing campaigns. However, companies can take several actions to help protect themselves against spear-phishing attacks, including: - Email Scanning: Spear phishing emails use a variety of techniques to appear legitimate such as spoofing sender addresses. Scanning emails for potential indicators of phishing can help to detect and block these attacks. - Employee Cyber Awareness Training: Phishing emails are designed to trick users into taking actions that hurt them or their organization. Training employees on the warning signs of phishing emails and how to properly respond to them is essential to managing the spear phishing threat. - Malicious URL Detection: Spear phishing emails commonly contain malicious URLs designed to direct recipients to pages that steal login credentials or install malware. Organizations should deploy email security solutions that identify and block emails containing links to known-bad URLs. - Relationship Monitoring: Spear phishing emails commonly break normal patterns of communication between people within an organization. By developing a relationship graph and identifying anomalous messages, an anti-phishing solution can flag emails that are likely to be spear-phishing attacks. 
- Sandboxed Attachment Analysis: Phishing emails often have malicious attachments designed to look like legitimate files (such as invoices). Automatically inspecting these files within a sandboxed environment allows malicious files to be detected and scrubbed from emails before they reach a recipient’s inbox. - Use MFA When Possible: Phishing attacks are often designed to steal a user’s login credentials for corporate systems or other login accounts. By enforcing the use of multi-factor authentication (MFA) wherever it is available and implementing it for corporate resources, an organization can limit the value of compromised credentials and the risk that they pose to the business. Spear Phishing Protection with Check Point Phishing attacks are a major threat to corporate cybersecurity, enabling cybercriminals to steal users credentials, plant malware on corporate systems, and steal money from companies. Spear phishing campaigns are a more targeted and sophisticated version of this, making phishing emails seem more realistic and difficult to detect and block. The authenticity of spear-phishing emails makes them difficult for employees to identify, and cybersecurity awareness training alone is an inadequate anti-phishing strategy. Training efforts must be backed with anti-phishing solutions that identify and block attempted spear phishing attacks before they reach an employee’s inbox where the company can be compromised by a thoughtless click on a link or opening a malicious attachment. Check Point, along with Avanan, provides robust protection for companies against a range of phishing threats. To learn more about how Check Point and Avanan’s Harmony Email and Office uses state of the art techniques to identify and block spear phishing campaigns, you’re welcome to sign up for a free demo.
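To illustrate the kind of checks an email-scanning layer can apply, here is a simplified sketch that looks for a mismatch between the sender and Reply-To domains and for links to suspicious or known-bad domains. The domain lists and the sample message are invented for the example; a production gateway of the kind described above relies on far richer signals, reputation feeds, and machine learning.

import re
from email import message_from_string

SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}      # illustrative, not a real blocklist
KNOWN_BAD_DOMAINS = {"paypa1-secure.com"}       # illustrative entry

def extract_urls(text):
    return re.findall(r"https?://\S+", text)

def phishing_indicators(raw_email):
    msg = message_from_string(raw_email)
    indicators = []

    # Header check: Reply-To points somewhere other than the sender's domain.
    from_header = msg.get("From", "")
    match = re.search(r"@([\w.-]+)", from_header)
    sender_domain = match.group(1).lower() if match else ""
    reply_to = msg.get("Reply-To", "")
    if reply_to and sender_domain and sender_domain not in reply_to:
        indicators.append("Reply-To domain differs from sender domain")

    # URL checks: known-bad domains or suspicious top-level domains.
    body = msg.get_payload()
    for url in extract_urls(body if isinstance(body, str) else ""):
        domain = re.sub(r"^https?://", "", url).split("/")[0].lower()
        if domain in KNOWN_BAD_DOMAINS:
            indicators.append("Link to known-bad domain: " + domain)
        if any(domain.endswith(tld) for tld in SUSPICIOUS_TLDS):
            indicators.append("Link to suspicious TLD: " + domain)
    return indicators

sample = (
    "From: Accounts Payable <billing@paypa1-secure.com>\n"
    "Reply-To: attacker@mail.example\n"
    "Subject: Unpaid invoice\n\n"
    "Please settle the attached invoice at https://paypa1-secure.com/pay\n"
)
print(phishing_indicators(sample))

Even heuristics this crude show why layered checks on headers and URLs catch much commodity phishing, while well-researched spear phishing also needs the relationship analysis and sandboxing described earlier.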
<urn:uuid:a7c84f56-6acc-42cb-934d-6f721d59a9da>
CC-MAIN-2022-40
https://www.avanan.com/blog/what-helps-protect-from-spear-phishing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00120.warc.gz
en
0.934917
1,044
2.859375
3
Cloud Infrastructure Definition Cloud infrastructure is a collective term used to refer to the various components that enable cloud computing, including hardware, software, network devices, data storage and an abstraction layer that allows users to access virtualized resources. How Does Cloud Infrastructure Work? The cloud environment is enabled by a process known as virtualization. Put simply, virtualization is the process of making a “virtual version” of a physical asset, such as a piece of hardware or software. Once created, virtual resources are then abstracted, meaning that they are separated from the physical asset that they are linked to and re-provisioned in the cloud. Automation software and other tools are then used to create an interface that allows users to access cloud resources on demand via the internet. Why Use Cloud Infrastructure? Cloud services have become a necessary component for most organizations’ long-term strategic growth plans. The cloud makes it possible to store, analyze and access huge amounts of data, which are required to enable various intelligent automation technologies, including artificial intelligence (AI) and machine learning (ML) applications. From an IT perspective, shifting to the cloud also offers important cost savings and efficiencies. This is because organizations are not required to purchase or maintain traditional onsite infrastructure elements or dedicate staff to their operation. Further, in many cases, cloud infrastructure is shared by several users, which also drives costs down for each party. Finally, a cloud-based model is highly scalable, meaning that businesses can easily and quickly add or remove storage or computing resources based on their real-time needs. 2022 CLOUD THREAT REPORT Download this new report to find out which top cloud security threats to watch for in 2022, and learn how best to address them.Download Now What Are the Components of Cloud Infrastructure? Cloud infrastructure consists of four main components: As with a traditional on-premises IT infrastructure, a cloud infrastructure requires physical hardware. Common hardware components include servers, routers, firewalls, endpoints, CPU, RAM, load balancers and other networking equipment. These hardware components can be located virtually anywhere and are networked together within the cloud environment. One of the most notable components at the hardware level are servers. Put simply, a server is a device that is programmed to provide services to customers. This category includes: web servers, which host digital content online; file servers, which store data and other assets; and mail servers, which provide the foundation for email communication. Virtualization is the creation of a virtual environment that enables IT services not bound by hardware. In the case of the cloud infrastructure, virtualization software abstracts data storage and computing power away from the hardware, thereby allowing the users to interact with the cloud infrastructure through their own hardware via a graphical user interface (GUI). Cloud storage services are off-site file servers that take the place of traditional physical data centers. Like on-premises databases, cloud storage services store and manage data; typically third-party data storage services also back up stores. In this model, users can access data through the internet or a connected cloud-based application. 
Typically organizations leverage a third-party service provider, such as Amazon Simple Storage, Google Cloud Storage or Microsoft Azure, to host cloud data storage centers and related services. Because cloud resources are delivered to users over the internet, there must be a networking component that connects those resources to the user. Networking services include hardware components, such as physical wiring, switches, load balancers and routers, as well as the virtualization layer that ensures cloud services are available and accessible to users remotely on demand. What Are the Types of Cloud Architecture? There are three main types of cloud architecture: Public Cloud Architecture A public cloud model is one in which infrastructure is hosted by a third-party service provider and shared by multiple customers or tenants. While each tenant maintains control of their account, data and applications hosted in the cloud, the infrastructure itself is common to all customers. The public cloud model tends to be the most affordable, because the cost of the platform is shared among a group of users. However, it is also associated with greater risk since each tenant is responsible for maintaining the security of its data and users. A breach in one account can jeopardize security across all public cloud users. Private Cloud Architecture As the name suggests, a private or single-tenant deployment model is one in which the cloud infrastructure is offered via the private cloud and is used exclusively by one customer. In this model, cloud resources could be managed by the organization or the third-party provider. While this model is generally far more expensive than a public option, it is often leveraged by companies, organizations or government agencies that manage or store sensitive information such as personal data, financial transactions or intellectual property (IP). Using the private cloud grants these organizations more control and enhanced security of their data, as well as the ability to comply with any relevant government or industry regulations. Hybrid Cloud Architecture Organizations are increasingly leveraging a hybrid cloud environment that combines elements of a public cloud, private cloud, and on-premises infrastructure into a single, common, unified architecture. This model grants organizations the option to deploy applications and services on a private or public cloud depending on the application use case, presence of sensitive data or regulatory requirements. The hybrid environment grants organizations increased flexibility and cost efficiencies, while also providing enhanced security. To learn more about the differences between public, private and hybrid cloud deployment, read our related Cybersecurity 101 article: Public vs. Private Cloud Cloud Infrastructure Delivery Models There are three delivery models for cloud services: - Software as a service (Saas) - Platform as a service (PaaS) - Infrastructure as a service (IaaS) Software as a service (SaaS) is a cloud-based delivery model that allows users to access a software application through an internet-connected device. In the SaaS model, a third-party vendor manages all aspects of the software application, including coding, hosting, monitoring, updating and security, as well as the purchase and maintenance of the associated hardware, such as servers and databases. Since SaaS solutions are delivered over the internet, customers generally do not need to download or install the software to use the service. 
This means that users can access the application or their data from virtually anywhere with an internet connection, assuming all other system requirements and security protocols are met. Platform as a service (PaaS) is a cloud computing model in which a third-party cloud provider maintains an environment for customers to build, develop, run and manage their own applications. In a PaaS model, the vendor typically provides all infrastructure, including hardware and software, needed by developers. This allows the customer to circumvent costly IT infrastructure investments, as well as the need to purchase software licenses and development tools. Infrastructure as a service (IaaS) is a cloud computing model in which a third-party cloud service provider (CSP) offers virtualized compute resources such as servers, data storage and network equipment on demand over the internet to clients. In the IaaS model, each computing resource is offered as an individual component or service and can be scaled up or down according to the organization’s needs. This significantly reduces or negates the need for physical servers, as well as an on-premises data center, and grants the organization much-needed flexibility to manage variable business needs quickly and cost effectively. IaaS vs. PaaS The key difference, technically speaking, between PaaS and IaaS is that the PaaS vendor will provide and maintain the software, hardware and tools used on the platform, while in an IaaS model, these components are the responsibility of the customer. Another critical distinction is related to how the PaaS or IaaS solution is used. The PaaS environment is used almost exclusively for software and application development. It is essentially an interface for developers to access software and development tools in a remote setting. Securing Cloud Infrastructure with CrowdStrike CrowdStrike has redefined security with the world’s most advanced cloud-native platform that protects and enables the people, processes and technologies that drive modern enterprise. Powered by the CrowdStrike Security Cloud, the CrowdStrike Falcon® platform leverages real-time indicators of attack, threat intelligence, evolving adversary tradecraft and enriched telemetry from across the enterprise to deliver hyper-accurate detections, automated protection and remediation, elite threat hunting and prioritized observability of vulnerabilities. Learn more about CrowdStrike’s cloud security solutions below:
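As a concrete illustration of the IaaS consumption model, the sketch below uses the boto3 library to create an object storage bucket and upload a file on demand, with no hardware provisioned by the customer. The bucket and file names are invented, boto3 is assumed to be installed, and valid cloud credentials must already be configured; it illustrates the pay-as-you-go pattern rather than a recommended production setup.

import boto3

# IaaS-style on-demand storage: create a bucket and upload a file, paying only
# for what is used instead of buying and racking hardware up front.
s3 = boto3.client("s3", region_name="us-east-1")

bucket = "example-cloud-infra-demo-bucket"   # must be globally unique in practice
s3.create_bucket(Bucket=bucket)
s3.upload_file("report.pdf", bucket, "backups/report.pdf")  # assumes report.pdf exists locally

# Listing objects shows the storage service acting as an off-site file server.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"])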
<urn:uuid:5328fe82-6216-4ba3-8e48-8c27c72d3220>
CC-MAIN-2022-40
https://www.crowdstrike.com/cybersecurity-101/cloud-security/what-is-cloud-infrastructure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00120.warc.gz
en
0.927942
1,826
3.375
3
The ssh command line utility is a staple for people who work on remote systems. ssh stands for “secure shell,” so as you may expect one of its most common uses is as a remote shell. While that is perhaps its most common use, it isn’t the only, or most interesting, thing you can do with ssh. Creating a Connection In order to do anything over ssh, you first need to establish a connection to a remote server. There are a number of command line arguments that you can use with the ssh command line utility, but I’ll leave it to man ssh to discuss the majority of them. The most basic commandline arguments are ssh address where “address” is the hostname or IP address of the server you want to connect to. Here is an example of connecting to a remote system for the first time: dink:~ jmjones$ ssh 192.168.1.20 The authenticity of host '192.168.1.20 (192.168.1.20)' can't be established. RSA key fingerprint is 24:1e:2e:7c:3d:a5:cd:a3:3d:71:1f:6d:08:3b:8c:93. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '192.168.1.20' (RSA) to the list of known hosts. Earlier I said that “ssh” stands for “secure shell.” ssh is very concerned about security. The message “The authenticity of host ‘192.168.1.20 (192.168.1.20)’ can’t be established” shows this security focus. This message just means my ssh client doesn’t know the remote server. I use the word “client” here and throughout this article because the ssh command line utility initiates the network connection and that makes it, by definition, a network client. After informing me that it didn’t know the remote server, the utility then asked me if I wanted to continue connecting. I answered “yes” because I knew that the server I was connecting to was the server I really intended to connect to. Typically, it is safe to answer “yes” to this question. The danger, though, is that some bad person with questionable motives might be impersonating the server you are attempting to connect to. After I answered “yes” to continue connecting, my ssh client updated the file $HOME/.ssh/known_hosts with the following text: 192.168.1.20 ssh-rsa ^4rsa5jmjones6cd7jmjones8^/^9cd10^+9^11yc12yc13rsa14AAAAB15^+^16rsa17 The next time I connect to the same server, my ssh client will check the “known_hosts” file to see if this really is the same server. If the information that the server passes back to my client doesn’t match what is in the “known_hosts” file, I will see error like this: dink:~ jmjones$ ssh 192.168.1.20 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that the RSA host key has just been changed. The fingerprint for the RSA key sent by the remote host is 24:1e:2e:7c:3d:a5:cd:a3:3d:71:1f:6d:08:3b:8c:93. Please contact your system administrator. Add correct host key in /Users/jmjones/.ssh/known_hosts to get rid of this message. Offending key in /Users/jmjones/.ssh/known_hosts:1 RSA host key for 192.168.1.20 has changed and you have requested strict checking. Host key verification failed. I’ll pick back up with the prior example, the one in which I answered “yes” to continue. After answering “yes,” I was prompted for a password. Here is the remainder of that interaction: firstname.lastname@example.org's password: Be careful. No mail. 
Last login: Tue Dec 30 06:36:20 2008 from dink jmjones@ezr:~$ I typed in the password and my ssh client dropped me into an interactive shell on the remote server. You can see the tell-tale signs of logging into a Linux server: the “message of the day” (aka MOTD), a message regarding having no waiting email, a message of when I logged in last, and a shell prompt. At this point, it was as if I were logged in locally to the server. Continued from Page 1. What if I don’t want to type in my password each time I login? Or, what if I’m a sysadmin and I want my server harder to crack than guessing a password? You can use a public/private key pair to make logging into a server both more secure and easier. In order to use a public/private key pair, you have to create it. You can do so from a command line by using the ssh-keygen utility. There are many options that you can pass to ssh-keygen including the type of key, the filename you want it to create, and a comment for the key file, but you can also just roll with the defaults. Here is the result of calling ssh-keygen with no arguments: dink:~ jmjones$ ssh-keygen Generating public/private rsa key pair. Enter file in which to save the key (/Users/jmjones/.ssh/id_rsa): Created directory '/Users/jmjones/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /Users/jmjones/.ssh/id_rsa. Your public key has been saved in /Users/jmjones/.ssh/id_rsa.pub. The key fingerprint is: fe:e9:fa:f5:e2:4e:a1:6c:9e:9e:20:a4:cc:ec:4f:62 jmjones@dink The key's randomart image is: +--[ RSA 2048]----+ || || || || | . S.| |+ o . . . .| |E o o + o| |o o . = *..| |... .=Xoo.. | +-----------------+ I accepted the default “id_rsa” as my key file. I also accepted the default of not putting a passphrase on the file. If I had chosen to add a passphrase to the file, I would be prompted for the password each time I used it. Two files were created in $HOME/.ssh as a result of running ssh-keygen: dink:~ jmjones$ ls -l ~/.ssh/ total 16 -rw------- 1 jmjones staff 1675 Dec 30 17:37 id_rsa -rw-r--r-- 1 jmjones staff400 Dec 30 17:37 id_rsa.pub “id_rsa” is my private key. I don’t want anyone to get access to this file, otherwise they could pretend that they are me. Notice that the permissions are more restrictive on “id_rsa” than on “id_rsa.pub.” “id_rsa.pub” is my public key. I can circulate this file to anyone that I am interested in connecting to. Don’t worry; no one can reverse it and determine what your private key is. If I want to use this key with the server in the previous examples, I would place the contents of my public key (“id_rsa.pub”) into the file “$HOME/.ssh/authorized_keys” on the remote server. In order to set this up, I typically ssh to the remote server and copy/paste the contents of my local “id_rsa.pub” file to the remote “authorized_keys” like this: jmjones@ezr:~$ echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAw4DTUeLXZbjjNhR+AaW9^102rsa103^+ pw5jDw/JpSAdFaQR/Vl6Kpzf9MD1KAEpyd8RaxLa+RQ== jmjones@dink" > ~/.ssh/authorized_keys jmjones@ezr:~$ ls -l ~/.ssh/ total 4 -rw-r--r-- 1 jmjones jmjones 400 2008-12-30 17:48 authorized_keys jmjones@ezr:~$ After which, I am no longer prompted for a password to login. Here, I log out of the server, then ssh back in: jmjones@ezr:~$ logout Connection to 192.168.1.20 closed. dink:~ jmjones$ ssh 192.168.1.20 Be careful. No mail. Last login: Tue Dec 30 17:50:26 2008 from dink Notice that my ssh client didn’t prompt me for a password. 
Now, anytime I want to connect to this server, I just ssh in and I will be instantly connected. Executing Remote Commands I mentioned earlier that after sshing to a remote server, you are dropped into a shell. This is the default behavior, but it isn’t the only thing you can do. Another useful way of using an ssh client is to execute commands on a remote server without typing it into an interactive shell on the remote server. To state it another way, you can specify what command you want to run on the remote system when you execute the ssh utility on your local system. For example, if I wanted to see if a process is listening on port 25 on the remote system, I could do it like this: dink:~ jmjones$ ssh 192.168.1.20 netstat -ltpn | grep 25 (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) tcp0 0 127.0.0.1:250.0.0.0:*LISTEN - The syntax is “ssh address command.” I could do the same thing to check disk usage, see which processes are running, or copy files around. And since I setup authorized_keys, it’s not much more overhead to execute commands remotely than to execute them locally. Why not just log in and run the commands interactively? Because you would lose the benefit of scriptability. Executing commands on a remote system can now become part of a shell script. And those shell scripts can run under cron. Now the possibilities for getting work done on remotes systems is an open horizon. ssh is an essential tool. In its most common use, it allows you to interactively manipulate a shell on a remote server. This is certainly indispensable for remote system administration. It also lets you simplify and increase the security of the authentication process by using authorized keys. Finally, it allows you to execute shell commands on the remote system without being in the interactive shell. This article was first published on EnterpriseITPlanet.com.
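The same pattern the article walks through interactively, key-based login followed by a one-off remote command, can also be scripted from Python. The sketch below uses the third-party paramiko library; the host, username, and key path are taken from the article's examples and should be replaced with your own values.

import paramiko

# Connect with the private key generated by ssh-keygen, then run a single
# command, mirroring "ssh 192.168.1.20 netstat -ltpn" from the article.
client = paramiko.SSHClient()
client.load_system_host_keys()                     # reuse ~/.ssh/known_hosts
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.1.20", username="jmjones",
               key_filename="/Users/jmjones/.ssh/id_rsa")

stdin, stdout, stderr = client.exec_command("netstat -ltn")
print(stdout.read().decode())

client.close()

Wrapping that in a loop over a host list, or calling it from cron, gives the same scriptability benefit the article describes for plain ssh.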
<urn:uuid:3b0154ae-1f42-4d0b-8934-e9559e4b54d4>
CC-MAIN-2022-40
https://www.datamation.com/security/mastering-ssh-connecting-executing-remote-commands-and-using-authorized-keys/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00120.warc.gz
en
0.878073
2,967
2.796875
3
Useful Tips for Selecting PoE Cables The selection of proper PoE cable is of great significance to realize an effective and reliable network transmission. PoE technology allows a single PoE network cable to provide the required communication and electrical power to a variety of devices. This article will give you some guidance for your Ethernet cable selections in PoE deployment. Power Over Ethernet Standards Power over Ethernet (PoE) stands for a proven method of delivering DC power over the same twisted pair cabling used for LAN data transmission. Check What Is Power over Ethernet (PoE)? for further learning. The IEEE (Institute of Electrical and Electronics Engineers) standards for Power over Ethernet are 802.3af, 802.3at, and 802.3bt presented as follows: PoE technology with explosive growth rates has been widely adopted in various applications—PoE IP surveillance cameras, PoE-enabled Voice over IP (VoIP) phones, Wireless Access Points (WAPs), IP PoE based lighting, Point-of-Sale (PoS) etc. See how FS PoE cables function in a network scenario by connecting FS S3150-8T2FP PoE switch to powered devices (PDs). However, without the right choice of cabling and network design, PoE can't realize the maximum utilization or even some connectivity issues will arise. Cabling standards bodies are working to expand the potential of PoE while addressing safety and performance issues. Consequently, picking proper Power over Ethernet cable is crucial. PoE Cable Selection Considering Factors Choosing the right cable is the key to network quality and reliability. What should be taken into consideration when choosing PoE network cables? There are several factors that need to be considered when selecting the cable type used for PoE applications. Conductor resistance (DCR) in PoE applications results in heat generation in the cable. Typically, Cat6 and Cat7 have larger conductor sizes than Cat5e patch cables. Cables with a larger conductor size can reduce more conductor resistances. Generally speaking, the heat generated in the cable will be reduced with the same ratio of the conductor resistance reduction. Cat6 cables tend to have about 80% of the DCR of Cat5e, thus only about 80% of the heat generation. The larger the conductor size of the cable, the better. Cable construction is also a factor causing the temperature rise of a cable. Copper cable can be divided into UTP (unshielded twisted pair cable) and STP (shielded twisted pair cable) two types based on cable structure. Usually, cables with metallic or foil shields are proven to dissipate more heat than UTP cables. Higher heat dissipation leads to cooler cable. When using Cat6 F/UTP cable, more than 40% heat can be dissipated compared to Cat6 UTP. If allowed, picking Cat7 S/FTP cable with a foil shield around each pair can dissipate more heat than Cat6 and Cat6 F/UTP. Further Learning: Shielded vs Unshielded Cat6a: How to Choose? The previous two factors will affect the cable temperature to some degree. Cables with high-temperature ratings allow for a higher amount of power to be dissipated. Typical temperature ratings for cables are 60°C, 75°C and 90°C. If the temperature of a cable rises, the electrical performance will be degraded. And it's not good for the cable's physical performance and longevity. Normally Speaking, shielded cables are less likely to be affected by temperature than UTP cables. When selecting PoE network cables, make sure that you are comparing apples to apples. Copper clad aluminum vs. 
pure copper cables, the former use aluminum instead of copper wire. Some people may choose the copper clad aluminum cable (CCA cable) on account of the tight budget, which may lead to network issues from using inferior materials to transmit the signal. The CCA cables have much higher DC resistance than copper cables. If the resistance is not compensated, the voltage drop will be greater for any channel length. Longer lengths will exceed TIA's channel DCR requirements, limiting the voltage available to the device. Higher resistance causes radiant heat to build up faster, and this may cause damage to the device. 100% copper network cabling is a safer and reliable choice for PoE applications. The amount of power that the PoE device requires for operation can't be ignored when selecting PoE cables. The power requirement will dictate which IEEE standard to follow and what the minimum category cabling to be used. Although each standard regulates a minimum category of cabling, other factors are important to be considered including voltage drop and heat dissipation. Voltage drop determines how much of the supplied power reaches the receiving device. The energy that is lost over the length of the cable transforms to heat and is referred to as heat dissipation. Excessive heat build-up can cause an increase in attenuation as well as premature aging of the cabling jacket. Data Transmission Requirements Another factor to consider is the data transmission requirement (e.g., 1000BASE-T, 10GBASE-T) of the device(s) being utilized. Devices such as megapixel IP cameras may require higher grade PoE cables in order to deliver the video signal as well as the required power. The last factor is the cable installation configuration which has a large effect on the heat dissipation ability. Heat will be kept within the cable as high thermal resistance and high conductor temperature occur with large cable bundles or other installation factors. The larger the cable bundle size, the higher the temperature, no matter what cable category and construction structure. Here provides several specific installing tips for PoE cabling: Get well-prepared before deploying, and never just wing it. Check your network devices to verify that they are PoE compliant. Make use of different media in the whole cabling design. Do not run cable near devices that generate electrostatics. The PoE cable installation is not a one-and-done, please prepare for the future upgrades. Think about your budget for the whole cabling installation, and find a cost-effective solution from a reliable supplier. FS: A Trustworthy PoE Cable Supplier After considering the abovementioned factors, finally there comes the selection of the network cable provider. High-quality and high-reliability PoE cables are what a qualified supplier should offer. FS encompasses a wide range of high-quality Cat5e, Cat6, Cat6a, and Cat7 PoE cables with shielded or unshielded type options. All of the Ethernet patch cables have passed strict Fluke testings including the Fluke patch cord test, Fluke channel test, Fluke permanent link test to guarantee high performance. FS Assured Program for Ethernet Cables offers more detailed info on FS's PoE Ethernet cables.
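A quick worked example ties the resistance, voltage-drop, and heat factors above together. The numbers are rough assumptions (roughly 9.38 ohms per 100 m per conductor, a 52 V source, and a 25.5 W load delivered over two pairs), not IEEE limits or vendor specifications.

# Rough voltage-drop / heat estimate for a PoE run (illustrative numbers only).
# Assumes 802.3at-style delivery over two pairs, i.e. two conductors in
# parallel for each leg of the loop.

dcr_per_m = 0.0938      # ohms per metre per conductor (~24 AWG, assumed)
length_m = 90           # channel length in metres
v_source = 52.0         # PSE output voltage (assumed)
power_w = 25.5          # power drawn by the powered device (802.3at class 4)

# Loop resistance: out and back, with two conductors sharing each leg.
loop_resistance = 2 * (dcr_per_m * length_m) / 2

current_a = power_w / v_source                 # approximate line current
v_drop = current_a * loop_resistance           # volts lost in the cable
heat_w = current_a ** 2 * loop_resistance      # power dissipated as heat

print(f"Loop resistance: {loop_resistance:.2f} ohm")
print(f"Voltage drop:    {v_drop:.2f} V")
print(f"Heat in cable:   {heat_w:.2f} W")

Swapping in the lower DCR of a Cat6 or Cat6a conductor shows directly how a larger conductor reduces both the voltage drop and the heat left in the bundle.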
<urn:uuid:cbbabd38-85ae-4fd2-8cd6-ce3fcbec915a>
CC-MAIN-2022-40
https://community.fs.com/blog/how-to-choose-cables-for-power-over-ethernet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00120.warc.gz
en
0.900236
1,454
2.53125
3
What Are Internal Cyber Security Threats? Shockingly, around 22% of cyber security incidents are caused by internal threats. However, companies too often neglect to consider the risk of internal threats, even though they can result in critical data breaches. Internal cyber security threats are threats posed by individuals that originate within an organisation itself. They can be current employees, former employees, external contractors or vendors. Essentially anyone who has access to company devices or data. This form of data breach involves an internal attacker accessing sensitive company information with malicious intent. Attackers can include both current and former employees. There are many forms of data misuse by individuals that can pose a threat to organisations. They often rely on a user having access to networks and assets to disclose, modify and delete sensitive information. Some of this information could include: - Organisations security practices - Login credentials - Customer & employee data - Financial records Due to the nature of internal cyber security threats, traditional preventative security measures are often rendered ineffective. Why Do People Carry Out Internal Security Attacks? Individuals that pose a threat to an organisation may have very different goals from external cybercriminals. The main motivations of internal threats include: Fraud: The theft, modification or destruction of company data with the goal of deception. Espionage: Stealing information for another organisation (generally a competitor). Sabotage: The use of legitimate access to a company’s network/assets to damage or destroy the company’s functionality. Intellectual Property Theft: The theft of a company’s intellectual property, with the intention of either selling or utilising the property. Revenge: Employees who have been fired or otherwise made unemployed by a company may seek to damage the company’s reputation by accessing sensitive information. It’s important to note that not all internal threats are carried out by malicious parties. Many times internal threats arise from employees who unintentionally or carelessly expose sensitive company information. This is why employee training and education are critical in combating the risk of data breaches. There are numerous ways in which employees can inadvertently contribute to data breaches: Phishing or social engineering victims: Phishing involves an attacker sending fake communications to an employee, usually posing as a legitimate company. The user is then persuaded to supply credentials or details, through a fake login page or directly. By releasing sensitive credentials or data, users can inadvertently provide 3rd party criminals access to private systems. You can learn about the most common types of phishing attacks here. Using unauthorised devices: The use of unauthorised devices can pose a huge risk for security teams, especially given the difficulty in monitoring them. USB sticks are an example of a seemingly harmless device that employees might not consider to be a breach of security. However, an infected USB drive has the ability to provide remote access to 3rd party hackers who can then attempt to access sensitive company data. Using unauthorised software: As with unauthorised devices, employees may choose to use 3rd party software for legitimate business purposes. The threat arises from illegitimate or pirated software that can include malware and backdoors allowing access to attackers. 
Loss of company devices: The loss of unsecured/unencrypted company hardware is an extremely common cause of data leaks. Heathrow Aiport was fined £120,000 for “Serious” data protection failings when an employee lost an unencrypted USB storage device containing highly sensitive information. Improper Access Control: Managing access control is vital in combatting insider threats. Whether it’s managing internal users’ access, third-party access or revoking ex-employees’ access, managing access is critical. The process of managing access control can easily be overlooked but can cause huge issues if incorrectly implemented. How Can You Prevent Internal Threats? Generally, internal threats can be avoided by thorough company-wide policies, procedures and technologies that help prevent privilege misuse and mitigate the damage it can cause. The core policies that a company should focus on the reduce the risk of internal threats include: Regular Enterprise-Wide Risk Assessments: Knowing what your critical assets are, their vulnerabilities and the potential threats posed can give a great insight into how to enhance your IT security infrastructure. Combine this with the prioritisation of risks to continuously develop security. Documentation of Policies and Enforcement: Generally policies and regulations should be accurately documented to ensure efficient security software deployment. Policies should be created to personalise what access certain employees may have to avoid the risk of all employees accessing confidential & sensitive data. Access can often be assigned on a departmental basis. The most effective policies to focus on include General Data Protection Regulations, password management, and third-party access policies. Physical Security: A professional security team guided by your instructions can help greatly reduce the risk of internal threats. There are many layers to physical security which can help prevent malicious people from entering areas within an organisation that they should not have access to: - Mantraps: An individual wanting to access a specified area must go through an initial door into a holding room. Within this room, they are inspected from a window or camera before the second door is unlocked. - Turnstiles/Gates: This efficient control is very common in office buildings and requires employees to tap their ID pass on a reader, which will unlock the gate and allow them to pass through. - Electronic Doors: These secure doors should be used throughout the facility, to limit the areas that a person can access, based on their role. Only allowing certain people in specific areas not only reduces the risk of malicious activity but can also help find the person accountable as the list of potential suspects is much shorter. Monitoring controls can be implemented to provide real-time monitoring and give security personnel the ability to detect and respond to intruders or internal threats: - CCTV: This enables monitoring from multiple interconnected cameras across your site. This gives security teams expanded visibility of on-site activity. - Security Guards: While it’s of the utmost importance to have stringent policies in place, there also needs to be a team that is trained in their use and maintenance so they can fully utilise the security controls and respond to incidents. - Intrusion Detection Systems: These systems have several different triggers that can generate alerts or set off alarms, including thermal detection, sound detection, and movement detection. 
An example of this would be a sound detection system that can recognise the sound of glass smashing (such as an intruder breaking a window to gain access to the building) and trigger an alarm. Security controls that act as deterrents include warning signs and barbed wire. Their purpose is to deter potential attackers and make them less likely to attempt to gain entry: - Warning Signs: Signs such as “DO NOT ENTER” and “You Are Trespassing” can be enough to make people turn around, as they have been informed that any further activity may be illegal. - Fences: Chain-link metal fences are very common practice, with barbed or razor wire on top. This creates a barrier that can’t be climbed over and requires more effort for attackers to bypass, slowing them down, and giving more time for them to be detected. - Security Lighting: Lighting is used to prevent low visibility areas caused by darkness, which could allow an intruder to bypass security controls such as CCTV and Security Guards. Lighting the areas in conjunction with cameras is a great deterrent and monitoring solution. Monitor and Control Remote Access from all Endpoints: Deploy and properly configure wireless intrusion detection and prevention systems, as well as a mobile data interception system. Regularly review whether employees still require remote access and/or a mobile device. Ensure that all remote access is terminated when an employee leaves the organisation. Harden Network Security: Configuration of a firewall specifically designed for your organisation can help mitigate the risks of internal threats. This can include blacklisting all hosts and ports and then whitelisting only the ones that are required improving monitoring capabilities and reducing the movement of an internal threat. Configuring and implementing a DMZ (demilitarised zone) will ensure no critical systems interface directly with the internet. Segmenting a network is another effective method as this helps to prevent users from freely traversing a network. Recycle Hardware and Documentation Properly: Before discarding or recycling a disk drive, completely erase all of its data to ensure it is no longer recoverable – insiders may attempt to recover deleted data if not erased in the correct manner. If you are wanting to dispose of an old hard drive that could have potentially contained sensitive information destroying it physically would be the best approach to take. Threat Awareness & Security Training for all Employees: Train all new employees and contractors in security awareness before giving them access to any computer system. This should be set up as a standard procedure. Train and test your employees against social engineering attacks and sensitive data left out in the open. A good example would involve performing your own phishing attacks on their mailboxes or conducting social engineering attacks. Encourage employees to report security issues and train them on how they can help reduce internal threats. Consider offering incentives that reward those who follow security best practices. Unfortunately, it’s difficult to entirely eliminate the risk of internal threats completely however implementing an internal threat detection solution is the strongest defence. Develop Employee Termination Process: Develop a strong knowledge base or automated procedure for the termination of employees’ access to organisation systems. How Can Aspire Help with Internal Security Attacks? 
Here at Aspires Security Operations Centre, we utilise a managed EDR (Endpoint Detection & Response) system. This system allows us to continually monitor endpoints and servers on a 24/7 basis 365 days a year and immediately act upon known and recognised threats utilising machine learning and custom IOC (Indicator of Compromise) rules set up by our analysts and engineers. This gives you the upper hand against those internal threats allowing us to analyse and act before any damage is caused. The platform is constantly being developed in such a fast-moving industry, our threat hunters provide us with the intel so we can be proactive to reduce the amount of reactivity required. We utilise Crowdstrike which is an industry leader when it comes to EDR platforms.
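As a toy illustration of the monitoring idea, the sketch below flags two of the behaviours an internal-threat or EDR platform watches for: activity outside working hours and unusually large file access in a short window. The thresholds, event format, and sample events are invented; real platforms such as the EDR tooling described above learn per-user baselines rather than applying fixed rules.

from datetime import datetime

# Illustrative thresholds - real EDR/UEBA tooling learns baselines per user.
WORK_HOURS = range(7, 20)          # 07:00-19:59 counts as normal
MAX_FILES_PER_HOUR = 200           # bulk-download threshold

def flag_suspicious(events):
    """events: list of dicts like
    {"user": "alice", "time": "2022-06-01T02:14:00", "files_accessed": 5}"""
    alerts = []
    for e in events:
        ts = datetime.fromisoformat(e["time"])
        if ts.hour not in WORK_HOURS:
            alerts.append(f'{e["user"]}: activity at {ts} outside working hours')
        if e["files_accessed"] > MAX_FILES_PER_HOUR:
            alerts.append(f'{e["user"]}: accessed {e["files_accessed"]} files in one hour')
    return alerts

sample = [
    {"user": "alice", "time": "2022-06-01T02:14:00", "files_accessed": 3},
    {"user": "bob",   "time": "2022-06-01T11:05:00", "files_accessed": 450},
]
print(flag_suspicious(sample))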
<urn:uuid:29c9f3c8-f264-4a54-9f3c-5600c2b69d96>
CC-MAIN-2022-40
https://www.aspirets.com/what-are-internal-threats-cyber-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00321.warc.gz
en
0.930809
2,122
2.828125
3
The voice of the author Spending some time on Wikipedia rereading the definition of cognitive computing created by the Cognitive Computing Consortium, I found on reflection a number of keys to the definition’s uniqueness buried with the text. The key paragraph from my perspective is:“Cognitive systems differ from current computing applications in that they move beyond tabulating and calculating based on preconfigured rules and programs. Although they are capable of basic computing, they can also infer and even reason based on broad objectives.” I specifically highlight the word “preconfigured.” That to me means the conscious application of a person or team’s perspective/understanding to the process. Most of the time that is very helpful. The perspective the team brings to the processes they are working on helps organize information or tag it so others can find or leverage it more effectively. Yet in years of creating information solutions based on a search, databases and other engines, I’ve learned that the ability to find the important content and data is limited due to the incomplete understanding on the part of future information seekers of “the voice of the author” (VOTA). What is VOTA? It is the ability to understand the terms and relationships used by the author (the creator of a document, a database, a query, etc.) in relationship to all other information being analyzed. That is a major challenge we see in search-based systems today. Consider the context of any given search. We have the predefined rules—taxonomies, tags, dictionaries, synonym lists, etc. We have a more or less organized collection of documents created over years by many different authors, and we have the query terms used by the searcher. Within each of those core elements, we always find different usage and even definition of key terms—all this variation coming about because of the different relationships to the subject matter assumed by the authors. Let’s look at a real-world example of the interaction of VOTA with that kind of discovery and retrieval environment. Consider, for example, a new chemical engineer who is looking for any reports or lab works done in the past on a chemical compound she needs to work on today. The system’s predefined rules and her terms miss the best documents available because the “authors” of those key documents used terms conceptually the same but not “known” to the system’s experts or the new chemist. That is a common shortcoming of “traditional” search tools, but here is a situation where adding machine learning algorithms to identify the latent relationships of terms and how they are used can now make the system adaptive and enable the chemist to leverage the best knowledge available. E-discovery software has been on that path for years, ahead of most other segments, but it still has a long way to go before showing the kinds of inference and reasoning capabilities referred to in the definition of cognitive systems. Lawyers give the system examples of documents responsive to the case (e.g., project X) and also examples of documents non-responsive to the case (e.g., fantasy football emails). The machine learning engines find all similar documents (and emails/IMs, which today form the bulk of the documents) and perform other optimizations so the legal team can focus their time on what could be the smoking gun in the case. When the system can recommend what document potentially contains the smoking gun for a case, we cross the line to a true cognitive computing system. 
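The e-discovery workflow sketched above, seeding the system with responsive and non-responsive examples and letting it rank the rest, can be illustrated with a very small text classifier. The documents and labels below are toy stand-ins and scikit-learn is assumed to be available; real review platforms use much richer models and active learning.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled seed set: 1 = responsive to the case, 0 = non-responsive.
seed_docs = [
    "project x contract draft attached for review",
    "project x pricing terms and delivery schedule",
    "fantasy football picks for this week",
    "lunch order for the team offsite",
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X, labels)

# Score the remaining, unreviewed documents so reviewers look at the likely
# responsive ones first.
unreviewed = [
    "revised project x contract with updated payment terms",
    "who is in for fantasy football playoffs",
]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")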
Another example is the frustrating job of resume search. Most systems today are basically matching on keywords and using parsed fields (name, address, phone, etc.) to act as filters. Yet, what if the whole job description or resume could be used as a query and the VOTA was fully understood so that great candidates or jobs are the top results, not virtually random entries on a list? A new generation of job boards and application tracking systems (ATS) are leveraging machine learning add-ons to do just that. For example, your best developer just left. Yes, there is a job description but it would be easier to just hand the recruiter the resume of the person who left and say find me more people like this person. And the recruiter feeds the whole resume to the system to find other people with the same skills and experiences. Update building blocks We forget many times that in most of our existing information-based solutions, core context, relationships and language understandings are preconfigured. And those preconfigured “insights” have the biases, perspectives and current understanding of relationships of the experts that created them at the time. (Sad to say, in many companies those building blocks are not updated or maintained, and the users do not understand why the solution quality continues to slide.) That is what is core to human learning. When giving talks on conceptual understanding, I use the following example: What is the first word that comes to mind when you hear the word “coffee”? Turn to the person next to you and tell them. In most cases no one hears the same word but understands it right away. In a handful of cases, a simple explanation has the light bulb go on and the person’s conceptual understanding of “coffee” has been expanded. (I am still surprised how few people know what a French Press is.) Having a key part of the cognitive computing solution leverage machine learning approaches is critical. Only by leaving behind the rules-based methods common today can cognitive solutions identify relationships in the terms used and recognize different words being used to convey the same concept in different content—understanding the VOTA, the ultimate translator. To be truly adaptive, interactive, iterative and contextual, key components of an overall solution must identify, leverage and expand “its” understanding of concepts so the solutions can really be a “partner” to humans.
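One rough way to capture the "voice of the author" idea, that different words can carry the same concept, is to compare documents in a latent concept space rather than on raw keywords. The sketch below uses TF-IDF plus truncated SVD (latent semantic analysis) from scikit-learn on a toy resume-matching corpus; it only approximates the machine-learning approaches the article alludes to.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: the query says "java developer", one resume says "jvm engineer";
# the surface keywords differ, but the concepts overlap.
docs = [
    "senior java developer building spring microservices",
    "jvm engineer, kotlin and spring boot microservices experience",
    "pastry chef specialising in wedding cakes",
]
query = "java developer with microservices background"

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs + [query])

# Project terms and documents into a small latent concept space.
svd = TruncatedSVD(n_components=2, random_state=0)
latent = svd.fit_transform(X)

query_vec, doc_vecs = latent[-1:], latent[:-1]
for doc, score in zip(docs, cosine_similarity(query_vec, doc_vecs)[0]):
    print(f"{score:.2f}  {doc}")

The point of the projection is that the "jvm engineer" resume can rank near the "java developer" query through shared latent concepts, something a pure keyword match on "java" would miss.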
<urn:uuid:94f50321-e112-40b3-9a6c-021c4a05ddb5>
CC-MAIN-2022-40
https://www.kmworld.com/Articles/ReadArticle.aspx?ArticleID=113671
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00321.warc.gz
en
0.934412
1,222
2.640625
3
The device people use to communicate online – a smartphone, desktop, or tablet – can affect the extent to which they are willing to overshare intimate or personal information about themselves, according to University of Pennsylvania researchers. Do smartphones alter what people are willing to disclose about themselves? A study suggests that they might. The research indicates that people are more willing to reveal personal information about themselves online using their smartphones compared to desktop computers. For example, Tweets and reviews composed on smartphones are more likely to be written from the perspective of the first person, to disclose negative emotions, and to discuss the writer’s private family and personal friends. Likewise, when consumers receive an online ad that requests personal information (such as phone number and income), they are more likely to provide it when the request is received on their smartphone compared to their desktop or laptop computer. Why do smartphones have this effect on behavior? Co-author Shiri Melumad explains that “Writing on one’s smartphone often lowers the barriers to revealing certain types of sensitive information for two reasons; one stemming from the unique form characteristics of phones and the second from the emotional associations that consumers tend to hold with their device.” First, one of the most distinguishing features of phones is the small size; something that makes viewing and creating content generally more difficult compared with desktop computers. Because of this difficulty, when writing or responding on a smartphone, a person tends to narrowly focus on completing the task and become less cognizant of external factors that would normally inhibit self-disclosure, such as concerns about what others would do with the information. Smartphone users know this effect well – when using their phones in public places, they often fixate so intently on its content that they become oblivious to what is going on around them. The second reason people tend to be more self-disclosing on their phones lies in the feelings of comfort and familiarity people associate with their phones. Melumad adds, “Because our smartphones are with us all of the time and perform so many vital functions in our lives, they often serve as ‘adult pacifiers’ that bring feelings of comfort to their owners.” The downstream effect of those feelings shows itself when people are more willing to disclose feelings to a close friend compared to a stranger or open up to a therapist in a comfortable rather than uncomfortable setting. As Co-author Robert Meyer says, “Similarly, when writing on our phones, we tend to feel that we are in a comfortable ‘safe zone.’ As a consequence, we are more willing to open up about ourselves.” The analysis: Smartphone pushing you to overshare? The data to support these ideas is far-ranging and includes analyses of thousands of social media posts and online reviews, responses to web ads, and controlled laboratory studies. For example, initial evidence comes from analyses of the depth of self-disclosure revealed in 369,161 Tweets and 10,185 restaurant reviews posted on TripAdvisor, with some posted on PCs and some on smartphones. Using both automated natural-language processing tools and human judgements of self-disclosure, the researchers find robust evidence that smartphone-generated content is indeed more self-disclosing. 
Perhaps even more compelling is evidence from an analysis of 19,962 “call to action” web ads, where consumers are asked to provide private information. Interacting with firms Consistent with the tendency for smartphones to facilitate greater self-disclosure, compliance was systematically higher for ads targeted at smartphones versus PCs. The findings have clear and significant implications for firms and consumers. One is that if a firm wishes to gain a deeper understanding of the real preferences and needs of consumers, it may obtain better insights by tracking what they say and do on their smartphones than on their desktops. Likewise, because more self-disclosing content is often perceived to be more honest, firms might encourage consumers to post reviews from their personal devices. But therein lies a potential caution for consumers–these findings suggest that the device people use to communicate can affect what they communicate. This should be kept in mind when thinking about the device one is using when interacting with firms and others.
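The study combined automated natural-language processing with human judgments of self-disclosure. As a very rough illustration of what the automated side can look like, the sketch below counts first-person pronouns and negative-emotion words as crude self-disclosure signals. The word lists and sample sentences are tiny invented stand-ins for the much larger dictionaries (and human coding) the researchers actually relied on.

import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIVE_EMOTION = {"sad", "angry", "worried", "afraid", "upset", "hate"}  # tiny stand-in list

def disclosure_signals(text):
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    return {
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / total,
        "negative_emotion_rate": sum(w in NEGATIVE_EMOTION for w in words) / total,
    }

phone_style = "I am so worried about my dad, I hate waiting for his test results"
desktop_style = "The new branch opens Monday with extended weekend hours"

print("phone-style:  ", disclosure_signals(phone_style))
print("desktop-style:", disclosure_signals(desktop_style))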
<urn:uuid:0c80a268-01d6-4761-b2ab-7f04a075b502>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2020/05/06/smartphone-overshare/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00321.warc.gz
en
0.951949
868
3.125
3
In a new milestone, Dr. Euan Ashley and his colleagues at Stanford School of Medicine used AI to sequence and analyze the genome of a hospitalized patient in 5 hours and 2 minutes. This is the fastest DNA sequencing technique ever developed for genetic diagnostics. These researchers hold the record for the fastest genetic diagnosis in a critical care setting – 7 hours 18 minutes! For comparison, traditional genetic sequencing and testing takes weeks. The record was certified by the National Institute of Standards and Technology’s Genome in a Bottle group and is documented by Guinness World Records. Genome sequencing allows scientists to see a patient’s complete DNA makeup including information about inherited diseases. Accelerating genetic diagnosis is important because it can help doctors diagnose rare genetic diseases. Once doctors know the specific genetic mutation, they can tailor treatments in a very precise way. This new method can guide clinical management, improve prognosis, and reduce costs for patients. The ultrarapid technique accelerated every step of the genome sequencing workflow and resulted in rapid and reliable results. One of the patients in the study was a 3-month-old full-term infant who was having epileptic seizures. Magnetic resonance imaging of the brain revealed no abnormalities. The researchers used the new method and in 8 hours and 25 minutes they identified a likely pathogenic heterozygous variant in CSNK2B. This variant and gene are known to cause a neurodevelopmental disorder with early-onset epilepsy. The researchers made a definitive diagnosis of a CSNK2B-related disorder called Poirier–Bienvenu neurodevelopmental syndrome. Since they diagnosed the disorder in 8 hours, the baby did not have to undergo additional diagnostic testing and the doctors were able to begin treatment. The study entitled Ultrarapid Nanopore Genome Sequencing in a Critical Care Setting was published in the New England Journal of Medicine. The research was led by Dr. Euan Ashley, professor of medicine, genetics and biomedical data science at Stanford School of Medicine. Dr. Ashley and his team collaborated with NVIDIA, Oxford Nanopore Technologies, Google, Baylor College of Medicine, and the University of California. The authors described the method they used to rapidly sequence and analyze the genomes of twelve patients. Five of the patients in the study received a diagnosis. To achieve super-fast sequencing speeds, the researchers needed new hardware. The researchers used nanopore sequencing on Oxford Nanopore’s PromethION Flow Cells to generate more than 100 gigabases of data per hour, and NVIDIA GPUs on Google Cloud to speed up the base calling and variant calling processes. Genomic data overwhelmed the lab’s computational systems and they weren’t able to process the data fast enough. They had to completely rethink their data pipelines and storage systems. Graduate student Sneha Goenka found a way to funnel the data straight to a cloud-based storage system where computational power could be amplified enough to sift through the data in real time, and the Stanford Research Computing Center advised the team on the network and bandwidth requirements needed to complete the project. AI algorithms scanned the incoming genetic code for errors that might cause disease and compared the patients’ gene variants against publicly documented variants known to cause disease.
Oxford Nanopore Technologies contributed to the cost of sequencing and reagents, Google contributed to the cost of cloud computing, and NVIDIA contributed to the cost of Parabricks. This is an excellent example of the application of high performance computing to healthcare. The estimated costs of this approach including DNA extraction, library preparation, sequencing, and computation range from $4,971 – $7,318. Since the daily cost of critical care is more than $10,000, rapid genome sequencing diagnostics can significantly reduce costs. Because of the significant cost savings, rapid genome sequencing diagnostics has been reimbursed by several insurance companies. The study was conducted at two hospitals in Stanford California between December 2020 and May 2021. In less than six months the team enrolled and sequenced the genomes of 12 patients. The team’s diagnostic rate was 42% which is 12% higher than the average rate for diagnosing mystery diseases. - Researchers enrolled 12 patients who were generally representative of persons living in the United States with respect to race, ethnic group, and sex. - Researchers obtained an initial genetic diagnosis in 5 of the patients. The shortest time from arrival of the blood sample in the laboratory to the initial diagnosis was 7 hours 18 minutes. - After establishing a diagnosis in Patient 1, they updated their bioinformatics framework to permit the transfer of terabytes of raw signal data to Cloud storage in real time and distributed the data across multiple Cloud computing machines to achieve near real-time base calling and alignment. This step reduced the postsequencing run time by 93%, from 7 hours 21 minutes to 34 minutes. - Flow cells were washed and reused until exhaustion to reduce the sequencing cost per sample. - Libraries were bar-coded in Patients 1 through 7 to prevent carryover from one sample to the next. - After processing the sample obtained from Patient 7, they benchmarked and adopted a bar-code–free method to rapidly generate genome sequences. Removing the bar-coding process accelerated sample preparation by 37 minutes, to an average of 2.5 hours, and enabled them to load a greater amount of patients’ DNA into each flow cell (333 ng vs. 155 ng) and increase pore occupancy (to 82% from 64%) - Their sequencing workflow generated 173 to 236 Gb of data per genome using 48 flow cells, with an alignment identity of 94% and 46 to 64× autosomal coverage. Half the sequencing throughput was in reads that were 25 kb or longer. - Small variants and structural variants called after the reads were aligned to the GRCh37 human reference genome, which generated a median of 4,490,490 single-nucleotide variants and small insertions and deletions. - Custom filtration and prioritization of variants with an ultrarapid scoring system substantially decreased the number of candidate variants for manual review to a median of 29 (range 16 to 53) for small variants and 22 (range 11 to 37) for structural variants - Each initial diagnosis was immediately reviewed by study and bedside physicians, and a consensus was reached as to whether the proposed variant represented the primary cause of the patient’s presentation. - Diagnostic variants were identified in 5 of the 12 patients, who ranged in age from 3 months to 57 years. 
- The findings were immediately confirmed by a laboratory certified by the Clinical Laboratory Improvement Amendments process and informed clinical management including sympathectomy, heart transplantation, screening, and changes in medication, for each of the 5 patients or their family members. Dr. Euan Ashley Dr. Ashley was born in Scotland and graduated with 1st class Honors in Physiology and Medicine from the University of Glasgow. He completed medical residency and a PhD at the University of Oxford before moving to Stanford University where he trained in cardiology and advanced heart failure, joining the faculty in 2006. His group is focused on the science of precision medicine. In 2010, Dr. Ashley led the team that carried out the first clinical interpretation of a human genome. The article became one of the most cited in clinical medicine that year and was later featured in the Genome Exhibition at the Smithsonian in DC. Over the following 3 years, the team extended the approach to the first whole genome molecular autopsy, to a family of four, and to a case series of patients in primary care. They now routinely apply genome sequencing to the diagnosis of patients at Stanford hospital where Dr Ashley directs the Clinical Genome Program and the Center for Inherited Cardiovascular Disease. In 2021, his first book The Genome Odyssey – Medical Mysteries and the Incredible Quest to Solve Them was released. Dr Ashley has a passion for rare genetic disease and was the first co-chair of the steering committee of the Undiagnosed Diseases Network. He was a recipient of the National Innovation Award from the American Heart Association and the NIH Director’s New Innovator Award. He is part of the winning team of the $75m One Brave Idea competition and co-founder of three companies: Personalis Inc ($PSNL), Deepcell Inc, and SVExa Inc. He was recognized by the Obama White House for his contributions to Personalized Medicine and in 2018 was awarded the American Heart Association Medal of Honor for Genomic and Precision Medicine. He was appointed Stanford Associate Dean in 2019. Margaretta Colangelo is Co-founder and CEO of Jthereum an enterprise Blockchain company and President of U1 Technologies an enterprise software company. She has published over 300 articles on AI and DeepTech. Margaretta serves on the advisory board of the AI Precision Health Institute at the University of Hawaii Cancer Center. She is based in San Francisco.
<urn:uuid:1260345c-c6a5-4efc-8080-763b1fd8398d>
CC-MAIN-2022-40
https://www.aitimejournal.com/@margaretta.colangelo/stanford-researchers-use-ai-to-sequence-and-analyze-dna-in-5-hours
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00521.warc.gz
en
0.944483
1,823
2.921875
3
When people hear the words “identity theft,” what typically comes to mind are fraudulent credit card charges or illicit bank withdrawals. But the reality of identity theft is more complex. In fact, financial identity theft is only one type of identity crime, and others can be more difficult to detect. Learn about six different types of identity theft in order to better protect yourself and your loved ones. Financial Identity Theft: New account fraud jumped by 88% in 2019 While microchips in credit cards have reportedly helped curb in-store fraud, experts say that mobile and online transactions are now the low-hanging fruit. As a result, card-not-present fraud, a scam in which the credit card is not physically used such as for online or phone transactions, has ballooned in recent years. New account fraud, in which a thief opens a brand-new account in the victim’s name, jumped by 88 percent in 2019. The most common types of new accounts that scammers open are online accounts such as eBay or Amazon, checking or savings accounts, and credit card accounts. Tax Identity Theft: Thieves Aim to File a Fake Tax Return Before the Victim Does According to the IRS, tax-related identity theft is one of the most common tax scams. Tax identity theft occurs when a criminal uses the victim’s Social Security number to file a fraudulent tax return. Victims may not know the crime has happened until the IRS rejects their tax return as a duplicate filing. In recent years, the IRS has advised taxpayers to file as early as possible—and with good reason. The IRS accepts only one tax return per Social Security number, so if a taxpayer can file their authentic tax return before a potential criminal can file their fraudulent one, they may be able to beat an identity thief to the punch. On the other hand, if a criminal succeeds in filing their fraudulent return first, it could take months for the victim to resolve the issues. Medical Identity Theft: Fictitious Medical Information Could Plague the Victim for Years According to the World Privacy Forum, medical identity theft can cause great harm to its victims, as it often results in falsified information being entered in the victim’s medical records that can plague their medical and financial lives for years. Medical identity theft is when a criminal submits fraudulent claims to the victim’s health insurance or Medicare or uses the victim’s information to get treatment, prescription drugs, medical devices, or other benefits. It can lead to tens of thousands of dollars in damages. If a criminal gets treatment in the victim’s name, erroneous medical records could cause treatment delays, incorrect prescriptions, or misdiagnoses. It could even affect the victim’s ability to get medical care and insurance benefits in the future. Despite this risk, experts say that medical identity theft is the least studied and most poorly documented of identity theft crimes. Employment Identity Theft: Thieves Seeking Employment—Or Unemployment Benefits In employment identity theft cases, scammers may file fraudulent unemployment claims, or alternatively, they may use a falsified or stolen ID to get a job using the victim’s identity. Victims may learn of employment fraud when their employer asks why they have applied for jobless benefits or receive a W-2 or 1099 from an unfamiliar employer. In many cases, fraudulent unemployment payments are deposited into bank accounts controlled by the scammers. In other cases, however, the payments are sent to the victim’s legitimate bank account instead.
In that case, the criminals may contact the victim by phone, email, or text message and impersonate an unemployment official in an attempt to get them to transfer the funds. According to the Internet Theft Resource Center (ITRC), anyone can be a victim of unemployment benefits fraud, and individuals both with and without a current position have been impacted. Child Identity Theft: Children Can Be the Perfect Mark for Identity Thieves Many parents assume that their children are safe from identity theft because of their young age and lack of credit history. However, for identity thieves, children can be the perfect mark. While thieves may target adults for the money in their accounts, a child represents a clean slate for opening new lines of credit. The identity theft of a child can go undetected for years—or even decades. And the consequences can be devastating. When the child becomes a young adult and seeks independence, they may have problems with banks, landlords, utility companies, and potential employers due to their negative credit history. In many cases, identity theft of a child takes place in the child’s own home—or close to it. It’s estimated that 60 percent of child identity fraud victims personally know the perpetrator. Criminal Identity Theft: An Identity Thief’s Crimes Can Become the Victim’s Though rare, criminal identity theft can occur when someone cited or arrested for a crime uses the victim’s name and identifying information, resulting in a criminal record in the victim’s name. Criminals may steal a victim’s identity to commit a crime, enter a country, get special permits, hide their own identity, or commit acts of terrorism. An identity thief, using the victim’s name or personal information, may sign a citation or be required to appear in court. When neither thief nor victim appear in court, the judge may issue an arrest warrant, or a criminal record may be created in the victim’s name. The victim is often unaware that they have a criminal record until they are arrested because of an outstanding warrant, are denied employment, or fired from their current job after a criminal background check. It can be difficult to resolve criminal identity theft because the victim often appears to be the criminal. Better Protect Yourself and Your Loved Ones from Identity Theft Fortunately, there are steps you can take to better protect yourself and your family from identity theft. Download the white paper Your Guide to Identity Theft to learn more about the various types of identity theft, common warning signs, how identity theft typically happens, and steps people can take to better protect themselves. How to Report an Incident According to the FTC, if you or a loved one believe you have been the victim of identity theft, report it immediately at IdentityTheft.gov, the federal government’s resource for identity theft victims. Below are additional steps to report other specific types of identity theft. - For possible tax identity theft, the IRS recommends responding immediately to any IRS notice. If an e-filed return was rejected because of a duplicate filing, or if the IRS instructs you to do so, complete IRS Form 14039, Identity Theft Affidavit (PDF). - For possible medical identity theft, the US Department of Health and Human Services recommends contacting their fraud hotline and Medicare.gov’s web page on reporting fraud. - For possible employment identity theft, report it to your employer and the state unemployment benefits agency. 
- For a possible confidence or romance scam, report it to the Internet Crime Complaint Center and local FBI field office.
<urn:uuid:9888926e-1af2-4201-b34b-5d42bef593fd>
CC-MAIN-2022-40
https://www.idwatchdog.com/identity-theft-types
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00521.warc.gz
en
0.923834
1,441
2.546875
3
What is the CVSS scoring system? The Common Vulnerability Scoring System (aka CVSS) is an open industry standard for assessing the severity of computer system security vulnerabilities. The CVSS provides a numerical (0-10) representation of the severity of an information security vulnerability. CVSS scores are commonly used by information security (InfoSec) teams as part of a vulnerability management program to provide a point of comparison between vulnerabilities and to prioritize remediation of vulnerabilities. What is the difference between CVSSv2 and CVSSv3? Authors of CVSSv3 worked to introduce scoring changes that more accurately reflected the reality of vulnerabilities encountered in the wild. The three major metric groups – Base, Temporal, and Environmental – each remained the same, but with changes within both the Base and the Environmental groups. In the Base group, several changes were made: - Confidentiality, Integrity, and Availability metrics were each changed to have scoring parameters of None, Low, or High. - The Attack Vector metric added the Physical (P) value, which indicates a vulnerability where the adversary must have physical access to a system in order to exploit the vulnerability. - A new metric, User Interaction (UI), was added. This metric indicates whether or not the cooperation of a legitimate user is needed to conduct an exploit. - Another new metric, Privileges Required (PR), was added to indicate whether administrative or other escalated privileges on the target machine must be achieved in order to successfully exploit the system. In the Environmental group, the biggest change was that the environmental metrics in v2 were completely replaced with what’s known as a Modified Base Score. Essentially, each of the Base metrics may be modified by the organization to reflect differences between their situation and environment vs others. Holm Security severity levels: - 0: Info - 0.1–2.0: Low - 2.1–5.0: Medium - 5.1–8.0: High - 8.1–10: Critical What does Holm Security support? Holm Security supports both CVSSv2 and CVSSv3. Why are some CVEs missing the CVSS v3 score? - CVSS v3.0 was first released in June 2015, which means all previously disclosed vulnerabilities only have a CVSS v2 score. - In June 2019, CVSS v3.1 was released, which means CVEs disclosed between 2015 and 2019 can only have a v3.0 score. - All CVEs after June 2019 have v3.1 scores. Insight into main changes to CVSS 3.1 compared to CVSS 3.0 Version 3.1 focuses on clarifying and improving the existing standard. The most significant modifications are explained below: CVSS measures severity, not risk This version highlights that the CVSS is designed to measure the severity of a vulnerability and, therefore, must not be used as the only tool to assess risk. The CVSS v3.1 specification document now clearly states that the CVSS Base Score represents only the intrinsic characteristics of a vulnerability that are constant over time and are common to different user environments. To carry out a systematic risk analysis, this base score must be complemented with a contextual analysis taking advantage of the temporal and environmental metrics, and with other external factors not considered by the CVSS, such as exposure and threat. Changes in Attack Vector and Modified Attack Vector The descriptions of the values (Network, Adjacent, Local and Physical) of the Attack Vector (AV) metric are reformulated to make them more familiar to CVSS suppliers and general consumers, avoiding references to the OSI model.
A guidance section is also included on how to apply this metric when resources are behind a firewall. The value Adjacent (A) of the Attack Vector (AV) metric, of the Base Score group of metrics, as defined in CVSS 3.0, caused ambiguity in the case of logically adjacent or trusted networks (MPLS, VPN, etc.). To address this inaccuracy, the definition of Adjacent is extended to include these limited-access networks.
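To make the severity bands listed above concrete, here is a minimal sketch of how a CVSS base score can be mapped to those labels. It is written in Python purely for illustration and is not taken from Holm Security’s product; the function name and the rounding behavior are assumptions.

def severity_from_cvss(score: float) -> str:
    """Map a CVSS base score (0.0-10.0) to the severity bands listed in this article:
    0 = Info, 0.1-2.0 = Low, 2.1-5.0 = Medium, 5.1-8.0 = High, 8.1-10 = Critical."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores are defined on the range 0.0-10.0")
    score = round(score, 1)  # CVSS scores are reported to one decimal place
    if score == 0.0:
        return "Info"
    if score <= 2.0:
        return "Low"
    if score <= 5.0:
        return "Medium"
    if score <= 8.0:
        return "High"
    return "Critical"

print(severity_from_cvss(9.8))  # Critical
print(severity_from_cvss(4.3))  # Medium

For example, a CVE scored 9.8 under CVSS v3.1 would land in the Critical band, while the same finding scored 4.3 would be triaged as Medium; the same mapping applies whether the base score came from a v2 or a v3 calculator.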
<urn:uuid:f8bbf36a-8b4f-4e8b-854b-f84bde45f4ed>
CC-MAIN-2022-40
https://support.holmsecurity.com/hc/en-us/articles/4864503965340-Which-Common-Vulnerability-Scoring-System-CVSS-version-is-used-
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00521.warc.gz
en
0.937149
867
2.59375
3
Introduction Several time synchronization mechanisms can be used in a network. The most common standards are Network Time Protocol (NTP) and Precision Time Protocol (PTP). NTP, which is the older, well-known protocol, is currently in its fourth version. NTP was primarily developed to achieve accuracy in the submillisecond range and is widely implemented for network […] Everything You Have To Know About PTP or Precision Time Protocol?
<urn:uuid:76d2692f-3911-4abe-9720-4433d815b24e>
CC-MAIN-2022-40
https://moniem-tech.com/tag/precision-time-protocol/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00521.warc.gz
en
0.915903
88
3.515625
4
Question 387 of 952 Which two are advantages of static routing when compared to dynamic routing? (choose two) A. Security increases because only the network administrator may change the routing tables. B. Configuration complexity decreases as network size increases. C. Routing updates are automatically sent to neighbors. D. Route summarization is computed automatically by the router. E. Routing traffic load is reduced when used in stub network links F. An efficient algorithm is used to build routing tables, using automatic updates. G. Routing tables adapt automatically to topology changes.
<urn:uuid:fac017de-a325-43b9-a32a-e99cc7e195f9>
CC-MAIN-2022-40
https://www.exam-answer.com/cisco/200-125/question387
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00521.warc.gz
en
0.803787
152
2.90625
3
One of the most famous malware variants in existence today, ransomware – which enables a cybercriminal to deny a victim access to their files until a ransom has been paid – has become a major focus of cybercriminals and cyber defenders alike. Ransomware works by using encryption algorithms, which are designed to ensure that only someone with access to the decryption key can reverse the transformation applied to the encrypted data and restore the original, usable version. A victim is motivated to pay a ransom by the loss of access to valuable data, and, upon payment of the ransom, all the ransomware operator must do is provide a short decryption key to restore access to all of the encrypted data. The theory behind ransomware is fairly simple and does not vary much from one variant of ransomware to another. However, the specifics of how ransomware is used by cybercriminals can be very different for different groups and attack campaigns, and has evolved significantly over the past few years. According to Checkpoint Research, the number of organizations impacted by ransomware globally has more than doubled in the first half of 2021 compared with 2020, and the healthcare and utilities sectors are the most targeted sectors since the beginning of April 2021. The success of double extortion in 2020 has been evident, particularly since the outbreak of the Covid-19 pandemic. While not all instances – and their outcomes – are reported and publicized, statistics collected between 2020 and 2021 illustrate the assault vector’s importance. The average ransom price has risen by 171% in the last year, to almost $310,000. Ransomware attacks that have taken place at the end of 2020 and the beginning of 2021 point toward a new attack chain – essentially an enlargement to the double extortion ransomware approach, incorporating an extra, unique threat to the process – A Triple Extortion attack Famous attacks in 2021- Microsoft Exchange hack, Colonial Pipeline network, City of Tulsa, JBS Meat Company, Fujifilm In late 2019 and early 2020, a new trend emerged in ransomware attacks. Instead of restricting themselves to encrypting a victim’s files, ransomware authors began stealing sensitive data from their targets as well. Ransomware variants stealing user data include Ako, CL0P, DoppelPaymer, Maze, Pysa, Nefilim, Nemty, Netwalker, Ragnarlocker, REvil, Sekhmet, and Snatch. This move was in response to organizations refusing to pay ransom demands after falling victim to a ransomware infection. Although the cost of remediating a ransomware attack is often higher than the demanded ransom, best practice dictates that ransoms should not be paid since they enable the cybercriminals to continue their operations and perform additional attacks. By stealing data from infected computers before encrypting it, ransomware operators could threaten to expose this data if the victim refused to pay the ransom. Depending on the type of data collected and leaked, this could cause an organization to lose competitive advantage in the marketplace or run afoul of data protection laws, such as the General Data Protection Regulation (GDPR), for its failure to protect the customer data entrusted to it. 2019 was famous as the year in which ransomware operators switched their focus to critical institutions. In the first three quarters of 2019 alone, over 621 hospitals, schools, and cities in the United States were victims of ransomware attacks by Ryuk and other ransomware variants. 
These attacks had an estimated price tag in the hundreds of millions of dollars and resulted in cities being unable to provide services to their residents, and hospitals being forced to cancel non-essential procedures in order to provide critical care to patients. This new approach to ransomware took advantage of the importance of the services that these organizations provide. Unlike some businesses, which could weather degraded operations while recovering from an attack, cities, schools, and hospitals needed to restore operations as quickly as possible and often had access to emergency funds. As a result, ransomware attacks against these organizations were often successful and continue to occur. Unlike most ransomware attacks that target random individuals and businesses, Ryuk ransomware was a highly targeted attack. The cyber criminals behind this operation targeted victims whose businesses would be majorly disrupted even by a small amount of downtime. Ryuk was designed to encrypt company servers and disrupt business until the ransom was paid rather than steal or compromise an individual’s data. Targeted victims included newspapers, including all Tribune papers, and a water utility company in North Carolina. Affected newspapers had to produce a scaled-down version of the daily news that didn’t include paid classified ads. Ryuk infected systems through malware called TrickBot and remote desktop software. After blocking access to servers, Ryuk demanded between 15-50 Bitcoins, which was about $100,000-$500,000. In addition to disabling servers, infecting endpoints, and encrypting backups, Ryuk disabled the Windows OS system restore option to prevent victims from recovering from the attack. When the malware was discovered, patches were created to thwart the attack, but they didn’t hold. The moment servers went back online, Ryuk started reinfecting the entire network of servers. Experts from McAfee suspect Ryuk was built using code originating from a group of North Korean hackers who call themselves the Lazarus Group. Although, the ransomware required the computer’s language to be set to Russian, Belarusian, or Ukrainian in order to execute. Like Ryuk, PureLocker was designed to encrypt entire servers and demand a ransom to restore access. The malware has been specifically designed to go undetected by hiding its malicious behavior in sandbox environments and mimicking normal functions. It also deletes itself after the malicious code executes. PureLocker targeted the servers of large corporations attackers believed would pay a hefty ransom. After a thorough analysis, cryptographic researchers from Intezer and IBM X-Force named this ransomware PureLocker because it’s written in the PureBasic programming language. Writing malware in PureBasic is unusual, but it gave attackers a serious advantage: it’s difficult to detect malicious software written in PureBasic. PureBasic programs are also easily used on a variety of platforms. PureLocker is still being executed by large cybercriminal organizations. Experts believe that PureLocker is being sold as a service to cybercriminal organizations who have the knowledge required to target large companies. Strangely, ransomware-as-a-service (RaaS) is now a “thing.” Cybersecurity experts aren’t sure exactly how PureLocker is getting onto servers; adopting a zero-trust approach to network security is the best way to protect against unknown threats. 
REvil is malware from a strain called GandCrab that won’t execute in Russia, Syria, or several other nearby countries. This indicates its origin is from that area. Like PureLocker, REvil is believed to be ransomware-as-a-service and security experts have said it is one of the worst instances of ransomware seen in 2019. Why is REvil so bad? With most ransomware attacks, people can ignore the ransom demand and cut their losses. However, those behind the attack threatened to publish and sell the confidential data they encrypted if the ransom wasn’t paid. In September 2019, REvil shut down at least 22 small towns in Texas. Three months later, on New Year’s Eve, REvil shut down Travelex – a UK currency exchange provider. When Travelex went down, airport exchanges had to go old school and create paper ledgers to document exchanges. Cybercriminals demanded a $6 million ransom, but Travelex won’t confirm or deny paying this sum. REvil exploits vulnerabilities in Oracle WebLogic servers and the pulse Connect Secure VPN. On March 1, 2019, ransomware attacked Jefferson County’s 911 dispatch center and took it offline. County jail staff members also lost the ability to open cell doors remotely, and police officers could no longer retrieve license plate data from their laptops. Without a working 911 system, the entire city was left vulnerable to the secondary effects of this ransomware attack. Dispatchers didn’t have access to computers for two weeks. The videoconferencing system that allowed inmates to connect with family members also went down. Guards had to escort inmates to family visits in person, which increased the risk to their safety. The city paid the $400,000 ransom and was able to restore their systems. On April 10, 2019, the city of Greenville, NC was attacked by ransomware named RobinHood. When most of the city’s servers went offline, the city’s IT team took remaining servers offline to mitigate the damage. This attack wasn’t the first time RobinHood made its rounds. In May 2019, the city of Baltimore was hit hard. The city had to spend more than $10 million to recover from a RobinHood attack. Although the ransom was only $76,000, it cost the city $4.6 million to recover data and all the city’s systems were non-functional for a month. The city suffered $18 million in damages. In 2018, ransomware fell in popularity as rises in the value of cryptocurrency drove a surge in cryptojacking. Cryptojacking malware is designed to infect a target computer and use it to perform the computation-heavy steps required to “mine” Bitcoin and other Proof of Work (PoW) cryptocurrencies and receive the rewards associated with finding a valid block. However, this isn’t to say that ransomware was completely inactive in 2018. In August 2018, the Ryuk ransomware (one of the leading ransomware threats today) was discovered for the first time “in the wild”. The emergence of Ryuk was part of a shift in how ransomware operators made their money. Attacks like WannaCry targeted quantity over quality, attacking as many victims as possible and demanding a small ransom from each. However, this approach was not always profitable as the average person lacked the know-how to pay a ransom in cryptocurrency. As a result, ransomware operators had to provide a significant amount of “customer service” to get their payments. In 2018 and beyond, ransomware operators have become more selective in their choice of targets. 
By attacking specific businesses, cybercriminals could increase the probability that the data encrypted by their malware was valuable and that their target was capable of paying the ransom. This enabled ransomware operators to demand a higher price per victim with a reasonable expectation of being paid. 2017 was the year when ransomware truly entered public awareness. While ransomware has been around for decades, the WannaCry and NotPetya attacks of 2017 made this type of malware a household name. These ransomware variants also inspired other cybercriminals and malware authors to enter the ransomware space. WannaCry is a ransomware worm that uses the EternalBlue exploit, developed by the NSA, to spread itself from computer to computer. Within a span of three days, WannaCry managed to infect over 200,000 computers and cause billions in damages before the attack was terminated by a security researcher targeting its built-in “kill switch”. NotPetya is an example of a famous variant that actually isn’t ransomware at all, but rather wiper malware that masquerades as ransomware. While it demanded ransom payments from its victims, the malware’s code had no way to provide the malware’s operators with a decryption key. Since they didn’t have the key, they couldn’t provide it to their victims, making recovery of encrypted files impossible. Ransomware has proven to be an extremely effective tool for cybercriminals. The loss of access to their data has motivated many organizations to pay large ransoms to retrieve it. The success of ransomware means that it is unlikely to go away as a threat to organizations’ cybersecurity. Protecting against this damaging malware requires deployment of a specialized anti-ransomware solution.
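As background to why victims cannot simply recover their files, recall that the encryption at the heart of ransomware is the same well-understood symmetric cryptography used for legitimate purposes. The benign sketch below uses the widely available Python cryptography package – an illustrative assumption; real ransomware families implement their own key management, typically protecting the file-encryption key with an attacker-held public key – to show that once data is encrypted, possession of the key is the only practical way back, which is exactly why offline backups and anti-ransomware controls matter.

# pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # symmetric key; whoever holds it controls the data
ciphertext = Fernet(key).encrypt(b"example document contents")

# With the key, recovery is trivial:
print(Fernet(key).decrypt(ciphertext))

# Without it, decryption fails - brute-forcing a 128-bit key is not feasible:
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("Wrong key: data cannot be recovered")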
<urn:uuid:e5df5ad3-268c-478e-bc4f-68ab4d90f5d6>
CC-MAIN-2022-40
https://www.checkpoint.com/cyber-hub/threat-prevention/ransomware/recent-ransomware-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00521.warc.gz
en
0.960778
2,495
2.515625
3
What is Referential Integrity and Why do You Need it? 1 Apr, 2022 | IDS What is Referential Integrity? Data quality is any company’s most valuable asset. The purpose of this article is to provide best data quality management practices for creating a database with referential integrity. Referential integrity is a term used in database design to describe the relationship between two tables. It is important because it ensures that all data in a database remains consistent and up to date. It helps to prevent incorrect records from being added, deleted, or modified. Referential integrity is a constraint on the database design that makes certain that each foreign key in a table points to a unique primary key value in another table. This will ensure that data is not lost, and it will also help you maintain data quality. What are the Causes of Inconsistent Database Data? Referential integrity is violated when a foreign key value does not match an existing primary key value in the table it references. If referential integrity is not enforced, then it may lead to data inconsistency and data loss. The following are some of the reasons why this constraint is violated: → Primary keys are not properly enforced → Foreign keys are not properly enforced → Database design is incorrect. Why do we Have Referential Integrity in the First Place? Referential integrity is a data quality concept that ensures that when you make changes to data in one place, those changes are reflected in other related records. This is done by enforcing a rule that says that the foreign key in one table can only refer to the primary key of another table. This means that if you change the information in one table, the change is applied consistently to all related records. A primary key constraint ensures that the primary key of the referenced table is unique and cannot be null. Best Practices for Creating Databases with Referential Integrity Referential integrity is usually enforced by creating a foreign key in one table that matches the primary key of another table. If referential integrity is not enforced, then you may encounter data redundancy and inconsistencies. The first step to creating a database with referential integrity is to identify all tables and their respective keys. You can do this using the data quality tools within the iData toolkit. The next step would be to decide what type of relationship exists between these tables. There are three types of relationships: One-to-One, One-to-Many, and Many-to-Many. Once you have decided on the type of relationship, you can then create the appropriate relationships between your tables, again using iData. The following are some best practices for creating referential integrity: → Create primary and foreign keys for each table → Ensure that the data types are matching → Ensure that there are no duplicate entries → Make sure to not create circular relationships. What is Database Normalization? Database normalization is a process for organizing the data in a relational database, so that it is easy to query and easy to update. The data is stored in separate tables to avoid data redundancy and improve efficiency - and for this reason, database normalization and referential integrity are closely linked, as referential integrity also ensures that updates are applied consistently across multiple tables. There are three steps to database normalization: 1) First Normal Form - The first step of database normalization is called first normal form.
It states that all columns (attributes) must be atomic and cannot be broken down further. 2) Second Normal Form - The second step of database normalization is called second normal form. This step requires that all non-key attributes depend on the whole of the primary key, not just on part of it. 3) Third Normal Form - The third and final step of database normalization is called third normal form. It states that all non-key attributes must depend only on the key, and not on other non-key attributes (that is, there are no transitive dependencies). Why Checking your Database Normalization & Referential Integrity is Vital in 2022 Database normalization and referential integrity are required to ensure that the data is organized in a way that allows it to be accessed and used by any user. Because referential integrity is fundamental to the way in which data is connected in a relational database, it is a vital component before any transformation, such as an ERP or PMS migration. Data needs to be connected so that it can be joined, or linked, and so that changes to one piece of data automatically propagate throughout the system. This type of integrity is vital for business success and for preventing incomplete data sets.
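As an illustration of the foreign key enforcement described above, the sketch below uses Python’s built-in sqlite3 module – chosen here purely for demonstration, and not part of the iData toolkit mentioned in the article – to create a parent table and a child table, and to show the database itself rejecting an orphaned foreign key value.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces foreign keys when this pragma is set

conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
);
""")

conn.execute("INSERT INTO customer (customer_id, name) VALUES (1, 'Acme Ltd')")
conn.execute("INSERT INTO orders (order_id, customer_id) VALUES (10, 1)")  # valid reference

try:
    # Customer 999 does not exist, so this insert violates referential integrity.
    conn.execute("INSERT INTO orders (order_id, customer_id) VALUES (11, 999)")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)  # Rejected: FOREIGN KEY constraint failed

Most other relational engines enforce foreign key constraints by default; SQLite is unusual in requiring the pragma shown above.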
<urn:uuid:23ff6085-486d-4c3a-ac2c-712eccd154ab>
CC-MAIN-2022-40
https://intelligent-ds.com/blog/what-is-referential-integrity
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00521.warc.gz
en
0.905747
928
3.59375
4
You might have seen recent news reports regarding the Log4j vulnerabilities that are being actively exploited by malicious actors around the world. What is Log4j and how is it being exploited? Log4j is a free, open-source logging library for the Java programming language that software applications use to record their activities. It is one of the most popular logging libraries in use, so it touches nearly every part of the internet. Many of the internet services that pretty much run modern life use Java and Log4j. Companies that many popular apps and websites rely on (such as Google, Amazon, and Microsoft) have been affected, as well as giant software programs used by millions (such as IBM and Oracle). Also at risk of being exposed to the vulnerability are any devices that connect to the internet – not only computers, but also TVs, security cameras, and other smart devices. On December 9, 2021, a vulnerability in Log4j software was discovered that gives hackers easy access to whatever systems and services they are trying to get into by asking the program to log a line of malicious code. It also gives ransomware attackers a new way to break into networks and lock out the owners. The vulnerability is easy for bad actors to take advantage of while being hard for owners of affected systems to find or see if they have already been compromised. How severe is the vulnerability? “The Log4j vulnerability is the most serious vulnerability I have seen in my decades-long career,” said Jen Easterly, U.S. Cybersecurity and Infrastructure Security Agency (CISA) director. Within the first week after discovering the vulnerability, there were more than 100 hacking attempts per minute. According to experts, it’s the biggest vulnerability we’ve encountered regarding the number of services, websites, applications, and devices exposed. “It’s ubiquitous. Even if you’re a developer who doesn’t use Log4j directly, you might still be running the vulnerable code because one of the open source libraries you use depends on Log4j,” said Chris Eng, chief research officer at cybersecurity firm Veracode. How is it being addressed? Computer programmers and security experts at affected companies have been working around-the-clock to develop and release patches and stop any potential problems. However, at the same time, hackers are working just as hard to exploit the Log4j vulnerability before it gets patched. To help with transparency and ensure the public has accurate information, CISA is setting up a website to provide updates such as affected products and how they have been compromised by Log4j hackers. It’s too soon to tell how big the impact will be. Though some are calling Log4j the most serious security breach in history, it will really depend on how fast affected companies respond by rolling out patches. What should I do? With the pressure currently on affected companies to come up with fixes, as consumers it’s important to update devices, software, and apps to the most recent versions, and download patches and updates quickly when prompted in the coming days and weeks. As experts in cybersecurity, we understand the growing threats of the cybersecurity epidemic and how they can affect you and your business. Talk to the security experts at Merit Technologies today to learn more about how we can help keep your business safe and secure in an ever-evolving cyber world.
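For readers acting on the “What should I do?” advice above, one practical first step for an organization is simply finding where Log4j lives on its systems. The sketch below is an illustrative assumption rather than an official or complete tool: it walks a directory tree, looks for log4j-core JAR files, and flags versions older than 2.17.1 (the release generally recommended at the time for the December 2021 fixes – verify current guidance against vendor advisories). It will not find copies repackaged inside other application JARs.

import os
import re
import sys

# Treat 2.17.1 as the minimum fixed version (an assumption to verify against current advisories).
FIXED = (2, 17, 1)
PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", re.IGNORECASE)

def scan(root: str) -> None:
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            match = PATTERN.search(name)
            if not match:
                continue
            version = tuple(int(part) for part in match.groups())
            status = "OK" if version >= FIXED else "NEEDS UPDATE"
            print(f"{status}: {os.path.join(dirpath, name)} (version {'.'.join(match.groups())})")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")

Saved as, say, find_log4j.py (a hypothetical filename), it could be run as python find_log4j.py /opt to sweep a particular application directory.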
<urn:uuid:6d1ea951-881b-40c1-8c7f-8075b4694435>
CC-MAIN-2022-40
https://merittechnologies.com/insights/log4j-vulnerability-what-is-it-and-what-do-i-need-to-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00521.warc.gz
en
0.955992
711
3.25
3
The plan aims to use AI to boost competitiveness and productivity and address issues such as social inequality. The Brazilian government has taken another step towards the creation of public policies around artificial intelligence (AI). A national AI strategy will be created as a response to the worldwide race for leadership in the field and the need to discuss the future of work, education, tax, research and development as well as ethics as the application of related technologies becomes more pervasive. A public consultation has been launched to gather input around how AI can solve the country’s main issues, identify priority areas of focus for the development and use of the technologies, as well as limits for it. According to the summary on the purpose of the consultation, which ends on January 31, 2020, the government understands that AI can bring improvements to the country’s competitiveness and productivity, as well as the provision of public services, quality of life and to reduce social inequality in the southern hemisphere’s biggest economy. Brazil adheres to the Organisation for Economic Co-operation and Development (OECD)’s human-centred AI Principles, which provide for recommendations around areas such as transparency and explainability. In light of these guidelines, the debate around the Brazilian AI strategy will initially discuss six vertical themes: qualifications for a digital future; workforce; research, development, innovation and entrepreneurship; government application of AI; application in the productive sectors and public safety. In addition, three common themes to all areas will be discussed: legislation, regulation and ethical use, as well as international aspects and AI governance. According to a study carried out by consulting firm Ducker Frontier on behalf of Microsoft, Brazil could achieve a GDP increase of 7.1% with full adoption of artificial intelligence technologies. The study suggested that the predicted increase in GDP considering full AI adoption is higher than the 2.9% growth rate projected by the World Bank and the International Monetary Fund (IMF) until 2030. This GDP growth would be accompanied by a four-fold increase in the country’s productivity levels, reaching a compound annual growth rate of up to 7% per year in the period to 2030, compared to the 1.7% annual growth estimated by the World Bank and the IMF. Also last month, the Brazilian government announced it will create a network of eight research facilities focused on artificial intelligence. One of the labs will focus on edge AI technology in areas such as cybersecurity and will involve the Brazilian Army. The other seven centers will work on applied AI. Four of these venues will be working on the technology in line with the national Internet of Things plan.
<urn:uuid:d6e9d6b6-86ad-478f-af8a-e7bfd27b99fb>
CC-MAIN-2022-40
https://enterprisetalk.com/news/brazil-to-create-national-artificial-intelligence-strategy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00521.warc.gz
en
0.944844
630
2.578125
3
Data deduplication is a data compression technique that involves redundant copies of data being removed from a system. It is administered in both data backup and network data schemes, and enables the storage of a single, unique instance of data within either a database or broader information system. Data deduplication is also known as intelligent compression, single instance storage, commonality factoring or data reduction. Data deduplication works by examining and then comparing incoming data pieces with already stored data. If any specific data is already present, deduplication algorithms remove the new data and replace it with a reference to the data already in place. For example, when an old file is backed up with some changes, the previous file and applied changes are added to the total data segment. However, if there is no difference, the newer data file is discarded, and a reference is created. Data deduplication is one technology that storage vendors rely on to make better use of storage space; the other is compression. These storage features are usually clumped into a larger category, called data reduction. All of these systems help to reach the same goal, increased storage efficiency. With proper deduplication techniques, businesses can effectively store more data than their overall storage capacity might suggest. As an example, a business with 15 TB of storage, when combined with proper deduplication and compression techniques, can get a 4:1 reduction benefit, meaning it would be possible to store 60 TB on a 15 TB data array. Data Deduplication Case Study Consider this scenario as a practical example of deduplication benefit: an organization is running a virtual desktop environment with hundreds of identical workstations all stored on an expensive storage array that was purchased specifically to support it. The organization is running hundreds of copies of Windows 8, Office 2013, ERP software, and any other tools that users might require. Each individual workstation image consumes, say, 25 GB of disk space. With just 200 such workstations, these images alone would consume 5 TB of capacity. With deduplication, just one copy of these individual virtual machines can be stored. Every time the engine discovers a piece of data that is stored somewhere else in the storage environment, the storage system saves a small pointer in the data copy’s place, thereby freeing up the blocks that would normally be occupied. Data Deduplication Types As you might expect, different vendors handle deduplication in different ways. In fact, there are two primary deduplication techniques that deserve discussion: Inline Deduplication occurs the moment that data is written to storage. While the data is in motion, the deduplication engine tags the data sequentially. This process, while effective, does create computing overhead. The system has to repeatedly tag incoming data and then swiftly identify whether or not that new fingerprint matches something in the system. If so, a flag pointing to the existing tag is written. If it doesn’t, the block is saved without changes. Inline deduplication is a major feature for many storage devices and, while it does introduce overhead, it’s not too problematic, providing far more benefits than costs.
Post-Process Deduplication, also known as Asynchronous Deduplication, occurs when all data is written entirely, until, at regular intervals, the deduplication system goes through and tags all the new data, removes multiple copies, and replaces them with flags pointing to the original data copy. Post-process deduplication lets businesses utilize their data reduction service without stressing about the repeated processing overhead caused by inline deduplication. This process lets businesses schedule deduplication, so that it can happen during off hours. The largest downside to post-process deduplication is that all data is stored in its complete form (often called fully hydrated). Because of this, the data requires all of the space that non-deduplicated data needs. Only after the scheduled deduplication process does size decrease occur. For businesses using post-process dedupe, there needs to be a larger overhead of storage capacity at all times. Client-side Data Deduplication is a data deduplication technique that is used, for example, on a backup-archive client to remove redundant data during backup and archive processing before the data is transferred to the server. Using client-side data deduplication can reduce the amount of data that is sent over a local area network. Hardware-Based Deduplication versus Software-Based Deduplication Functionally built deduplication appliances lower the processing burden associated with software-based products. These hardware-based deduplication systems can also add deduplication into forms of data protection hardware, like backup appliances, VTLs , or NAS storage. Even though software-based deduplication can effectively eliminate redundancy at its source, hardware-based methods prioritize data reduction at the storage level. Because of this, hardware-based deduplication won’t bring bandwidth savings obtained by deduplicating at the source, but this problem is offset by increased compression speeds. Hardware-based data deduplication brings high performance, scalability and relatively nondisruptive deployment. It is best suited to enterprise-class deployments rather than SME or remote office applications. Software-based deduplication is for the most part less costly to run, and doesn't require any significant changes to a businesses physical network infrastructure. However, software-based deduplication can often be more difficult to install and maintain. Agents have to be installed to allow for communication between the local site and backup server running the same software. Even as disk capacities continue to increase, data storage vendors are constantly seeking methods by which their customers can cram ever-expanding mountains of data into storage devices. After all, even with bigger disks, it just makes sense to explore opportunities to maximize the potential capacity of those disks. Deduplication will always have major positive effects on overall storage usage, thus lowering costs, but it is important to know which type of deduplication method is needed in order to correctly maximize efficiencies. Some methods reduce bandwidth requirements, others reduce localized storage dependencies, and others integrate directly with cloud computing services. How Barracuda Can Help Barracuda Backup's deduplication simplifies data protection and reduces overhead, media, and network costs. With three-stage, variable length deduplication, it enables efficient long-term storage of protected servers while reducing backup time.
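As a rough illustration of the fingerprint-and-pointer mechanism described above, the following sketch – an illustrative assumption, not any vendor’s implementation – splits incoming data into fixed-size blocks, hashes each block, and stores only blocks whose fingerprints have not been seen before.

import hashlib

BLOCK_SIZE = 4096   # fixed-size chunking; real products often use variable-length blocks

store = {}          # fingerprint -> block payload (the "single instance" store)
recipes = {}        # file name -> ordered list of fingerprints (the pointers)

def write_file(name: str, data: bytes) -> None:
    fingerprints = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()   # the block's fingerprint
        store.setdefault(digest, block)              # stored once, no matter how often it recurs
        fingerprints.append(digest)
    recipes[name] = fingerprints

def read_file(name: str) -> bytes:
    return b"".join(store[digest] for digest in recipes[name])

# Two "workstation images" that are mostly identical dedupe down to shared blocks.
image = b"base operating system image" * 2000
write_file("workstation-01.img", image)
write_file("workstation-02.img", image + b"small local change")
logical = sum(len(read_file(n)) for n in recipes)
physical = sum(len(b) for b in store.values())
print(f"logical bytes {logical}, physical bytes {physical}")  # physical is roughly half

In a real array, the fingerprint index, block store, and file recipes live in carefully engineered on-disk structures; post-process deduplication simply runs the same loop later, over data that has already been written in full.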
<urn:uuid:11275dee-e0c0-4099-9368-1da22b534732>
CC-MAIN-2022-40
https://www.barracuda.com/glossary/data-deduplication
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00521.warc.gz
en
0.912972
1,367
3.546875
4
As you learn about Internet security, privacy, and digital parenting, you may encounter words that you’re not familiar with. This glossary has simple, short definitions. Adware: Software that automatically displays unwanted ads, to make money for the adware creator. Anti-malware: Software that prevents and/or removes malware (malicious software) from a device. The term antivirus is often used for software that’s actually anti-malware, because it fights not only viruses but other forms of malware. Authentication: Proving that you are who you say you are. Bad actor: A person, group, or organization that is acting maliciously. Blacklist: A list of disapproved items. A system that uses a blacklist allows all items that are not on the blacklist. Opposite approach of a whitelist. Cloud: Internet-based; using remote servers. A cloud service is an online service. Cloud storage is online storage. Cookie: A file that contains information which identifies you to a website, so that it can keep track of who you are. Cookies are simply part of how users interact with websites, and aren’t inherently a privacy risk. But when cookies are used to track users around the Web, they are considered a privacy risk. Confidential: Secret, private. Credentials: Proof that you are who you say, or that you have the right to access something. For example, your username and password. Credential stuffing: When hackers take login credentials they’ve acquired for one account and try them on other accounts, assuming that the owner has used the same login credentials for multiple accounts. Cryptography, cryptographic, crypto-: Having to do with encoding data to make it secret and not easily readable. Cyber-: A prefix meaning related to computers or computing. For example, cybersafety, cybersecurity, cybercrime, cyberspace, cyberbullying, cyberstalking. Cyberbullying: Bullying done with digital devices, such as through social media or messengers. Generally refers to behavior targeting minors, whereas cyberstalking refers to behavior targeting adults. Cyberstalking: Repeated harassment using digital devices. Generally refers to behavior targeting adults, whereas cyberbullying refers to behavior targeting minors. Dark Web (or Dark Net or Darknet): Websites that aren’t indexed by search engines, so you can’t get to them through Google or other search engines. They can’t be visited using a normal browser, and typically require a Tor browser to visit, and that you know the website address or click a link to it. The Dark Web is home to an underground trade in illegal goods and services (though not all Dark Web content is illegal). Data: Facts; pieces of information. Computers and digital devices store and process data. “Data” is a plural word (the singular is “datum”), but data is commonly used as singular. Deep Web: Websites and webpages that aren’t indexed by search engines, so you can’t get to them through Google or other search engines. You can get to them using a normal browser, as long as you know the website address or click a link to it. There are good and bad, legal and illegal sites in the Deep Web. Defense in depth: Using multiple security layers to increase your overall security. Device: Generic term for computing hardware, such as computer, phone, or tablet. Digital parenting: A broad term dealing with the intersection of parenting and technology; helping kids use digital tech in a safe, healthy, wise way. 
DNS (Domain Name System): The system that translates a domain (for example, defendingdigital.com) to an IP (Internet Protocol) address (for example, 220.127.116.11). This translation is necessary because humans use domains, but computers use IP addresses.
Domain name: Main part of an Internet address. For example, defendingdigital.com, wikipedia.org, irs.gov.
Encrypt, encryption: To encode data to make it secret and not easily readable. Data that's not encrypted is unencrypted (also called plain text or cleartext). Undoing/reversing encryption is decryption.
End-to-end encryption (E2EE): Encryption that keeps data secret along the entire path from sender to intended recipient, so that only the intended recipient can see/hear it. It keeps data encrypted while in transit (traveling) and at rest (in storage). This prevents not only hackers, but also governments and even the companies transmitting the data from seeing it.
Filter: To restrict access to. Usually used in the context of an Internet filter, Web filter, or content filter, which disallows access to particular websites, images, videos, and other content.
Fingerprinting: Identifying users based on the characteristics of their device or browser, such as operating system (OS), browser extensions, language, and installed fonts. Often done by web advertisers and other third-party trackers.
Grooming: When a predator forms a relationship with a victim and earns the victim's trust, preparing to exploit the victim. Usually done by an adult to a minor. Predator's goal may be sexual exploitation (online or in-person), human trafficking, or radicalization.
Hack, hacking, hacker: A person who maliciously breaks into computer systems, networks, and digital devices. Originally hacker was a positive term (you may have heard of life hack as meaning finding a shortcut), and cracker was the corresponding negative term (meaning to crack into). Over time hacker has evolved into a mostly negative term, though the term white hat hacker survives as meaning a person who hacks with good intent, to find vulnerabilities before malicious people do.
HTTPS (Hypertext Transfer Protocol Secure): Technology that creates a secure, encrypted connection between a web browser and a website, to protect transmitted data from eavesdroppers. Browsers will show the web address (URL) starting with https:// and may also show a padlock symbol.
Internet of Things (IoT): The wide range of devices that have processors and are connected to the Internet, generally referred to as "smart" devices. Includes smart speakers, thermostats, home entertainment systems, home security systems, car systems, baby monitors, and many more devices.
Internet Protocol (IP) address: The Internet address given to your device by your network or Internet Service Provider (ISP).
Internet Service Provider (ISP): The company that provides your Internet connection. At your home, that could be a cable, DSL, or fiber company, such as Comcast, Spectrum, or AT&T. For your mobile devices, that's your wireless carrier, such as Verizon, Sprint, or AT&T.
Key: The digital equivalent of a physical key; text, code, or software that unlocks something.
Keylogger: Software that records the keys being typed, often used to steal login info or other sensitive data.
Mac: Abbreviation of Macintosh, a computer manufactured by Apple. Note that it's not spelled MAC (all caps) because it's not an acronym. There is an acronym MAC, for Media Access Control address (a unique identifier for a device on a network).
Malware: Generic term for malicious software. Includes viruses, spyware, ransomware, Trojans, rootkits, and more.
Metadata: Data about data. For example, the metadata of a phone call are the details about the call, such as phone number called, time of call, and duration of call. The metadata of an email are the details about the email, such as email address sent to, time sent, and subject.
Online predator: A person who sexually exploits one or more children over the Internet (or attempts to).
Operating system (OS): The main software that runs on a computer or other digital device, which other software runs inside. Common computer operating systems are Windows, macOS (Apple), and Linux. Common mobile operating systems are iOS (Apple) and Android.
Parental controls: Software that allows a parent to control what their child can do with a device, which may include limiting screen time, disallowing apps, or filtering content.
Personally identifiable information (PII): Information by which you can be identified, such as name, Social Security number, driver's license number, phone number, and email address.
Phishing: Fraudulent messages that attempt to steal info. For example, you may receive an email that appears to be from your bank, asking you to click a link to log in. But the link actually points to a malicious website disguised to look like your bank, which steals your login info as soon as you enter it.
Potentially unwanted program (PUP): Software which is suspicious but not clearly malicious. Anti-malware software can't tell whether you want it, so it labels it "potentially unwanted." If you don't know that you need it, uninstall it to be safe.
Principle of Least Privilege: Give users, accounts, and services only as much access and capability as they truly need, to limit the damage they can do (deliberately or accidentally).
Privacy: Keeping hidden or secret the data that you want to keep hidden or secret.
Ransomware: Malware that prevents you from accessing your files (often by encrypting them) until you pay a ransom. There's no guarantee that you'll get your files back if you pay the ransom.
Revenge porn: Distributing sexual images or videos of a person (often an ex) to get revenge.
Security: Restricting access to an object or data, ensuring that only the proper people or systems can access it.
Security questions: Questions that must be correctly answered to authenticate you. Often used as a secondary way to authenticate if you forget your password.
Security theater: Measures that give the appearance of security to put people at ease, but which do little or nothing to actually increase security.
Sensitive: Data that is valuable and that you don't want to fall into the wrong hands. For example, Social Security number, home address, financial information, medical information.
Sexting: Sending nude or partially-nude photos, or sexually explicit text. In the US, federal law makes it illegal for minors to sext (due to child pornography legislation).
Sextortion: Threatening to harm a person if they don't provide sexual images, videos, favors, or money. The threat may involve distributing sexual images or videos of the target, or some other form of threat or blackmail.
Short Message Service (SMS): Technical name for text messaging, text messages, texting. Technically, only text can be sent by SMS. If you send anything else (images, audio, etc.) you're using MMS (Multimedia Messaging Service).
Smishing: Fraudulent SMS/text messages that attempt to steal info; phishing done by SMS/text message.
Social engineering: Manipulating or tricking people into giving access to information or systems.
Spam: Unsolicited "junk" messages received by email, text/SMS message, social media messaging system, or some other messaging system. Note that it's not spelled SPAM (all caps) because it's not an acronym. A person who sends junk messages is a spammer; the action is spamming.
Special characters: Written symbols that aren't letters or numbers. Examples: ~ ! @ # $ % ^ & * ( _ + [ \ ; ‘ < . ?
SSL (Secure Sockets Layer): Obsolete, insecure protocol that's been replaced by TLS (see TLS below). Because the term SSL is more widely-known than TLS, people use SSL when they usually mean TLS. But it's wise to confirm, because SSL is obsolete and insecure.
Surveillance: Watching, observing, tracking. Digital surveillance can be done by a human, but most surveillance is done automatically by systems.
TLS (Transport Layer Security): Protocol that encrypts data in transit (while it's traveling). Used in HTTPS Web traffic and in many other forms of secure communication.
Two-factor authentication (2FA), multi-factor authentication (MFA): Using more than one means to prove that you are who you say you are. A password is commonly one factor; other factors could be a code generated by an authentication app, or biometrics (fingerprint, iris scanner, etc.).
Verify, verification: To prove or provide evidence for.
Virtual Private Network (VPN): A secure tunnel from your device to a remote server. Can be used to protect your Internet traffic when you're on an insecure network (such as public Wi-Fi) or to make it look like you're located somewhere else, allowing you to get around Internet restrictions (such as in China).
Virus: Malicious software that replicates itself like a biological virus. A virus is a specific type of malware, but the word virus is often used to refer to all malware.
Vishing: Fraudulent phone calls or voicemails that attempt to steal info; phishing done by phone.
Vulnerable, vulnerability: Capable of being attacked or exploited because of a flaw.
Whitelist: A list of approved items. A system that uses a whitelist blocks all items that are not on the whitelist. Opposite approach of a blacklist.
Zero-day, 0-day: A zero-day vulnerability is a software or hardware flaw that is generally unknown, so no one has yet created a defense against it; it can be attacked or exploited immediately.
Zero-knowledge encryption: A form of encryption in which the service provider has no knowledge (zero knowledge) of the user's encryption key, so the provider is not able to view the user's data. For example, a zero-knowledge storage or backup company can't view the files users store on its servers.
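To make the blacklist and whitelist entries concrete, here is a small illustrative Python sketch (the domain names are made-up examples, not drawn from any real filtering product) showing how the two approaches treat an item that appears on neither list:
    # Illustrative only: hypothetical domain lists showing blacklist vs. whitelist behavior.
    BLACKLIST = {"badsite.example", "malware.example"}
    WHITELIST = {"school.example", "library.example"}
    def blacklist_allows(domain):
        # A blacklist blocks only what is listed; everything else is allowed.
        return domain not in BLACKLIST
    def whitelist_allows(domain):
        # A whitelist allows only what is listed; everything else is blocked.
        return domain in WHITELIST
    for domain in ("school.example", "badsite.example", "unknown.example"):
        print(domain,
              "| blacklist:", "allow" if blacklist_allows(domain) else "block",
              "| whitelist:", "allow" if whitelist_allows(domain) else "block")
The unknown domain is allowed under the blacklist approach but blocked under the whitelist approach, which is why whitelisting is the stricter of the two.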
<urn:uuid:861c85d0-1833-4533-bdfb-4dd064a72eec>
CC-MAIN-2022-40
https://defendingdigital.com/glossary/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00721.warc.gz
en
0.896728
2,969
3.796875
4
Search giant Google is looking to connect developing economies to the Web with the aid of giant floating balloons that function like WiFi hotspots in flight -- part of an endeavor called Project Loon. As part of this effort, the company announced that Indonesia's top three mobile network operators, Indosat, Telkomsel, and XL Axiata, have agreed to begin testing Project Loon's balloon-powered Internet capabilities over Indonesia in 2016.
From Sabang all the way to Merauke, many of Indonesia's people live in areas without any existing Internet infrastructure, so Google is hoping that over the next few years Loon will be able to partner with local providers to put high-speed LTE Internet connections within reach of more than 100 million currently unconnected people, and provide enough speed to read websites, watch videos, or make purchases.
"In Indonesia today, only about 1 out of every 3 people are connected to the Internet," Mike Cassidy, vice president of Project Loon, wrote in a blog post. "And even though most of their connections are painfully slow, they're doing some pretty incredible things. Soon we hope many more millions of people in Indonesia will be able to use the full Internet to bring their culture and businesses online and explore the world even without leaving home."
Project Loon balloons travel approximately 12 miles above the Earth's surface, in the stratosphere. Winds in the stratosphere are stratified, and each layer of wind varies in speed and direction. Google uses software algorithms to determine where its balloons need to go, then moves each one into a layer of wind that is blowing in the right direction. By moving with the wind, the balloons can be arranged to form one large communications network. Loon's balloon envelopes are made from sheets of polyethylene plastic. They are 50 feet wide and 40 feet tall when fully inflated.
The company's plans for the country go far beyond Project Loon -- Android One phones are helping to make smartphones more accessible in a place where most people first access the Internet on a mobile device.
Google is also working to ease the use of data with features such as Search Lite, which streamlines search so pages load more quickly, or by optimizing Web pages so that they require less data to load. Indonesia is one of the first countries where YouTube users can take videos offline to watch later during periods of low or no Internet connectivity.
Google isn't the only company working to bring the Internet to remote areas. Social media giant Facebook is also taking to the skies in an effort to spread the Web worldwide. In August, Facebook announced it is ready to begin testing a full-scale version of its Aquila drone, one of a projected fleet of drones designed to provide limited Internet service to the estimated 10% of the world without reliable network access. Once deployed for normal operation in support of Facebook's Internet.org program, the company's drones are expected to remain aloft for 90 days each, in a relay, flying at an altitude of 60,000 to 90,000 feet on solar power.
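The navigation idea described above, steering by picking the altitude whose wind already blows roughly the way you want the balloon to drift, can be sketched with a toy calculation. The Python snippet below is purely illustrative: the wind-layer data are invented, and Google's actual planning algorithms are far more sophisticated and have not been published.
    # Hypothetical wind layers: altitude in km and wind bearing in degrees (0 = north).
    wind_layers = [
        {"altitude_km": 18, "bearing_deg": 250},
        {"altitude_km": 19, "bearing_deg": 95},
        {"altitude_km": 20, "bearing_deg": 170},
    ]
    def angle_diff(a, b):
        # Smallest absolute difference between two compass bearings, in degrees.
        d = abs(a - b) % 360
        return min(d, 360 - d)
    def pick_layer(desired_bearing_deg):
        # Choose the layer whose wind direction is closest to the desired heading;
        # the balloon would then ascend or descend to that altitude and drift with the wind.
        return min(wind_layers,
                   key=lambda layer: angle_diff(layer["bearing_deg"], desired_bearing_deg))
    best = pick_layer(100)  # suppose we want to drift roughly east-southeast
    print("Move to", best["altitude_km"], "km and ride a wind blowing toward",
          best["bearing_deg"], "degrees")
In practice the layers shift constantly, so a real controller would re-run this kind of selection continuously as fresh wind forecasts arrive.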
<urn:uuid:0d8bf000-16cb-420f-adf4-8b84f4a93b7e>
CC-MAIN-2022-40
https://www.informationweek.com/mobile-business/google-s-project-loon-to-launch-internet-balloons-in-indonesia
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00721.warc.gz
en
0.937293
651
2.640625
3
MongoDB is a cross-platform document-oriented database that uses JSON-like documents with dynamic schemas (BSON), improving the integration of data between different applications. MongoDB is popular for its scalability, performance, and high availability; it is a valid option even for very complex architectures, and it leverages in-memory computing to achieve high performance. Today MongoDB is used by many organizations; the bad news is that nearly 40,000 MongoDB instances are exposed online and vulnerable to hacking attacks.
"Without any special tools and without circumventing any security measures, we would have been able to get read and write access to thousands of databases, including, e.g., sensitive customer data or live backends of Web shops. The reason for this problem is twofold:
• The defaults of MongoDB are tailored for running it on the same physical machine or virtual machine instances.
• The documentations and guidelines for setting up MongoDB servers with Internet access may not be sufficiently explicit when it comes to the necessity to activate access control, authentication, and transfer encryption mechanisms," states the report published by the researchers.
The researchers were able to connect to the MongoDB instances they found simply by calling the mongo shell with the discovered IP address:
mongo $IP
"In order to verify the impact and risk related to the found MongoDB instances, we exemplarily double-checked that these databases are not intentionally configured without access control and further security mechanisms. Briefly looking at a large database, we found a customer database of a French telecommunications provider with about 8 million customer entries," wrote the researchers.
"Our initial port scan revealed 39,890 instances. However, this number might be inaccurate, since on the one hand many larger providers blocked the scan such that there might be more publicly accessible MongoDBs online, and on the other hand some of these databases might be intentionally configured without security measures, e.g. as honeypots."
Using a free standard account on the Shodan search engine, the researchers identified a first set of vulnerable MongoDB addresses by running the following shell one-liner, which extracts IP addresses from the HTML search results:
curl $SHODANURL | grep -i 'class="ip"' | cut -d '/' -f 3 | cut -d '"' -f 1 | uniq > db.ip
Those who are affected by the issue should use the latest MongoDB installer, which limits network access to localhost by default, and should also refer to the MongoDB Security Manual.
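Administrators who want to confirm that their own deployment is not among the exposed instances can attempt an unauthenticated connection and see whether the server hands back its database list. The Python sketch below assumes the pymongo driver and a host you are authorized to test; it is illustrative only, is not taken from the researchers' report, and exact error behavior varies with server version and configuration.
    # Illustrative check: does this MongoDB instance answer unauthenticated queries?
    # Assumes pymongo is installed; HOST must be a server you own or are authorized to test.
    from pymongo import MongoClient
    from pymongo.errors import OperationFailure, ServerSelectionTimeoutError
    HOST = "127.0.0.1"  # hypothetical placeholder; replace with your own server's address
    client = MongoClient(HOST, 27017, serverSelectionTimeoutMS=3000)
    try:
        names = client.list_database_names()  # succeeds without credentials on an open instance
        print("EXPOSED: server listed databases without authentication:", names)
    except OperationFailure:
        print("OK: server refused the request; access control appears to be enabled.")
    except ServerSelectionTimeoutError:
        print("Unreachable: no response (possibly firewalled or bound to localhost).")
If the server answers without credentials, enable access control and bind it to localhost or a private interface, as the MongoDB Security Manual recommends.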
<urn:uuid:ef0e7ccb-c1c7-416b-b8f7-117632035ace>
CC-MAIN-2022-40
https://securityaffairs.co/wordpress/33487/hacking/40000-vulnerable-mongodbonline.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00721.warc.gz
en
0.909102
518
3.078125
3