Server farms, also called data farms, generally have backup servers that can take over the role of primary servers in the event of a failure. A server farm is usually co-located with the network switches and/or routers that enable communication between the different parts of the cluster and the users of the cluster. The computers, routers, power supplies, and related electronics for a server farm are typically mounted on 19-inch racks in a server room or data center. Because of the sheer number of computers, the failure of individual machines is common in a large server farm. For this reason, management needs to provide support for redundancy, automatic failover, and rapid reconfiguration of the server cluster. Combining servers and power into a single unit has been common for many years in research and academic institutions. Today, more and more companies are using server farms to manage the vast number of computerized tasks and services they require.
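The article does not describe a specific failover mechanism, so the following is only a minimal sketch of the idea in Python, with hypothetical hostnames: a health check probes each server in priority order and routes work to the first one that responds.

```python
import socket

# Hypothetical server pool: a primary followed by its backups.
SERVERS = ["primary.farm.local", "backup1.farm.local", "backup2.farm.local"]

def is_alive(host: str, port: int = 80, timeout: float = 2.0) -> bool:
    """Health check: succeeds if a TCP connection can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server() -> str:
    """Return the first healthy server, emulating automatic failover."""
    for host in SERVERS:
        if is_alive(host):
            return host
    raise RuntimeError("no healthy servers in the farm")
```

Real load balancers add retries, health-check intervals, and connection draining, but the priority-ordered probe above is the core of the failover behavior described.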
Biometrics make use of our most unique physical features and behaviors to serve as digital identifiers that computers and software can interpret and use for identity-related applications. They can be used to identify someone in a biometric database, or to verify the authenticity of a claimed identity. Our irises are useful as a biometric. The iris is a muscle in the eye that regulates pupil size. Ophthalmologists confirmed in the 1980s that the patterns on the iris are unique to each individual, which led to the development of iris recognition technology for identity verification in the mid '90s. A broad patent for iris recognition expired in 2005, opening the technology to a bigger market. Today, iris recognition technology is used as a form of biometric identification that can verify the uniqueness of an individual with exceptional accuracy. Note that iris recognition should not be confused with retina recognition; the latter establishes uniqueness using the arrangement of blood vessels on the retina at the back of the eye.
Iris scanning process
There are four main steps involved in iris recognition enrollment for use as a form of biometric authentication:
1. Image capture
A high-quality image of the individual's left and right iris must be captured using a specialized iris camera. These cameras use near-infrared (NIR) sensors to capture the minute and intricate details of the iris with much greater accuracy than visible light (VIS), which can pollute the sample. Research from Michigan State has determined that iris recognition accuracy drops significantly when VIS is used instead of NIR. Visible light is also more likely to cause discomfort and pupil contraction when shined into a subject's eyes; near-infrared light causes neither during an iris scan.
2. Compliance check and image enhancement
The next step is to perform quality and compliance checks to ensure that the captured image is suitable as a biometric template for future iris scanning. This requires specialized software that analyzes each image for key characteristics that indicate quality, including, but not limited to:
- Gray-level spread.
- Iris-sclera contrast.
- Iris-pupil contrast and pupillary dilation.
- Eyelash presence.
- Eyelid occlusion.
Once the iris has been segmented from the rest of the eye and evaluated for quality, the sample can be saved for future use as a biometric template.
3. Image compression
Each iris-scan template should be compressed using the JPEG 2000 format, which preserves image quality and minimizes the artifacts (image distortions) that result from other compression methods.
4. Biometric template creation for matching
Finally, the high-quality sample is put to use as a template for iris scanning. In one-to-one authentication, each live scan of an individual's iris is compared to the existing template for identification and authentication. In one-to-many authentication, a live scan is analyzed and compared against an existing gallery to identify a match or lack thereof.
Advantages and disadvantages of iris recognition
Iris biometrics have several unique advantages when used for identification and authentication. These include:
- No physical contact when scanning (more sanitary).
- Accurate matching performance.
- Can be captured from a distance (with advanced software and optics).
- The iris is protected by the cornea, so it doesn't change much as people age.
- Difficult to spoof.
Disadvantages of iris scanning include:
- A regular camera can't be used; an IR light source and sensor are required, and visible light must be minimized to reach the accuracy required for search.
- It generally requires close proximity to the camera, which can cause discomfort for some.
- It has less value for criminal investigation (no latents).
Iris recognition's use cases range from national citizen ID programs, to physical access control for organizations, to border management and defense. Iris recognition can also be used as a form of mobile authentication on devices with special IR-enabled cameras, such as the Samsung Galaxy S8 and S9.
Aware products for iris recognition
Aware offers the following iris recognition products:
- Nexa|Iris: Iris matching algorithm SDK.
- IrisCheck: SDK for automated quality analysis and compliance assurance of biometric iris images.
- AwareABIS: Automated biometric identification system.
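The article does not specify the vendor's matching algorithm; in classic Daugman-style iris systems, one-to-one verification is a fractional Hamming distance between binary "iris codes", using occlusion masks produced by the quality checks above. Here is a minimal sketch under those assumptions (the 2048-bit code size and 0.32 threshold are illustrative, not vendor-specified):

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b) -> float:
    """Fractional Hamming distance between two binary iris codes.

    Masks flag bits occluded by eyelids or eyelashes (cf. the quality
    checks above); only bits valid in both samples are compared.
    """
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / max(int(valid.sum()), 1)

# One-to-one verification of a live scan against a stored template.
rng = np.random.default_rng(0)
template = rng.integers(0, 2, 2048, dtype=np.uint8)
live_scan = template.copy()
live_scan[:100] ^= 1                      # simulate 100 flipped bits
mask = np.ones(2048, dtype=np.uint8)
score = hamming_distance(template, live_scan, mask, mask)
print("match" if score < 0.32 else "no match", round(score, 3))  # match 0.049
```

One-to-many identification would simply run the same comparison against every template in the gallery and report the best score below the threshold.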
Supply Chain Risk and the Healthcare Sector
Healthcare is critical infrastructure, especially during the COVID-19 pandemic. This made it a major focus of cybercriminals in 2020, exacerbating the cybersecurity challenges that the industry consistently faces. Healthcare organizations are also uniquely vulnerable to supply chain and third-party risk. Most healthcare providers are extremely dependent on modern technology: a ransomware infection on an MRI machine could delay a critical diagnosis, and compromise of Internet-connected medical devices could cause a breach of sensitive medical data, or even injury or death (in the case of Internet-connected pacemakers, medication dispensers, etc.). Identifying and managing supply chain risk is therefore critical in healthcare. Otherwise, hospitals run the risk of being unable to provide critical care due to a cyberattack, as happened multiple times during 2020.
Supply Chain Risk and COVID-19
It is commonly known that the Pfizer/BioNTech vaccine requires a sophisticated "cold chain" to keep it viable during distribution: the vaccine must be kept at -70 degrees Celsius throughout the distribution process. This requirement creates significant supply chain vulnerabilities for the vaccine. IBM reported that it had detected efforts to disrupt this supply chain by targeting transport companies, manufacturers of solar panels for transport vehicles, and dry ice distributors. Had such an attack succeeded, it could have disrupted the distribution of the vaccine, inhibiting affected countries' efforts to bring the COVID-19 pandemic under control.
Duke University researchers have invented an inexpensive printed sensor that can monitor the tread of car tires in real time, warning drivers when the rubber meeting the road has grown dangerously thin. The device will increase safety, improve vehicle performance, and reduce fuel consumption. The researchers have demonstrated a design using metallic carbon nanotubes that can track millimeter-scale changes in tread depth with 99 percent accuracy. The technology relies on the well-understood mechanics of how electric fields interact with metallic conductors. The core of the sensor is formed by placing two small electrically conductive electrodes very close to each other. Applying an oscillating electrical voltage to one and grounding the other forms an electric field between the electrodes. Material covering the sensor, such as tire rubber, interferes with this field; by measuring the interference through the electrical response of the grounded electrode, it is possible to determine the thickness of the covering material. The sensor can be made from a variety of materials and methods, and the researchers optimized performance by exploring variables from sensor size and structure to substrate and ink materials, printing electrodes made of metallic carbon nanotubes on a flexible polyimide film. Besides performing well, the metallic carbon nanotubes are durable enough to survive the harsh environment inside a tire. The sensors can be printed on almost anything using an aerosol jet printer, even on the inside of the tires, although it is not yet certain that direct printing will be the best manufacturing approach. Whatever approach is ultimately used, the researchers also want to explore other automotive applications for the printed sensors, such as keeping tabs on the thickness of brake pads or the air pressure within the tires. This is consistent with a key trend in the automotive sector toward embedded nanosensors. More information: [IEEE]
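The article doesn't detail the readout electronics, so this is only a sketch of how the final software step might look, assuming a pre-measured calibration table that maps the grounded electrode's response to known tread depths (all numbers are invented for illustration):

```python
import numpy as np

# Hypothetical calibration data: electrode response (arbitrary units)
# recorded at known tread depths (mm) during bench calibration.
calib_response = np.array([0.95, 0.80, 0.62, 0.41, 0.20])
calib_depth_mm = np.array([8.0, 6.0, 4.0, 2.0, 0.5])

def tread_depth(measured_response: float) -> float:
    """Interpolate tread depth from the field-interference reading."""
    order = np.argsort(calib_response)      # np.interp needs ascending x
    return float(np.interp(measured_response,
                           calib_response[order], calib_depth_mm[order]))

print(round(tread_depth(0.50), 1))  # ~2.9 mm on this made-up curve
```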
Today's world is certainly a digital one. Everything, from the stores where we shop to the messages we send to one another, interacts with the internet in some fashion, and sometimes in ways that aren't exactly secure. In fact, risky interactions with the Web can even extend to our homes and families. So, what impact (good or bad) does the digital world truly have on today's families? To better understand family habits and attitudes towards the connected lifestyle, we conducted a global survey of 13,000 adults about their children's digital habits, how they, as parents, supervise internet time, and more. The results are, well, telling. Here's what we found:
Kids are spending more time on digital devices than ever before
This one is not surprising, but we now have statistics to back it up: children today are spending more time in front of a screen than ever before. About 20 percent of respondents say they allow their children three to four hours of screen time per day. The largest group, 48 percent, allow their children a solid one to two hours a day. This includes use before bedtime, alone, and when children interact with friends. To expand, 76 percent of parents allow their child to bring an internet-connected device to bed. Perhaps predictably, almost a quarter of parents today don't monitor how their children use a device before bed. This isn't to say those parents aren't worried (80 percent of parents said they're concerned about who their child interacts with online); rather, they simply don't know how to monitor their child's browsing habits and interactions.
Parents aren't sure how to manage kids' screen time
Digital devices are new. So are the social mores that go with them. This means there's a lot of tension over device use within the family. In fact, about 32 percent of respondents said they've argued with their child about bringing a device to bed. So how do parents today govern device use? It boils down to two methods: the tried and true method of confiscation (35 percent), and, for the tech savvy, monitoring software (23 percent). But monitoring methods are going to have to change, and likely towards the more technical side of things. The reason is simple: the more time a child spends online, the more likely they'll encounter (or actively seek out) age-inappropriate content. In fact, 34 percent of responding parents report they have discovered their child has visited an inappropriate website. So, what's the answer? New solutions, such as McAfee Secure Home Platform, can help you manage and protect devices connected to your home network while providing parental controls that can be suited to the needs of all age groups. Perhaps more critical to the healthy use of today's digital devices is simply talking to children about today's dangers online. The good news is that a lot of parents already do this: a solid 85 percent of respondents say they at least occasionally talk to their children about online dangers. Conversely, and perhaps a bit naively, the parents who don't talk to their children say they believe their children already know about online dangers and don't need more discussion on this topic. Overall, this survey brings to light the confusion parents feel about how to keep their families safe online, especially in light of increased device usage, pointing to the need for better solutions. Here are a few tips and tricks parents can keep in mind when raising their children alongside digital devices, to help us start closing that gap.
- Discuss online safety early and often.
Start talking with your children about online safety early and often. It'll likely help heighten security awareness as children get older. Make sure you talk to them about cyber threats, such as phishing attacks, and the basics of safe online behavior, starting with avoiding strangers on the Web.
- Practice what you preach. If you're going to lay out rules for device use (and you should), make sure you follow them as well. Children are experts at picking out parental contradictions, so make sure the rules you set are reasonable. Finally, you can set a good example by abiding by good device etiquette (e.g., don't use your phone at the table during dinner, don't hold a conversation while browsing on a device, etc.).
- Educate yourself. Finally, you can't teach what you don't know. Take the time to research the various devices your children either use or are planning on acquiring in the future. Make sure you also read up on any new social media accounts they may want to join. This should give you a better idea of the community these devices and networks foster, as well as how you can protect your children from cybercriminals.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
First in a series
Video surveillance systems offer a number of benefits for business. In addition to helping prevent robberies, they serve other purposes as well:
- Control employee theft
- Monitor workflows
- Improve safety
- Help to avoid frivolous lawsuits
- Reduce insurance costs
Choosing the right system
When it comes to video security systems you have two main choices: analog or IP. Analog systems, sometimes referred to as CCTV, have been around for years. In this type of system, cameras capture an analog video signal and transfer it via coax cable to a Digital Video Recorder (DVR). The DVR converts the analog signal to digital, compresses it, and then stores it for later retrieval. Analog cameras connect directly to the DVR and usually require two cables, one for power and one for video. IP stands for Internet Protocol. In this type of system, digital video cameras send and receive data via a computer network. Rather than sending feeds to a DVR, IP cameras encode the video signal and send it to a Network Video Recorder (NVR) over IP. IP cameras must be connected to the same network as the NVR, but do not have to be directly connected to it. Either type of system can do the job, but IP systems have several advantages and are increasingly popular:
1. Image quality. IP cameras provide higher resolution than analog (currently up to 5 megapixels), which means you can zoom into a recording without losing clarity. IP cameras can also capture a clearer image when objects are moving, and a much wider field of view than typical analog cameras. As a result, a single IP camera can potentially take the place of three or four analog units.
2. Simplified cabling infrastructure. Because IP cameras use the same network connections as computers, they can easily be attached to the existing Ethernet network, reducing the need for costly and complicated rewiring. IP cameras allow video and audio surveillance data to be sent over a network using Cat5 cable, with an IP address assigned to each camera.
3. Power source. IP cameras can be powered over the same network cable through Power over Ethernet (PoE) by simply connecting them to a PoE-capable network switch, eliminating the need for a separate power source. This is not the case with analog cameras, where each camera requires a separate power source.
Next week, in part two, we'll examine the management of surveillance systems.
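Because IP cameras are just network endpoints, grabbing video from one takes only a few lines. Here is a minimal sketch using OpenCV's standard RTSP support; the stream URL, credentials, and path are hypothetical and vary by camera vendor:

```python
import cv2  # pip install opencv-python

# Hypothetical camera address; most IP cameras expose an RTSP stream,
# but the exact port, path, and credentials differ between vendors.
STREAM_URL = "rtsp://user:password@192.168.1.64:554/stream1"

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError("could not reach the camera over the network")

ok, frame = cap.read()                  # pull one frame off the network
if ok:
    cv2.imwrite("snapshot.jpg", frame)  # store it, NVR-style
cap.release()
```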
Any activities involved in transforming raw materials into finished products are part of the manufacturing process. This might be accomplished through the use of labor, machinery, chemicals, formulation methods, or biological processes, to add value to a raw material before selling it. But what is manufacturing technology? In simple terms, any technology that shapes or influences the manufacturing process is a form of manufacturing technology, which provides the tools that enable the production of all manufactured goods. The tools of production include machine tools, related equipment, and their accessories and tooling. Machine tools are usually non-portable, power-driven manufacturing machines and systems used to perform specific actions on various materials to produce marketable products or components. Equipment and technologies relevant to this description include Computer Aided Design (CAD), Computer Aided Manufacturing (CAM), and the assembly and testing systems needed to produce a sub-assembly or finished product. Among the most commonly used manufacturing technologies are:
- Software-Based Systems: These include Computer Aided Design (CAD), Computer Aided Manufacturing (CAM), Computer Numerical Control (CNC), Direct Numerical Control (DNC), Programmable Logic Control (PLC), Numerical Control (NC), systems integration software, and software for process optimization.
- Material Removal Tools and Processes: Drilling, milling, turning, grinding, tapping, sawing, broaching, electrical discharge machines (EDM), water jet cutting, and laser process equipment are all examples.
- Material Forming Equipment and Processes: e.g., cold and hot forming equipment, stamping, bending, joining, shearing, presses, and hydro-forming.
- Work Holders: Systems for holding components in place, such as clamps, blocks, chucks, tooling columns, angle plates, and fixtures.
- Tooling Systems: These include drills, taps, punches, dies, reamers, boring bars, and grinding wheels.
- Material Handling Systems: Conveyors, pallet changers, die handling equipment, bar feed equipment, automated wire guided vehicles, and robots are all in this class.
- Automated Systems: Among this class of manufacturing technology are assembly systems, transfer machines, and Flexible Manufacturing Systems (FMS).
- Additive Processes: These include 3D printing, laser sintering, and rapid prototyping equipment.
Several types of manufacturing systems are made possible through the application of different forms of manufacturing technology. Among the most widely used are the following.
Computer Numerical Control (CNC) machining is a manufacturing process in which special computer software programs govern the activities of factory tools and machinery. Using CNC, it's possible to control a range of complex machinery, including grinders, lathes, mills, and routers. CNC machining enables manufacturers to use a single set of configuration triggers to perform three-dimensional cutting tasks. Programs for CNC machines are fed to computers through small keyboards, using a standard language known as G-code. This code is written to control the various behaviors of a particular machine, such as the speed, feed rate, and coordination. Newer prompts can be added to pre-existing programs through revised code. When a CNC system is run, the software program is configured with a set of desired cuts, which are relayed to the relevant tools and machinery that carry out the designated tasks, in a similar manner to a robot. A short illustration of what such a program looks like follows.
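To make the G-code idea concrete, here is a sketch in Python that emits a G-code program tracing a simple square toolpath. The command words used (G21, G90, G0, G1, F) follow common conventions (units, absolute positioning, rapid move, linear feed, feed rate), but exact dialects vary between controllers, so treat this as a generic illustration rather than a program for any specific machine:

```python
def square_path_gcode(side_mm: float, feed_mm_per_min: float) -> str:
    """Emit G-code tracing a square outline of the given side length."""
    corners = [(side_mm, 0.0), (side_mm, side_mm), (0.0, side_mm), (0.0, 0.0)]
    lines = [
        "G21 ; millimetre units",
        "G90 ; absolute coordinates",
        "G0 X0 Y0 ; rapid move to the start corner",
    ]
    for x, y in corners:
        # G1 = linear cutting move at feed rate F (mm/min)
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed_mm_per_min:.0f}")
    return "\n".join(lines)

print(square_path_gcode(side_mm=50, feed_mm_per_min=300))
```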
Computer Integrated Manufacturing (CIM) is a system that uses computers to control an entire production process. It's a technique commonly employed in factories whose management is looking to automate design, accounting, purchasing, distribution, inventory control, and other business functions. Typically, a central administrative console links all of these disparate functions, to facilitate efficient materials handling and management, while simultaneously delivering direct control and monitoring of all operations. By streamlining production processes, computer integrated manufacturing reduces the costs of direct and indirect labor, reduces downtime, maintains correct inventory levels, and improves efficiency.
A flexible manufacturing system (FMS) is a means of production that can easily adapt to changes in the kind of products being made, and the amount of goods produced. By configuring the machines and computerized systems of an FMS setup, organizations can manufacture a range of different components, and cater for changing levels of production. In this way, a flexible manufacturing system can improve the organization's efficiency, and lower the company's production costs. Manufacturers can also use an FMS to implement a "make-to-order" policy, which allows customers to customize the products that they want. A typical flexible manufacturing system might include a set of interconnected workstations with processing terminals that govern the complete life cycle for creating a product: loading, unloading, machining and assembly, storage, quality testing, and data processing. This can result in considerable capital costs for setting up, and require the hiring of skilled technicians to keep the system running.
The high tech manufacturing sector produces cutting-edge technology, and is characterized by a high degree of complexity, stiff competition, and high costs. Products range from computers to jet engines, component parts (circuit boards, semiconductors, and the like), and also the complex machines that manufacture all of these components and finished products. The cost and complexity challenges facing high tech manufacturers are attributable to a number of factors, including unpredictable cycles of supply and demand, long manufacturing lead times for critical component parts, difficulties in obtaining raw materials, and complex production processes. High-tech manufacturers must therefore have efficient supply chains, and the agility to enable them to deliver for customers on time and in full (OTIF), while keeping their costs and inventory levels down.
Manufacturing engineering technology is an applied engineering discipline that seeks improved ways of making products. The field takes a methodical and scientific approach, with an emphasis on upholding quality standards, reducing cycle times, and keeping costs at reasonable levels. In practice, the discipline takes a hands-on approach to problem solving, rather than relying on complex mathematical analysis. Manufacturing engineering technologists typically work with many departments in an enterprise, ranging from inventory control to sales. As with most specialist fields these days, there exist web sites and literature dedicated specifically to the usage of technology in the industrial sphere.
Manufacturing Technology Insights, for example, is a print magazine that offers a knowledge base and networks for key decision makers in the manufacturing arena. The publication covers prevailing trends, consumer behavior, and the various technology solutions that are disrupting the industry. The magazine also advises manufacturing firms on the connectivity infrastructure and digital tools that can provide insight into production levels, inventory, available production capacity, quality levels, and order status from all of their suppliers.
Though commonly associated with grime and metalwork, the manufacturing industry spans diverse sectors including food production, textile product mills, apparel making, wood product manufacturing, chemical manufacturing, and the manufacture of computer and electronic products. Each of these different types of manufacturing process uses characteristic tools, techniques, and technologies.
Manufacturing technology is a fairly broad term referring to a number of tools, systems, and methods of science, production, and engineering that assist in industrial production and various manufacturing processes. In the modern context, many of the different types of manufacturing technology relate directly to the Fourth Industrial Revolution, or "Industry 4.0", and deal with innovations such as automation, data exchange, digital technology, artificial intelligence, machine learning, and the Internet of Things (IoT). In this regard, some of the more prominent types of manufacturing technology include:
- Smart Factories: These are highly digitized manufacturing environments with devices and sensors linked via network connectivity. Using automation and self-optimization, these machines and systems can learn and adapt to situations to increase productivity. While able to produce goods on a large scale, smart factories can also facilitate processes such as planning, supply chain logistics, and product development.
- Cyber-Physical Systems: These systems integrate computing, networking, and physical processes, allowing embedded computing technologies to control and monitor processes in real time. The computer system monitors the process and identifies areas where change is required, and the physical system reacts accordingly.
- Additive Manufacturing: This blanket term describes a range of technologies and computer-controlled processes in which three-dimensional objects are created from materials deposited in layers. Objects can be fabricated without the use of machining or other techniques, resulting in less surplus material.
- Augmented Reality (AR) and Virtual Reality (VR): Augmented reality displays digital content as an overlay on the real world, while virtual reality enables the digital creation of complete environments. This allows for the visualization of products, or the superimposing of data or plans onto physical components and machinery.
- Robotics and Automation: Robotic process automation (RPA) allows repetitive tasks to be performed practically indefinitely, with precision, high efficiency, and very little error. Robotic devices can operate in conditions or environments inaccessible to humans, implementing processes like laser cutting and additive manufacturing which rely on computer numerical control to efficiently and remotely create products.
A manufacturing or production system consists of any of the methods used in industry to create goods and services from various resources.
Such systems are "transformation processes," in that they transform resources into useful goods and services. There are different types of manufacturing systems, whose usage will depend on what an organization wishes to manufacture, how it wants to manufacture it, how big the enterprise is, and various other factors. The most common systems in use today include:
- Continuous Manufacturing System: The classic mass production scenario, in which a product moves along an assembly line, with workers performing various specialized actions to assemble the product at stations along the way. This allows for higher output and lower unit costs, but requires a large capital investment in labor and machinery.
- Intermittent Manufacturing System: Here, the manufacturer produces multiple identical items at the same time, using a system with little or no allowance for customization, in which the products must be standardized. Intermittent manufacturing is usually most effective for low-volume or limited production runs.
- Flexible Manufacturing System: In a bid to make the organization more adaptable to changing marketplace conditions, a flexible manufacturing system enables companies to invest in several machines that they can easily reconfigure to make a lot of products in a short period of time. Robots or other automated devices that replace or augment human labor are characteristic features of these systems.
- Custom Manufacturing System: In a custom manufacturing system, each product is made by hand or by a single operator using a machine specially designed for this purpose. Companies typically use custom manufacturing for very specific product lines.
Besides the specific industry vertical, different types of manufacturing organizations will select their production process based on a number of factors, such as consumer demand, the manufacturing technique for their final product, and available resources. Each process is different, and they all have their advantages when completing a certain task. There are several types of manufacturing process, which may be summarized as follows:
- Repetitive Manufacturing: Organizations typically use repetitive manufacturing for repeated production that commits to a certain production rate. Dedicated production lines for repetitive manufacturing processes can produce the same item or a variety of items, on a 24/7, year-round basis. Setup requirements are often minimal, so operating speeds can be increased or decreased to meet customer demands or changing requirements.
- Discrete Manufacturing: For types of manufacturing that require a variation of setups and changeover frequencies, discrete manufacturing assembly or production lines enable organizations to adapt to factors based on whether the required products are similar in design, or significantly different.
- Job Shop Manufacturing: This process produces smaller batches of custom products, which can be made-to-order (MTO) or made-to-stock (MTS). As a result, job shop manufacturing uses production areas rather than assembly lines. Workstations may be organized to make one version of a custom product, or more.
- Continuous Manufacturing: Also known as process manufacturing, this is similar to repetitive manufacturing in being a 24/7 operation. Raw materials are typically gases, liquids, powders, slurries, or granules. Product designs are generally similar, unless the disciplines required to create a final product or production process are more diverse.
- Batch Manufacturing: Another form of process manufacturing, which shares similarities with the discrete and job shop processes. Batch processes are continuous in nature, and once a batch is completed, the equipment is cleaned, ready to produce the next batch. Ingredients for batch manufacturing are similar to those for continuous manufacture, but the production processes tend to be more diverse.
- Additive Manufacturing: A set of technologies that enable products to be produced from various composites and materials, rather than by the traditional methods of physical labor or automation. The most well-known form is 3D printing, which assembles components in layers of material.
As we observed in the previous section, 3D printing is just one in an array of different types of additive manufacturing. Several individual additive processes exist, each varying in the way they build up material layers, and in the substances and hardware that they use. A standards organization known as the American Society for Testing and Materials (ASTM) classifies the various additive technologies into seven categories:
- Vat photopolymerization: This method uses a liquid plastic resin or photopolymer, which provides the feed material for building up a model in layers.
- Material jetting: Here, a jet of material is projected onto a platform in a continuous stream or in calibrated drops, in a way similar to the workings of a two-dimensional ink jet printer.
- Binder jetting: This additive process uses a structural material in powdered form, and a binding medium, which is generally a liquid. In binder jetting, the machine's print head moves laterally, depositing a layer of build material then one of the binder, until the desired form is achieved.
- Material extrusion: In this method, the structural material is heated as it passes through a nozzle, and is then deposited in layers to construct the model. The nozzle moves from side to side, and a platform shifts vertically to the level of each new layer.
- Powder bed fusion: This uses heat energy directed at specified zones of a powdered material, to melt or fuse the powder into a desired shape. It's a relatively simple and cost-effective process that is suitable for the fabrication of production components. Selective laser sintering (SLS) is an additive methodology in this category.
- Sheet lamination: Here, the bonding of thin layers or sheets of material builds up three-dimensional objects. One of the sheet lamination variants is ultrasonic additive manufacturing (UAM), in which physical pressure and ultrasonic waves bond metal sheets at room temperature. Another variant is laminated object manufacturing (LOM), where a coat of adhesive substances is the bonding agent.
- Directed energy deposition (DED): A complex process often employed in repairing existing components, or for adding extra material to them. The materials used in DED are heated to melting point as they are deposited, which fuses them together. Laser engineered net shaping (LENS) and directed light fabrication (DLF) are innovative processes in this category.
A wide and diverse mix of materials may be joined or manipulated using the various types of additive manufacturing technologies. These range from more traditional fabrication materials such as plastics, metals, and ceramics, to foodstuffs and human tissue. Broadly speaking, investing in and implementing manufacturing technologies can increase the efficiency of business systems, and streamline an organization's relationships with suppliers and customers.
Manufacturing technology can also greatly increase the speed, flexibility, and efficiency of the production process, in addition to expanding the range of what can be produced. These benefits aren't confined to traditional heavy industries, either. In wine production, for example, technology is used to maximize product quality and reduce production costs. Technology plays a key role in manufacturing process decision making, while process technology has a direct influence on quality control and product cost. The implementation of innovative technologies can improve the way that businesses in the manufacturing sector operate. The use of technology in manufacturing has been proving beneficial since the days of the first Industrial Revolution. Today, the digitization of manufacturing is ushering in the Fourth Industrial Revolution, or Industry 4.0: an age driven by data, connectivity, and cyber-physical systems. In this environment, technology in manufacturing is facilitating a number of changes for the better.
Historically, many changes in the manufacturing industry have originated from consumer demand, and the current drive towards technological advancement is no exception. Today's consumers want the latest and greatest things: faster, better, personalized, and unique. And manufacturers are increasingly turning to technology to meet these demands. Many of the building blocks of Industry 4.0 are providing manufacturers with the tools and techniques they require to adapt to changing market conditions and patterns of customer behavior. They include:
- Cloud services: Online platforms and service offerings enable organizations to quickly share data and resources from any location.
- The Internet of Things (IoT): Smart connectivity is enabling the maintenance and monitoring of devices used in manufacturing processes.
- Nanotechnology: This emerging field is enabling faster computer processing, longer product life-cycles, super-precise manufacturing techniques, and pioneering advancements in sectors such as space engineering and biotechnology.
- Advanced data analytics: Big data and predictive and prescriptive analytics technologies are facilitating and improving process control, the early detection of defects, preventative maintenance, and quicker response times for manufacturing (see the sketch after this list).
- Artificial intelligence (AI) and industrial robotics: AI, robotics, and related technologies are providing new ways to increase productivity, improve quality, and reduce costs by automating difficult or repetitive tasks.
- Simulation technologies: Augmented reality (AR) and virtual reality (VR), in conjunction with analytics and machine learning systems, are processing real-time data and mirroring the physical world in virtual models that can include machines, products, and humans.
- Additive manufacturing: Technologies like 3D printing now make it possible to create almost any component using metal, plastic, and other materials, reducing lead times and streamlining the design-to-production process.
- Horizontal and vertical system integration: These integrations are enabling companies, departments, and functions to become much more cohesive, effectively transforming them into automated value chains.
In the digital economy, fast-growing manufacturers tend to be those who are more technology-oriented.
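As a concrete illustration of the "advanced data analytics" building block above, here is a minimal sketch (not from the article) of early defect detection: flag any sensor reading that deviates sharply from its recent history, the kind of rule a preventative-maintenance pipeline might start from:

```python
import numpy as np

def flag_anomalies(readings: np.ndarray, window: int = 20, k: float = 3.0):
    """Indices of readings more than k standard deviations away from
    the mean of the preceding window -- a bare-bones defect detector."""
    flags = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            flags.append(i)
    return flags

# Simulated vibration signal: steady noise with one injected spike.
rng = np.random.default_rng(1)
signal = rng.normal(1.0, 0.05, 200)
signal[150] = 2.0                       # the "defect"
print(flag_anomalies(signal))           # expect the spike index, [150]
```

Production systems would use more robust statistics or learned models, but the monitor-compare-alert loop has the same shape.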
Due to the importance of technology in manufacturing industry applications, organizations that embrace manufacturing technology are able to substantially increase productivity and efficiency within their operations, reduce costs and wastage, and ultimately increase their bottom line. Across the board, technologies such as automation, robotics, data exchange, cloud computing, and artificial intelligence are becoming more mainstream. And at this point in the development of the Fourth Industrial Revolution ("Industry 4.0"), many of the different technological innovations within the manufacturing sector are starting to converge. As this happens, more and more organizations are looking at how they can integrate new systems and implement them into their workflows. Though some have concerns that the increasing prevalence of robotics and automation could have a damaging effect on employment, for now at least, the expansion in the use of manufacturing technology is not reducing the need for human input or lowering the number of jobs. Instead, it is fueling a need for specialist workers and technicians, and creating more roles for innovative, forward-thinking manufacturers to help propel the industry into a new age of increased output.
One of the key aspects of the importance of technology in manufacturing today is the influence of automation. Thanks to pre-configuration and robotic process automation, manufacturers around the world can leave repetitive, tedious, and time-consuming tasks for technology to handle. Technologies like collaborative robots (cobots) are relieving some of the burden and risk from human manufacturing workers, while enabling organizations to enjoy increased productivity, improved quality, and greater efficiency in their manufacturing operations.
Speed, quality, and precision are major factors in differentiating successful manufacturers from the rest of the market. Computerized production planning systems are reducing the risk of error in many cases, and ensuring that quality products are supplied to consumers. Coupled with predictive analytics, artificial intelligence and machine learning systems are capable of producing automated workflows that employ collaborative robots to improve the quality of the manufacturing process and of overall production. Productivity increases are also being facilitated, with manufacturing technologies enabling factories to run 24 hours a day, seven days a week, often year round. Automated systems can enable more to be done in a short time frame, with advanced automation and digital technologies contributing to greater precision and levels of quality.
Innovative techniques and uses of material are central to the importance of technology in production. In this respect, additive manufacturing, the category of technologies that includes 3D printing, is gaining momentum. Additive processes enable manufacturers to produce custom components to precise specifications, with minimum wastage. As the technology becomes more affordable, this is making it easier for inventors, innovators, and smaller companies to design and produce detailed components quickly and cost-effectively, and to market them on a global scale. "Smart" manufacturing harnesses the power of the Industrial Internet of Things (IIoT) to populate manufacturing facilities with machines that can communicate with each other and work together to reduce downtime, minimize errors, and improve quality, while lowering labor, waste, and production costs.
IoT devices can also collect, process, and analyze data, manage equipment, regulate maintenance schedules, track inventory, and measure performance. Using smart technology in combination with advanced analytics, organizations can monitor their entire manufacturing workflow, troubleshoot production issues, optimize performance, and gain insights for making business decisions, all in real or near-real time. Artificial intelligence (AI) is playing a role in all of this, with algorithms capable of teaching machines to make complex decisions regarding custom product configurations, quality control, predictive or adaptive equipment maintenance, and other functions. Predictive analytics with big data also opens opportunities for manufacturing organizations to make predictions and forecasts for future demand for goods, trends in customer behavior, and other market factors (a simple forecasting sketch follows below). This can enable manufacturers to judge when to produce their products to maximize profits, and which industry trends to follow as they are just beginning to develop. Though the technology is still evolving, blockchain provides manufacturers with the potential to create smarter supply chains capable of tracking every detail of a product's journey, recording precise audit trails, and providing real-time visibility.
Technologies like additive manufacturing, autonomous robots, Computer Numerical Control, and Integrated Computational Materials Engineering are radically altering the industrial landscape, enabling manufacturers to guarantee better control over the production process and ensure efficient production standards. Several advantages of manufacturing technology make this possible. With robotics and automation now playing a substantial role in the manufacturing process, there's less chance of the human errors and omissions that can lead to product defects or degradation, so one of the major benefits of advanced manufacturing technology is quality enhancement. Advanced manufacturing technologies may also be applied to a diverse range of production scenarios, giving manufacturers greater flexibility in planning and executing their operations. Manufacturers can make products in small batches for niche customers, adjust their production lines in response to design changes, and reduce the time to market by generating prototypes quickly using technologies like 3D printing. Automated assembly and machines for mass fabrication enable manufacturers to increase their production output to levels previously unattainable with purely human labor. Automation technologies can also free workers to concentrate on higher-level, strategic tasks that require decision-making. Technologies like Computer-Aided Design (CAD) and virtual modeling enable industrial designers to quickly create and modify plans for components or finished parts. These techniques are often more cost-effective than conventional manufacturing design and testing processes. Use of advanced technologies also enables manufacturers to produce high-quality goods that can be custom made to a buyer's specific requirements. Digital manufacturing systems using virtualization enable manufacturers to generate digital factories that can simulate the entire production process. This allows engineers to save money and time by optimizing factory layouts, identifying flaws in the production sequence, and modeling product output and quality.
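The article doesn't name a forecasting method, so the following is just one minimal, classical option: simple exponential smoothing over past demand, sketched in Python with invented demand figures:

```python
def exp_smooth_forecast(history: list[float], alpha: float = 0.3) -> float:
    """One-step demand forecast via simple exponential smoothing:
    level = alpha * observed + (1 - alpha) * previous level."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

# Illustrative monthly unit demand for one product line.
monthly_units = [120, 135, 128, 150, 160, 158]
print(f"next-month forecast: {exp_smooth_forecast(monthly_units):.0f} units")
# -> next-month forecast: 146 units
```

Real demand planning would account for trend and seasonality, but the recursive smoothing above is the core of the technique.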
Using design and process simulation, companies can try out many "what if" scenarios at minimal cost to validate their processes and arrive at optimal solutions. For organizations planning for expansion, digital simulations can provide a cost-effective pathway for manufacturers to replicate entire assembly lines in different locations. Human ergonomic simulation helps organizations to identify the best ways of carrying out a task manually, so as to cause the minimum possible physical stress to people working on the shop floor. This reduces the chance of accidents and occupational hazards. Factories employing robotic automation or collaborative robots (cobots) to work alongside humans also gain the advantage of delegating hazardous or physically demanding tasks to these machines, rather than risking the health and safety of their human workforce.
While much has been made in the media about the promise of 3D printing and other related technologies, some companies remain on the fence about additive manufacturing, and aren't convinced that it's the future. However, the advantages of additive manufacturing technology are numerous. For some time, the cost of the machinery required for additive manufacturing was too expensive for smaller manufacturers to contemplate; however, the development of new technologies has made the cost of entry much more affordable, with equipment costs now as low as $3,500 for a reliable industrial-quality machine. Another of the advantages of additive manufacturing is its capacity to reduce the amount of capital required for scaling up production without making major changes, giving manufacturers the benefit of both greater cost-effectiveness and agility. In traditional manufacturing, modifying a design during production can lead to significant cost increases or time delays as the tooling on a production line is changed out; additive manufacturing gives engineers the creative freedom to produce multiple versions of a single design in a quick and cost-effective manner. Traditional manufacturing methods also tend to generate a lot of material wastage. A milling machine, for example, works by removing material from a block that is bigger than the final product, material that's typically in a form that cannot be reused. Additive manufacturing benefits fabricators by significantly reducing the amount of waste and surplus material: rather than removing material, the technology adds ingredients layer by layer so that only the material specifically required gets used. As a result, additive manufacturing can reduce material costs and waste by as much as 90%. The fact that additive manufacturing uses less material produces an overall reduction in energy costs for the enterprise, and the technology also generates savings by eliminating steps in the production process that would otherwise consume energy and resources. In addition, the re-manufacturing of parts through additive manufacturing processes can return end-of-life products to a "like new" condition, using only 2% to 25% of the energy that it would take to build an entirely new part. Finally, additive manufacturing gives fabricators the opportunity to offer a "made to order" service, customizing designs for individual clients. For example, 3D printing is employed in creating items tailored for specific needs, and with short production runs, in sectors including the oil and gas, food and beverage, pharmaceutical, and automotive industries.
Prior to final production, additive manufacturing enables organizations to produce several prototypes, honing a design and reducing the room for error. As we have already observed, changes to the original specification can be made digitally, reducing the expense associated with traditional design modifications. Unlike in the early stages of the technology's development, there is now a wide variety of training programs on offer for additive manufacturing designers and fabricators. Courses run from novice to advanced levels, and include training on the tools and technology of additive manufacturing, methods and best practices for using the technologies, and instruction on material handling and post-production processes.
Before the advent of heavy machinery and automation, manufacturing was done by hand. Most of these skilled trades were performed in a rural setting, with knowledge being passed down from master craftsmen to apprentices. As demand for craft goods grew, putting-out systems were introduced to unite a collective of manufacturers under a single business hub: a central provider who was able to subcontract specific orders to rural manufacturers, spreading the workload and giving them a wider customer base. With the evolution of manufacturing technology in the subsequent centuries, the development of interchangeable parts led to more efficient usage of labor and to mass production. The refinement of fixtures, jigs, and gauges paved the way for identical parts to be manufactured quickly, before being passed along to less-skilled assembly workers.
Manufacturing engineers focus on the design and operation of integrated systems for the production of high-quality, economically competitive products. These systems may include computer networks, robots, machine tools, and materials-handling equipment. Regarding the history of manufacturing engineering, most historians credit Matthew Boulton's Soho Manufactory (opened in 1761) as the first true manufacturing facility, which laid the groundwork for modern-day manufacturing engineering. Among historians, the general consensus is that manufacturing engineering came to light in the mid-18th century. Cotton mills used inventions such as the steam engine and power loom during the 19th century, and at this time, precision machine tools and replaceable parts facilitated improved production efficiency with less waste. This established the underlying principles for later studies of manufacturing engineering. As far back as the 1800s, ideas surrounding 3D scanning were beginning to take shape.
The global manufacturing market involves itself in processing and producing a wide range of items, in a spectrum that spans consumer goods, heavy industrial outputs, the storage and transportation of raw materials, and the distribution and sale of finished products. To sustain growth, manufacturers are currently focusing on three key areas. The first is to improve the rate at which expensive fixed assets are utilized. Second is filling the current and anticipated shortages of specialized labor; in their Future of Manufacturing Technology projections, some analysts estimate that by 2028, the skills gap in the US will result in 2.4 million vacant positions out of a total 16 million manufacturing jobs. Thirdly, manufacturers must protect operating profits as industry average margins continue to decline.
Many startups are now offering custom-designed products and services to help traditional manufacturers meet these goals, and several key themes are emerging in manufacturing technology. Looking forward, robotics and SaaS stand out as the most disruptive technologies likely to have a high impact on manufacturing, and as the two areas that will create the most value for manufacturers by universally addressing the most significant challenges facing the industry, such as the talent shortage, optimizing throughput rates, and machine utilization.
Additive manufacturing is currently going mainstream, as technology prices fall and researchers develop new materials for use with additive systems, new manufacturing processes, and new systems with greater adaptability. Looking toward the future, additive manufacturing looks set to play a continuing role in complementing traditional manufacturing processes, or in some cases replacing them entirely. Additive manufacturing systems and unique materials will facilitate a broader range of vertical applications. The future of additive manufacturing in engineering will likely be unlocked through the development of new materials which are versatile enough to be used with a broader range of commercial additive manufacturing systems. The newer systems and broader range of materials being developed will also help increase throughput with additive manufacturing systems, improving productivity, printing resolution, build volumes, and loading/unloading procedures.
Artificial intelligence (AI), the Internet of Things (IoT), robotics, 3D printing, and virtual reality (VR) are some typical examples of manufacturing technology. Others include smart manufacturing, cloud computing, and nanotechnology. Smart manufacturing makes both information and manufacturing procedures available on demand, so that it's possible to use analytics to make decisions regarding the course of any critical business operation, and computer integrated manufacturing techniques and mechanisms to put these decisions into action. Cloud computing enables manufacturers to use a network of remote services and tools connected via the internet, along with various facilities and resources for storing, managing, and processing information. So, for example, cloud-based Software as a Service (SaaS) can make Enterprise Resource Planning (ERP) platforms available to businesses without the resources or skills to deploy them on-site. Though still in its early stages of development, nanotechnology is currently being used in the biotechnology and space engineering sectors. At a technical level, it enables faster computer processing, and smaller memory cards with more memory space. Nanotechnology is also the basis for garments that last longer and help keep the wearer cool during summer, and bandages that are able to heal wounds more quickly.
Here are five examples of production technology:
- Additive manufacturing: A class of technologies most commonly associated with 3D printing, consisting of several techniques for making products and components by depositing thin layers of material using a digital blueprint.
- Nanomanufacturing: Manufacturing which occurs at the atomic level. It can be either top-down (reducing larger materials down to nano scale) or bottom-up (building things from molecular components).
- Self-assembly: A process by which components interact with each other spontaneously to build an ordered structure.
- Biomanufacturing: The process of producing products out of biological materials, such as pharmaceuticals, chemicals, paper, and food.
- Industrial robotics: With advances in machine learning and predictive analytics, manufacturers are finding it more cost-effective to move production facilities with industrial robots closer to consumers.
Flexible manufacturing typically involves a network of interconnected processing workstations with computer terminals that process the end-to-end creation of a product. A flexible manufacturing system, for example, would have connections ranging from loading/unloading functions, to machining and assembly, to storage, to quality testing, and data processing.
Additive manufacturing also has its drawbacks. Considerable capital expenditure is required to purchase the equipment necessary to support it, and since the raw materials required for the process must often consist of very fine particles, the cost of these powders can further inflate the budget. Since there's usually no way to successfully introduce additional materials and traits later in the 3D printing process, additive manufacturing is unsuitable for components requiring alloys. And the technology is still evolving; current iterations are often too slow for economical mass production.
Spirit AeroSystems uses 3D printing to produce near-net-shape aerospace components for the Boeing 787. Used for access door latch fittings on the passenger jets, 3D printing this component to near-net shape before machining offers a lower buy-to-fly ratio than machining the part directly from stock materials.
The International Organization for Standardization (ISO/ASTM 52900) classifies seven types of additive manufacturing technology: vat photopolymerization, material jetting, binder jetting, material extrusion, powder bed fusion, sheet lamination, and directed energy deposition, as described earlier.
The internet is now an important part of any business, and manufacturing is no exception. The web and cloud provide access to information, tools, platforms, and services that benefit industry. Speed is also critical in today's business world, and thanks to digital technology, many processes can be conducted in real or near-real time. A wide range of technologies are having a positive impact on manufacturing: computer software, big data analytics, fiber optics, drones, image recognition, artificial intelligence (AI), virtual reality (VR), and other technologies play a big role in different industrial sectors.
Within the next decade, analysts expect 2.2 million jobs in advanced manufacturing to go unfilled due to a lack of adequately trained labor, so a career in advanced manufacturing calls for a range of specialized skills. Is such a career worth pursuing? The short answer is "yes." In the current economy, jobs in manufacturing are in plentiful supply, use the latest technology, provide an opportunity for growth, and are secure. Manufacturers contribute over $2.3 trillion to the economy of the United States every quarter, and the US currently has 12.85 million manufacturing jobs, or around 8.5% of the workforce. On average, American manufacturing employees earn nearly 30% more than workers in other industries, including pay and benefits.
As of November 2020, the Volkswagen Group in Germany is the world's biggest manufacturing company, with an annual revenue of around $282.9 billion. In second position is the Toyota Group of Japan, with an estimated yearly revenue of $265.1 billion. Among the top ten largest manufacturing companies in the world are organizations from the automotive industry, and sectors including electronics, medical, and technology. China currently leads the world in manufacturing production, with over $2.01 trillion in output.
<urn:uuid:34ccce9c-3ad4-49f4-b122-9c6a15243916>
CC-MAIN-2022-40
https://itchronicles.com/what-is-manufacturing-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00557.warc.gz
en
0.931021
7,526
3.75
4
An IP address tells computers how to find a certain device within a computer network. An IP address is like an address label for information packets. For each network your computer is connected to, it has a unique IP address on that network, so one device can have several IP addresses at the same time. On most home computers you may see traffic on these IP addresses:

- 127.0.0.1 is the loopback address, which is used if something on your device needs to talk to another service on the same device.
- A home network address, which is usually in a range reserved for private networks. Well-known ranges for this purpose start with 192.168., and are often pre-programmed in routers, whose job it is, among others, to assign IP addresses to connected devices.
- Your IP address on the Internet, which in most cases is assigned to you by your Internet Service Provider (ISP), and changes from time to time. You can learn your current Internet IP address by looking at this site.

What does IP stand for?

IP is short for Internet Protocol and is part of TCP/IP, which is the networking software that makes it possible for your device to interact with other devices on a computer network, including the Internet. TCP/IP is actually a stack of protocols that make it possible for computers around the world to communicate despite differences in languages and hardware. For a device to be able to use the Internet Protocol, it needs to have IP software and an IP address.

How are IP addresses written?

Most IP addresses that you see will be Internet Protocol version 4 (IPv4) addresses. These hold 32 bits of information and are written as four octets of eight bits each. Since we are used to working with decimal numbers, you will usually see the four octets written as four decimal numbers between 0 and 255, separated by dots, as with the address of the computer running this website at the time of writing.

Decimal vs octal

In some cases, it might be beneficial to know the difference between the different notations. Decimal means a number expressed in the base-ten system, the system we use every day with the ten digits 0-9, whereas octal means the number system that uses the eight digits 0-7. Since an IP address is a 32-bit number, it sometimes makes sense to use the octal number system instead of decimal. The decimal IP address 127.0.0.1 looks like 0177.0000.0000.0001 in octal. A computer will recognize both of them as different, equally valid ways of writing the same address. Here's why: in decimal, numbers are written according to how many ones they have, how many tens, how many hundreds, and so on, so the number 127 is 1 * 100, 2 * 10 and 7 * 1. In octal, numbers are written according to how many ones they have, how many eights, how many 64s, and so on, so the number 127 is represented as 0177, which is 0 * 512, 1 * 64, 7 * 8 and 7 * 1.
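As a quick, hedged illustration of the decimal/octal equivalence above, the following Python sketch packs each dotted group into a single 32-bit value. The parse_ip helper is made up for this example and is not a standard-library function, though some C libraries (such as inet_aton) really do treat leading-zero octets as octal:

```python
def parse_ip(dotted: str, base: int) -> int:
    """Pack four dotted groups into one 32-bit integer."""
    value = 0
    for group in dotted.split("."):
        value = (value << 8) | int(group, base)
    return value

decimal_form = parse_ip("127.0.0.1", 10)
octal_form = parse_ip("0177.0000.0000.0001", 8)

print(hex(decimal_form))           # 0x7f000001
print(decimal_form == octal_form)  # True -- the same address
```

Running it confirms that both spellings collapse to the same underlying 32-bit number, which is all the computer ever sees.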
Running out of IP addresses

There are only 4,294,967,296 different combinations of four numbers between 0 and 255, so that is the theoretical maximum number of IPv4 addresses you could have on any one network (in reality it's fewer than this because some IP address ranges are reserved). In November 2019, the RIPE NCC (the regional Internet registry for Europe, West Asia, and the former USSR) announced that it had exhausted its pool of IPv4 addresses. This did not come as a surprise, and it didn't mean that suddenly nobody could have an IP address—sometimes addresses can be recovered, and networks can be extended using Network Address Translation—but it demonstrated the need to implement the successor of IPv4. RIPE warned that "Without wide-scale IPv6 deployment, we risk heading into a future where the growth of our Internet is unnecessarily limited."

IPv4 and IPv6

What is Internet Protocol version 6 (IPv6), and what makes it different from IPv4? Obviously, since one of the reasons to design IPv6 was the shortage of IPv4 addresses, there are more IPv6 addresses available. As we pointed out earlier, an IPv4 address is a 32-bit number, whereas an IPv6 address is a 128-bit number. IPv4 is a numeric addressing method, whereas IPv6 is an alphanumeric addressing method. And where the groups of an IPv4 address are separated by a dot (.), the groups of an IPv6 address are separated by a colon (:). The difference in bits allows IPv6 to multiply the number of possible IP addresses by a factor of roughly 10^28, which may not sound like much, but it gives us 340 trillion trillion trillion possible addresses!

There are technical differences between the protocols as well. We will not handle them in detail, as that is outside the scope of this post, but it's good to be aware of them:

- IPv6 has built-in quality of service (QoS).
- IPv6 has a built-in security layer (IPsec).
- IPv6 eliminates the need for Network Address Translation (NAT).
- IPv6 enables multicasting by default, which means the same packet can be sent to several addresses.
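Since we are comparing address spaces, a two-line check makes the scale concrete. This assumes nothing beyond the 32-bit and 128-bit widths stated above:

```python
ipv4_space = 2 ** 32    # 4,294,967,296 IPv4 addresses
ipv6_space = 2 ** 128   # about 3.4 * 10**38 IPv6 addresses

print(ipv6_space // ipv4_space)  # 2**96, i.e. roughly 7.9 * 10**28
```

The ratio, about 7.9 * 10^28, is where the "factor of roughly 10^28" figure comes from.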
IP addresses and geolocation

IP addresses are allocated on a geographic basis, so they can be used for a crude form of geolocation. An important thing to remember, though, especially for all the Internet detectives out there, is that finding out an IP address does not provide you with a physical location. The result you get from looking up an IP address's location can be wrong by hundreds of miles. The location of an IP address on a map can be very misleading, as it will often point to the location of the ISP that assigned the address, or to the center of an area where similar IP addresses reside. Innocent people have been harassed, even by the police, based on misunderstanding these "maps". IP-based geolocation is useful for website geotargeting (showing users content based on their country or region), but it is not suitable if you want to pay someone a visit.

Aside from geolocation, there is another way to connect an IP address to a physical address: your Internet IP address is typically allocated by your ISP, and your ISP typically knows your physical address. Anyone who can convince your ISP to give up that information, either by buying it, issuing a subpoena, or by social engineering, can learn your address.

How to hide your IP address

Many people don't like their IP address to be known or visible to the websites or services they are interacting with, and there are various possible reasons for wanting to hide it. As awareness of corporate surveillance and criminal hacking has grown, so have concerns about personal privacy. Many people believe that it should be their choice when and how they give up some of their privacy, and they don't want prying eyes on their normal, legitimate behavior. A Virtual Private Network (VPN) gives you more control over the IP address and other information that is visible on the Internet. Of course, you still need an IP address when using an online service or website, or the packets will not know where to go, but the outside world can only see your VPN provider's IP address, not the one given to you by your ISP. By using a VPN, your packets take a detour. Compare it to a PO box where you can have your mail sent without providing your physical address to the sender, with the difference that you don't have to go out and fetch it: it still gets delivered to your home by the one thing that knows your real IP address, the VPN provider that you have decided to trust.
<urn:uuid:27666479-4b78-410c-bd94-99b8d797baed>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2021/04/what-is-an-ip-address-do-i-need-one
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00757.warc.gz
en
0.953338
1,636
3.890625
4
Basic Cisco Router Setup and Configuration

Getting a Cisco router up and working is a fairly straightforward endeavor, as there are really only a few items that need to be addressed. Let's take a look at the basic instructions that will help you configure and set up the router. Once these basic settings are established, the configuration can be made more complex to suit the individual needs specific to each use.

How to Set Up a Cisco Router

When a Cisco router boots for the first time, it will ask if the user would like to review the set of prompts to configure it. You can go through these prompts or ignore them and choose to configure the router manually, as most engineers do. Once initially booted, all that is required to configure the router is a USB cable or serial rollover cable. After a connection is made, the initial basic configuration of the device can begin.

Once connected, the user will first see a user exec mode prompt (as long as the user entered 'no' in the initial configuration dialog). We need to be in privileged exec mode in order to continue with the configuration; enter the 'enable' command to access this prompt. Once in this mode, the user should be able to access all the commands necessary to alter the configuration of the router. Entering the 'configure terminal' (or 'conf t') command will allow the user to do this. This initial config mode is called global configuration mode. Here is where the domain name, passwords, and hostname can be configured.

Enter the 'hostname hostname' command to configure the hostname of the router. Once entered, the hostname should be reflected in the prefix of the prompt. The domain name can now be configured by entering the 'ip domain-name domain-name' command. The user can now move on to setting up the password. Configure the enable password by entering the 'enable secret password' command. Setting up this password will prevent unwanted users from getting into privileged exec mode, though it should be noted that local users will still be able to access user exec mode.

Setting up interfaces for LAN and WAN

Next, we will cover configuring the Fast Ethernet interfaces for LAN and WAN. Fast Ethernet LAN interfaces are automatically configured as part of the default VLAN, so they are not set up with individual addresses; access is through the VLAN itself. You can find more information about creating VLANs here.

The procedure for configuring Fast Ethernet WAN interfaces will depend on what model router you are working with. The Cisco 1811 and 1812 routers have two Fast Ethernet interfaces for WAN, while the 1801, 1802, and 1803 routers have one ATM interface for WAN connection. To configure a Fast Ethernet WAN interface, enter interface configuration mode with 'interface fastethernet number' (for example, 'interface fastethernet 0'). The user can then set the IP address and subnet mask for the specified Fast Ethernet interface with 'ip address ip-address subnet-mask'. To configure the ATM WAN interface, first enter 'interface atm0' to go to interface configuration mode; the user can then set the IP address and subnet mask for the ATM interface with 'ip address ip-address subnet-mask'. One final important step is left: enabling the interface. This can be accomplished by entering 'no shutdown' while still in interface configuration mode.
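Pulling the steps above together, here is a minimal IOS session sketch. The hostname, domain, passwords, interface number, and addresses are all illustrative placeholders, and prompts and interface names will vary by model; the console and vty password stanzas at the end are explained in the next section.

```
Router> enable
Router# configure terminal
Router(config)# hostname R1
R1(config)# ip domain-name example.local
R1(config)# enable secret Priv3xec!
R1(config)# interface fastethernet 0
R1(config-if)# ip address 192.168.10.1 255.255.255.0
R1(config-if)# no shutdown
R1(config-if)# exit
R1(config)# line console 0
R1(config-line)# password C0nsolePass
R1(config-line)# login
R1(config-line)# exit
R1(config)# line vty 0 4
R1(config-line)# password T3lnetPass
R1(config-line)# login
R1(config-line)# end
R1# copy running-config startup-config
```

The final 'copy running-config startup-config' saves the configuration so it survives a reboot.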
Configuring command-line access to the router

Access to the router, when connected to the console port, is not password protected by default. This means that any user with physical access will be able to access user exec mode. You can alter this by configuring a password on the console line: enter 'line console 0' to enter line configuration mode, then issue the 'password password' command to set the password.

Lastly, we will look at enabling Telnet access to the router. You can configure this access through VTY terminal configuration mode. Most VTY lines used for Telnet connections are labeled 0-4, so enter 'line vty 0 4' to access the mode. From here, the user will enter 'login' to enable a login prompt that will show when other users are attempting to access the router through the terminal lines. The 'password' command will then allow us to set the password for this access.

This review of the basic setup instructions for a Cisco router is intended to be a quick reference guide designed to get the device working in a short amount of time. For a more thorough review of the setup process, it may be helpful to visit Cisco's online configuration guide. For more in-depth training classes, see the Cisco training class schedule here.

Have questions? Need guidance? Contact us by email, phone at 913.322.7062 / 314.644.6400, or by completing the following form.
<urn:uuid:3d62ec05-2c88-4365-90d9-e3f7984b227f>
CC-MAIN-2022-40
https://centriq.com/blog/basic-cisco-router-setup-and-configuration/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00157.warc.gz
en
0.871754
1,027
3.0625
3
With the incredible improvements in cloud computing over the past few years, losing a device has become less disruptive than it was five years ago. While losing your smartphone is certainly a major inconvenience and a financial strain, there are many ways that you can protect and store your data and personal information. Cloud backups such as iCloud and G Cloud are a good start for consumers, but those services do not protect your data in the instance of a lost or stolen mobile device.

In the past, losing a device forced you through a number of steps to protect personal information such as bank accounts, credit cards, PINs, and passwords: frantically calling credit card companies to cancel cards, and checking in with your bank to prevent fraudulent activity. If you suspected your phone was stolen, it was, and definitely still is, a good idea to call your carrier and deactivate your SIM card. However, in the modern threatscape, even going through all of these tedious steps might not be enough to protect you from data theft and/or loss. We want to ease your stress and help create a preventative approach to handling your mobile security. Fortunately, we can share a simple method of securing your device and data: it's called encryption.

Encryption is THE most effective and surefire way to achieve maximum device and data security. When you encrypt a mobile device, you must create a unique passphrase or encryption key in order to access the encrypted data. However, you must be very careful when you start working with encrypted files, because the passphrase is the literal "key" to your data. If you forget your encryption key, you run a very high risk of losing your data with no way to recover it. Before we go any further, remember this and be sure to write down or store your encryption key in a secure location immediately after you set it!

Simply put, encrypting your smartphone makes it near-impossible for any unauthorized user to access your device. We can say this because, in order to even power on the phone, the encryption key is required. If the key is not entered, the device will not even start up. This makes it virtually impossible for anyone to access your information, as even the very best hackers would not be able to decrypt your phone when it's powered off.

There are very few disadvantages when it comes to encryption. As we mentioned above, losing or forgetting your encryption key could make it incredibly difficult, or impossible, to recover data. To avoid this, make sure to store your encryption key in a secure location that is not associated with the device you're encrypting. If you store your encryption key on the device that you are encrypting, there is a high chance that your phone could be decrypted by a hacker or thief. Additionally, encryption can slow down your device's performance, but any mid-range modern smartphone should be able to handle it with minimal or no holdups.

iOS: Luckily, as of iOS 8, Apple is already one step ahead of you. If you set a passcode and/or enable Touch ID on your device, your data will be automatically encrypted. You can reference Apple's Security Whitepaper for iOS 8.3 and later to get more detailed information. Additionally, as described in the whitepaper, every current iDevice has "a dedicated AES 256 crypto engine built into the DMA path between the flash storage and main system memory", meaning that the device is specifically designed to handle the encryption, and therefore performance will not be impacted.
Android: Although there has been much speculation about Android enabling automatic encryption, these devices are still not encrypted by default. However, manual encryption is relatively easy to do, and the following steps should work for any recent Android device/OS.

Before you begin, make sure that your device is plugged into a charger, as interrupting the process is likely to cause data corruption. Now, start by opening the Settings app > Security > tap "Encrypt Phone" to begin the encryption process. Your device will ask you to enter your security code or "confirm your pattern," and the encryption should begin. After the encryption has finished, you can confirm that the process has worked by going to Settings > Security and looking for the "Encrypted" badge under "Encrypt Phone."

Unfortunately for Android users, the encryption process can impact speed and performance, especially if you are using an older device. Encryption works better on newer devices with at least 64-bit ARMv8 processors and faster storage. Please note that with Android there is no way to decrypt your device without completely wiping and restoring it, so it might be a good idea to make a clean backup before you begin your device encryption.

*Please make sure you are familiar with, and completely understand, any steps you take when it comes to encrypting your data. As we have mentioned, data can become corrupted or lost if you do not handle the process appropriately. Please call Delaney Computer Services if you have any questions about the process at 844-TECHIES.
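As an optional extra check for readers comfortable with developer tools, an Android device's encryption state can also be read over adb from a connected computer. This is a hedged sketch: the ro.crypto properties shown here exist on most modern Android builds, but names and values can vary by Android version and vendor.

```
adb shell getprop ro.crypto.state   # prints "encrypted" or "unencrypted"
adb shell getprop ro.crypto.type    # "block" (full-disk) or "file" (file-based)
```

If the first command prints "encrypted," the device's data partition is protected as described above.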
<urn:uuid:0fe793d0-c0bf-40f3-85b4-2fc4d5e25c5f>
CC-MAIN-2022-40
https://www.dcsny.com/technology-blog/why-do-i-need-to-encrypt-my-smartphone/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00157.warc.gz
en
0.953078
1,041
2.546875
3
AWS Networking and Security 101

Key AWS services covered here include:

- Amazon EC2: the virtual machine, or as AWS calls them, instances.
- IAM: Identity and Access Management, which allows users to get access to the instances or applications.
- VPC: a Virtual Private Cloud, which is essentially a data center in the cloud.
- A storage service that users access using a public IP address.
- AWS Direct Connect: helps users connect their on-premises data center to AWS.
- Route 53: the DNS system that allows users to build applications and different services using domain name resolution.
- A service that allows users to connect their remote branches to the closest point in the AWS system.
- ElastiCache: caches frequently used data.
- CloudFront: the CDN service.

The AWS Cloud Computing Structure and Components

In a region:

- The first thing to do is create a VPC using a specified CIDR. VPCs are regional concepts in AWS.
- Specify an AZ within the VPC.
- Specify a subnet in the AZ. This will be part of the CIDR that was already created, and there is now an AZ-to-subnet affinity.
- Define or deploy the instances within the subnet.
- Once the instances are defined, users must now define a level of security: create a security group and apply it to multiple instances, or have one security group per instance.
- Network ACL: a stateless Access Control List that is applied at the subnet level.
- Route table and router: users have basic access to the route table, but do not have access to the actual router.

(See the AWS CLI sketch at the end of this section for a concrete version of the first few steps.)

Some AWS Gateways

Internet Gateway (IGW): a service that provides internet connectivity to the virtual machines.

Virtual Private Gateway (VGW): a service that allows users to build IPsec tunnels.

NAT Gateway: allows private subnets to connect to the Internet.

Transit Gateway (TGW): provides hub-and-spoke connectivity for the VPCs in the system. It connects the on-prem network to the VPC, or creates a hub-and-spoke topology between third-party VPN devices and the AWS VPN Gateway.

Customer Gateway (CGW): allows users to create a shell or definition for the gateway sitting on the on-prem site and then apply the configuration on the actual router.

Direct Connect Gateway (DXGW): a service that allows multiple regions to connect to a physical Direct Connect circuit.

The Architecture of the AWS Gateways

- The region holds the VPCs.
- These VPCs connect to their own TGWs.
- The CGW connects to the DXGW.
- The DXGW connects to the TGWs.

AWS Transit Gateway (AWS TGW)

- This allows many VPCs to talk to each other without VPC peering. The difference between using VPC peering and using the TGW is that VPC peering doesn't allow transitive routing, while the TGW does.
- This native service supports multiple route tables and also has a VPN attachment type.
- However, it has its own limitations: enterprises are responsible for programming the VPC routes manually when using the AWS platform without Aviatrix; IPsec tunnel throughput is only about 1.25 Gbps; scalability is a problem; and there is no overlapping IP support.

How Aviatrix Solves for the Limitations of the AWS Transit Gateway

- Aviatrix manages and controls the AWS TGW, which removes the routing limitations.
- Aviatrix takes care of the initial configuration of the routes and any updates.
- It helps to simplify BGP over Direct Connect.
- Aviatrix provides network correctness and propagates all the on-prem routes to the VPCs.

For more information, watch the video above.
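To make the region → VPC → subnet walkthrough above concrete, here is a small AWS CLI sketch. The CIDR blocks, availability zone, and resource IDs are illustrative placeholders (real IDs are returned by each create call), and this is only one minimal path, not a production design:

```
# Create a VPC with a /16 CIDR block (VPCs are regional).
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Carve a /24 subnet out of that CIDR in one availability zone.
aws ec2 create-subnet --vpc-id vpc-0abc12345 \
    --cidr-block 10.0.1.0/24 --availability-zone us-east-1a

# Create and attach an Internet Gateway (IGW) for internet access.
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc12345 \
    --vpc-id vpc-0abc12345

# Create a security group to control traffic to instances in the VPC.
aws ec2 create-security-group --group-name web-sg \
    --description "Example security group" --vpc-id vpc-0abc12345
```

Each command returns a JSON description of the created resource; the IDs shown here would be replaced with the ones AWS actually hands back.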
<urn:uuid:96daf6c7-b557-4321-8436-0251a4e7a9eb>
CC-MAIN-2022-40
https://community.aviatrix.com/t/h7hxphg/aws-networking-and-security-101
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00157.warc.gz
en
0.83831
860
2.796875
3
When looking for a printer for your business, you will likely come across two types of printers: a laser printer and an inkjet. Laser printers use a laser to add copy to paper, while an inkjet printer produces copies by spraying ink onto the paper.

Inkjet printers are generally used for high-quality prints. That is because the tiny droplets of liquid ink allow for precision and detail, the same way your screen renders high-quality images with thousands of pixels. While you can buy monochromatic black-only inkjet printers, most people and businesses buy colour inkjets. Colour printers generally require four cartridges (black, yellow, cyan and magenta), and these colours are combined to print unlimited hues and shades. When you print a document, the paper is fed into your business inkjet printer through a set of rollers. The print head then moves back and forth, spraying ink onto the page.

If you're deciding whether to invest in an inkjet printer for your business, there are a few things to consider.

- Quality of copies. Because inkjets spray ink onto the paper, they provide a greater variety of tones and blend colours better. However, prints are also subject to smudging, so you must let them dry and handle them carefully when removing them from the printer.
- Paper options. Laser printers are limited to certain types of paper: they cannot print on heat-sensitive paper and are not always effective on plastic sheets. Inkjets are more versatile, providing you with more base options for your projects.
- Printing speed. Precision and detail come at a cost, and that cost is speed. If you are planning on printing a large number of copies, you may find an inkjet printer slow. They're also not able to hold as much paper as other printers.
- Cost. Ink cartridges for inkjet printers tend to cost more. This is because they must be carefully designed and manufactured to generate high-quality copies. They are also more expensive because the liquid ink is more expensive to ship and store than the powder cartridges used in laser printers. Ink cartridges can also dry out over time, so if you don't use the printer regularly you may find yourself wasting a lot of ink. The initial cost for a business inkjet printer is low: a basic inkjet with a starter set of cartridges can be $100 or less, substantially less than a basic laser printer. However, the operational costs can be higher, as ink cartridges produce fewer pages than laser printer toners.
- Print subject. Inkjets are the better option if you will be printing lots of images and posters. While they can print text documents, the cost per page will work out to be more than copies from a laser printer.

Whether or not you need an inkjet printer for your business depends on the unique requirements of your company. For help choosing the right printer for your Ontario-based business, contact OT Group today. We supply new and refurbished printers and copiers to businesses across Southern Ontario, between Toronto and Ottawa.
<urn:uuid:968fadd3-5ba5-4cba-b0c9-c2d984534076>
CC-MAIN-2022-40
https://www.otgroup.ca/business-technology-insights/what-is-a-business-inkjet-printer
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00157.warc.gz
en
0.914927
637
2.59375
3
Most of us know that the first thing to try when something isn't working right on the computer is to shut it down and restart it. With Windows 8 and 10, however, it's likely that shutting down Windows does not fix the problem or problems. Why is this? Windows doesn't shut down completely when you use the shutdown feature.

Why Doesn't Windows Shut Down Completely?

Windows has a Fast Startup feature that is enabled by default unless you change it. It's a hybrid of the hibernate and shutdown features, designed so that your operating system loads much more quickly when you turn the computer on to get busy. This is usually a great feature; however, it becomes an issue when you need a complete shutdown. Shutting down Windows does not fix the problem with this feature enabled.

What's the Difference Between Shutdown with Fast Startup and Traditional Shutdown?

When you shut down your system with the Fast Startup feature enabled, it starts off discarding any open files or programs, just like the traditional shutdown. The difference is that this hybrid feature keeps the state of the Windows kernel hibernating on the disk, ready for the next startup, meaning that Windows doesn't shut down completely.

What if You Need a Complete Shutdown?

Saving the state of the Windows kernel on the disk means that any hiccups in the programs you're using won't get the opportunity of a reboot with a fresh start. You can either use the restart feature, or hold down Shift while you click the shutdown button. The Shift trick works whether you use the shutdown button in your Start menu, the Start screen, or the one on the Ctrl+Alt+Delete screen. So, when shutting down Windows does not fix the problem, try one of these, or the command-line equivalents sketched below.

For network or technical support for small- to medium-sized businesses, contact the experts at CCSI.
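For readers who prefer a command line, two built-in Windows commands cover the same ground. Both are standard tools; whether you want Fast Startup disabled permanently is a judgment call for your machine:

```
rem A full shutdown from the command line bypasses Fast Startup
shutdown /s /t 0

rem Permanently disable hibernation (and with it, Fast Startup)
powercfg /h off
```

The shutdown command performs a true full shutdown even with Fast Startup enabled, while powercfg /h off turns off hibernation entirely, trading faster boots for a guaranteed fresh kernel on every start.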
<urn:uuid:94cc9c56-63f8-401c-baae-d5381534c13e>
CC-MAIN-2022-40
https://www.ccsipro.com/blog/shutting-down-windows-doesnt-solve-the-problem/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00157.warc.gz
en
0.88943
393
2.765625
3
Where's my stuff? How location and IoT play well together

Ever wondered how to track small, high-value devices where there might be varying strengths of Wi-Fi, GPS, and cell tower signal?

At its heart, the core value from IoT stems from data. A lot of people refer to data as the new gold, but the reality is that it's the new oil. When oil comes out of the ground it has intrinsic value, yet it's not much use to anyone in its raw form. The true value of data, like petrol and diesel, can only be realized after it has been refined. The data from IoT sensors is very raw, possibly consisting of a temperature, location, or pressure reading; and because this raw data can come from many different sources, the key differentiator for IoT is the ability to aggregate data stemming from different places using analytics capabilities. IoT data insights developed by Watson IoT Platform allow organizations to tap into knowledge they've never had access to before, an incredibly powerful capability. Once refined, the data can come together to be fed into applications and solutions, turned into insight, and applied as knowledge that, among other things, can help to grow or expand a business.

Data can be derived from any number of IoT devices, such as sensors and actuators, and may come in any manner of shape and form: temperature, pressure, motion, distance, and more. Typically, sensors connect through a gateway because they cannot communicate directly over the internet. Getting the sensor data to travel over the wide area network or the internet requires connecting in an open, interoperable, scalable and secure manner. On the web, when people talk through services they are using a single protocol, HTTP, which is open and interoperable. In the same way, IoT devices need to be connected in an open, scalable manner, where everything is interoperable and secure.

The advent of a new generation of billions of sensors being deployed and retrofitted to existing systems, some of which could be 10, 20, 30, and even 40 years old, has given organizations the ability to access and collect data derived from multiple, often siloed systems. For example, a facilities management system might include sensors for managing heating, ventilation, and air conditioning. In parallel, the assets in the building are connected to an asset management system. Historically the two systems were only linked indirectly through individuals who used the phone (another system) to raise a problem, or to manually type in a meter reading. Watson IoT Platform makes it possible to bridge disparate systems, enabling not only the integration and aggregation of different data sources from each system, but also allowing the inclusion of new data sources, such as weather and location, to be added to the mix.

How IoT and location fit together

IoT is important from two perspectives. From the point of view of technology, IoT is a fantastic driver of innovation; on the business side, IoT is an enabler for business transformation and business disruption. An IoT solution can help organizations to improve their existing business. In fact, location data combined with timestamps gives organizations the capability to know when and where something is, or where it was; it also enables trend mapping, which can be used to optimize processes. Depending on the type of business or operating model, business optimization can be achieved through streamlining costs, improving efficiency, increasing yield, ensuring or improving quality, reducing cycle times, etc.
IoT in general is a huge creator of data, and has created many opportunities for big data analysis. Location is a key element in a comprehensive IoT solution, one that can enable companies to both increase revenue and decrease costs. Adding location data as another data point for input into data analysis enables an organization to derive valuable insights which can lead to more informed decision making. For example, including variables such as traffic patterns and usage density can be very important to decisions ranging from city planning, to placement of a new store location, to where a company's next manufacturing plant might be located. There are many good examples where using location services can help achieve results:

- Optimizing routes: reducing time wasted via traffic or inefficient dispatch.
- Providing status or feedback when a machine is down: identifying what the problem is.
- Reducing theft and loss by being able to track assets accurately, and receiving alerts if they venture from a known area or route.
- Enhancing customer loyalty and engagement through preemptive service calls and quick response to issues.
- Validating contractual obligations: using blockchains with location to validate where a "thing" is, where it was, when it got to a certain location, or whether it was left there.

A simple way to understand how location can affect a business

A commercial waste collection company collects bins every day from any number of industrial units or commercial properties. The traditional system of collection is based on a rotation where sites A, B, and C are visited on a Monday; on Tuesday, the route includes sites D, E, and F. The site visits occur regardless of whether the bins are full and overflowing or empty. But what happens if the same site visits could be correlated to the level of waste in each bin? If the level of waste in the bins is known at the end or beginning of every day, a planner can optimize the route for its waste trucks. If an organization can make sure its valuable assets (in this case the collection vehicles) are used optimally, it becomes more efficient by saving time, fuel, labor, and wear and tear on its fleet of vehicles. Instead of wasted trips collecting half-full or empty bins, the application of IoT technology enables a route planner to dynamically optimize routes, collecting only the bins that will be full by end of day. More importantly, using route optimization frees up previously wasted capacity, increasing the organization's bandwidth to service more accounts in the same amount of time. All this is achieved without compromising the quality of service, and without incurring any additional costs associated with bringing in new manpower or purchasing more vehicles. The company is able to expand its business without incremental cost simply by making better use of its available assets and capacity.

Common use cases where location and IoT can be applied

A common use case for location in the IoT space is tracking and tracing industrial tools and expensive equipment, often located on larger sites. For an organization with an extensive warehouse that extends for miles containing expensive pieces of equipment, it is very inefficient if a piece of machinery or a tool cannot be located quickly when needed. Using a combination of indoor location technology and Watson IoT, these assets can be easily tracked.
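As a toy illustration of the "alert when an asset leaves a known area" ideas above, here is a minimal geofence check in Python. This is not the Watson IoT Platform API; the coordinates, radius, and function names are invented for the example:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def outside_geofence(asset, fence_center, radius_m):
    """True if a tracked asset has strayed beyond a circular geofence."""
    return haversine_m(*asset, *fence_center) > radius_m

# Example: flag a tool that has wandered well beyond a 200 m site boundary.
if outside_geofence((51.5079, -0.0877), (51.5033, -0.0874), 200):
    print("ALERT: asset outside permitted area")
```

A production system would of course receive positions from device telemetry rather than hard-coded values, but the distance test at the core is the same.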
Supply chain and logistics is another use case where understanding the location of assets at any time is central to preventing theft, misplacement or malfunction. Product intelligence allows an organization to increase efficiencies and track assets or items on their journey from plant to store to final location. Using location tracking provides a much better understanding of where products end up, particularly when dealing with distribution networks where the producer of a product is not selling directly to the end user. Being able to track location close to sale helps organizations have a clearer picture of exactly who their end-of-the-line customer really is.

A deeper dive into the asset management use case

Managing costs while improving efficiency are key imperatives for any organization, especially health-care facilities. Reducing critical operating costs, logging operational hours of high-value machinery, increasing the use of an asset, and minimizing unplanned downtime can go a long way to reducing costs and improving operational efficiency. When combined with preventative and predictive maintenance initiatives, further strides can be made towards increasing the return on investment of expensive hospital equipment.

Preventative maintenance is an area well suited to the application of IoT capabilities, because it is critical that high-value or high-demand assets are functioning optimally and available without unplanned disruption due to breakdowns. Sensors placed on valuable, high-demand equipment can provide insight into asset use, location, status, and maintenance. For example, a large medical campus or facility might have several dialysis machines in constant use. By installing sensors on dialysis machines, a hospital can monitor the health of each machine and know when maintenance is due. In addition, an organization can switch from time-based maintenance (where maintenance is scheduled and performed whether it's needed or not) to condition-based maintenance, where care is provided when it is needed. This way, machinery is maintained when it is required, saving time and resources.

Using the same scenario, a hospital also needs to know where each dialysis machine is at any given moment, for both patient use and technician service. If a technician is dispatched to fix the machine, or a member of staff is dispatched to use the machine with patients, locating that machine quickly is essential to ensuring the machine can serve more patients in a shorter period of time. Maximizing the use of an expensive piece of equipment such as a dialysis or MRI machine is not only good for the health of patients, it also improves the efficiency of the hospital.

Applying hybrid locations in IoT opens up new realms of opportunity

How is it possible to track small, high-value devices where there might be varying strengths of Wi-Fi, GPS, and cell tower signal? There are many reasons why tracking device location is difficult: devices are getting smaller, use less memory, have tiny batteries, and offer little data bandwidth for transmitting back and forth. With literally millions of devices available, controlling the cost per unit is an important factor. From a battery-power perspective, relying on GPS to deliver location in a small connected device is typically expensive. Lastly, there are many places on earth where traditional location tracking technologies such as GPS can't reach, or can't reach accurately (urban canyons, high rises, etc.).
Skyhook Wireless is a mobile location services company specializing in location positioning, context and intelligence. In 2003, Skyhook anticipated the mass adoption and proliferation of mobile technology that could be used in any device, large or small. Initially collecting the locations of billions of Wi-Fi access points worldwide, Skyhook developed an innovative way to obtain location using Wi-Fi signals, which is faster, more precise and more battery-friendly than GPS and cell towers alone. This capability is especially important in locations like urban canyons, high rises, and campuses where Wi-Fi is becoming ubiquitous, because Skyhook doesn't require authentication to Wi-Fi devices; it determines location based on the Wi-Fi signal itself.

Making location faster, more precise and practical

Although there are different services available that can deliver a location coordinate (a latitude and a longitude) onto a device, whether it be a smartphone or something else, Wi-Fi solutions need to work in many different scenarios. On the one hand, a location needs to be fixed even when nowhere near any major infrastructure; likewise, they're expected to work in urban environments among the urban canyons created by tall buildings, narrow streets and alleys.

Whether it's industrial equipment, commercial vehicles, hazardous waste, perishable medical supplies (such as vaccinations, blood or organ donations), vital healthcare equipment, luxury items scheduled for delivery, or financial documents being transferred between institutions or private investors (stock certificates), precise location data is essential. In any of these instances, using location and IoT to track assets indoors or outdoors, through large facilities or campus-like environments, using a combination of Wi-Fi, GPS and cell tower locations (fundamentally triangulation with a twist), gives organizations the practical ability to use the right signal at the right time to get the most efficient and accurate location for a device.
<urn:uuid:008e203f-5509-4135-a59a-9bdde1b525f6>
CC-MAIN-2022-40
https://www.ibm.com/blogs/internet-of-things/location-iot/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00358.warc.gz
en
0.933629
2,678
2.53125
3
A survey carried out amongst the Association of Teachers and Lecturers showed that nearly sixty percent of those interviewed considered plagiarism from the internet a problem. The Internet has made an immense amount of information readily available to students, some of whom are lazy enough to copy and paste text into assignments and essays regardless of whether the copied text actually answers the question. Teachers found that some assignments even contained web adverts, which are often integrated into the body of online text.

Mary Bousted, general secretary of the ATL, said: "Teachers are struggling under a mountain of cut-and-pasting to spot whether work was the student's own or plagiarism."

Plagiarism affects both secondary and university institutions, with a growing number of techniques, such as Turnitin, being used to identify whether work has been copied. As the ATL writes, plagiarism will affect students' long-term skills when it comes to creating content and building arguments around subjects and themes, as well as their understanding of the subject being studied. One teacher said that getting students to understand what plagiarism is would help alleviate the problem, although it might do little to stem the growing number of online paid-for essay services, which can deliver genuine quality essays for a fee.
<urn:uuid:943a4bc4-c17a-44b5-9837-2ab17d5dc51f>
CC-MAIN-2022-40
https://www.itproportal.com/2008/01/21/net-plagiarism-threatens-whole-generation-uk-students/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00358.warc.gz
en
0.956664
275
2.65625
3
Concepts → Base Table

A base table is a physical schema entity object, typically a facts or intermediate object, that is a child table in multiple join relationships with other objects in the same physical schema or other physical schemas. You can reference this entity object as a base table to do the following:

- Create dimension-only or attribute-only queries that reference entity objects that do not have a direct join relationship. Typically, these entity objects are dimension tables that do not share common dimensions; however, each of them has either a direct join relationship with the base table (where the base table is the child entity object) or a join path to it.
- Ensure that Incorta applies the join paths you expect when there are multiple join paths between two entity objects in a given query.
- Handle many-to-many join relationships.

Consider a physical schema with the following entity objects:

- SALES (transactional or facts table)
- PRODUCTS
- CUSTOMERS
- COUNTRIES

The physical schema has the following left-outer join relationships:

| Child Table | Parent Table | Join Condition |
|---|---|---|
| SALES | PRODUCTS | SALES.SALES.PROD_ID = SALES.PRODUCTS.PROD_ID |
| SALES | CUSTOMERS | SALES.SALES.CUSTOMER_ID = SALES.CUSTOMERS.CUST_ID |
| CUSTOMERS | COUNTRIES | SALES.CUSTOMERS.COUNTRY_ID = SALES.COUNTRIES.COUNTRY_ID |

Notice that there is no direct join relationship between the PRODUCTS and COUNTRIES or the PRODUCTS and CUSTOMERS objects. In the case that you want to create a query, an insight for example, that references both the PRODUCTS and CUSTOMERS or the PRODUCTS and COUNTRIES entity objects, you need to either include the SALES object in the query or explicitly reference it as the driving base table for the query. Referencing the facts or intermediate object in the query may not work when determining the join path in some visualization types, an aggregated table for example. You will need to explicitly define the facts or intermediate object as the base table for the measures from the participating objects.

You can define a base table for one of the following while referencing two or more objects:

- an insight
- an Incorta Analyzer table
- a business schema view
- an Incorta view

You can also define a base table when exploring data from a physical schema or a business schema. The method you use for defining a base table varies according to the query context or the object. For objects that you use the Analyzer to create (insights, Incorta Analyzer tables, and Incorta views), or when exploring data, you need to drag a column from the facts table (base table) to the Measures tray → pill properties panel → Advanced → Base Field.

Here is an example: with reference to the physical schema mentioned above, you can use the Analyzer to create an aggregated table insight that shows sold products per country. In the Cluster Management Console (CMC), you can create a tenant that includes sample data for the SALES schema. Load the physical schema data before using it to create the example.

Here are the steps to create the Sold Products Per Country aggregated table insight:

- In the Navigation bar, select the Content tab, and then select + New → Add Dashboard.
- In the Add Dashboard dialog, for Name, enter Sales Dashboard, and then select Add.
- In the Action bar, select + (add icon), or select + Add Insight.
- In the Insight panel, select Listing Table or V.
- In Tables, select Aggregated Table.
- In the Data panel, select Add Data Set (+).
- In the Manage Data Sets panel, in Tables, select SALES. Close the Data Sets panel.
- From the Data panel, drag and drop the COUNTRIES.CountryName column to the Grouping Dimension tray.
- Drag and drop the PRODUCTS.ProductID column to the Measure tray, and then do the following:
  - Select > to open the pill properties panel, and then change the aggregation function to DISTINCT.
  - Select Query Plan to check the query plan for the PRODUCTS.ProductID column in the Query Plan Viewer. Close the Query Plan Viewer.
- In the Measure tray, for the PRODUCTS.ProductID column, select > to open the pill properties panel, and then drag any column from the SALES table to Advanced → Base Field.
- Select Query Plan again to check the changes in the query plan for the PRODUCTS.ProductID column. Close the Query Plan Viewer.

Notice the following:

- Before defining the Base Field, the insight shows only the number of products, and does not show any country. The Query Plan Viewer shows the PRODUCTS object as the base table for the query and no join paths between the participating objects.
- After defining the Base Field, the insight shows the countries in the COUNTRIES table and the number of products per country. The Query Plan Viewer shows the SALES object as the base table, and the join path used to move from the measure object (PRODUCTS) to the dimension object (COUNTRIES) through the SALES object.

Checking the query plan for your columns is a good practice. It allows you to ensure that the Analytics Service is applying the join paths you expect while running a query.

For a business schema view, use the Business Schema Designer to set the base table. To learn more, review Tools → Business Schema Designer → Set the base table for a business schema view. Define the base table at the business schema view level to avoid defining it multiple times when referencing the same entity objects in different insights, and use the business schema view as the source for the different insights as appropriate.

When integrating with external tools, such as Tableau or Power BI, make sure that you define a base table for the Incorta views that these external tools use. In the case of using a base table other than a physical schema table, perform a full load of the object data after defining the base table for the business schema view.
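For readers who think in SQL, the routed query that the base table produces is roughly equivalent to the following sketch. This is illustrative only; Incorta generates its own query plan internally, and the table and column names below simply mirror the sample schema described above:

```sql
-- Sold products per country, driven through the SALES base table.
SELECT c.CountryName,
       COUNT(DISTINCT p.ProductID) AS products_sold
FROM SALES s
LEFT OUTER JOIN PRODUCTS  p  ON s.PROD_ID     = p.PROD_ID
LEFT OUTER JOIN CUSTOMERS cu ON s.CUSTOMER_ID = cu.CUST_ID
LEFT OUTER JOIN COUNTRIES c  ON cu.COUNTRY_ID = c.COUNTRY_ID
GROUP BY c.CountryName;
```

Without the SALES base table there is no join path from PRODUCTS to COUNTRIES at all, which is why the dimension-only version of this query returns no countries.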
<urn:uuid:f0c506f4-10e0-4c9d-8d5d-50572c9c16cf>
CC-MAIN-2022-40
https://docs4.incorta.com/5.0/concepts-base-table/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00358.warc.gz
en
0.819521
1,356
3.625
4
Oct 08 2021

The continual increase in the quantity of data that systems must process places a burden on traditional system architectures, leading to performance bottlenecks. In this blog post, I'll explain how computational storage technology can enable existing servers to gain performance, increase their energy efficiency, and open new technical and business opportunities at the edge.

Computational storage is an architecture that provides compute functions embedded within storage devices. Computational storage devices (CSx) enhance the capabilities of traditional storage devices by adding compute functionality that can improve application performance and drive infrastructure efficiency. They reduce the movement of data between storage and the main CPU, while augmenting the overall compute capability of a given system. Similar to the advantages an onboard CPU gives to SmartNICs, CSx bring low-latency, efficient processing of vast amounts of data to existing servers.

CSx are part of a broader industry trend in system design focused on moving compute to where the data resides. We see this pattern at a macro scale with the evolution of edge computing. We're also seeing the rise of SmartNICs, which move computation closer to where network packets arrive. These system-design patterns and macro compute patterns are all based on the goal of moving compute to where data is stored or generated, rather than always moving the data to a centralized compute location. CSx give us the choice to process data where it is stored and avoid the need to move it across the system bus to the system CPU for processing, reducing the energy associated with massive data transfers and increasing the parallelism of bulk data-processing operations.

There are a number of reasons these devices make sense and are starting to gain traction in today's application and infrastructure designs:

The following diagram illustrates the way these factors compare to the "traditional" architecture.

As mentioned earlier, CSx are becoming an important part of system architecture, due to the large increases in data generation requiring processing at greater speed, with more energy efficiency, and in smaller form factors. Today's edge systems can gather vast amounts of data but struggle to perform the needed analytics where they reside without the benefit of larger systems and compute clusters. Instead of moving the data to a central location for processing, we move the processor to the data! By allowing the storage devices to process the raw data locally, CSx (SSDs with embedded compute, like those provided by NGD Systems) are able to reduce the amount of data that needs to be moved around and processed by the main CPU. Pushing analytics and raw data processing to the storage layer frees up the main CPU for other essential and real-time tasks.

Many examples of basic use cases, some as simple as offloading compression to the devices, are available today. However, more advanced applications include running databases, machine-learning (ML) models, and other forms of analytics on storage. Running a full Linux OS on the device, for example, can allow AI/ML and other advanced applications to be executed on the drive itself. We at VMware have started exploratory work to run ESXi ARM directly on the storage NVMe. Perhaps we'll be able to have a vSAN cluster spanning nonvolatile memory express (NVMe) devices in the future.
One use case for CSx technology we are currently investigating with NGD Systems is running a parallel database (such as VMware's Greenplum) directly on the storage layer. This offloads the main CPU and allows traditional and ML-inference-driven queries to be executed on the storage layer. Greenplum shards data across nodes to minimize cross-traffic and distributes queries across all data segments for highly parallel performance. For more information on our progress, check out this awesome VMworld session recording.

Other areas we are exploring for the use of CSx include improvements in video collection and analytics use cases, IoT analytics, and even emerging automotive technology. For locations with limited compute space — such as smaller offices, ships, branch offices, and on satellites in space — CSx-enabled systems can augment traditional system resources, reducing the number of servers needed and driving higher density and better sustainable computing outcomes.

VMware's vSphere platform technology and our modern apps Tanzu technology stand to benefit from the opportunities CSx-enabled solutions will provide. We're exploring these emerging opportunities to bring new industry-leading solutions to market.

Below are two of the most frequent questions we have come across with this new technology:

Can I put CSx in any server? Yes. These devices were intentionally designed to be "plug and play" with traditional storage devices, lowering the barrier to entry and increasing the flexibility of deployments.

Do I need to change my app architecture to use CSx? Some devices have locked or fixed features that are pre-loaded on the drives. Each performs a single function and requires some programming to enable it. Other devices have much more open and programmable resources and might simply require a cross-compile from x86 to ARM, avoiding the need to rewrite the entire application or function.

CSx is still in its infancy, but VMware researchers are starting to see that computational storage is part of a broader industry trend in system design focused on moving compute to where the data resides. There are a number of interesting use cases for combining VMware software offerings with these types of hardware. The ability to pack more horsepower into existing servers offers exciting possibilities. Both VMware and NGD Systems are working with SNIA on the Computational Storage standards efforts as well. Check out their website for more information.

Originally published on VMware.
<urn:uuid:3d8bb609-f547-496f-b9ef-3785cbd45d8f>
CC-MAIN-2022-40
https://ngdsystems.com/why-computational-storage-makes-sense-for-the-edge/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00358.warc.gz
en
0.921089
1,147
2.6875
3
With the Cybersecurity world focused on phishing, attacks on critical infrastructure, and ransomware, there is one area that is just as vulnerable, perhaps more: the medical devices implanted in, or connected to, patients. This area of Cybersecurity risk is just now starting to receive serious attention.

Why are medical devices so vulnerable?

It is fair to say that many Americans have experienced a serious medical setback or diagnosis which has required hospitalization. This, of course, has been greatly exacerbated by the COVID-19 pandemic. Many patients, while in the hospital and after they are discharged, are hooked up to some sort of machine or have a medical device implanted in them. Typical examples include insulin pumps, kidney dialysis machines, and cardiac pacemakers.

It is not the devices themselves that are unsafe. Rather, given the recent advancements in medical technology, it is the connections from these devices to an external environment that create risk. This has been fueled primarily by the boom of the Internet of Things (IoT), where the objects that we interact with daily, both in the physical and virtual worlds, are interconnected. It is these myriad interconnections which pose a Cybersecurity risk. For instance, when everything is interconnected in this way, the attack surface for the Cyberattacker is greatly expanded. Many backdoors become open for Cyberattackers to penetrate and then move laterally in a covert fashion.

Worse yet, many of the interconnections just described are not secure, and are not even encrypted. All information and data transmitted from a patient to his or her healthcare provider (and vice versa) are sent in what is known as "cleartext" format. This means that if this sensitive data is intercepted, it can very easily be deciphered by the Cyberattacker, especially when it comes to the username/password combination and any form of verification-based credentials.

A recent survey conducted by Fierce Healthcare [1] has further substantiated this risk. This is what their research found:

- 80% of all healthcare organizations have experienced some sort of IoT-related Cyberattack.
- 30% of these attacks involved medical devices.
- There are an astonishing 10-15 million medical devices in the United States alone.
- There is an average of 10-15 medical devices connected to each hospital bed (thus making them an extremely easy Cyber-based target).
- 42% of respondents blame the IoT for the Cyberattacks on medical devices.
- 50% of respondents blamed the unpatched and outdated network infrastructure of their respective healthcare organization for the Cyberattacks on the medical devices.
- A shocking 82% of IoT vendors have strong concerns about the level of security that is deployed in their medical devices.
- An overwhelming 70% of healthcare organizations are using an unsupported version of the Windows OS, such as Windows 7, Windows 2008, and Windows Mobile.

The Solution: the Health IT Joint Security Plan Framework

In the world of Cybersecurity, many frameworks have been established to provide entities guidance regarding the implementation of controls. The healthcare industry is no exception; it is guided by the Health Insurance Portability and Accountability Act (HIPAA). But as it relates to medical devices, one devoted framework was recently created: the Health IT Joint Security Plan [2], or the "JSP" for short.
It was formulated by the Healthcare Industry Cybersecurity Task Force, which has its roots in the Department of Health and Human Services. The theoretical constructs of the JSP framework lie in the Cybersecurity Information Sharing Act of 2015. The JSP encompasses:
- The Cybersecurity responsibilities of the vendors that create and manufacture medical devices.
- The establishment of specific Risk Assessment Methodologies.
- How vulnerabilities are to be reported to regulatory agencies.
- How the lines of communication from the medical device manufacturers to the healthcare providers (and vice versa) can be further improved.

The entire development and usage lifecycle of medical devices is also covered, including the following:
- The incorporation of best practices and standards in the inception of newer and upgraded medical devices.
- The implementation of controls after the device has been produced.
- The handling of any consumer complaints or concerns.
- Risk and vulnerability management of the medical devices after they have been deployed to the healthcare provider.
- How the medical device should be properly decommissioned after it has reached its so-called "end of life".

For more detailed information about the JSP framework, see this infographic.

Ravi Das is a Cybersecurity Consultant and Business Development Specialist. He also does Cybersecurity Consulting through his private practice, RaviDas Tech, Inc. He is also studying for his Certificate in Cybersecurity through ISC2.
<urn:uuid:550dd82f-48d3-47bf-9c26-c2c6dce8d552>
CC-MAIN-2022-40
https://platform.keesingtechnologies.com/medical-devices-at-risk-for-cyberattacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00358.warc.gz
en
0.951259
1,058
2.921875
3
Gainesville, Fla., Greenlights Countdown App for Traffic Signals

A majority of the city's traffic signals are part of a network that uses predictive algorithms and other technology to send real-time information to drivers about signal wait times.

Most of the city's signals are part of a connected network that uses real-time information, predictive algorithms and other technology to send information to drivers via the Enlighten mobile app. The system is powered by Connected Signals Inc. of Eugene, Ore.

The technology, known as "signal phasing and timing," or SPaT, communicates signal information to drivers, such as how much time is left on a green light, explained Emmanuel Posadas, Gainesville traffic operations manager. The system is live on 150 of the city's 248 signals.

The project grew out of a smaller pilot project in 2017 with the University of Florida Transportation Institute's I-STREET test bed, which was formed as a partnership with the Florida Department of Transportation's (FDOT) Connected Vehicle Initiative. That initiative includes numerous projects across the state, as Florida experiments with autonomous and connected vehicles and the infrastructure needed to support them.

In Gainesville, the signal timing project is part of the city's overall Vision Zero plan, a concept taken up by many cities across the country to reduce traffic-related fatalities to zero. The technology is already leading to the development of traffic signal prioritization for emergency vehicles, said Daniel Hoffman, assistant city manager overseeing public works, transportation, fire and rescue, parks and recreation and other areas.

"The Vision Zero program is meant to provide us metrics, and targets, that we intend to hit through intelligent design, through additional technology, and by beginning to balance out our priorities a little bit," said Hoffman.

Gainesville, a college town of about 132,000 residents, is treating transportation less as a system to move cars and more as a system to move people, said Hoffman. That means giving equal weight to pedestrians, cyclists and public transit.

"We are not so dense that we have even a method to completely move away from single-passenger vehicles," said Hoffman. "But we're starting to think more broadly around — particularly in parts of downtown — how does the overall system operate for all users."

Improving transportation efficiency for all users is part of the goal of the Connected Signals project. When signal timing takes into consideration traffic volume, speed and other data points — and then relays that signal timing information to drivers — the result is less idling time at red lights and fewer greenhouse gas emissions, say officials.

"Lowering fuel emissions, reducing carbon footprint, increasing safety are all hugely noble efforts worth pursuing and that's why we are thrilled to provide this service to municipalities and agencies at no charge," said Matt Ginsberg, CEO and co-founder of Connected Signals, in a statement. The company, working with the U.S. Department of Energy's Argonne National Laboratory, is conducting signal research in Las Vegas, Portland, Ore., Phoenix and San Jose, Calif.

"We're small enough so that you can pilot and test things out with a greater degree of ease than a bigger city," said Hoffman.
"But we're still big enough and still have a sophisticated enough transportation system that you can try out different things."

In another project with the university, Gainesville launched a three-year pilot with three driverless shuttles in December 2017, in a project known as GAToRS. Intelligent traffic signal technologies are also being explored in Las Vegas; Columbus, Ohio; Denver, Colo.; and other cities.
<urn:uuid:cca16af5-2d51-43d0-8eeb-3da36450f5e3>
CC-MAIN-2022-40
https://www.govtech.com/fs/automation/gainesville-fla-greenlights-countdown-app-for-traffic-signals.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00358.warc.gz
en
0.945705
796
2.59375
3
Smart cities are changing the way urban development will operate in the next century, and things are just getting started. Businesses can take advantage of growth and opportunity in these cities, particularly in China, as digital infrastructure takes hold.

The Internet of Things (IoT) increasingly gives consumers options to make their lives easier by connecting devices that think and act on their preferences and anticipate behavior. The same is happening in cities around the world, with leaders considering how implementing these major technological changes could transform their cities into working hubs of smart technology. The goal is to create "smart" cities that allow citizens to work, play, interact and travel by using technology as much as possible.

The National Development and Reform Commission of China defines a smart city as a "new idea and new mode of promoting smart city planning, construction, management and service, using the Internet of Things, cloud computing, big data and spatial geographic information integration, etc." A smart city takes information from layers of digital services and then uses it to make better decisions about how to plan a variety of services, such as transport, energy and healthcare.

For instance, the Malaysian Government and China Telecom Americas launched SMARTXP, a Smart City Experiential Centre, which allowed citizens to come and interact with technology, showing how such a smart city would work. Virtual reality, screen displays and interactive games all demonstrated what living in a smart city would be like. This might seem like a futuristic concept, but that future isn't as far away as you might think. As China Telecom Americas explains, technologies like smart home gateways can turn homes into smart homes, allowing multiple devices and apps to control various everyday aspects of a home, including kitchen appliances, beds, or even power sockets that can be turned off or on. "Technology is permeating into every aspect of our lives – yet how can we utilise the smart devices in our palm to enhance our living standard?"

How a Smart City Works

The creation of a smart city takes a significant amount of infrastructure. For instance, to provide digital services a city must be able to develop "layers" of digital infrastructure:
- Sensors: Cameras, sensors and smartphones help to gather data from users. This can then help create massive repositories of information that feed into subsequent layers, enabling better and more data-driven decision-making.
- Networks: This layer requires partnering with trusted, local networks and providers, such as electricity grids, the internet itself and telecommunication networks, to spread and gather the information from users.
- Platforms: To provide smart services, underlying platforms need to be created, requiring expertise in security, network management and information processing. This requires strong and reliable cloud services from a trusted provider that can ensure security.
- Applications: Whether a citizen is looking up a bus timetable that is updated in real time to account for traffic or monitoring their own electricity use, applications need to draw on the information gathered by the previous three layers.

These advances are crucial for expanding economic activity. According to Malaysian Prime Minister Datuk Seri Najib Razak, experiences such as the SMARTXP centre, built in conjunction with China Telecom Americas, will help people envision a more connected future.
He points to studies showing 90 percent of Malaysians will be living in cities by 2020. "So, the demand for smart city infrastructure will be tremendous and vital," he told the Malay Mail Online, adding that he was "pleased" to see the creation of the SMARTXP concept.

What a Smart City Can Do

From a citizen's perspective, the services a smart city could provide would involve information about everyday activities. Taking public transport, for instance, would be a far more enjoyable experience with sensors on trains and buses that provide real-time location updates. In Zhenjiang, citizens are able to make hospital appointments and rent bicycles from their smartphones. Information is sent to a "control center," which then helps planners and operators reduce inefficiencies.

Using cloud services, China will develop these smart cities further. As the China-Britain Business Council explains, businesses around the world could find opportunities to digitize infrastructure relating to transport, water, energy and healthcare, along with a need for massive digital storage. This evolution could see substantial changes in the way cities are run and managed in decades to come.

As MIT has pointed out in its own analysis, telecommunication providers such as China Telecom are playing a crucial role in developing smart cities. "They are not only extending their network coverage and improving their network quality, but also exploring new technologies to build new network layers."

Discover how technology is fuelling this development with the help of trusted Chinese telecommunications providers like China Telecom Americas.
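To make the four-layer model described earlier a little more concrete, here is a toy, self-contained Python sketch. It is not from the article, and every name and number in it is invented: a sensor reading travels over a "network" (here just a function call) into a platform, and an application turns the stored data into something a citizen can use.

```python
# Toy sketch of the sensors -> networks -> platforms -> applications layers.
from collections import defaultdict
from statistics import mean

class Platform:
    """Layer 3: stores sensor data and serves applications."""
    def __init__(self):
        self.readings = defaultdict(list)

    def ingest(self, sensor_id, value):   # fed by the network layer
        self.readings[sensor_id].append(value)

    def average(self, sensor_id):         # queried by the application layer
        return mean(self.readings[sensor_id])

platform = Platform()

# Layers 1 and 2: sensors push readings through the network to the platform.
for reading in [41.2, 39.8, 44.5]:
    platform.ingest("traffic-cam-17", reading)

# Layer 4: an application presents the aggregated data to the citizen.
print(f"average vehicle speed: {platform.average('traffic-cam-17'):.1f} km/h")
```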
<urn:uuid:aa63fc16-c707-4a93-8a77-ff26b1dd27b3>
CC-MAIN-2022-40
https://www.ctamericas.com/smart-cities-smart-innovation-next-generation-city/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00358.warc.gz
en
0.931755
990
3.328125
3
A brief history of machine learning in cybersecurity

How to connect all the dots in a complex threat landscape

As the volume of cyberattacks grows, security analysts have become overwhelmed. To address this issue, developers are showing more interest in using Machine Learning (ML) to automate threat-hunting. In fact, researchers have tried to implement ML in cybersecurity solutions since the late 1980s, but progress has been slow. Today, ML is showing increasing promise with the advent of Big Data, because the quality of information from which ML can learn is improving. However, there is much more to be done.

Anomaly Detection – The Early Days

When we talk about security, we want a system that can separate good from bad, normal from abnormal. Therefore, it is quite natural to apply anomaly detection to security. We can trace the beginning of anomaly detection back to 1987 [1], when researchers started building intrusion detection systems (IDS). Around 1998-1999, DARPA (the U.S. research agency whose ARPANET project gave rise to the Internet) created benchmark sets and called for research on ML methods in security [2]. Unfortunately, few of the results were practical enough, and even fewer products got to the operational stage.

Anomaly detection is based on unsupervised learning, a type of self-organized learning that helps find previously unknown patterns in a data set without the use of pre-existing labels. In essence, a system based on unsupervised learning knows what is normal and identifies anything abnormal as an anomaly. For example, an IDS might know what 'normal' traffic looks like, and it will alert on any traffic variants that don't match that knowledge, such as a vulnerability scanner. In short, anomaly detection systems based on unsupervised learning make a binary decision (normal/abnormal) and don't make sophisticated evaluations. Some refer to unsupervised learning applications as 'one-class problems.'

As you might imagine, systems based on unsupervised learning can generate a lot of false positives, because a situation deemed abnormal can be perfectly innocuous (think vulnerability scanner again). This is a problem that security analysts still struggle with today.

The Rise of Big Data

After 2000, developers and researchers began creating spam, phishing, and URL filtering systems based on supervised learning. In supervised learning, decisions are based on comparing incoming data against a set of labeled examples. One such example is a URL blacklist, where incoming e-mail is matched against a list of undesirable URLs and rejected if it matches a label on the list. A supervised learning algorithm analyzes the labeled data and produces an inferred function (i.e., this traffic behavior matches this input data, therefore it is bad), which can then be used to classify new examples.

Early filtering systems using supervised learning were based on relatively small datasets, but datasets have grown in size and sophistication with the advent of Big Data. For example, Gmail offers an Internet-scale database of known good addresses, and it's easier to train its ML engine with sophisticated models of what is acceptable. Big models (in terms of number of parameters) based on Big Data, such as deep learning models, have gradually become more popular.
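As a hedged illustration of the supervised approach just described (this is a toy sketch, not anything from the article or any product), the following Python fragment trains a classifier on a handful of hand-labeled URLs with crude hand-crafted features. Real systems use vastly larger datasets and far richer features.

```python
# Toy supervised URL filter. The labels (the "supervision") are invented.
from sklearn.linear_model import LogisticRegression

def features(url):
    # Crude features: length, hyphen count, dot count, "login" keyword.
    return [len(url), url.count("-"), url.count("."), int("login" in url)]

urls = ["example.com", "mybank.com", "secure-login-mybank.example.ru",
        "news.example.org", "paypa1-login.example.cn", "wikipedia.org"]
labels = [0, 0, 1, 0, 1, 0]  # 1 = labeled malicious

clf = LogisticRegression().fit([features(u) for u in urls], labels)

# Score an unseen URL; with these features it should lean toward "malicious".
print(clf.predict([features("account-login-verify.example.su")]))
```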
For example, supervised ML has been successfully used in anti-virus signature generation for years, and in 2012, Cylance began offering next-generation anti-virus systems based on datasets other than signatures, such as anomalous traffic behavior.

Combining Supervised and Unsupervised Learning

Supervised learning has shown more success in security applications, but it requires easy access to large sets of labeled data, which are very difficult to generate for cyberattacks such as APTs (advanced persistent threats) and zero-day attacks targeted at enterprises. Therefore, we cannot easily apply supervised ML to solve all cyberattacks. This is where unsupervised learning comes back into the picture. We need to develop more advanced AI/ML that can be unsupervised or semi-supervised (e.g., through adaptive learning) to solve the additional cybersecurity challenges. Adaptive learning (human-guided analysis), coupled with supervised and unsupervised learning, improves your ability to detect those APTs and zero-day exploits.

A New Direction: Connecting the Dots

One of the big problems with simple anomaly detection is the volume of false positives. One way to address this issue is by correlating multiple events (dots) and then evaluating whether or not the correlation indicates a strong signal for a cyberattack. For example, one 'dot' might be an executive logging into the network at 2 a.m.; while this alone might look anomalous, it wouldn't be enough to trigger an alert (alerting on it by itself would likely be a false positive). However, if the executive is seen logging in at 2 a.m. from an IP address in Russia or China, that combination would trigger an alert.

Developers and researchers are just now combining supervised and unsupervised learning into cybersecurity products. For example, Stellar Cyber's Starlight product correlates multiple events and evaluates whether, when looked at together, they constitute a threat. This approach significantly reduces false positives and helps analysts identify APTs or zero-day attacks more quickly.

The next frontier will be to make ML self-teaching, so that past experiences in detecting and responding to threats are factored into new evaluations of potential threats. The system would thus grow more accurate over time. Some security systems are beginning to implement self-teaching technology today, but the history of ML in cybersecurity is relatively short, and large-scale improvements will emerge in the future. While security analysts will always be needed to make the ultimate decision about whether to kill a threat, ML can make their jobs much easier if applied properly.

[1] D. E. Denning, "An Intrusion-Detection Model," IEEE Transactions on Software Engineering, vol. 13, no. 2, pp. 222–232, 1987.
[2] R. Lippmann, R. K. Cunningham, D. J. Fried, I. Graf, K. R. Kendall, S. E. Webster, and M. A. Zissman, "Results of the 1998 DARPA Off-line Intrusion Detection Evaluation," in Proc. Recent Advances in Intrusion Detection, 1999.
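To make the "connecting the dots" idea described above concrete, here is a minimal sketch with entirely invented weights and thresholds. Individually weak signals score below the alert threshold, but their combination crosses it.

```python
# Toy event correlation: only the combination of weak signals raises an alert.
RULES = {
    "odd_hour_login": 0.3,   # e.g., a login at 2 a.m.
    "foreign_ip": 0.4,       # e.g., source IP geolocates abroad
    "new_device": 0.2,
    "bulk_download": 0.5,
}
ALERT_THRESHOLD = 0.6

def score(events):
    return sum(RULES.get(e, 0.0) for e in events)

def evaluate(events):
    s = score(events)
    return f"{events}: score={s:.1f} -> {'ALERT' if s >= ALERT_THRESHOLD else 'ok'}"

print(evaluate(["odd_hour_login"]))                # weak on its own: no alert
print(evaluate(["odd_hour_login", "foreign_ip"]))  # correlated dots: alert
```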
<urn:uuid:e2d83189-9378-48b9-a826-7af9ccd79884>
CC-MAIN-2022-40
https://stellarcyber.ai/a-brief-history-of-machine-learning-in-cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00358.warc.gz
en
0.93973
1,315
2.859375
3
An IP address is the identifier that allows information to be sent between devices on a network. It contains location information and makes devices accessible for communication. IP addresses are allocated by the Internet Assigned Numbers Authority (bet you didn't know that!). This might be fine and dandy news for the non-technical, but odds are you still have no idea why hiding your IP address is advised.

Since your IP carries location information, it can be used to discern your physical location, often with surprising accuracy. Another reason to hide your IP is the recent increase in cyberattacks: IP addresses can be used to target attacks. You might also hide your IP in order to watch content that is blocked in your region. Changing your IP can be done, but it is a more involved process; hiding it is a much easier option.

A Virtual Private Network (VPN) creates an encrypted tunnel between your device and the service's server. Rather than connecting to a website directly, your traffic passes through this tunnel, adding a layer of protection. The VPN allows you to connect to the internet as normal and retrieve information, but through the tunnel created. This ensures that your web traffic cannot be intercepted; furthermore, anyone inspecting the traffic will only see the IP address of the VPN.

Another option is Tor, which relays your traffic through a series of computers distributed across the globe. Rather than a single request made between two points, your computer sends out layered requests that are each encrypted. You are relayed from Tor node to Tor node before exiting the network and reaching the desired destination, and each node knows only the previous hop and the next hop. This makes your movements much harder to track and leaves you much less susceptible to attack. To use this method, download the Tor Browser, or talk to your IT professionals.

If you would like to educate yourself in more detail about the information presented in this blog post, please visit www.pcmag.com
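As a rough illustration of the Tor approach described above, the sketch below asks a public IP-echo service for your apparent address, first directly and then through Tor. It assumes a local Tor client is already running on its default SOCKS port (9050) and that the `requests` library is installed with SOCKS support.

```python
# Minimal sketch: compare your direct public IP with your Tor exit-node IP.
# Assumes Tor is listening on 127.0.0.1:9050 and that requests has SOCKS
# support installed: pip install "requests[socks]"
import requests

ECHO = "https://api.ipify.org"  # a public service that returns your IP
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is resolved via Tor too
    "https": "socks5h://127.0.0.1:9050",
}

direct_ip = requests.get(ECHO, timeout=10).text
tor_ip = requests.get(ECHO, proxies=TOR_PROXIES, timeout=30).text

print(f"Direct IP: {direct_ip}")
print(f"Via Tor:   {tor_ip}")  # should be a Tor exit node, not your address
```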
<urn:uuid:79ad12be-b6c5-4a8a-b544-854ca848ef27>
CC-MAIN-2022-40
https://www.bvainc.com/2016/08/22/securtiy-alert-hide-ip-address/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00558.warc.gz
en
0.948962
404
2.921875
3
The modern method of carrying out banking transactions from a PC or mobile device is ever more popular. Around half of all Internet users in the EU (48%) use online banking services. Financial transactions are among the most sensitive online procedures and, naturally, security is of the utmost priority here. To users, security (96%) and data protection (94%) are the most important factors when it comes to online banking.*

In times of ceaseless reports about security issues such as "Heartbleed", the "Panama Papers" and the like, financial service providers, hosts, administrators, users and others are vital links in the chain that makes the process smooth and secure. G DATA experts have put together the following tips and recommendations to ensure that digital bank transactions run smoothly. Security and convenience are not mutually exclusive in this case.

87% consider the use of a publicly accessible PC to be a risk.* Computers in an internet café do not fall into the category of "secure devices". Doing online banking over a public Wi-Fi network is akin to moving across a digital minefield. Users have no control over the security of third-party and (especially) public computers. Hence it must be assumed that, at worst, they are infected with malware that can, for example, steal access data as it is entered, or that specifically targets online banking customers.

Open WLANs are not secure because the data traffic can very easily be intercepted and read. This means that the input data is not safe from attackers. The use of password-secured networks and encrypted transfer protocols (e.g. universal HTTPS) is a first step in the right direction towards better security. Using a VPN tunnel is the only way to isolate all data traffic from being snooped on by attackers without interrupting it (thus giving the attacker away). For 59% of participants in our survey, the use of personal devices is an important step in increasing security when using online banking services.*

Attackers exploit the inattentiveness of potential victims and coax them into visiting websites that look and feel similar to the original website so they can carry out phishing or malware attacks. Assume there is a bank with the domain name "MyPersonalBank.com": attackers would, for example, register a domain called "MyPersonaIBank.com". It is not at all visually apparent that the lower case "l" in "Personal" has been replaced with a capital "i". Inattentive users can be lured into this trap when such a trick is used in spam emails or on websites. The use of HTTPS instead of HTTP is also important. Modern browsers use colours and icons to indicate whether a connection between the browser and the web server is secure and whether the certificate provided is valid.

On this point, choosing an appropriate payment authorisation process, such as a transaction number (TAN), is generally important. The HBCI/FinTS process is also an option. There are several processes that each have advantages and disadvantages – various banks have their own favourites and advise their customers on the appropriate options in the branch.

Some banking Trojans specialise in carrying out attacks using technological and social engineering techniques. They use so-called web injects to display additional content on a website to the victim, such as a test transaction or an allegedly failed credit to your account. The attackers claim that the victim must carry out a "return transfer" as the money does not belong to him/her.
However, in reality there never was a credit – it is just a technical trick. The "return transfer", however, is a perfectly normal transaction from the bank's point of view and results in a deduction from the account.

Many banking portals have now implemented an automated logout function which logs the user off after several minutes of inactivity. Logging out of the service is important, as the current session is terminated and nobody can use technological means to take it over.

Irregular transactions should be apparent as soon as you look at an account overview. The first point of contact for clearing up the matter is your local bank. The terms and conditions of the financial provider will tell you what steps must be taken, and in what period, to make a valid claim. Specific periods apply here in terms of the closing of accounts, which differs from account statements under law and must be carried out at least once a year.

34% of respondents say that they use a daily limit for online transfers for the security of their online banking.* Banks generally offer a function that can be used to set a limit for all transactions carried out offline and online. This allows you to determine the maximum amount that can be transferred within a specified period. Even though this might entail a loss of convenience, setting up a limit can reduce damages, should the worst happen.

Even if reading such contracts is not your favourite pastime, studying the terms and conditions makes a lot of sense. Banks specify different requirements of their customers in respect of security precautions. Using an up-to-date security solution is either recommended or even made mandatory by most banks. Furthermore, many financial institutes stipulate a legally permissible personal excess of €150 for customers in the event of a claim for damages. If gross negligence or fraudulent intent on the part of a user is proven, he might (certain cases excepted) be required to pay for all damages in full (Art. 675v of the German Civil Code).

75% see the use of security software as a personal measure for increasing security when using online banking services.* Antivirus software fends off malware before it gets onto a device. Innovative products with proactive technologies, cloud connections, spam protection and the like offer optimum protection. All G DATA PC security solutions also include G DATA BankGuard for specific protection against banking Trojans. Security holes must be closed as quickly as possible so cyber criminals cannot exploit them for their attacks. G DATA security solutions with innovative Exploit Protection prevent the exploitation of security holes. More than half of users (54%) state that installing updates is a way of staying secure when using online banking services.*

The access data for an account is valuable in every respect and should be treated as such. Every account requires a separate, strong password. A common mistake is for users to use their birthday as their PIN because it is easy to remember. It is easier if you use a secure password manager to remember the passwords for you. A login name in combination with a password is the minimum requirement for secure access. It is better if additional login factors are used, such as a biometric factor.

User reviews, required permissions and the reputation of the app provider and the shop are all elements that should be checked before any download. G DATA Mobile Internet Security for Android checks the permissions of installed apps.
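On the point above about separate, strong passwords: a password manager will generate these for you, but the idea is simple enough to sketch with Python's standard `secrets` module. The length and character set here are just reasonable defaults for illustration, not a recommendation from G DATA or any bank.

```python
# Generate a separate, random password per account using a CSPRNG.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!$%&/()=?+-_"

def make_password(length=20):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for account in ["bank", "email", "shop"]:
    print(f"{account}: {make_password()}")  # a different password every run
```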
The number of users who also use mobile devices for online banking is increasing. 21% of respondents use a smartphone, and 13% stated that they also carry out banking transactions via a tablet.*

Many attackers try to trap potential victims via email. To do so, they send mass emails that are frequently visual copies of original emails. The familiar look is supposed to build confidence that the links included are not dangerous (they are frequently used for phishing) and that attached documents have to be read urgently (in almost all cases those documents contain malware). Unfortunately, such social engineering attacks work all too often. Therefore, users need to be on their guard and prepared for non-technical attacks as well. Financial services providers do not send important, personal messages via email.

* All statistics: Online Banking 2014 – Security counts! Preferences and requirements of banking transactions on the Internet – a D21 Initiative survey conducted by TNS Infratest (2014)
<urn:uuid:b3a9cf1e-b755-4936-8661-b99eeeadcb7e>
CC-MAIN-2022-40
https://www.gdatasoftware.com/tipps-tricks/sicheres-online-banking
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00558.warc.gz
en
0.930473
1,702
2.5625
3
What Is Google Classroom?

Recently, a Chicago teacher published a lesson on a popular teacher networking website that turned a few heads. For a world literature class, the teacher integrated lessons in which students created and maintained Google Earth journals. As the class progressed through each reading and author covered in the curriculum, students participated in assignments that both integrated and relied on Google Earth and related Google programs to produce assessments. Students interacted with each other via Talk and Hangout, collaborated on and helped plan and design each other's journals, and shared their findings and interesting connections in a group setting. Essentially, the class gave students not only a literary grounding in the world literature community, but also a visual one. The incredibly important job of providing meaningful context in an English class is made far easier for teachers by projects like these. With cloud services like Google bringing in reference points and making it easier and easier for students and teachers to gain useful information immediately at need in the classroom, there is no doubt the cloud has re-shaped the way English teachers approach their instruction.

While this project and curriculum was certainly an effective and innovative idea, perhaps more useful for English teachers has been the growing number of texts available via the cloud for free, as well as through providers like Amazon, Google Books, and Barnes and Noble. Quickly disappearing are the days when dusty and torn paperbacks were used again and again by classes until they literally fell apart. Instead, teachers can assign readings and homework without worrying about providing student access to the text outside of the classroom. PDFs of English texts are useful for many other reasons as well. Most notably, students deft with programs like Adobe Reader can annotate, navigate, and print from PDFs, giving them constant engagement with the text without needing to be concerned about maintaining the condition of the book itself.

English classes have long been thought of as the musty antique rooms of the modern middle and high school. Whereas subjects like science and math, responding to changes in the economy and the workforce, have leaped ahead in integrating new technologies and cloud services into instruction, English classes continue to rely on hard-copy books, written assessments, and traditional instruction. However, this is no longer true for the majority of teachers, and indeed is rapidly becoming ancient history in many school districts. As both teachers and students continue to grasp the excitement and opportunity afforded for both instruction and collaboration through the cloud in secondary classrooms, ideas continue to spring up that may revolutionize the future of English education to the same extent that math and science instruction has been changed. While lessons like the Google Earth journal and new resources like digital texts come with potential limitations as well, some of which are still unidentified, the massive crowd of teachers experimenting with these technologies across the country seems to indicate that innovation will continue to respond to the new needs of the 21st century English classroom.

By Adam Hausman

Adam Hausman draws on an unusual combination of professional and academic experience for his writing. Currently a teacher at Northside College Prep, a magnet school in Chicago, Adam's academic resume includes a B.A.
in English and Psychology from Indiana University, as well as an M.Ed. in Instructional Leadership from the University of Illinois-Chicago. But education is just one of Adam's passions. On the side, Adam is a fledgling web entrepreneur, working on a number of small start-up projects as a site creator and content editor and contributor. Adam's experience also includes content work for digital advertising agencies including BGT Partners, Ignite Media and currently CloudTweaks Media, and contracted work for companies like Quintiles. Born and raised in the New York City metro area, Adam currently calls the north side of Chicago home.
<urn:uuid:75191e24-fdc4-41b7-8a8b-0ee89834311b>
CC-MAIN-2022-40
https://cloudtweaks.com/2013/08/what-is-google-classroom/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00558.warc.gz
en
0.960452
753
3.15625
3
Securing the Federal Government's Data

In late 2021, Homeland Security Secretary Alejandro Mayorkas said, "Cybersecurity threats are among the greatest challenges facing our Nation. Organizations of all sizes, including the federal government, must protect against malicious cyber actors who seek to infiltrate our systems, compromise our data, and endanger American lives."

The cyber threat landscape facing the federal government is only growing more complex. From cybercriminals to nation-state actors and their proxies, a diverse array of attackers seek to steal sensitive data, extort money, or create chaos and heighten distrust in American institutions.

These threats are particularly pronounced at the federal government's edge – that is, any data or system outside of a traditional data center. Without proper cyber protections in place, the edge, which can include mobile endpoints like laptops and cellphones, deployed defense tactical elements, and training centers and testing labs, is vulnerable to attack and exploitation. Reinforcing this vulnerability is a misunderstanding across the federal landscape about the appropriate tools required for protecting the edge, as recent survey findings demonstrate.
<urn:uuid:b812a61e-149a-4d83-88ec-dac172d0eddc>
CC-MAIN-2022-40
https://acronisscs.com/data-security-needs-for-federal-government/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00558.warc.gz
en
0.909402
223
2.578125
3
Translation for Use in a Multitude of Different Industries

While translation has proven to be an indispensable and invaluable tool in regard to global commerce, translation can take many different forms. For example, translation can be used in the context of the medical professions, defined as the translation of texts and documents pertaining to the areas of medicine, healthcare, or pharmaceutical products. As medical language is based largely on Latin, a medical translator who is fluent in the language can be extremely beneficial to both doctors and patients. What's more, a medical translator can also help medical practices maintain compliance, as many countries around the world have strict regulations regarding access to patient information and drug approval. While medical translation is one form of translation that is widely known, there are many other types of translation that can be used in a variety of contexts.

Legal translation is defined as the process of translating information contained within legal documents from one language to another. As improperly translated legal documents can have disastrous consequences for the legal rights of thousands of citizens in a particular nation, ensuring that legal documents are translated in the most comprehensive and efficient manner possible is of the utmost importance. Moreover, as many legal documents contain ambiguities, a legal translator must avoid interpreting these ambiguities in their own words, a task that can prove to be extremely difficult. Some examples of legal documents that can be translated include court and witness transcripts, contracts, depositions, wills and trusts, confidentiality agreements, policies, complaints, licenses, litigation documentation, and legal disclaimers.

Financial translation is defined as converting financial documents, statements, reports, and audits from one language to another. Financial translators often combine proficiency in multiple languages with comprehensive experience in a variety of different financial fields. As many countries have financial terminology that is unique to the particular language of that country, the equivalent of a financial term in one country may not be readily apparent in another language. To this end, financial translators work to connect the dots, making use of their expertise and experience to ensure that financial transactions can be conducted in the most fluid and effective manner possible. Some examples of financial documents that can be translated include regulatory documents, insurance policies, shareholder summaries, expense reports, bonds, and equities.

Corporate or business translation is defined as the translating of both formal and informal documents that a particular company, business, or corporation produces and disseminates throughout its organization. As businesses around the world are more connected than ever before in history due to the power of the internet, translation for use in global commerce has become extremely important. For example, as fast-food restaurants operate locations in various countries around the world, ensuring that they are able to translate their terminology into the language of the country they are operating in can mean the difference between success and failure.
Some examples of documents that can be translated in the context of corporate or business functions include websites, business proposals, legal agreements, presentations, commercials, and brochures, as well as other forms of correspondence.

Literary translation is defined as the process of translating dramatic and creative works, whether prose or poetry, into other languages. As perhaps the most commonly known form of translation, literary translation has been key to our modern understanding of many ancient cultures. Well-known literary works such as the Epic of Gilgamesh and Beowulf have been able to live on throughout history by means of literary translation. In contrast with many other forms of translation, where a term in one language can simply be swapped for a similar term in another, the accuracy or inaccuracy of a literary translation can completely alter the meaning of a particular text or piece of literature. As such, literary translators have to take all means necessary to ensure that they are able to translate works of literature as faithfully as possible.

While translation has been a valued skill throughout human history, technological advances have impacted the world of translation, just as has occurred in virtually every other industry. As such, translation work is no longer limited to human translators, as there are now automatic translation software programs that can do the same work at a fraction of the cost, as well as in significantly less time. Through these automatic translation software programs, users can translate all kinds of documents into a variety of different languages, ranging from Farsi to Turkish. As such, no matter the type of translation that a consumer may be interested in, there are solutions within the marketplace that can suit every individual's specific needs, no matter their scale or scope.
<urn:uuid:543efa7c-51f3-4833-adba-293edcb583fd>
CC-MAIN-2022-40
https://caseguard.com/articles/translation-for-use-in-a-multitude-of-different-industries/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00558.warc.gz
en
0.945985
935
2.75
3
There are three types of Digital Certificates: TLS/SSL certificates, code signing certificates, and client certificates.

TLS/SSL Certificate

TLS/SSL (Transport Layer Security/Secure Sockets Layer) certificates are installed on the server. The purpose of these certificates is to ensure that all communication between the client and the server is private and encrypted. The server could be a web server, app server, mail server, LDAP server, or any other type of server that requires authentication to send or receive encrypted information. The address of a website with a TLS/SSL certificate will start with "https://" instead of "http://", where the "s" stands for "secure."

Code Signing Certificate

Code signing certificates are used to sign software or files that are downloaded over the internet. They're signed by the developer/publisher of the software. Their purpose is to guarantee that the software or file is genuine and actually comes from the publisher it claims to come from. They're especially useful for publishers who distribute their software for download through third-party sites. Code signing certificates also act as proof that the file hasn't been tampered with since it was signed.

Client Certificate

Client certificates, or Digital IDs, are used to identify one user to another, a user to a machine, or a machine to another machine. One common example is email, where the sender digitally signs the communication and the recipient verifies the signature; client certificates authenticate both the sender and the recipient. Client certificates can also serve as a second authentication factor when the user needs to access a protected database or arrives at the gateway to a payment portal, where they'll be expected to enter their password and be subjected to further verification.
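To see a TLS/SSL certificate in practice, the following sketch opens a TLS connection with Python's standard `ssl` module and prints who the certificate was issued to, who issued it, and when it expires. The hostname is just an example; any HTTPS site will do.

```python
# Fetch and inspect a server's TLS certificate.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()  # verifies the certificate chain

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# 'subject' and 'issuer' are tuples of relative distinguished names.
print("subject:", dict(rdn[0] for rdn in cert["subject"]))
print("issuer: ", dict(rdn[0] for rdn in cert["issuer"]))
print("expires:", cert["notAfter"])
```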
<urn:uuid:2d558dd6-8879-457d-b912-45913a82fa3f>
CC-MAIN-2022-40
https://www.appviewx.com/education-center/types-of-digital-certificates/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00558.warc.gz
en
0.923021
359
3.3125
3
About Static and Default Routes

To route traffic to a non-connected host or network, you must define a route to the host or network, using either static or dynamic routing. Generally, you must configure at least one static route: a default route for all traffic that is not routed by other means to a default network gateway, typically the next hop router.

The simplest option is to configure a default static route to send all traffic to an upstream router, relying on the router to route the traffic for you. A default route identifies the gateway IP address to which the FTD device sends all IP packets for which it does not have a learned or static route. A default static route is simply a static route with 0.0.0.0/0 (IPv4) or ::/0 (IPv6) as the destination IP address. You should always define a default route.

Because the FTD uses separate routing tables for data traffic and for management traffic, you can optionally configure a default route for data traffic and another default route for management traffic. Note that from-the-device traffic uses either the management or data routing table by default, depending on the type (see Routing Table for Management Traffic), but will fall back to the other routing table if a route is not found. Default routes will always match traffic, and will prevent a fall back to the other routing table. In this case, you must specify the interface you want to use for egress traffic if that interface is not in the default routing table.

You might want to use static routes in the following cases:
- Your networks use an unsupported router discovery protocol.
- Your network is small and you can easily manage static routes.
- You do not want the traffic or CPU overhead associated with routing protocols.
- In some cases, a default route is not enough. The default gateway might not be able to reach the destination network, so you must also configure more specific static routes. For example, if the default gateway is outside, then the default route cannot direct traffic to any inside networks that are not directly connected to the FTD device.
- You are using a feature that does not support dynamic routing protocols.

Route to null0 Interface to "Black Hole" Unwanted Traffic

Access rules let you filter packets based on the information contained in their headers. A static route to the null0 interface is a complementary solution to access rules. You can use a null0 route to forward unwanted or undesirable traffic into a "black hole" so the traffic is dropped. Static null0 routes have a favorable performance profile. You can also use static null0 routes to prevent routing loops. BGP can leverage the static null0 route for Remotely Triggered Black Hole routing.

Route Priorities

Routes that identify a specific destination take precedence over the default route. When multiple routes exist to the same destination (either static or dynamic), the administrative distance for the route determines priority. Static routes have an administrative distance of 1, so they typically are the highest priority routes. When you have multiple static routes to the same destination with the same administrative distance, see Equal-Cost Multi-Path (ECMP) Routing. For traffic emerging from a tunnel with the Tunneled option, this route overrides any other configured or learned default routes.
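The precedence rules above (a more specific destination beats the default route, and administrative distance breaks ties between routes to the same destination) can be sketched in a few lines of Python with the standard `ipaddress` module. The prefixes, distances, and next hops below are invented; this is a mental model, not FTD behavior verbatim.

```python
import ipaddress

# (destination network, administrative distance, next hop) -- illustrative only
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), 1, "203.0.113.1"),    # default static route
    (ipaddress.ip_network("10.1.0.0/16"), 110, "10.0.0.2"),   # learned dynamically
    (ipaddress.ip_network("10.1.2.0/24"), 1, "10.0.0.3"),     # more specific static
]

def lookup(dest):
    dest = ipaddress.ip_address(dest)
    candidates = [r for r in routes if dest in r[0]]
    # Longest prefix wins; lower administrative distance breaks ties.
    return max(candidates, key=lambda r: (r[0].prefixlen, -r[1]))

print(lookup("10.1.2.9"))   # matches the /24, not the /16 or the default
print(lookup("8.8.8.8"))    # no specific route, falls through to 0.0.0.0/0
```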
Transparent Firewall Mode Routes

For traffic that originates on the Firepower Threat Defense device and is destined through a bridge group member interface for a non-directly connected network, you need to configure either a default route or static routes so the Firepower Threat Defense device knows out of which bridge group member interface to send traffic. Traffic that originates on the Firepower Threat Defense device might include communications to a syslog server or SNMP server. If you have servers that cannot all be reached through a single default route, then you must configure static routes. For transparent mode, you cannot specify the BVI as the gateway interface; only member interfaces can be used. See MAC Address vs. Route Lookups for more information.

Static Route Tracking

One of the problems with static routes is that there is no inherent mechanism for determining whether the route is up or down. Static routes remain in the routing table even if the next hop gateway becomes unavailable, and are only removed from the routing table if the associated interface on the Firepower Threat Defense device goes down.

The static route tracking feature provides a method for tracking the availability of a static route and installing a backup route if the primary route should fail. For example, you can define a default route to an ISP gateway and a backup default route to a secondary ISP in case the primary ISP becomes unavailable.

The Firepower Threat Defense device implements static route tracking by associating a static route with a monitoring target host on the destination network that the Firepower Threat Defense device monitors using ICMP echo requests. If an echo reply is not received within a specified time period, the host is considered down, and the associated route is removed from the routing table. An untracked backup route with a higher metric is used in place of the removed route.

When selecting a monitoring target, you need to make sure that it can respond to ICMP echo requests. The target can be any network object that you choose, but you should consider using the following:
- The ISP gateway address (for dual ISP support)
- The next hop gateway address (if you are concerned about the availability of the gateway)
- A server on the target network, such as a syslog server, that the Firepower Threat Defense device needs to communicate with
- A persistent network object on the destination network (a PC that may be shut down at night is not a good choice)

You can configure static route tracking for statically defined routes or for default routes obtained through DHCP or PPPoE. You can only enable PPPoE clients on multiple interfaces with route tracking configured.
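The tracking mechanism described above can be mimicked in miniature. The hypothetical sketch below (all addresses invented; Linux-style `ping` flags assumed) polls a monitoring target with ICMP echo requests and switches between a primary and a backup default gateway accordingly. The FTD does this itself, of course; this is only a model of the logic.

```python
# Toy model of static route tracking: ping a monitor target; if it stops
# answering, "install" the untracked backup route; restore on recovery.
import subprocess
import time

PRIMARY_GATEWAY = "203.0.113.1"   # tracked default route (low metric)
BACKUP_GATEWAY = "198.51.100.1"   # untracked backup (higher metric)
MONITOR_TARGET = "203.0.113.10"   # host that must answer ICMP echoes

def target_alive(host, timeout_s=2):
    # One ICMP echo request; return code 0 means a reply arrived.
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        capture_output=True,
    )
    return result.returncode == 0

active = PRIMARY_GATEWAY
while True:
    alive = target_alive(MONITOR_TARGET)
    if active == PRIMARY_GATEWAY and not alive:
        print("monitor target down -> switching default route to backup")
        active = BACKUP_GATEWAY
    elif active == BACKUP_GATEWAY and alive:
        print("monitor target back -> restoring primary default route")
        active = PRIMARY_GATEWAY
    time.sleep(10)  # poll interval
```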
<urn:uuid:50c4ffff-c943-4802-be29-c7f9a94a8549>
CC-MAIN-2022-40
https://www.cisco.com/c/en/us/td/docs/security/firepower/610/configuration/guide/fpmc-config-guide-v61/static_and_default_routes_for_firepower_threat_defense.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00558.warc.gz
en
0.886188
1,247
3.21875
3
The new Intel "Knights Landing" processor's topology includes what Intel calls near memory, an up-to-16 GB block of on-package memory accessible faster and with higher bandwidth than traditional main memory. Near memory is distinct from the up to 384 GB of main memory supported on the Knights Landing chip. This new memory architecture presents some interesting options for applications running on high performance systems.

The processor and memory topology of Knights Landing can be seen in the following figure. The MCDRAM is near memory and DDR4 is the far memory in the system. But what is this near memory really, and how and when would an application want to use it? Intel does not yet provide a lot of information, but here is a block diagram to show what the Knights Landing architecture looks like. And here is what we do and do not seem to know:

- The near memory (also known as on-package memory) was jointly developed by Intel and Micron Technology. This High Bandwidth Memory (HBM) will be up to 16 GB in size at launch. This is on-package, not on-chip.
- The near memory link's bandwidth will be about 4.5X to 5X higher than that of DDR4 memory. This appears to be total bandwidth, as shown in the above figure, with all channels – both MCDRAM and DDR memory – populated. As reported previously in The Next Platform: "The HBM on the package has around 400 GB/sec of aggregate bandwidth across eight segments of 2 GB of memory using a proprietary link. The two DDR memory controllers on the chip have around 90 GB/sec of bandwidth and max out at 384 GB of capacity. So it is 4.5X the bandwidth for HBM than DDR4."
- Given a cache miss on a core, and assuming low memory-bus utilization, the latency of that cache miss sourced from MCDRAM is still unknown. The MCDRAM's access latency, though, is assumed to be considerably lower than that of a similar DDR memory access.
- Each core has 32 KB each of L1 data and instruction cache. Pairs of cores share a 1 MB L2 cache kept coherent across all cores. (We'll be discussing cache concepts shortly.)
- The MCDRAM can be used in one of three different modes (with the mode set at system restart):
  - Cache mode: The MCDRAM units together act as a shared (up to 16 GB) L3 cache, common to all of the chip's cores. We will be assuming that this L3 cache acts as a cast-out cache: any data, both changed and unmodified, that is cast out of a core-pair's L2 cache is first written by the hardware into the L3 cache. More on this later. It seems to follow, though, that each MCDRAM controller – shown in grey in the figure above – has an L3 cache directory for identifying which real memory data blocks reside in each MCDRAM.
  - Flat Model mode: The MCDRAM acts as memory in its own right. The DDR memory and the MCDRAM are addressed with separate real addresses, and both are accessible from the processors. Unlike the L3-mode MCDRAM, wherein the hardware decides what resides in the MCDRAM, the application and the operating system decide – via virtual and real addressing – what resides in each type of memory.
  - Hybrid mode: Portions of the MCDRAM act as L3 cache. How this split is decided is not clear at this writing.
- The setting of these modes is controlled strictly at system boot time; you need to decide what setting works best for your use.
- Whether the MCDRAM mode is set up as a cache or as memory, an L2 cache miss may find its data there; the latency of access from MCDRAM on such a cache miss is assumed to be the same, independent of mode. Simply because MCDRAM can exist as an L3 cache, an MCDRAM access would need to be considerably faster than a similar access from the DDR memory. At this writing, Intel has only published the likely associated bus bandwidths; we do not know the latencies. So, for now, we guess. Let's say the relative difference is a factor of 4X. (Again, this is only a guesstimate, intended only to provide you with a mental model.)

So, given the above, under what circumstances do we decide to use each of these modes? It happens that knowing this stuff is not enough, but it is a start. The MCDRAM and its massive bandwidth links exist for reasons of performance. Given that cores incur an L2 cache miss, we assume your application's performance would be better served by a cache fill from the MCDRAM, no matter what the mode. Better served, yes, but let's keep in mind that good performance does not require that all accesses come from the MCDRAM, any more than it requires that all accesses succeed in hitting the L2 cache. It's a probability thing, the cases of which we will get into shortly; the higher the probability of an access from a faster location, the better the performance of the application.

- As an L3 cache, you control only very indirectly how much of your data resides in the cache. You want to minimize accesses to the DDR memory and maximize a core's accesses from its own L1 and L2 caches.
- As Flat Model memory, you choose what data objects are going to reside there, such that your application perceives a high probability of MCDRAM accesses versus DDR memory accesses. Even so, here too, you want to maximize the L1 and L2 hit rates.

That sort of thing is what we will be covering in what follows.

Some Quick Cache Theory

So that we are all on the same page, let's go over why it is that a cache of any type improves performance. Caches are intended to be much more rapidly accessed than DDR memory. Each type of cache is segmented into regions called cache lines, each with the purpose of holding some contiguous data block – say, 64 bytes on a 64-byte boundary – representing the contents of an equal-size DDR memory block. Knights Landing, like many other modern processor designs, comes with different types of caches, each with a different speed and size. In Knights Landing there are:
- L1 data and instruction caches, 32 KB each in size, one of each per core,
- an L2 data cache, 1 MB in size, common to each pair of cores (and the one to eight threads executing there),
- optionally, an MCDRAM-based L3, up to 16 GB in size, common to all of the cores on the chip. As shown earlier, this total cache is physically made up of eight regions, each with a controller and link.

The L1 cache accesses are typically completed in one or two processor cycles; think fractions of nanoseconds for L1 accesses, with DDR accesses taking hundreds of nanoseconds. L2 access latencies are slightly slower than L1; think a small handful of cycles (for example, less than ten). L3 accesses, being still further away from the cores (and indeed off chip), take still longer. Again, an access from DDR memory takes multiple times longer than an MCDRAM access.
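One way to internalize that "probability thing" is a back-of-the-envelope average memory access time (AMAT) model. The latencies below are purely illustrative guesses consistent with the article's assumed roughly 4X MCDRAM-versus-DDR gap; they are not published Knights Landing numbers.

```python
# Back-of-the-envelope AMAT model; all latencies are assumptions.
LATENCY_NS = {"L1": 0.5, "L2": 4.0, "MCDRAM": 40.0, "DDR": 160.0}

def amat(h1, h2, h3):
    """h1: L1 hit rate; h2: L2 hit rate given an L1 miss;
    h3: fraction of L2 misses served by MCDRAM rather than DDR."""
    miss1 = 1 - h1
    miss2 = 1 - h2
    return (LATENCY_NS["L1"]
            + miss1 * (LATENCY_NS["L2"]
            + miss2 * (h3 * LATENCY_NS["MCDRAM"]
                       + (1 - h3) * LATENCY_NS["DDR"])))

# Same L1/L2 behavior; only the MCDRAM hit probability changes.
print(f"mostly MCDRAM fills: {amat(0.95, 0.80, 0.90):.2f} ns")
print(f"mostly DDR fills:    {amat(0.95, 0.80, 0.10):.2f} ns")
```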
It does not matter how few bytes of a block (i.e., cache line) your application actually accesses: if the data is not in that cache, the cache controller requests a complete block's worth of data from a slower level of storage. We understand that the on-core L1 cache line size is 64 bytes. The 32 KB L1 caches are segmented into these 64-byte blocks, typically as a two-dimensional array of such blocks; one dimension exists for aging, keeping the more frequently accessed blocks in the cache longer. (But we don't yet know that for Knights Landing.) As you would want, typically most data and instruction stream accesses succeed in coming out of this L1 cache. Most accesses, yes, but when the needed data is not in the L1 cache, your application starts to experience time delays, the worst being when it needs to wait for data to be brought out of the DDR memory.

In the event that the L1 cache is accessed and the needed data or instructions are not there, a 64-byte cache fill request is sent to the core-pair's L2 cache. If the L2 happens to have that memory block, the L2 responds back to the core relatively quickly with the needed 64-byte data block, filling an L1 cache line with its contents. Think of instruction processing as being paused for only a few processor cycles to accomplish this.

Let's pause for a moment to observe that the L2 cache line size might also be 64 bytes, but it might be 128 bytes or even 256 bytes in size. Even so, whatever the L2 cache line size, given that the needed data is in that core-pair's L2, the L2 cache responds to an L1 cache fill request with the needed 64 bytes. Similarly, if such an L2 cache access finds that the needed data block is not in that L2 either, the L2's controller makes an access request – this time of a size equal to the L2 cache line size – from still slower memory. For example, referring to the figure above, this access request may have its data block served from another core's L2, from the MCDRAM – whether in L3 cache mode or Flat Model mode – or from DDR memory.

Even if your program accessed a single byte of a requested block, the cache fill access is a complete block's worth in size. No matter that your program might be randomly accessing independent bytes all over memory (so you might be thinking that the L1 cache can hold 32K of such bytes), the number of independent blocks in each cache is actually no more than the size of the cache divided by the cache line size. For example,

# of L1 data cache lines = 32 KB / 64 bytes = 512 L1 cache lines

So, at any moment in time, the L1 data cache can only hold 512 different blocks of memory. As a new block is filled into the cache, another block must be removed. As data gets changed, the changed data continues to reside in the cache (typically L2) for a while; it is typical that the change is not immediately returned to the DRAM. As more data blocks are pulled into the cache, even aged changed blocks must be removed from that cache. Such a changed block's entire contents – no matter that, perhaps, only one byte had been changed – must then be written out of that cache. What happens next with that changed block depends on whether the MCDRAM is in L3 cache mode or Flat Model mode.

(As a bit of a warning, what follows is going to feel a bit esoteric, but what we are trying to first show is the physical differences between using this MCDRAM as a cache versus as real memory. From there we get a better feel for the performance trade-offs.)
As data gets changed, the changed data continues to reside in the cache (typically the L2) for a while; it is typical that the change is not immediately returned to the DDR memory. As more data blocks are pulled into the cache, even aged changed blocks must be removed from that cache. Such a changed block's entire contents – no matter that, perhaps, only one byte had been changed – must then be written out of that cache. What happens next with that changed block depends on whether the MCDRAM is in L3 cache mode or Flat Model mode. (As a bit of a warning, what follows is going to feel a bit esoteric, but what we are trying to first show is the physical difference between using this MCDRAM as a cache versus as real memory. From there we get a better feel for the performance trade-offs.)

L3 Cache Mode: I have said that when new data blocks are pulled into the L2 cache, the previous contents of the selected cache line are either written out (if the cache line is changed) or perhaps simply lost (if unchanged). It is typical, though, that the L3 cache acts to hold any data blocks cast out of the L2 cache(s). Changed or not, data blocks cast out of the L2 are stored into the L3 cache, where the data block gets to live in a cache – albeit a slower cache – for a while longer. So, in the event of a core-pair's subsequent L2 cache miss, the resulting cache fill request might find its needed data still in the relatively faster L3 cache – here, the MCDRAM – rather than having to request the data from slower DDR memory. And there can be a lot of data blocks in there. Given the L3 cache line size is 128 bytes, the number of these cache lines is

# of L3 cache lines = 16 GB / 128-byte lines = 134,217,728 blocks = 128 mega-lines

Notice that these L3 cache lines have been fed with the cast-out contents of any and all of the L2 caches on this processor chip. The total L3 may be a wonderfully large data state, but it comes at a cost. Being a cast-out cache, every single data block cast out of any L2 cache is written into the L3 cache – that is, if the block is not already in the L3 cache. Given a rapid L2 cache fill rate, the internal busses as well as the links to the L3 caches experience similarly significant traffic. Overuse of any resource typically means delayed access for any subsequent uses of that resource.

Referring to the first figure of this story, notice that there are also up to eight MCDRAM controllers, each one associated with one unit of MCDRAM. These MCDRAM controllers decide which cast-out blocks go into which MCDRAM unit. These controllers also know, upon an L2 cache miss, whether their portion of the L3 cache has the requested data block. With any luck, both the data blocks being cast out and those being filled are spread out evenly over each of the eight MCDRAM units. It would seem that there is such a directory lookup for each block cast out of the L2 caches.

The theory behind such a cast-out L3 is that any block previously held in any L2 cache will soon be needed again by a core. The extent to which this is true is what makes the L3 useful (or not). For example, suppose:

- After a program changes a data block in the L2, that changed block is cast out from there into the L3. That block is not soon re-accessed, so it ages there. Because other blocks are subsequently flowing into the L3, that changed block is aged out of the L3 to make space. So the changed block must be read out of the L3 and written back into its home location in the DDR memory. The cast-out into the L3 was not useful and required extra processing to return the changes to DDR memory.
- As a slight variation on the previous, suppose the data block in the L2 had been unchanged and then cast out into the L3. Lack of re-access means that it is similarly subsequently aged out of the L3; the unchanged block is thrown away. In either case, the hardware is spending bus resource to process a data block through the L3 that is not being used (before being lost).
- As a counter-example to the previous cases, suppose that a changed block was cast out to the L3 and shortly thereafter another core wants access to that same data block. The L2 cache fill is done from the L3, re-accessing this changed data block.
This might happen repeatedly over an extended period of time, avoiding any (re)access of the data block in the DDR memory. Here the cast-out process was useful, and it further avoided the latency of a DDR memory access.

Before going on, recall that this L3 cache contains copies – or modified versions – of up to 16 GB of blocks of DDR memory. If the relatively frequently used working set largely fits in this space, the DDR memory may be very infrequently accessed, which is a very good thing.

It is worth observing that the L3 cache backs up the L2 cache. For those periods where the current contents of the L2 cache are completely successful in feeding the processor, the L3 cache provides no service. It is because the L2 cache is often not big enough that more data blocks are needed, and so more are cast out into the L3. The rate of fill – and so of cast-outs – varies with the current working set, from very low to very high and everything in between. (As we'll see later, you can influence this.)

Flat Model Mode: The MCDRAM here is not an L3 cache; it is memory in its own right, with its own portion of the real address space. It's just faster memory than DDR memory. L2 caches still cast out changed data blocks, though; the target of those changed data blocks might be either the MCDRAM or the DDR memory. Whatever location originally acted as the source of the L2 cache fill, that same location acts as the target for the cast-out block. What is not happening here is any automatic hardware movement of blocks between the two memories: the hardware does not guide changed blocks from the MCDRAM memory into the DDR memory, cast unchanged data blocks into the MCDRAM, or cast changed L2 cache blocks originally sourced from DDR memory into the MCDRAM. If your program wants data to reside in the MCDRAM, your program must put it there.

Similarly, as with DDR memory, the MCDRAM memory acts as the source of data blocks for cache fills targeting a core-pair's L2 cache. The key here is that any such cache fills from MCDRAM memory are completed faster than from the DDR memory; such MCDRAM-sourced fills are assumed to be done just as fast as from an MCDRAM-based L3. Because the application is taking responsibility for what resides in the MCDRAM, the hardware need not. As a result, hardware resources otherwise used to maintain the L3 cache remain free for cases where they are needed. Further, such pre-knowledge on the part of the application can result in a higher success rate when accessing from the faster MCDRAM.

As a conclusion concerning cache theory, it's worth noting that cache improves performance because of both storage locality (often called spatial locality) and temporal locality. For storage locality, cache designs assume that if your program is going to access even a byte, then your program is likely to need to access some number of the bytes immediately around it. Again, on an L1 cache miss, even though your program accessed – say – an integer, the hardware is going to fill an L1 cache line with 64 bytes which include your integer. The cache design is successful when your program accesses the data around it. Where this is not true, the benefit of the cache is lessened. For temporal locality, cache design assumes that your program will continue to (re)access one or more data items in a cache line frequently enough to keep that block in the cache. When the program is perceived as not accessing that block, the block disappears from the cache.
“Frequently enough,” though, is dependent upon how quickly new cache lines are being pulled into the cache; the faster that data blocks are pulled into the cache, the shorter the time that any other data block remains in the cache for subsequent re-access.

How do we decide between the L3 cache approach and Flat Model memory?

16 GB seems like a wonderfully huge cache, and it is! The question on the table, though, is: "Can pre-knowledge of our application's data use – and, perhaps, even reorganization of that data – allow our application to run still faster if we instead use Flat Model mode?" With the Flat Model, we, rather than the hardware, manage what will reside in the MCDRAM near memory. With the hardware doing the managing (as a cache), the hardware saves everything there that no longer fits in the L2 caches, whether or not it will be reused; essentially everything that gets accessed would ultimately flow through this L3. With the application managing the MCDRAM, you decide, mostly based on your perception of frequency of access there. It is frequency of access from the MCDRAM that is important; recall that for some applications the L2 cache(s) can (and preferably will) hold the data blocks for a while.

Flat Model mode memory is memory, just like any other addressable memory. With MCDRAM, perhaps in ways similar to NUMA-based topology systems, we have multiple speeds of memory. It can also be virtual memory, managed as pages; some interface into the OS decides that some virtually addressed pages will be mapped onto MCDRAM memory rather than DDR memory. Indeed, you may recall from this Intel Knights Landing presentation that Intel is proposing a heap manager interface which allows your programs to allocate from MCDRAM memory; this hbw_malloc returns the virtual address of an allocated region in MCDRAM.
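To give a feel for that interface, here is a hedged C sketch based on the hbwmalloc API from the open-source memkind library, which grew out of this proposal. Treat it as illustrative: the function names (hbw_check_available, hbw_malloc, hbw_free) are the library's, but check the library's current headers before relying on the details.

```c
#include <stdio.h>
#include <stdlib.h>
#include <hbwmalloc.h>   /* from the memkind library; link with -lmemkind */

int main(void)
{
    size_t n = 1 << 20;

    /* Fall back to an ordinary DDR-backed malloc if no high-bandwidth
       memory (MCDRAM) is configured on this node.                      */
    int have_hbw = (hbw_check_available() == 0);
    double *hot = have_hbw
                ? hbw_malloc(n * sizeof *hot)   /* MCDRAM-backed */
                : malloc(n * sizeof *hot);      /* plain DDR     */
    if (!hot) return 1;

    for (size_t i = 0; i < n; i++)   /* the frequently accessed data */
        hot[i] = (double)i;

    /* ... bandwidth-hungry kernel works on hot[] here ... */

    if (have_hbw) hbw_free(hot); else free(hot);
    return 0;
}
```

The point of the fallback is that the same binary still runs on nodes with no MCDRAM configured; only the placement of the hot array changes.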
Although, at 16 GB, the MCDRAM memory is certainly large, it is nonetheless a fraction of the potential 384 GB of the DDR far memory. The trick is getting that which is most frequently accessed – and incapable of staying in the L2(s) – to reside in this Flat Model mode MCDRAM memory. You don't need everything being accessed to reside there. Performance is all about frequency of access; that which is infrequently accessed can be accessed from slower memory and only minimally impact performance.

For example, and we'll be getting into this shortly, consider a straightforward search of a massive database table fronted by a large index. The DBMS tends to touch only the portion of the table proper which contains the row(s) actually being accessed as the result. Think of this as a single touch and so low frequency. On the other hand, the query – or even many queries – will be repeatedly touching portions of the index. It is the index that should reside in the MCDRAM, not necessarily the table.

Efficient Use Of Memory And Cache

Given we are using the MCDRAM in Flat Model mode, we nonetheless want to maximize the amount of relatively frequently accessed data that resides there. But, even if we weren't, we would also want to efficiently use the 1 MB L2 caches, keeping more frequently accessed data blocks there and minimizing the number of cache fills into them (which, we know, knocks other blocks out of the L2). The key to this is data organization: how do the data objects and program instructions get mapped into pages or cache lines?

As a straightforward example, consider a linked list as in the following diagram. For most programmers this diagram is just how we think about it (and no further). Each item in the list is an object, taking up some amount of storage. But where? All that is usually known for sure is that the storage is somewhere in a software abstraction called the heap. The generic heap manager doing this storage allocation has no idea that each object was going to be part of a linked list; it will just allocate the needed storage as efficiently as possible from its point of view. The result is that, as your program scans through the list, each node is likely to be found in a different block of memory, perhaps even in a different page of memory. Each is going to effectively consume a cache line when it is accessed, meaning a cache fill for each, and then be knocked out of the cache as other objects are pulled in. For a moderately long list, by the time the processor gets to the end of the list, the first part of it – and everything else that had been in the L1 before – is no longer in the L1 cache. It is also worth observing that the Link-Key object may be just a few bytes in size, a small fraction of the 64-byte L1 cache line size. Other stuff may be in the remaining bytes, but what, and is it useful now?

The following figure shows a functionally identical linked list. Here, instead of allowing some generic heap manager to allocate memory, the nodes of the list are organized as part of array-like storage. It's not that this needs to be perceived as an array – each node can be pointing to another as before – it's just that the virtual and physical storage backing these nodes are contiguous blocks of memory, with nodes contiguous within data blocks. Here the same information is packed tighter in both the Flat Model MCDRAM and in the L1/L2 caches. Tighter packing means fewer cache fills – even if from this faster MCDRAM – and so still faster operations on this list, more space for other data in the caches, and more space for other data in the MCDRAM. (Note: Think of these linked arrays abstractly as one large and growable array. The "Link" values are then just indexes into that abstract array; a minimal sketch of the idea follows.)
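Here is one minimal way such index-linked, contiguously backed nodes might look in C. This is my own sketch of the idea, not code from the article; the pool size and node layout are arbitrary.

```c
#include <stdio.h>

#define POOL_MAX 1024
#define NIL      (-1)

/* All nodes live side by side in one contiguous array, so neighboring
   nodes share cache lines; "links" are just indexes into the pool.    */
struct node { int key; int next; };

static struct node pool[POOL_MAX];
static int pool_used = 0;

static int node_alloc(int key, int next)
{
    if (pool_used >= POOL_MAX) return NIL;   /* a real pool would grow here */
    pool[pool_used] = (struct node){ key, next };
    return pool_used++;
}

int main(void)
{
    /* Build the list 2 -> 1 -> 0 by pushing each new node onto the head. */
    int head = NIL;
    for (int k = 0; k < 3; k++)
        head = node_alloc(k, head);

    /* Walking the list now touches a handful of cache lines in total,
       instead of one (mostly wasted) line per heap-scattered node.    */
    for (int i = head; i != NIL; i = pool[i].next)
        printf("%d ", pool[i].key);
    printf("\n");
    return 0;
}
```

With an 8-byte node, eight nodes fit in every 64-byte cache line, so a scan of the list pulls in eight nodes per fill rather than one.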
I will not take you through the details here, but mentally picture the same game played with the nodes of a database index or any tree structure. The basic unit of virtual/physical storage to use is some multiple of the cache line size. Many such nodes are considerably smaller than that. Can the same information be repackaged, with an associated storage manager, in a way that is friendlier to the cache and the MCDRAM? I can imagine whole books being written on such optimizations, and you certainly don't want to read much more of it here.

Notice, though, where we have been in this article. We started with a physical construct: DDR memory and the faster MCDRAM as seen in the Knights Landing design. We realized that hardly any programmers work directly with such physical constructs. Instead, the MCDRAM is virtual memory mapped onto this faster physical memory, or it is just an L3 cache, which most programmers hardly realize – or for that matter, hardly need to realize – exists. We then moved up the stack to concepts that programmers do appreciate and noted that you do have some control over the appropriate use of this new hardware feature, allowing your applications to really scream when needed.

After degrees in physics and electrical engineering, a number of pre-PowerPC processor development projects, a short stint in Japan on IBM's first Japanese personal computer, a tour through OS/400 and IBM i operating system, compiler, and cluster development, and a rather long stay in Power Systems performance that allowed him to play with architecture and performance at a lot of levels – all told about 35 years and a lot of development processes with IBM – Mark Funk entered academia to teach computer science. He is currently professor of computer science at Winona State University.

Reprinted with permission of Mark Funk. Original story posted here.
<urn:uuid:e6fe6c09-b2f7-464e-8779-4692751e30a1>
CC-MAIN-2022-40
https://www.nextplatform.com/2015/04/28/thoughts-and-conjecture-on-knights-landing-near-memory/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00558.warc.gz
en
0.942406
5,538
2.6875
3
Using big data and text mining methods, scientists at the University of Liverpool have created a warning system for a devastating disease in pet rabbits and sheep.

Flystrike, or myiasis, is caused by larvae of Lucilia sericata feeding on the surface of the skin. The disease causes severe tissue damage, is likely to lead to secondary bacterial infections, and may result in the death of the animal.

Researchers from the Small Animal Veterinary Surveillance Network (SAVSNET) used electronic health records from over 40,000 rabbit consultations collected from veterinary practices across the UK. Computers were programmed to screen all clinical records for suspect cases of flystrike, and identified some 300 cases among these rabbits. By analyzing the cases of flystrike, the researchers identified the strongly seasonal nature of this devastating disease, with most cases occurring between June and September. As well as confirming the seasonality of the disease, the results can be used to warn owners to check their rabbits for signs of flystrike and to treat them promptly.

Flystrike occurs from spring to autumn, when female flies lay their eggs on susceptible hosts such as rabbits and sheep. The flies are particularly attracted to soiled fur and diseased skin around an animal's back end that may be associated with diarrhoea, loose faeces or other discharges. Owners should check their rabbits frequently to make sure they are healthy, clean and can groom themselves properly.
<urn:uuid:0b470853-aeb6-4ee8-9596-537a4ef5451b>
CC-MAIN-2022-40
https://areflect.com/2017/06/17/scientists-warn-of-seasonal-increase-of-deadly-rabbit-disease/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00558.warc.gz
en
0.93028
281
3.4375
3
Smart Cities will be filled with many applications that connect people, businesses, devices, and more. Everything we interact with on a daily basis, from public transport to grocery shopping, has the potential to become more interactive, efficient, and intelligent. For example, the Internet of Things (IoT) promises to revolutionise businesses and services with sensors, data collection, and automation. But a Smart City doesn't work and can't effectively progress if there isn't a way for the public to access and use the connected services provided. As more of the things we interact with on a daily basis move online, our ability to connect as a population is becoming even more of a necessity. An important feature of Smart Cities is therefore to provide everyone with pervasive, high-quality access to Wi-Fi anywhere in a connected city.

What is public access Wi-Fi?

Public Wi-Fi is an access technology that allows people within a city to plug into the core internet network. It is usually provided by local government, which implements high-quality Wi-Fi coverage over a certain area, either with unlimited access or a login requirement. This is absolutely essential as we develop Smarter Cities, because in many cases the technology infrastructure needed to enable 4G or 5G consistently across large areas doesn't exist, lowering connectivity quality and accessibility in many areas.

What are the applications of public Wi-Fi in a smart city?

Overall, public Wi-Fi will enable better internet access in crowded areas, a more consistent and all-pervasive Wi-Fi service across a city, and more efficient and relevant public services. Without public access, people can only access Wi-Fi either in their homes or in a public building that has chosen to provide it – the only real alternative is using mobile data, and in areas of poor service this often isn't available. For the 21st century, where people are reliant on their mobile phones for many basic activities – including navigation, communication, and payments – this can be very limiting. Public Wi-Fi would mean people in indoor and outdoor public spaces, from museums to sports fields to tourist destinations, would have open access to decent data speeds, enabling them to use any smart or online services in the area.

Introducing public access Wi-Fi is therefore beneficial to local residents and workers, extremely useful for tourists, and can improve the functionality of all publicly available connected services and of city life as a whole. Local governments and communities can also make use of this service for marketing and communication purposes through smart apps and devices. For example, they could provide more digital, interactive, and personalised content to large groups of people, such as push notifications about events in the local area or traffic updates.

What are the advantages and disadvantages of free public Wi-Fi?

The overarching benefit of public Wi-Fi is that everyone can get high-quality access to the internet within that city. This will improve a city's well-being and economic development, promoting tourism and improving leisure, business services, and council services. Public Wi-Fi is especially beneficial at times of increased population density, such as during a concert or on New Year's Eve. Mobile access quickly degrades when there are very large crowds, but public Wi-Fi can address areas or times of poor coverage easily.
Mobile operators often provision a network assuming a certain threshold of people in a certain area, and their service isn't designed to support larger crowds than this. Public Wi-Fi is able to offload this pressure on mobile access by giving an alternative network connection to a proportion of people. Therefore, another benefit of public Wi-Fi is that it can be complementary to other connectivity technologies. The alternative for dealing with increased internet usage, such as rare periods of crowd density or large numbers of connected devices, would mean operators would have to 'over-provide' their network, resulting in a huge increase in telephone masts in close proximity to cities. Instead, if public access Wi-Fi is managed correctly it can be used in tandem with existing technology, acting as an alternative to mobile coverage when connections are poor or overloaded.

There are a few potential disadvantages associated with public access Wi-Fi, but they can be addressed easily and with careful consideration during implementation. If a public network isn't carefully managed, private content can potentially be made open to the public. However, as long as the right security measures are put in place, there is no possibility of this happening. Consideration also needs to be given to public privacy. On one hand, public connections and Smart Cities could be a valuable tool for improving a city's security, but this needs to be balanced with the guarantee of people's personal privacy.

How will public access Wi-Fi be deployed?

The level of performance needed for public Wi-Fi to work effectively requires high-speed, reliable and cost-effective communications technology – mmWave wireless distribution nodes are an innovative solution that meets these requirements. These work as access points which link the core internet to devices, using Wi-Fi as the access technology. The core technology already exists, so the access points would need to be deployed at close enough range and in various locations across a city to enable a high-quality service. This kind of connection would typically be made with fibre in more limited areas, but for such a large-range application this would be difficult – due to rough terrain, obstructions, or intrusive digging and building – and expensive. Wireless mmWave can therefore act as an equally effective supplement to this technology, as it can be installed where fibre can't, with simplified and cost-effective deployment onto existing infrastructure. Blu Wireless's mmWave technology can therefore establish high-quality connections where they're needed in a city, enabling the all-pervasive public Wi-Fi that smart cities need. This will be a key technology in spurring on Connected Cities, progressing economic development, and digitalising the way we live and function every day.
<urn:uuid:0534e037-113e-42be-8b62-e791ab604c12>
CC-MAIN-2022-40
https://www.bluwireless.com/insight/smart-city-wifi-solutions-enabling-public-access/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00758.warc.gz
en
0.949718
1,248
3.328125
3
We're back with another edition of our #WhatTheTech series, where we transform tech jargon into a language you can easily understand. Today we go over our first term that's more of a tech concept than an actual piece of technology — open source.

#WhatTheTech is open source?

In tech, open source refers to making the source code of a software program accessible to anyone, instead of limiting it as a proprietary solution. While an open-source solution can be edited by any user, a proprietary solution can only be modified by the original owner of the software. So, open-source software is a product that is publicly available and can be used, changed, or modified, and even contributed back to the original solution via appropriate review processes.

Think of open-source software (OSS) as recipes. Some recipes are proprietary; they're family secrets or formulas kept private by restaurants, but they're still enjoyed by hungry eaters. And some recipes are open source — available online, in a cookbook, and free to use in their original form or modified for taste.

How does open source work?

Let's look at the 10 criteria OSS must abide by according to The Open Source Initiative, a nonprofit organization and the leading authority on OSS.

- Free redistribution: The license must allow the software to be redistributed at no charge.
- Source code: The code for the software must be included and cannot be hidden in any way.
- Derived works: The software must allow modifications and can be customized to the needs of the user.
- Integrity of the author's source code: Users have the right to know who created the source code they use, and authors have a right to know how their software is being used.
- No discrimination against persons or groups: The license must not discriminate against any person or group.
- No discrimination against fields of endeavor: The license cannot discriminate against anyone who adapts the source code for their specific field of work, business, or organization.
- Distribution of license: The rights of the program must also apply to anyone who redistributes it, without the need for another license.
- License must not be specific to a product: Use of the program is not tied to a certain product; all users who redistribute the program have the same rights as the author.
- License must not restrict other software: The program cannot place restrictions on other types of software used with it.
- License must be technology-neutral: The program cannot be restricted to use on a certain piece of technology or interface.

Once a developer establishes the license for their software as open source, anyone can use the software in its original form or adjust the source code as needed to remove bugs, improve functions, or adapt it to their specific needs. This collaboration effectively enables continuous OSS improvement over time from a wide community of programmers, often speeding up development and encouraging innovation.

How do businesses use open source?

Whether it's an open-source internet browser like Mozilla Firefox or an online training program built with the source code of Moodle, businesses leverage OSS every single day. OSS is also an integral part of many workflow tools that automate business processes, including robotic process automation (RPA), work platform connectors, and automated documentation generators. A few types of OSS used for these tools include:

- Python: An open-source interpreted, high-level, general-purpose programming language that is key to automation.
- OpenAPI Specification: A specification for machine-readable interface files that helps generate documentation of processes and connects third-party platforms and services to your business workflow.
- ApexMocks: A mocking framework for the Salesforce Lightning Apex language, used for document generators.

Follow our blog for more technology 101 in our #WhatTheTech series. If you're interested in learning more about the Nintex Platform and what it can do for your organization, click here to request a live demo.
<urn:uuid:278789fc-adb7-4d20-90af-f43e35bedee7>
CC-MAIN-2022-40
https://www.nintex.com/blog/what-the-tech-is-open-source/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00758.warc.gz
en
0.907777
844
3.515625
4
- Something you have (physical keycard, USB stick, etc.) or have access to (email, SMS, fax, etc.)
- Something you are (fingerprint, retina scan, voice recognition, etc.)
- Something you know (password or pass-phrase).

Website Password Policy

While the process seems straightforward, organizations should never take password choice lightly, as it significantly affects the user experience, customer support volume, and the level of security and fraud. The fact is, users will forget their passwords, pick easy passwords, and in all likelihood share them with others (knowingly and unknowingly). Allowing users to choose their password should be considered an opportunity to set the tone for how a website approaches security, which originates from solid business requirements.

Defining an appropriate website password policy is a delicate balancing act between security and ease of use. Password policies enforce the minimum bar for password guessability, how often passwords may be changed, what they can't be, and so on. For example, if a user could choose a four-character password (and they would if they could) using lowercase letters (a through z), an attacker could theoretically try them all (26^4, or 456,976) in roughly 24 hours at only 5 guesses a second. This degree of password guessing, also known as a brute-force attack, is plausible with today's network and computing power, and as a result such passwords are far too weak by most standards. On the opposite end of the spectrum, a website could enforce 12-character passwords that must have upper- and lowercase letters and special characters, and that must be changed every 30 days. This password policy definitely makes passwords harder to guess, but it would also likely suffer a user revolt due to increased password recovery actions and customer support calls, not to mention more passwords written down. Obviously this result defeats the purpose of having passwords to begin with. The goal for a website password policy is finding an acceptable level of security that satisfies users and business requirements.

In general practice on the Web, passwords should be no shorter than six characters in length, while systems requiring a higher degree of security should consider eight or more advisable. There are some websites that limit the length of passwords their users may choose. While every piece of user-supplied data should possess a maximum length, this behavior is counter-productive. It's better to let the user choose their password length, then chop the end to a reasonable size when it's used or stored. This ensures both password strength and a pleasant user experience.

Even with proper length restrictions in place, passwords can often still be guessed in a relatively short amount of time with dictionary-style guessing attacks. To reduce the risk of success, it's advisable to force the user to add at least one uppercase letter, number, or special character to their password. This exponentially increases the number of possible passwords and reduces the risk of a successful guessing attack. For more secure systems, it's worth considering enforcing more than one of these options. Again, this requirement is a balancing act between security and ease of use.

If you let them, users will choose the simplest and easiest-to-remember password they can. This will often be their name (perhaps backwards), username, date of birth, email address, website name, something out of the dictionary, or even "password." Analysis of leaked Hotmail and MySpace passwords shows this to be true.
In just about every case, websites should prevent users from selecting these types of passwords, as they are the first targets for attackers. As reader @masterts has wisely said, "I'd gladly take "~7HvRi" over "abc12345", but companies need good monitoring (mitigating controls) to catch the brute force attacks."

During account registration, password recovery, or password changing processes, modern websites assist users by providing a dynamic visual indication of the strength of their passwords. As the user types in their password, a strength meter dynamically adjusts in line with the business requirements of the website. The Microsoft Live sign-up Web page is an excellent example. Notice how the password recommendations are clearly stated to guide the user toward selecting a stronger password. As the password is typed in and grows in strength, the meter adjusts.

When passwords are entered, any number of user errors may occur that prevent them from being typed in accurately. The most common reason is "caps lock" being turned on, but whitespace is problematic as well. The trouble is users tend not to notice, because their password is often hidden behind asterisk characters that defend against malicious shoulder surfers. As a result, users will mistakenly lock their accounts from too many failed guesses, send in customer support emails, or resort to password recovery. Either way it makes for a poor user experience. What many websites have done is resort to password normalization functions, meaning they'll automatically lowercase passwords, remove whitespace, and snip the length before storing them. Some may argue that doing this counteracts the benefits of character-set enforcement. While this is true, it may be worth the tradeoff when it comes to benefiting the user experience. If passwords are still 6 characters in length and must contain a number, then, backed by an acceptable degree of anti-brute-force and aging measures, there should certainly be enough strength left in the system to thwart guessing attacks.

Storing passwords is a bit more complicated than it appears, as important business decisions need to be made. The decision is, "Should the password be stored encrypted or not?" and there are tradeoffs worth considering. Today's best-practice standards say that passwords should always be stored in a hash digest form where even the website owner cannot retrieve the plain text version. That way, in the event that the database is either digitally or physically stolen, the plain text passwords are not. The drawback to this method is that when users forget their passwords they cannot be recovered to be displayed on screen or emailed to them, a feature websites with less strict security requirements enjoy.

If passwords are to be stored, a digest compare model is preferred. To do this we take the user's plain text password, append to it a random two-character (or greater) salt value, and then hash digest the pair.

password_hash = digest(password + salt) + salt

The resulting password hash, plus the salt appended to the end in plain text, is what will get stored in the user's password column of the database. This method has the benefit that even if someone stole the password database they cannot retrieve the passwords -- this was not the case when PerlMonks was hacked. The other benefit, made possible by salting, is that no two passwords will have the same digest. This means that someone with access to the password database can't tell if more than one user has the same password.
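As a concrete illustration of this digest-compare model, here is a minimal C sketch using OpenSSL's SHA-256. This is my own example: the post doesn't prescribe a library, the two-letter salt generation here is deliberately simplistic (rand() is not a cryptographic source), and a modern deployment would favor a slow, iterated KDF over a single hash. Compile with -lcrypto.

```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
#include <openssl/sha.h>

/* Build digest(password + salt) + salt, hex-encoded, as described above. */
static void hash_password(const char *password, const char *salt2,
                          char out[2 * SHA256_DIGEST_LENGTH + 3])
{
    unsigned char buf[256], md[SHA256_DIGEST_LENGTH];
    size_t n = snprintf((char *)buf, sizeof buf, "%s%s", password, salt2);

    SHA256(buf, n, md);
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(out + 2 * i, "%02x", md[i]);
    /* Append the plain-text salt so it can be recovered at login time. */
    strcpy(out + 2 * SHA256_DIGEST_LENGTH, salt2);
}

int main(void)
{
    srand((unsigned)time(NULL));           /* toy salt source, NOT crypto-grade */
    char salt[3] = { 'a' + rand() % 26, 'a' + rand() % 26, '\0' };

    char stored[2 * SHA256_DIGEST_LENGTH + 3];
    hash_password("correct horse", salt, stored);
    printf("stored column: %s\n", stored);

    /* Login check: strip the salt off the stored value, re-hash, compare. */
    const char *stored_salt = stored + 2 * SHA256_DIGEST_LENGTH;
    char again[2 * SHA256_DIGEST_LENGTH + 3];
    hash_password("correct horse", stored_salt, again);
    printf("match: %s\n", strcmp(stored, again) == 0 ? "yes" : "no");
    return 0;
}
```

Note how the login check never needs the plain-text password column at all; it only re-derives the digest and compares.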
There are several types of brute-force password-guessing attacks, each designed to break into user accounts. Sometimes they are conducted manually, but these days most are automated.

1. Vertical Brute-Force: One username, guessing with many passwords. The most common attack technique, forced on individual accounts with easy-to-guess passwords.
2. Horizontal Brute-Force: One password, guessing with many usernames. This is a more common technique on websites with millions of users, where a significant percentage of them will have identical passwords.
3. Diagonal Brute-Force: Many usernames, guessing with many passwords. Again, a more common technique on websites with millions of users, resulting in a higher chance of success.
4. Three-Dimensional Brute-Force: Many usernames, guessing with many passwords, while periodically changing IP addresses and other unique identifiers. Similar to a diagonal brute-force attack, but intended to disguise itself from various anti-brute-force measures. Yahoo has been shown to have been enduring this form of brute-force attack.

Depending on the type of brute-force attack, there are several mitigation options worth considering, each with its own pros and cons. Selecting the appropriate level of anti-brute-force protection will definitely affect the overall password policy. For example, a strong anti-brute-force system will slow down guessing attacks to the point where password strength enforcement may not need to be so stringent. Before anti-brute-force systems are triggered, a threshold needs to be set for failed password attempts. On most Web-enabled systems five, 10, or even 20 guesses is acceptable. This provides the user with enough chances to get their password right in case they forget what it is or typo several times. While 20 attempts may seem like too much room for guessing attacks to succeed, provided password strength enforcement is in place, it really is not.

Passwords should be periodically changed because the longer they are around, the more likely they are to be guessed. The recommended time between password changes will vary from website to website; anywhere from 30 to 365 days is typical. For most free websites like WebMail, message boards, or social networks, 365 days or never is reasonably acceptable. For eCommerce websites receiving credit cards, every six months to a year is fine. For higher security needs or an administrator account, 30-90 days is more appropriate. All of these recommendations assume an acceptable password strength policy is being enforced.

For example, let's say you are enforcing a minimalist six-character password, with one numeric character required, and no simple passwords allowed. In addition, there is a modest anti-brute-force system feasibly limiting the number of password-guessing attempts an attacker can place on a single account to 50,000 per day (1,500,000 per month / 18,000,000 per year). That means for an attacker to exhaust just 1% of the possible password combinations, 36^6 (a-z and 0-9) or 2,176,782,336, it would take a little over a year to complete.
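To see where that "little over a year" figure comes from, here is a small illustrative C calculation. It is my own sketch; the charset, length, and guess-rate values are the hypothetical ones from the paragraph above. Compile with -lm.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double charset = 36.0;     /* a-z plus 0-9                    */
    double length  = 6.0;      /* minimum password length         */
    double per_day = 50000.0;  /* guesses/day the throttle allows */

    double space   = pow(charset, length);   /* 36^6 combinations */
    double one_pct = space * 0.01;

    printf("search space : %.0f passwords\n", space);
    printf("1%% of space  : %.0f guesses\n", one_pct);
    printf("time needed  : %.1f days (%.2f years)\n",
           one_pct / per_day, one_pct / per_day / 365.0);
    return 0;
}
```

Under the same 50,000-guesses-per-day throttle, raising the minimum length to eight characters pushes that same 1% figure past fifteen hundred years, which is why added length buys more than almost any other policy knob.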
Minimum Strength Policy

After everything is considered, the following should be considered the absolute minimum standard for a website password policy. If your website requires additional security, these properties can be increased.

- Minimum length: six
- Choose at least one of the following character-set criteria to enforce:
  - Must contain alpha-numeric characters
  - Must contain both upper-case and lower-case characters
  - Must contain both alpha and special characters
- Simple passwords allowed: No
- Aging: 365 days
- Storage: 2-character salted SHA-256 digest
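For illustration, a server-side check of this minimum policy might look like the following C sketch. This is my own, hypothetical code; it enforces only the alpha-numeric variant of the character-set criterion, and uses a toy blacklist where a real site would check against a large list of known-common passwords.

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Check a candidate password against the minimum policy above:
   at least six characters, an alpha-numeric mix, and not on a
   (tiny, illustrative) simple-password blacklist.              */
static bool meets_minimum_policy(const char *pw)
{
    static const char *simple[] = { "password", "123456", "qwerty", "abc12345" };

    if (strlen(pw) < 6)
        return false;

    bool has_alpha = false, has_digit = false;
    for (const char *p = pw; *p; p++) {
        if (isalpha((unsigned char)*p)) has_alpha = true;
        if (isdigit((unsigned char)*p)) has_digit = true;
    }
    if (!has_alpha || !has_digit)   /* the alpha-numeric criterion */
        return false;

    for (size_t i = 0; i < sizeof simple / sizeof *simple; i++)
        if (strcmp(pw, simple[i]) == 0)
            return false;
    return true;
}

int main(void)
{
    const char *tests[] = { "abc12345", "~7HvRi", "monkey1", "short" };
    for (size_t i = 0; i < 4; i++)
        printf("%-10s -> %s\n", tests[i],
               meets_minimum_policy(tests[i]) ? "accepted" : "rejected");
    return 0;
}
```

Consistent with @masterts' point earlier, "abc12345" is rejected by the blacklist while "~7HvRi" passes the character checks; the policy still relies on rate limiting to hold the line.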
<urn:uuid:f58b9262-c809-4df7-9eec-76a067154cc8>
CC-MAIN-2022-40
https://blog.jeremiahgrossman.com/2009/10/all-about-website-password-policies.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00758.warc.gz
en
0.931224
2,237
2.515625
3
While our homes have become ever-more digitalised over time, the bathroom has remained fairly traditional. This is perhaps unsurprising, given that a 2011 report found 41% of UK consumers view their bathroom as a space in which to escape their high-tech, modern lives. However, consumer tastes have since evolved, and 2018 may well be the year this changes. Recent breakthroughs in exciting emerging technologies such as artificial intelligence (AI), augmented reality (AR) and facial recognition systems point to a future wherein the bathroom is completely transformed. We recently carried out research into consumer attitudes towards digital transformation in the home and found that there is a real appetite for innovation in this space. Foremost among the findings was that almost half (46%) of people reported that, of all the rooms in their home, the bathroom is where they're most excited to see technological innovation. But how might this change be implemented?

AI will revolutionise the design process

The bathroom design process – a long-standing and expensive issue for consumers – has the potential to be completely transformed by technology. AI – in particular computer vision and machine learning – can be used to develop powerful room visualisation and space planning tools, which we see as the future of interior design. While there will always be a place for the creative flair of an experienced interior designer, consumers are now becoming more empowered to design their homes themselves. Smart design and visualisation platforms help them to do this, solving a range of ongoing design headaches such as effective use of space and the "imagination gap", wherein consumers opt out of expensive home interior purchases after being unable to imagine how they will look in the home.

Interestingly, "AI" is an umbrella term which refers to a host of applications, including (but not limited to) machine learning, computer vision, chatbots, and natural language processing. In this context, however, we refer to computer vision and machine learning, which will have the most impact on the design space. Computer vision can be used to take an accurate 3D image of a room, analyse it, and subsequently generate a fully-rendered 3D space in which the user can add and remove furniture, fittings, flooring and wall furnishings to their heart's content. Machine learning then goes deeper, ensuring the image remains to scale, so customers can see not just what furniture will look like in a space, but also whether it will fit. Machine learning can even work in conjunction with the lighting conditions in the chosen room and instantly react to any change of tone or colour, thus giving a truly accurate picture of how the redesigned space will look.

A certain type of machine learning model – deep convolutional neural networks – allows image semantic segmentation to transform the design process even further. This type of technology enables designers to input an image of their bathroom, after which the platform will output a set of pixel-wise predictions for that room. This allows space planning tools to make changes to pixels which belong to a certain class. Essentially, if a tool can determine the pixels that form a wall or floor, it can redecorate those pixels with the user's chosen wallpaper or tiling, allowing consumers to completely re-design their space from top to bottom.

AR: a new reality for bathroom tech

2017 was a landmark year for AR.
Both Google and Apple invested heavily in the field, igniting a flurry of activity from developers after bringing their AR software development kits (SDKs), ARCore and ARKit, to market. This investment has changed the face of the AR market. Previously considered secondary to the more immersive virtual reality (VR) technology, the sector is now estimated to be worth more than £42 billion in just five years' time. The development of ARCore and ARKit creates huge potential for AR in the consumer market, and developers in all sectors are now getting involved. With this in mind, it seems inevitable that KBB design – an incredibly lucrative market in itself – will be dramatically affected by this sea change. It's likely to be only a matter of time before we see AR technology leveraged in the bathroom, which may take a variety of forms. Firstly, we could once again see AR impact the design process, changing the way we measure and plan out our rooms with mobile visualisation apps and virtual tape measures. But how could the technology be applied in the bathroom itself?

According to our research, among the top use cases desired by consumers is an AR hairstyle simulator, which could show how a hairstyle would look on your head and give step-by-step instructions on how to achieve it. We may also see an AR-powered vanity mirror, which could transform the morning bathroom regime in a variety of ways. First trialled by tech giant Panasonic at CES 2015, these smart mirrors were a major theme at this year's show, demonstrating how AR could be leveraged to give ongoing beauty and grooming advice, which could collectively save hundreds (if not thousands) of hours in the morning.

A smarter bathroom

There are many other innovations we may see in the bathroom of tomorrow which do not fall under simply-defined umbrella terms. In the near future, we could see truly "smart" bathroom mirrors which move beyond being a solely cosmetic tool, performing daily health checks to save both the consumer and the NHS precious time and money. Hypothetically, these mirrors could be further extended, connecting to other Wi-Fi-enabled products such as scales, toothbrushes and shavers to become full-scale health hubs in their own right. Even relatively simple adaptations of existing appliances could be invaluable, such as a height-adjustable sink, toilet and mirror that move up and down depending on the user. Such an innovation would be priceless from an inclusion standpoint, making access to the bathroom available to all, regardless of age or potential disability.

A particularly exciting new technology is voice recognition, which has exploded in popularity since the release of the Amazon Echo and Google Home. These voice-activated smart home devices are predicted to enter around 40% of UK households in 2018. This technology may play a pivotal role in the bathroom of the future, particularly when used in conjunction with the rapidly advancing Internet of Things (IoT). In the near future, we could see these burgeoning technologies combined, allowing users to seamlessly control appliances in their bathroom using nothing but the spoken word.

The future is closer than you think

While it is the design process that will be most transformed by technology, emerging innovations such as AR, the IoT and facial recognition will also impact the bathroom itself in a number of exciting ways.
Forward-thinking businesses which develop and invest in these emerging technologies now will be well placed to engage the consumer of the future; an assertion backed by our research, which shows that businesses and brands that make effective use of new technology are far more likely to build a loyal customer base. If we consider how retail giants like Amazon continue to embrace new technologies to transform the customer buying journey – a prime example being their computer-vision-powered checkout-free store, "Amazon Go" – can retailers looking to engage and retain customers really afford not to embrace technology? We recently found that over two thirds (68%) of consumers currently view their bathroom as outdated, showing that there is a clear consumer need for this digital transformation in the KBB retail space. The businesses who supply the bathrooms of the future will be those who can tap into this consumer demand, harnessing cutting-edge technologies to provide their customers with exciting new innovations and experiences, which also provide solutions to the everyday problems their customers face.

David Levine, Founder and CEO of DigitalBridge

Image Credit: Pixaline / Pixabay
<urn:uuid:251684e7-8ed4-49a7-a4f4-15dc1ba7260a>
CC-MAIN-2022-40
https://www.itproportal.com/features/what-could-we-see-in-the-bathroom-of-the-future/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00758.warc.gz
en
0.948879
1,652
2.546875
3
By Calvin Fellows | Topic: Prevent Password Theft | January 20, 2022

Password security has never been more critical than it is today. From personal identification to account access and data storage, passwords are used in almost all digital transactions. Unfortunately, due to the sheer number of apps, devices, websites, and services that require passwords, it is easy for cybercriminals to compromise passwords and confidential data. To protect your digital information, it's essential for anyone using passwords to make sure they're as strong as possible. While there's a lot that goes into personal cybersecurity and password management, there are some best practices and tips to use if you want to avoid having your password stolen and to improve your overall web security.

Always Use a Strong Password

A robust password is one that can't be guessed easily, either by cracking algorithms or manually. The first step to creating a strong password is making sure that it doesn't contain any obvious or personal information such as your name, date of birth, or address. Avoid using any number or letter sequences such as '123456', '123456789', or 'qwerty', or obvious phrases such as 'password' or 'password123'. These kinds of sequences and terms are commonplace and don't do enough to provide users with adequate security. Aim to make your passwords as long and as difficult to guess as you can. If possible, try to use more than 15 characters for each password you create. Within these characters, use a combination of uppercase and lowercase letters alongside symbols and numbers. Doing this makes your passwords unique and challenging for others to guess. A great idea is to use song lyrics or literary quotes to form a password phrase, capitalizing each word.

Never Use the Same Password More than Once

It can be tempting to use the same password multiple times, but this should be avoided. Using the same password multiple times harms your password security and makes it easier for hackers to gain access to sensitive and confidential information. While you may think that no one will be able to guess your favorite childhood pet's name, hackers don't typically sit around trying different passwords to see what works and what doesn't. Instead, they target vulnerable websites, apps, devices, and services and gain access to a multitude of different passwords all at once. If a hacker targets an unsecured website containing your only password and gains access to it, they suddenly have a potential way into every website or online service you use. The first thing they are going to do is try that password on other websites and online services to see if it works. To prevent this from happening, use a different password for each website, device, app, or service that requires one. While it can be difficult to remember multiple passwords, it's better than your one and only password falling into unwanted hands. If you're using different passwords, at least you know that your other accounts are safe should one of your passwords ever be compromised. The best way to remember all your passwords and keep them secure is to use password management software.
These programs allow you to store all your passwords in a secure location and generate encrypted passwords for high-security sites such as financial institutions.

Never Share Your Password With Anyone

When you give your password to someone else, you're seriously compromising the security of the account the password provides access to. Even if you trust the person, you still can't be 100% sure what they will do with your password or access details. For example, the person you share your password with may write the password down and store it in an unsafe location where it could be stolen; they might store the password on an unsecured device that can be hacked; or they might communicate the password over an insecure messaging platform. Any of these scenarios open up the possibility of others gaining access to your password. Even if the person you share your password with means no harm, sharing it with them only increases the chances of your password and data being stolen.

Be Wary of Password Phishing Attempts

Hackers often pose as an official organization, service, or website to trick users into either sharing their passwords or resetting their password through a link that the hacker has sent and has access to. To prevent this from happening, never follow a link that asks you to change your password unless you prompted it. You should only ever click on or open password reset links that you have requested. If you ever receive an email, SMS, or any other form of communication asking you to change your password that you didn't request, make sure you delete it and report it as spam. It's also worth contacting the organization or company that the communication claims to have come from to let them know you were the target of a phishing attack. For example, if you get an SMS message or email from your bank asking you to change your password, but you haven't requested a reset, contact your bank to let them know of the fraudulent phishing attempt so they're aware of it and can act upon this information.

Protect Your Devices

While all the electronic devices we use today bring a lot of convenience, they also offer ample opportunities for hackers to gain access to private and sensitive data. To stop this from happening, it's important to make sure any device you're using is as secure as possible. For desktops and laptops, make sure you use a firewall, install antivirus software, update applications and operating systems regularly, use strong passwords, and back up your files regularly. For tablets and cell phones, make sure you use a secure login PIN or password, keep your apps and operating system up to date, only download apps that you trust, and avoid clicking on or sharing any suspicious-looking links. If you're looking to keep a particular device secure but you're unsure how, contact the device's manufacturer. The manufacturer should be able to give you tips and recommendations or point you in the direction of resources and information you can follow to make sure the device is secure.

It's impossible to keep your passwords 100% safe and secure all the time, but by using common sense and password security best practices, you can reduce the chances of any of your passwords being compromised.
If you're currently using any weak passwords or using the same password more than once, spend some time changing or strengthening your passwords to make sure every website, app, service, or device you're using is as secure as possible. For more interesting articles, you may check out Lifelock Identity Theft Protection Reviews.
<urn:uuid:e12349df-f2dc-4115-9d6a-93384a930ca9>
CC-MAIN-2022-40
https://www.homesecurityheroes.com/how-prevent-password-stolen/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00758.warc.gz
en
0.925353
1,557
2.9375
3
Google Morocco was the latest victim of a Domain Name System, or DNS, attack. A notorious Pakistani hacker group named "PAKbugs" hijacked Google Morocco's official website (www.google.co.ma). According to the ping results for the domain, the IP address of the hacked domain Google.co.ma pointed to [18.104.22.168] – located in Latvia (ip-219-99.dataclub.biz) – which is not a Google IP address. Putting this IP address into a browser address bar redirects to www.pakbugs.com, which shows that the hackers pointed the Google domain to a PAKbugs server where they hosted a deface page. It's not clear how this attack was carried out, but it may have involved compromising the system operated by the Moroccan Top Level Domain Registrar (MaTLD).

What is DNS poisoning?

DNS is the system that converts website names into the IP address of the server hosting the website. A DNS poisoning attack tampers with the valid records, injecting fake ones that cause domain names to resolve to incorrect IP addresses. Why deface one website when you can hack the server that holds the IP addresses for every victim site? If you can hack the Domain Name System registrar that holds the records for an entire country, you can point any of its servers at any website you want.

DNS poisoning first came to light in the mid-1990s, when researchers discovered that attackers could inject spoofed IP addresses into the Domain Name System resolvers belonging to Internet service providers and large organizations. The servers would store the incorrect information for hours or days at a time, allowing the attacker to send large numbers of end users to websites that install malware or masquerade as banks or other trusted destinations. Over the years, DNS server software has been updated to make it more resistant to this attack.

Months ago, Ireland's domain registry suffered an "unauthorized intrusion into the company's systems" that affected DNS records for Google.ie and Yahoo.ie. The attack exploited vulnerabilities in the company's configuration of the Joomla content management system to upload malicious code that caused unauthorized DNS changes. DNS attacks have also recently hit Romania, according to this blog post by BitdefenderLabs.

These attacks can be much worse if the attackers are a more malicious group: nation-state hackers, for example, who want to infect groups of systems from a target nation, or gather credentials from users who think they are on a legitimate website and not a spoofed one reached via Domain Name System manipulation. Imagine how many accounts could be compromised if the websites were redirected to a phishing page instead of a defaced page.
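For the curious, here is a small C sketch (my own illustration) of the name-to-address lookup step that a poisoning attack subverts. Comparing its output against what a known-good resolver reports is a crude way to spot tampering.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>

/* Print every IPv4 address the local resolver returns for a host name.
   If these differ from what a trusted resolver reports, the records
   may have been tampered with somewhere along the chain.              */
int main(void)
{
    const char *host = "google.co.ma";
    struct addrinfo hints, *res, *p;

    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, NULL, &hints, &res) != 0) {
        fprintf(stderr, "lookup failed for %s\n", host);
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char ip[INET_ADDRSTRLEN];
        struct sockaddr_in *sa = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &sa->sin_addr, ip, sizeof ip);
        printf("%s -> %s\n", host, ip);
    }
    freeaddrinfo(res);
    return 0;
}
```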
To casual users, one person at a keyboard looks much the same as any other. Watch for a while, however, and the differences start to emerge — and whether they are using Linux or Windows is the least of them. The fact is, Linux users differ from Windows users in attitude as much as in their choice of operating system. Originating as a Unix-type operating system and in opposition to Windows, Linux has developed an expectation and a philosophy in direct opposition to those promoted by Windows. Although many new Linux users have come directly from Windows, average Linux users simply do not react in the same way as Windows users. After observing Linux users for seventeen years (as well as the changes in my own behavior after switching to Linux), I would suggest that Linux users differ from Windows users in at least seven basic ways:

7. Linux users are do-it-yourselfers

Windows users are conditioned against tinkering. Files like the Windows registry are difficult to read, much less edit, and when users have a problem, they are encouraged to seek professional help. In fact, to this day, there are small businesses that charge eighty dollars an hour for doing backups or removing unnecessary files.

By contrast, Linux began as a hobby, and was designed for those who like to tinker. For a long time, professional help was unavailable. Instead, Linux users were encouraged to exchange information and learn to help themselves. Today, that tradition has been weakened here and there — for instance, by GRUB 2, which eliminates manual editing — but much of the do-it-yourself attitude remains.

6. Linux users are comfortable with the command line

Contrary to the popular myth, Linux users can comfortably survive without ever leaving their desktop. Often, however, the most options are available from the command line, and, perhaps because of their do-it-yourself ethos, many average users sooner or later find themselves working there. Once there, they often find it a powerful and efficient alternative, and not nearly as scary as they feared.

In comparison, while Microsoft has developed its own command shell, it has not found its way into average users' hands. Nor is the Ubuntu port of the BASH shell to Windows likely to become widely used, given that Windows encourages a hands-off policy for non-experts.

5. Linux users are security conscious

Even now, with all the media stories about crackers and malware, I know many Windows users who refuse to bother with a password or to run with a non-administrative account. These practices, they say, are inconvenient (although it is not too inconvenient, apparently, to have their systems reinstalled every six months to purge them of infections). Once or twice, I have met Linux users browsing the Internet with the root account, but they are the exceptions. Designed for security from the beginning, Linux forces an awareness of security on users from the start. I know several who consider going online with Windows actively dangerous.

4. Linux users want customization

Perhaps because many users who turned to Linux are in active revolt against Windows, Linux users have a strong preference for doing things their own way. For them, Windows' customization of wallpaper and fonts is only the beginning. Linux not only offers a choice of window managers and desktop environments, but even of minute features such as how a window on the desktop becomes active, or the positioning or non-positioning of icons in a window title bar.
This preference is so strong that, in the last eight years, temporary reductions in the customization options available caused three user revolts against some of the most popular desktop environments.

3. Linux users are accustomed to diversity

Unix design favors small, specialized applications. With software like LibreOffice and Krita, Linux long ago abandoned that preference, at least in productivity applications — perhaps in the hope of being less foreign to refugees from Windows. Yet even now, many categories of software, including music players, messaging, email readers, and web browsers, can have as many as half a dozen alternative applications. Moreover, because almost all the alternatives are free, Linux users quickly become accustomed to trying all the software in a category before settling on a preference — assuming that they do. This situation could not be more different from Windows, where many users are surprised to learn that alternatives exist. In fact, many Windows users' first exposure to alternatives is through free software like Firefox or LibreOffice.

2. Linux users expect software to cost nothing

Free licenses (see below) refer to software's availability and redistribution, not cost. All the same, with software free for the download and available in minutes, Linux users are not accustomed to paying for software. To compound the preference, if no free software is available, Linux users can resort to cloud services like Google Docs, which also cost nothing. Some Linux users will pay for non-essentials, like games. Others will pay for high-end software for professionals. However, these are exceptions. Unlike Windows users, who often assume that software is low-quality if it costs nothing, Linux users generally see no reason to pay for software.

1. Linux users are aware of licensing

The awareness of licensing is part of the idealism that still prevails in many parts of the Linux community. The majority of Linux software licenses are defined as free in the sense that, instead of restricting how software is used, as most Windows software does, they define how it may be copied, revised, and redistributed. Often, part of the terms is that credit is given to the original creator of the software, and that revisions of the software use the same license. Some Linux users will use only software with a free license. Others will use proprietary licenses if no copyleft alternative exists, or if using the proprietary license is a job requirement. However, whatever their position, few Linux users of any experience are unaware of licensing issues.

A Separate Outlook

These differences add up to an entirely different attitude towards computing. Part of the reason that few products are marketed to Linux users is probably their small market share, but an equally large part is that few outsiders understand that the market is completely different. For some, the solution is to make Linux more like Windows. For instance, both the Snappy and Flatpak package systems are an attempt to imitate Windows installation programs. However, enough differences remain that such efforts are likely to have little success. For better or worse, most Linux users see little advantage in change. If anything, they are likely to be convinced that their customs and expectations are superior — and, being a Linux user myself, I suspect they are right.
If it wins FDA approval next year, the two-part sensor could help spot new infections weeks before symptoms begin to show.

Why are pandemics so hard to stop? Often it's because the disease moves faster than people can be tested for it. The Defense Department is helping to fund a new study to determine whether an under-the-skin biosensor can help trackers keep up — by detecting flu-like infections even before their symptoms begin to show. Its maker, Profusa, says the sensor is on track to try for FDA approval by early next year.

The sensor has two parts. One is a 3mm string of hydrogel, a material whose network of polymer chains is used in some contact lenses and other implants. Inserted under the skin with a syringe, the string includes a specially engineered molecule that sends a fluorescent signal outside of the body when the body begins to fight an infection. The other part is an electronic component attached to the skin. It sends light through the skin, detects the fluorescent signal, and generates another signal that the wearer can send to a doctor, website, etc. It's like a blood lab on the skin that can pick up the body's response to illness before the presence of other symptoms, like coughing.

The announcement comes as the United States grapples with COVID-19, a respiratory illness that can present with flu-like symptoms such as coughing and shortness of breath. The military is taking a leading role in vaccine research, Joint Chiefs of Staff Chairman Gen. Mark Milley told reporters at the Pentagon on Monday. "Our military research labs are working feverishly around the horn here to try to come up with a vaccine. So we'll see how that develops over the next couple of months," Milley said.

U.S. troops themselves are also at risk. A U.S. soldier in South Korea became the first U.S. service member to contract the virus, the Wall Street Journal reported in February.

Profusa's newest funded study, which the company announced on Tuesday, will test how well the sensor can detect influenza outbreaks up to three weeks before it's possible to detect them using current methods. Because the gel itself doesn't actively emit any signal, it wouldn't give away a soldier's position, so the sensor could be used in sensitive settings like behind enemy lines, Profusa CEO Ben Hwang said.

Hwang said his company has received grants from the Defense Advanced Research Projects Agency, or DARPA, since around 2011. "They gave us grant money to help our research and as we prove out a certain milestone, as we de-risk the technology, they give us a second phase and a third phase and provide support," he said. "Their support has transitioned from grants into these types of programs that create real-world evidence."

Hwang said DARPA is helping the company reach out to other outfits within the Defense Department that might use the device on troops or service members. That could include partnerships with U.S. Special Operations Command, for instance, or Indo-Pacific Command. He declined to comment on conversations with specific military customers.
The hack of the Colonial Pipeline is the largest cyberattack yet carried out on a U.S. utility company. The attack caused widespread national disruption and increased scrutiny of the security practices and protocols used by the nation's largest energy providers. Subsequent investigations and interviews into the hack revealed that it was carried out not with the most advanced, sophisticated hacking technology and expertise, but by taking advantage of the fact that Colonial Pipeline was not adhering to some of the most fundamental cybersecurity basics.

How did the Colonial Pipeline hack begin?

On May 6th, 2021, an Eastern Europe-based ransomware gang known as DarkSide breached Colonial Pipeline's cybersecurity defenses and stole 100 GB of data in as little as two hours. The following day, May 7th, the hackers infected Colonial Pipeline's network with ransomware, locking down the company's access to its billing and accounting services. DarkSide offered to restore Colonial's access to their systems in exchange for 75 Bitcoin, which at the time would have been valued at about $4.4 million.

In response to the hack, Colonial enlisted the help of Mandiant, a cybersecurity firm tasked with investigating and responding to the attack. Colonial also alerted federal law enforcement to the breach and ceased all pipeline operations in order to contain and mitigate the damage being done by the attackers. On May 9th, President Joe Biden made an emergency declaration for 17 states affected by the shutdown of Colonial's distribution.

What are the effects of the hack?

Colonial operates the largest petroleum pipeline in the country. As a result of the company completely shutting down its operations, a short-term shortage hit a large portion of the Southeastern United States' fuel supply. Air travel was affected and gasoline stations quickly ran out of supply. Panicked citizens rushed to purchase gasoline in bulk, sometimes resorting to unsafe practices such as filling grocery bags and open containers with fuel. Fear spread as people began to worry about the implications of a long-term gasoline shortage during an already stressful and chaotic pandemic. On May 12th, five days after the ransomware hack took place, normal pipeline operations were resumed.

How did the Colonial Pipeline Company end the ransomware attack?

On May 19th, Colonial Pipeline officially revealed that it gave in to DarkSide's demands and paid $4.4 million in Bitcoin to regain control of its network. Colonial Pipeline Company CEO Joseph Blount said that deciding to pay the ransom was not an easy decision, but ultimately had to be done "for our country." In the following weeks, the U.S. Department of Justice was able to recover the majority of the ransom money. DarkSide, perhaps not anticipating the amount of pressure they would be under after such a high-profile hack, largely went into hiding.

How could the hack have been prevented?

While Colonial's official statements immediately following the hack claimed that the criminals used "highly sophisticated" techniques to breach their defenses, further interviews carried out by a Senate committee on Capitol Hill revealed a much different story. DarkSide was able to infiltrate and take over Colonial Pipeline's network by using a single leaked password and user name combination that allowed them to simply log in to a "legacy VPN" that was still active within the system.
Because the VPN did not require multi-factor authentication, once DarkSide had entered the credentials required for access, they were able to set up shop and completely paralyze the company's operations. Particularly determined hackers may find ways to breach even the most highly fortified cyber defenses. It is entirely possible, however, that the hack of the Colonial Pipeline could have been prevented entirely had the company adhered to basic security principles and required multi-factor authentication for access to its VPN, or discontinued the unused VPN in the first place.

The hack of the Colonial Pipeline is sure to go down in history as one of the most far-reaching cybersecurity slip-ups in the nation's history, as well as a wake-up call for the IT departments of large corporations and utility providers the world over. The mere presence of a leaked password, as well as the existence of a functional but abandoned VPN, shows that Colonial did not practice good cybersecurity hygiene with regard to password security or network maintenance. Senator Ron Wyden (D-OR) said on record that "the shutdown of the Colonial Pipeline by cybercriminals highlights a massive problem. Many of the companies running our critical infrastructure have left their systems vulnerable to hackers through dangerously negligent cybersecurity."

How can future attacks like that on the Colonial Pipeline be prevented?

The Biden administration has unveiled a multi-trillion dollar plan to fortify and modernize the country's power grid and infrastructure. Additionally, President Biden has been vocal in his determination to bolster the nation's defenses against cyberattacks. While the administration's path forward may enhance the security of federal and government entities, it does little to encourage privately held companies like Colonial to strengthen their own defenses.

As cyberattacks increase in frequency, bills are being considered and voted on that would require private companies to report cybercrime activity to the federal government. While some feel that this may constitute government overreach, in the wake of Colonial Pipeline's shutdown, many feel that the country's dependence on private utility companies and energy providers warrants a greater degree of federal oversight.

You don't have to be a high-profile corporation to be hacked. Follow these simple steps to help keep your network and devices safe:

- Create strong passwords. Be sure to use strong login credentials. Change your passwords frequently.
- Delete your cookies. Cookies are pieces of information that websites use to keep track of you. This data can potentially be used by hackers for nefarious purposes. Clear the cookies saved in your browser once every couple of weeks.
- Swap out your old hardware. Replace outdated hardware with refurbished firewalls or network switches from a reputable dealer.
- Hide your activity with a VPN. Using a VPN is a great way to keep your network traffic hidden from prying eyes.

Needless to say, multi-factor authentication can make the difference between safety and stolen data.
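To make the multi-factor point concrete, here is a minimal sketch of the time-based one-time password (TOTP) scheme that most authenticator apps implement (RFC 6238), using only Python's standard library. A real deployment would rely on a vetted library plus server-side rate limiting, and the shared secret below is a made-up example value:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical secret; prints a 6-digit code

Because the code changes every 30 seconds and is derived from a secret that never travels alongside the password, a leaked password alone, like the one that doomed Colonial's VPN, would not be enough to log in.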
- CEO Reveals How Easily Colonial Pipeline Hack Could've Been Avoided, Aaron Weaver, Hacked, 9 June 2021
- The Colonial Pipeline ransomware cyberattack: How a major oil pipeline got held for ransom, Sam Morrison, Vox, 8 June 2021
- Colonial Pipeline hack explained: Everything you need to know, Sean Michael Kerner, WhatIs.com, 9 July 2021
- Back to Basics: A Deeper Look at the Colonial Pipeline Hack, SecureLink, 8 July 2021
- Colonial Pipeline ransomware attack, Wikipedia
Ghidra is an open source tool that helps security researchers better understand programs. The tool was launched and released today at the RSA Conference in San Francisco.

Ghidra is a reverse engineering tool

But what does reverse engineering actually mean? When developing software, what the program should do is specified in a human-understandable programming language (e.g., Java or Python). This so-called source code is converted by a translator, the compiler, into machine code, that is, computer-readable instructions. Reverse engineering is an attempt to reverse this step and see how a program works and runs. Manufacturers try to protect themselves against it, but reverse engineering is an important tool in IT security.

Why are there tools like Ghidra?

Reverse engineering is an important step for security. With the help of reverse engineering, for example, weak spots in password safes are found and can be closed. Another use case for the tool is the analysis of malware. Malware is often analyzed to understand how it works. If a computer is encrypted after a ransomware attack, it can sometimes eventually be decrypted with a "master key" or a dedicated decryption program. This is mostly due to reverse engineering. The blog post by Digital Media Women describes the work of a malware reverse engineer at GData Advanced Analytics in Bochum.

Why is Ghidra so exciting?

The world really became aware of Ghidra in the course of the WikiLeaks Vault7 releases in 2017. Ghidra has a graphical interface and has been developed by the NSA since the early 2000s. The exciting thing is that there has been little movement in the reverse engineering market so far. Almost all comparable tools come with enormous financial costs, which meant a high barrier to entry. Ghidra makes it much easier for interested users, students, and teachers to learn reverse engineering.
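As a small-scale analogue of what Ghidra does, Python's standard dis module recovers readable instructions from compiled bytecode. Ghidra performs the same kind of recovery, plus decompilation into C-like pseudocode, on native machine code; the toy function below is purely illustrative:

import dis

def check_pin(pin: str) -> bool:
    return pin == "1234"   # a toy "secret" baked into the program

# Disassembling the compiled function exposes the comparison and the
# hard-coded constant, even without access to the source code. The exact
# output varies by Python version, but it includes lines roughly like:
#   LOAD_FAST    (pin)
#   LOAD_CONST   ('1234')
#   COMPARE_OP   (==)
dis.dis(check_pin)

This is the essence of reverse engineering: reading the translated instructions to recover what the program actually does.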
This article includes essential knowledge about big data technology and what it is about. Its working process, types, advantages, the leading technologies, and prospects are all covered briefly. Read till the end to find out all about it.

Top 11 Big Data technologies in 2021

Big data technologies are specific data tools and frameworks that help different companies gain insights and make better business decisions, leading to greater ROI. They even help online gaming platforms, which use big data technologies to detect scammers and provide a secure gaming environment for players. But that's not all; there's much more to know, so keep reading.

What Is Big Data technology?

According to Gartner's definition, big data is high-volume and high-variety information that demands cost-effective, innovative forms of information processing for enhanced decision-making. One of the foremost advantages of big data technologies is that they can efficiently track and analyze massive flows of data. With data volumes increasing exponentially over time, applying big data technology becomes highly necessary.

Components of Big Data technologies

The traditional method of input processing proves ineffective when the data flow is enormous. That is when big data technologies become the need of the hour. Therefore, knowing about them is essential for any business. They can be split into two components:

- Operational Big Data Technologies: These provide information about the data generated daily. They mainly cover online shopping transactions through Amazon, Flipkart, etc., other forms of online payment, social media, booking railway or movie tickets online, or any information from a specific establishment that is analyzed with big-data-based software.
- Analytical Big Data Technologies: These cover the actual investigation of huge data sets. Analytical big data is the more advanced adaptation of big data technologies and is therefore more complex than the operational kind. Examples include weather forecasting, stock market analysis, time-series analysis, and medical health records.

The 11 Best Big Data Technologies

According to current big data technology news and our survey, we have decided on the 11 top names that greatly influence the market and the IT sector. They are as follows:

- Artificial Intelligence (AI): Artificial Intelligence is an interdisciplinary branch of science. It deals with the development of smart electronic devices that carry out tasks that normally require human intelligence. AI is present everywhere now, from Apple's Siri to self-driving cars.
- Hadoop Ecosystem: Hadoop is open-source software that allows the processing of large data sets across many clusters of computers using simple programming models. It is designed to detect and handle failures at the application layer. There are five components in the module: Hadoop Common, Hadoop Distributed File System, YARN, MapReduce, and Ozone.
- R Programming: R is a well-designed language and environment for statistical computing that includes extensive mathematical and statistical facilities.
This platform provides a wide range of functionalities, such as classical statistical tests, linear and non-linear modeling, time-series analysis, clustering, and graphical techniques.
- MongoDB: MongoDB is a popular open-source, document-oriented database and data analysis tool. It helps application developers manage unstructured or semi-structured data and create innovative products and services in the global market. It stores records in JSON-like documents, allowing dynamic and flexible schemas.
- Data Lakes: A data lake is a storage system that holds data in raw format. It also stores transformed data used for visualization, reporting, and machine learning. It can hold several types of data: unstructured (emails, documents, PDFs), structured (rows and columns), semi-structured (JSON, logs, CSV, XML), and binary (audio, video, images).
- Blockchain: Blockchain is the leading technology behind cryptocurrencies like Bitcoin. It stores structured information in a distinctive way: once the data are written, they can never be altered or deleted. This makes it a highly secure and reliable ecosystem, well suited to banking, finance, and insurance (BFSI) and other securities. It also has a role in social welfare sectors like education and healthcare.
- Predictive Analytics: Predictive analytics predicts future events based on previously stored data. It is powered by technologies including machine learning and mathematical and statistical modeling. To formulate predictive models, one must know regression techniques and classification algorithms.
- Apache Spark: Spark is a data processing framework with built-in features for machine learning, SQL, graph processing, and streaming analytics. It can support use cases such as credit card fraud detection and e-commerce recommendations.
- Prescriptive Analytics: Prescriptive analytics is one of the most popular technologies of the current year. It guides and suggests the best possible courses of action directed towards favorable results in a given situation. For example, it can help companies respond to market changes.
- In-memory Database: An in-memory database keeps data in main memory so it can be accessed instantly and at any scale; In-Memory Computing (IMC) builds on this idea, letting several computers spread across various locations share processing tasks.
- Microsoft MCSE Data Management and Analysis: This certification track helps you manage and analyze bulk data. SQL management, machine learning, and business intelligence reporting are essential tasks you can carry out with this software stack.

Big data technology in the future will not be the same as it is now. New and better technologies keep emerging and replacing existing ones for good. The change is taking place very rapidly, as many big data technologies are scaling up to meet the demands of fast-developing IT industries. Hence, to keep pace with the fast-moving world, new technology is needed to transform classic data analysis models and revolutionize big data. For more details, you can go through authentic sites like Jigsaw Academy.

Thus, this article has tried to cover all the important points regarding big data technology: its meaning, usefulness, types, and every other required detail. Please write to us in the comments below whether this review proved effective for you and how. Also, remember to share it with others.
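As an appendix to the list above, the MapReduce idea behind the Hadoop Ecosystem entry can be shown with a toy single-machine word count in plain Python; Hadoop runs the same map, shuffle, and reduce phases across a whole cluster of machines:

from collections import defaultdict

documents = ["big data needs big tools", "data tools process data"]

# Map phase: emit (word, 1) pairs from every document.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group the emitted pairs by key (the word).
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce phase: combine each group into a single count.
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts)  # {'big': 2, 'data': 3, 'needs': 1, 'tools': 2, 'process': 1}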
An information security policy is a set of rules laid out by an organization to protect sensitive and important data. This document should reflect the organization's security goals, priorities, and strategy as agreed upon by management. Follow these six steps to write an effective information security policy.

Listen to management

The first thing you should do is learn how management views security, rather than starting by trying to teach them about information security. Any good information security policy will be mandated by what management is concerned about, and not just what the security professional believes is important. Good listening skills are crucial in this early stage. Listen for common themes that come up as you speak with management. You'll also need to ask the right questions. Some good questions include "What kinds of information do you depend on for your decision-making process?" and "Are there certain types of information you are more concerned about keeping private than others?"

Use a framework

"It's a good idea to build your policy around an industry-standard framework such as the International Standards Organization's Security Management series. Starting with such a framework makes it much more likely that external actors will accept your policy, along with management," recommends Tomas Guess, cybersecurity manager at Coursework Help. Remember that these frameworks are just that: frameworks that need to be fleshed out and tailored to meet the needs of the company they will protect. You should adjust these documents to suit the organization, not the other way around.

Mandates are key

It's very important that your information security policy reflects actual practice at your organization. The best way to ensure that is by getting your mandates right. The mandates you create must be realistic and something everyone in the organization can get behind and follow. There is no sense in creating a complicated and extensive policy if it won't actually be followed in practice. Write a policy with a limited number of mandates, but be very clear that there are no exceptions to these rules. The way you use language is very important to how effective your policy will be. Write your policy so that it is taken as seriously as the organization's sexual harassment policy.

Classify your data

An information security policy should classify data and delineate specific handling practices for each kind. Prioritizing data in this way can save you a lot of resources. Obviously, there are types of data that need high and medium levels of security. But there is also probably a lot of data that is not worth protecting, and protecting it would be a big and unnecessary burden on an organization's resources. It is recommended that data be classified into three main groups: high risk, confidential, and public.

Improve your writing with some online tools

A big part of creating a good information security policy is effective communication. Write your policy so that it is easy for everyone to understand and comply with.

Share it with your staff

It's very important that you inform your staff about the policy via security awareness sessions. Just because they have read and signed the policy doesn't mean they fully understand it. Hold a training session where you teach them about using and deleting data, records management, privacy, and proper social networking. Inform them about the procedures and mechanisms that exist to protect the organization's data. Talk about subjects like levels of confidentiality and sensitivity when it comes to data.
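To illustrate the three-class scheme above, the handling rules a policy mandates can be encoded so that tooling applies them consistently. The classes match the article's grouping, but the specific rules below are placeholders, not a standard:

from enum import Enum

class DataClass(Enum):
    HIGH_RISK = "high risk"        # e.g., credentials, health records
    CONFIDENTIAL = "confidential"  # e.g., internal financials
    PUBLIC = "public"              # e.g., marketing material

# Hypothetical handling rules keyed by classification.
HANDLING = {
    DataClass.HIGH_RISK:    {"encrypt_at_rest": True,  "share_externally": False},
    DataClass.CONFIDENTIAL: {"encrypt_at_rest": True,  "share_externally": False},
    DataClass.PUBLIC:       {"encrypt_at_rest": False, "share_externally": True},
}

def may_share(classification: DataClass) -> bool:
    return HANDLING[classification]["share_externally"]

print(may_share(DataClass.PUBLIC))     # True
print(may_share(DataClass.HIGH_RISK))  # False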
Information is crucial to the success and integrity of any organization, so make your information security policy a priority. There are many threats out there and different ways your sensitive information can be compromised. Get a good understanding of what management considers important information. Use an established framework and adapt it to your organization. Make your mandates straightforward and clear in their intentions and consequences. Educate your staff on how to comply with the policy. Your best defense against information security risks is an effective and comprehensive information security policy.
10. KDM Package

The KDM package defines key infrastructure elements that determine patterns for constructing KDM representations of existing software systems. KDM representations (also referred to as KDM instances) are instances of the KDM (which is a meta-model), where each KDM element represents a certain element of the existing system. Although, in the technical sense, a KDM instance is a model of the corresponding existing software system, it is not a model that represents constraints, like the ones used during the design phase; rather, it is an intermediate representation that captures precise knowledge about the application. Implementers of KDM tools are responsible for defining a mapping from the elements of programming languages, runtime platforms, and other artifacts of existing software systems into KDM elements, using the semantic descriptions and implementer's guidelines of this specification.

The Kdm package describes several infrastructure elements that are present in each KDM instance. Together with the elements defined in the Core package, these elements constitute the so-called KDM framework. The remaining KDM packages provide meta-model elements that represent the artifacts of existing systems. Each KDM package follows a uniform meta-model pattern for extending the KDM framework (the Framework Extension meta-model pattern). The KDM Framework is part of the KDM Infrastructure, together with the elements defined in the Source package.

The Kdm package is a collection of classes and associations that define the overall structure of KDM instances. From the infrastructure perspective, KDM instances are organized into segments and then further into specific models. There are 9 kinds of models, corresponding to some well-known concerns of software engineering. From the architectural perspective, each KDM model represents a particular view of the existing system. From the infrastructure perspective, a KDM model is a typed container for model element instances. From the meta-model perspective, a KDM model is represented by a separate package that defines a collection of meta-model elements, which can be used to build representations of the particular view of existing systems. The KDM framework defines a common superclass model element for all models – the KDMModel class. The KDM specification uses the term "KDM model" to refer both to a meta-model element corresponding to a particular model kind and to a particular instance of such an element in a concrete representation of some existing system. Explicit disambiguation (KDM model meta-model element vs. KDM model instance) will be provided when necessary.

A segment is a coherent collection of one or more related models that represents a self-contained perspective on the artifacts of the existing system. It is expected that a complete segment is extracted as a unit. It is also expected that each model segment describes artifacts that involve a single programming language and a single platform.

- Framework – defines the basic elements of the KDM framework.
- Audit – defines audit information for KDM model elements.
- Annotations – provides user-defined attributes and annotations to the modeling elements.
- Extensions – a class diagram that defines the overall organization of the light-weight extension mechanism of KDM.
- ExtendedValues – the tagged values used by the light-weight extension mechanism.

The Framework class diagram defines the meta-model elements that constitute the so-called KDM Framework: a collection of KDM models organized into nested segments. These meta-model elements determine the structure of KDM instances. The classes and associations of the Framework diagram are shown in the Framework class diagram.

The KDMFramework meta-model element is an abstract class that describes the common properties of all KDM Framework elements. The KDMFramework class is extended by the KDMSegment and KDMModel classes. Each KDM Framework element is the container for KDM light-weight extensions (the extension property). The KDM extension mechanism is described further in this chapter.

|name: String [0..*]||The name of the framework element.|
|extension: ExtensionFamily [0..*]||Extensions for the current model segment.|

It is the implementer's responsibility:
- to arrange instances of the KDM model elements into models (constrained only by the definition of each model)
- to arrange KDM models into one or more segments
- to provide names to KDM models and KDM segments

From the meta-model perspective, KDMModel extends the Element class. Each concrete KDM model follows the so-called Framework Extension meta-model pattern. This pattern involves the following naming conventions. Let's assume that "foo" is the name of the KDM model. The following rules describe the Framework Extension meta-model pattern:

- The meta-model elements for KDM model "foo" are described in a separate package, called "foo".
- The package defines a concrete subclass of KDMModel, called "FooModel".
- The package defines a common abstract parent for all KDM entities specific to this KDM model, called "AbstractFooElement". This class extends the KDMEntity class from the Core package.
- The package defines a common abstract parent for all KDM relationships specific to this KDM model, called "AbstractFooRelationship". This class extends the KDMRelationship class from the Core package.
- Class "FooModel" owns class "AbstractFooElement". This association subsets the association between KDMModel and KDMEntity defined in the Framework class diagram.
- Class "AbstractFooElement" owns zero or more AbstractFooRelationship elements.
- The package "foo" includes a "FooInheritances" class diagram, describing the inheritances of the "FooModel", "AbstractFooElement", and "AbstractFooRelationship" classes, as well as any other common properties related to the KDM Infrastructure, such as properties related to the Source package.
- The package "foo" includes an "ExtendedFooElements" diagram, which defines two generic meta-model elements, "FooElement" and "FooRelationship". These meta-model elements are extension points of package "foo".

|ownedElement: KDMEntity[0..*]||Instances of KDM entities owned by the model. Each KDM model defines specific subclasses of the KDMEntity class.|
|aggregatedRelation: AggregatedRelationship[0..*]||Instances of KDM aggregated relations owned by the model.|

It is the implementer's responsibility to arrange instances of the KDM model elements into models (constrained only by the definition of each model) and to provide name attributes for each KDM model instance. A KDM model instance may be empty or may contain one or more instances of the elements allowed for this KDM model.
A particular KDM instance may contain no KDM models of a given kind, or several such models. Usually, KDM models corresponding to the KDM Resource Layer and Abstractions Layer have associations to the models in the KDM Program Elements and Infrastructure layers. There should be no associations from the Program Elements and Infrastructure layer models to Resource and Abstractions layer models.

|getModel(): KDMModel[0..1]||This operation returns the KDM model which owns the current KDM entity.|

The Segment element represents a coherent set of logically related KDM models that together provide a useful view of an existing software system. A segment is a unit of exchange between the tools of the KDM ecosystem. A segment without an owner is the top segment of the KDM model. It is the implementer's responsibility to arrange KDM models into segments and to provide name attributes for each KDM segment instance. A KDM segment instance may be empty, may contain no KDM models of a given kind, or may contain one or more KDM models of a given kind. KDM meta-model patterns make it possible to arrange KDM instances in such a way that the models in the KDM Program Elements and Infrastructure layers are placed in a separate KDM segment, which becomes a reusable asset for multiple derivative models from the Program Elements and Infrastructure layer.

<?xml version="1.0" encoding="UTF-8"?>
<kdm:Segment xmi:version="2.1" xmlns:xmi="http://www.omg.org/XMI" xmlns:code="http://kdm.omg.org/code" xmlns:kdm="http://kdm.omg.org/kdm" xmlns:source="http://kdm.omg.org/source" name="Framework Example">
<audit xmi:id="id.0" description="Illustration of KDM Framework" author="KDM FTF" date="04-03-2007">
<attribute xmi:id="id.1" tag="approved" value="yes"/>
</audit>
<segment xmi:id="id.2" name="foobar"/>
<model xmi:id="id.3" xmi:type="code:CodeModel" name="foo">
<annotation xmi:id="id.4" text="This is a sample instance of a Code model"/>
</model>
<model xmi:id="id.5" xmi:type="source:InventoryModel" name="bar">
<annotation xmi:id="id.6" text="This is a sample of an Inventory model"/>
</model>
</kdm:Segment>

The classes and associations of the Audit class diagram are shown in the Audit class diagram. Each Framework element can have zero or more Audit instances associated with it. The collection of Audit elements is not ordered; however, the date property can be used to establish a temporal ordering. The date format is "dd-mm-yyyy", where "dd" is the day, "mm" is the number of the month, with a leading zero if needed, and "yyyy" is the year. For example, "04-03-2007" corresponds to the 4th of March, 2007. The implementers of KDM import tools should not make any assumptions regarding the content of the Audit element, other than that it is human-readable text. It is, however, expected that at least one Audit element is related to the origin of the model or segment.

The Extensions class diagram defines the meta-model elements that constitute the basis of the KDM light-weight extension mechanism. Some additional meta-model elements are defined by the ExtendedValues class diagram. The KDM light-weight extension mechanism is a standard way of adding new "virtual" meta-model elements to KDM. A "virtual" meta-model element is a base meta-model element with extended meaning, and possibly with extended attributes. The base meta-model element can be a "regular" element (a concrete class, not marked as "generic") or a "generic" element (a concrete class with underspecified semantics, marked as "generic"). This mechanism is defined as part of KDM.
The light-weight extension mechanism allows introducing new "extended" meta-model element kinds (called stereotypes). The exact meaning and intention of the "extended" meta-model elements are outside of KDM and should be communicated by implementers to the users of the extended representations. The mechanism works as follows:

- Define a stereotype. A stereotype definition includes the name of the class of the allowed base elements. This class can be a particular concrete meta-model element, a generic element, or an abstract meta-model element.
- Define tags associated with a stereotype. Tags are additional attributes of the extended elements. A tag definition includes the name of the extended attribute and the name of the type of the element (represented as a string). Values of extended attributes can take the form of a string, or a reference to some modeling element. Each stereotype defines its own set of tags. Tag definitions are owned by the corresponding stereotype definition.
- Concrete tag values can be added to the "virtual" element if the stereotype defines tags.
- Each tag value is associated with the corresponding tag definition.
- The complete kind of the new element is defined as the union of all stereotypes added to the element.

KDM supports the light-weight extension mechanism through a Generic Element meta-model pattern. KDM defines a relatively large number of concrete meta-model elements with underspecified semantics. In KDM these elements are the common superclasses for classes with specific semantics. However, generic elements can be used as extension points of the light-weight extension mechanism by using them in KDM instances with a stereotype. In addition, each KDM model defines two "wildcard" generic elements: a generic entity and a generic relationship for the given KDM model. They too can be used as extension points. Pure KDM instances should use only concrete meta-model elements which have semantics specified in KDM, without stereotypes. KDM instances which use stereotypes are called extended KDM instances. Extended KDM instances can add stereotypes to concrete meta-model elements. The KDM light-weight extension mechanism does not support multiplicity of tags, constraints on tags, or relationships between tags or stereotypes.

It is the implementer's responsibility to select the most specific extension points, by defining stereotypes in such a way that they use the base elements with the most specific semantic description (closer to the bottom of the KDM class hierarchy). The classes and associations of the Extensions diagram are shown in the Extensions class diagram.

The stereotype concept provides a way of branding (classifying) model elements so that they behave as if they were instances of new virtual meta-model constructs. These model elements have the same structure (attributes, associations, operations) as similar non-stereotyped model elements of the same kind. The stereotype may specify additional required tagged values that apply to model elements. In addition, a stereotype may be used to indicate a difference in meaning or usage between two model elements with identical structure.
|name:String||Specifies the name of the stereotype.|
|type:String||Specifies the name of the model element to which the stereotype applies.|
|tag:TagDefinition[0..*]||The stereotype owns the set of tag definitions that determine the additional tagged values associated with the model elements that are branded with the given stereotype.|

- Tags associated with a model element should not clash with any meta-attributes associated with this model element.
- A model element should have at most one tagged value with a given tag name.
- A stereotype should not extend itself.
- A stereotype can be added to a model element if the element's class is the same as the class named by the stereotype's type attribute, or one of its subclasses.
- The values of the type attribute of the TagDefinition are restricted to the names of the KDM meta-elements. Names of the core datatypes ("Boolean", "String", "Integer") define attributes of the extended meta-model element. The corresponding values are represented as instances of the TaggedValue class. Names of other KDM meta-elements (for example, "KDMEntity" or "Audit") define associations of the extended meta-element, and the corresponding values are represented as instances of the TaggedRef class.

A KDM model element with one or more stereotypes has the semantics of the base element with some additional semantic constraints. The complete kind of the new element is defined as the union of all stereotypes added to the element. Extended values are the attributes of the extended element. A KDM element with one or more stereotypes is equivalent to a situation in which KDM has been extended by adding a new meta-model class that extends the base class, with the new attributes corresponding to the tag definitions.

<?xml version="1.0" encoding="UTF-8"?>
<kdm:Segment xmi:version="2.1" xmlns:xmi="http://www.omg.org/XMI" xmlns:action="http://kdm.omg.org/action" xmlns:code="http://kdm.omg.org/code" xmlns:kdm="http://kdm.omg.org/kdm" name="Stereotype Example">
<extensionFamily xmi:id="id.0" name="Example extensions">
<stereotype xmi:id="id.1" name="Java method"/>
<stereotype xmi:id="id.2" name="C++ method"/>
<stereotype xmi:id="id.3" name="C++ procedure"/>
<stereotype xmi:id="id.4" name="C++ friend">
<tag xmi:id="id.5" tag="friend_of" type="ClassUnit"/>
</stereotype>
<stereotype xmi:id="id.6" name="IsFriendOf"/>
<stereotype xmi:id="id.7" name="native call">
<tag xmi:id="id.8" tag="implemented in" type="String"/>
</stereotype>
</extensionFamily>
<model xmi:id="id.9" xmi:type="code:CodeModel" name="Example">
<codeElement xmi:id="id.10" xmi:type="code:ClassUnit" name="myclass">
<codeElement xmi:id="id.11" xmi:type="code:MethodUnit" stereotype="id.2" name="foo" type="id.12">
<codeElement xmi:id="id.12" xmi:type="code:Signature" name="foo"/>
</codeElement>
</codeElement>
<codeElement xmi:id="id.13" xmi:type="code:CallableUnit" stereotype="id.4 id.3" name="bar" type="id.16" kind="regular">
<taggedValue xmi:id="id.14" xmi:type="kdm:TaggedRef" tag="id.5" reference="id.10"/>
<codeRelation xmi:id="id.15" xmi:type="code:CodeRelationship" stereotype="id.6" to="id.10" from="id.13"/>
<codeElement xmi:id="id.16" xmi:type="code:Signature" name="bar"/>
</codeElement>
</model>
<model xmi:id="id.17" xmi:type="code:CodeModel">
<codeElement xmi:id="id.18" xmi:type="code:ClassUnit" stereotype="id.1">
<codeElement xmi:id="id.19" xmi:type="code:MethodUnit" stereotype="id.1" name="foobar" type="id.23">
<codeElement xmi:id="id.20" xmi:type="action:ActionElement" stereotype="id.7" name="a1">
<actionRelation xmi:id="id.21" xmi:type="action:Calls"
stereotype="id.7" to="id.13" from="id.20"> <taggedValue xmi:id="id.22" xmi:type="kdm:TaggedValue" tag="id.8" value="C"/> </actionRelation> </codeElement> <codeElement xmi:id="id.23" xmi:type="code:Signature" name="foobar"/> </codeElement> </codeElement> </model> </kdm:Segment> Lightweight extensions allows information to be attached to any model element in the form of a “tagged value” pair, (i.e., name=value). The interpretation of tagged value semantics is outside of the scope of KDM. It must be determined by the user or tool conventions. It is expected that tools will define tags to supply information needed for their operations beyond the basic semantics of KDM. Such information could include specific entities, relationships and attributes for a particular programming language, runtime platform or engineering environment. Even though TaggedValues is a simple and straightforward extension technique, their use restricts semantic interchange of extended information about existing software systems to only those tools that share a common understanding of the specific tagged value names. |tag:String||Contains the name of the tagged value. This name determines the semantics that are applicable to the contents of the value attribute.| |type:String||Specifies the type of the value attribute.| - The “value” attribute of the TaggedValue should be valid according to the type specified in the corresponding TagDefinition. - The target of the “ref” association of the TaggedRef should be of the type specified in the corresponding TagDefinition, or one of its subtypes. - If the type of the TaggedDefinition is one of the primitive datatypes (for example, “BooleanType”, “StringType”, “IntegerType”) the corresponding value should be an instance of the TaggedValue class. - If the type of the TaggedDefinition is a names of some other KDM meta-elements (for example, “KDMEntity”, or “Audit”) the corresponding value should be an instance of the TaggedRef class. Names of the tags, defined by each stereotype constitute an isolated namespace which does not interfere with the names of other stereotypes or the names of the attributes of the base type. The meaning and the intention of the stereotypes and tags is outside of the KDM specification and should be communicated by implementers to the users of the extended models. Extensions should not change the semantics of the base KDM meta-model elements, so that the model with extensions can still be interpreted as an approximation of the full extended meaning when extensions are ignored and only the basic KDM semantic rules are applied. |name:String||Provides the name of the extension family.| |stereotype:Stereotype[0..*]||The set of stereotypes that are owned by the extension family.| ExtensionFamily provides a named container for stereotype definitions. It is implementer’s responsibility to arrange stereotype definitions into meaningful families. Some stereotype definitions may be made available as reusable assets in a separate segment. KDM analysis tools should not make any assumptions regarding the organization of stereotypes into families. |taggedValue:TaggedValue[0..*]||The set of tagged values determined by the stereotype.| - Each tagged value added to a ModelElement must conform to a certain tag definition owned by the stereotype of that ModelElement (the tag association of the TaggedValue should refer to a TaggedDefinition that is owned by a Stereotype of the ModelElement). 
A tagged value conforms to a tag definition when the value conforms to the type of the TagDefinition. Conformance of lightweight extensions can only be validated dynamically by a suitable KDM import tool, since lightweight extensions are not defined by the KDM standard.
- A stereotype can be associated with a certain instance of a ModelElement if the type of the ModelElement is the same as the type property in the stereotype definition, or one of its subclasses.

The ExtendedValues class diagram defines additional meta-model constructs as part of the KDM light-weight extension mechanism. These constructs represent extended values that can be added to extended KDM model elements. The key element of the light-weight extension mechanism is a meta-model construct called stereotype. Stereotypes provide additional meaning to KDM meta-model constructs. While the meaning of a stereotype is not defined within the KDM, stereotypes allow differentiation within existing KDM meta-model elements and allow adding attributes to them with the extended value construct. The classes and associations involved in the definition of extended values are shown in the ExtendedValue class diagram.

The ExtendedValue class is an abstract superclass for the two concrete classes that represent tagged values: TaggedValue and TaggedRef. The ExtendedValue class defines common properties for these classes.

|tag: TagDefinition||The reference to the tag definition of the corresponding stereotype.|

An ExtendedValue is a "virtual" attribute of an extended KDM meta-model element. The ExtendedValue element represents the value of the attribute. KDM defines two concrete subclasses of ExtendedValue: TaggedValue and TaggedRef. The definition of the attribute is provided by the TagDefinition element owned by a Stereotype element. The Stereotype element defines the "virtual" meta-model element which provides the context for the new attributes. "Virtual" attributes are instantiated every time a new "virtual" meta-model element, defined by a Stereotype, is instantiated. This is an important difference between ExtendedValues and KDM attributes, which are not related to any particular meta-model element.

A tagged value allows information to be attached to any model element in the form of a "tagged value" pair (i.e., name=value). The interpretation of tagged value semantics is outside of the scope of KDM. It must be determined by the user or tool conventions. It is expected that tools will define tags to supply information needed for their operations beyond the basic semantics of KDM. Such information could include specific entities, relationships, and attributes for a particular programming language, runtime platform, or engineering environment.

|Value:String||Contains the current value of the TaggedValue.|

The TaggedValue element represents simple atomic "virtual" attributes. The type constraint of the value, defined in the corresponding TagDefinition, can be the name of any KDM primitive type, for example "StringType", "BooleanType", etc.

A TaggedRef allows information to be attached to any model element in the form of a reference to another existing model element. Each TaggedRef must conform to the corresponding TagDefinition: the actual type of the model element that is the target of the TaggedRef must be the same as the type specified in the corresponding TagDefinition, or one of its subtypes. In the meta-model, TaggedRef is a subclass of ExtendedValue.
|ref:ModelElement||Designates the model element referred to by the extended value.|

- The model element that is the target of the ref association must be of the type specified by the type attribute of the tag definition that is the target of the tag association of the tagged ref element.

TaggedRef represents complex "virtual" attributes, which are associations to other KDM elements. The type of the corresponding TagDefinition can be the name of any KDM meta-model element, for example, "KDMEntity", "AbstractCodeElement", "ControlElement", or "CallableUnit".

The Annotation class diagram defines meta-model elements that allow ad hoc user-defined attributes and annotations on instances of KDM elements. The mechanism of ad hoc user-defined attributes provides a capability to add pairs <tag, value> to an individual element instance. This is complementary to the light-weight extension mechanism, which provides a new meta-model element, each instance of which has the <tag, value> pairs specified by the definition of the extended element. An ad hoc user-defined attribute is owned by an individual element instance. This means that different instances of the same meta-model element may own completely different user-defined attributes (and some may have none at all). An Annotation is an ad hoc note owned by an individual element instance.

Annotations and attributes are applied to the elements of KDM instances. They may be used by the implementer to add specific information to an individual element. They may also be used by an analyst annotating a given KDM instance. Stereotypes and tag definitions, on the other hand, are first defined as extensions to the KDM (meta-model) and then systematically used by the implementer. The classes and associations that make up the Annotations diagram are shown in the Annotations class diagram.

An attribute allows information to be attached to any model element in the form of a "tagged value" pair (i.e., name=value). Attributes add information to the instances of model elements, as opposed to stereotypes and tagged values, which apply to meta-model elements. A tagged value is part of the extension mechanism (stereotypes define a virtual new model element, and tagged values specify additional attributes of these virtual model elements). Tagged values are only associated with model elements branded by a stereotype, and the set of tagged values for a particular instance of a model element is determined by its stereotype. On the other hand, arbitrary attributes may be associated with individual instances of a model element. In particular, two different instances of the same model element may be annotated by different attributes.

|tag:Name||Contains the name of the attribute. This name determines the semantics that are applicable to the contents of the value attribute.|
|value:String||Contains the current value of the attribute.|

The interpretation of attribute semantics is outside the scope of KDM. It must be determined by the user or the implementer conventions. It is expected that some tools will provide the capability to add arbitrary attributes to the instances of the model, to supply information needed for their operations beyond the basic semantics of KDM. Such information could support analysis of KDM models by analysts, etc. An attribute element is not related to a particular meta-model element. It does not define a "virtual" attribute of an extended meta-model element that is instantiated with every instantiation of the new element. Instead, an attribute element can be added to any KDM element.
It defines a property of a particular instance, not a property of a class of instances.
|text:String||Contains the text of the annotation to the target model element.|
|attribute:Attribute[0..*]||The set of attributes owned by the given element.|
|annotation:Annotation[0..*]||The set of annotations owned by the given element.|
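To make the stereotype / tagged-value mechanism concrete, here is a small Python sketch of the conformance rules described above. KDM itself is defined in MOF/XMI, so every class name and attribute below is illustrative only, not part of the standard's API:

```python
# Hypothetical names throughout -- illustrative, not the KDM standard's API.
class TagDefinition:
    def __init__(self, tag: str, type_name: str):
        self.tag = tag          # name of the "virtual" attribute
        self.type = type_name   # a KDM primitive type or meta-model element name

class Stereotype:
    """Defines a 'virtual' meta-model element and the tags it carries."""
    def __init__(self, name: str, base_element: str, tags):
        self.name = name
        self.base_element = base_element   # the meta-model element being extended
        self.tags = {t.tag: t for t in tags}

class TaggedValue:
    """Simple atomic 'virtual' attribute: a name=value pair."""
    def __init__(self, tag: TagDefinition, value: str):
        self.tag, self.value = tag, value

class TaggedRef:
    """Complex 'virtual' attribute: a reference to another model element."""
    def __init__(self, tag: TagDefinition, ref: object):
        self.tag, self.ref = tag, ref

def conforms(tagged_ref: TaggedRef) -> bool:
    # The referenced element must be of the type named in the TagDefinition
    # (full KDM would also accept subtypes of that type).
    return type(tagged_ref.ref).__name__ == tagged_ref.tag.type
```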
<urn:uuid:c597123b-05b3-4197-95f6-b35b23a0e687>
CC-MAIN-2022-40
https://kdmanalytics.com/resources/standards/kdm/technical-overview/kdm-1-0-annotated-reference/part-1-kdm-infrastructure-layer/10-kdm-package/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00158.warc.gz
en
0.791023
6,431
2.5625
3
Published On July 09, 2021
Home offices, homeschooling, online hang-outs with friends, online shopping, movie streaming, gaming nights, etc. – today's living under COVID-19 lockdowns seems impossible without the latest gadgets. Many experts therefore foresaw a sizable increase in the consumption of electrical and electronic equipment and a simultaneous increase in disposal, partly as a result of the house-cleaning that accompanied the first lockdowns in 2020. However, the statistics from a recently released UN University study show that lower consumption of electrical and electronic equipment actually reduced the e-waste generated in the first three quarters of 2020 by 4.9 million metric tons, with the largest drops in the first and second quarters. The reductions were 30% in low- and middle-income countries but only 5% in high-income countries.
This inequality has a large social side effect: while the population in low- and middle-income countries continues to grow, the gap in access to modern communication technologies and other electronics (the so-called "digital divide") is widening. The ability to adapt to digitization and earn a living, or simply to own and benefit from electronics, is decreasing in some parts of the world. On the other hand, the reduction, most likely temporary, means less e-waste in regions where mismanagement of e-waste causes major environmental and health damage.
You can access the full report here:
<urn:uuid:5fa17327-4584-4a92-890d-55c909a6d364>
CC-MAIN-2022-40
https://www.ironmountain.com/blogs/2021/the-pulse-of-itad-un-research-digs-into-e-scrap-impacts-of-covid-19
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00358.warc.gz
en
0.951348
299
3.09375
3
New Training: Identify Data Center Components
In this 12-video skill, CBT Nuggets trainer Jeff Kish covers the components of a data center infrastructure. Learn about the benefits of backup and disaster recovery solutions and how hyperconvergence can simplify your data center architecture. Gain an understanding of data center servers, hypervisors, storage arrays, and the properties of storage area networks (SANs). Watch this new Cisco training.
Watch the full course: Cisco CCT Data Center
This training includes:
1.2 hours of training
You'll learn these topics in this skill:
Introduction to Identify Data Center Components
Data Center Networking
Storage Area Networks (SANs)
Backup and Disaster Recovery
HDDs, SSDs, and NVMe
Reviewing Data Center Components
What is a Hypervisor?
A hypervisor is software that you can use to manage and run virtual machines (VMs). Also known as a virtual machine monitor (VMM), a hypervisor is what lets you host multiple VMs on a single computer, with each of them sharing the computer's resources, including processing power, storage and memory.
There are two types of hypervisors: Type 1 and Type 2. A Type 1 hypervisor is the most common type. It runs directly on a computer's hardware, functioning like an operating system. A Type 2 hypervisor, on the other hand, runs as software within a host operating system, much like other applications. The advantages of Type 1 hypervisors are that they are more secure, because they are isolated from a general-purpose operating system, and that they usually perform better than Type 2 hypervisors.
Hypervisors are one of the major factors behind the development of the cloud; they lie at its foundation.
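As a quick, hedged illustration of what "managing VMs" looks like in practice (this snippet is not part of the course), the sketch below lists the virtual machines on a local KVM host using the libvirt Python bindings; the connection URI and environment are assumptions:

```python
import libvirt  # pip install libvirt-python; requires a running libvirt daemon

# "qemu:///system" is the usual URI for a local KVM (Type 1) hypervisor setup.
conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"{dom.name()}: {status}")
finally:
    conn.close()
```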
<urn:uuid:cbf641b5-aa19-4564-8ad8-668d070d589d>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/new-skills/new-training-identify-data-center-components
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00358.warc.gz
en
0.891385
391
2.921875
3
Active Directory Overview
For my latest CBT Nuggets course, you and I are going on an intense exploration of the wonders of Active Directory (AD). AD is a Network Operating System (NOS) that Microsoft originally built on top of Windows 2000! Obviously, with Windows Server 2016 powering many data centers today, this NOS has seen many changes and improvements.
It is fair to think of AD as a sophisticated database. It holds information about your users, groups, computers, printers, and any other objects you need to define in order to make your network thrive.
When Microsoft first introduced Windows NT, they were struggling with what to do about a NOS. In fact, the original "domain" concept from Microsoft featured information stored in a flat file structure and constrained administrators to a fixed number of objects they could add to the domain. It is amazing to think about this today with the vastly scalable network architectures of Server 2016.
The key technology that changed everything for Microsoft was the Lightweight Directory Access Protocol (LDAP). Microsoft was so impressed with this open standard for NOS functions that they based their own Active Directory on its principles and ensured AD's compliance with LDAP. It is no coincidence that LDAPv3 became a reality in 1997 and Microsoft released AD in Windows 2000.
The Database Revealed
While Active Directory presents a hierarchical structure to users and administrators, it is actually still stored in a flat file database structure. Users never see this, however. They see container objects and non-container objects (leaf nodes). The most common container we use today is the OU (Organizational Unit). These incredibly powerful structures allow us to group similar objects and then apply security and management policies to those objects as a whole.
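Because AD is LDAP-compliant, you can query those OU containers with any LDAP client. Here is a minimal sketch using the ldap3 Python library; the server name, credentials, and distinguished names are placeholders, not values from the course:

```python
from ldap3 import Server, Connection, SUBTREE  # pip install ldap3

# Placeholder host, credentials, and DNs -- substitute your own domain values.
server = Server("dc01.example.local")
conn = Connection(server, user="EXAMPLE\\reader", password="...", auto_bind=True)

# Find all user objects inside one OU container.
conn.search(
    search_base="OU=Sales,DC=example,DC=local",
    search_filter="(objectClass=user)",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "displayName"],
)
for entry in conn.entries:
    print(entry.sAMAccountName, entry.displayName)
```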
<urn:uuid:152b938a-0864-4499-ab7b-d2f0e38c309f>
CC-MAIN-2022-40
https://www.ajsnetworking.com/microsofts-active-directory/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00358.warc.gz
en
0.939986
354
2.828125
3
Recognizing the Need for More Bandwidth
When do you have to add more bandwidth to a congested link? When it runs out of bandwidth is the obvious answer. But how do you know that a link is congested? If you look at most network management link utilization displays, they rarely show anywhere close to 100% link utilization. First, the effect of polling every few minutes effectively averages the link utilization over the time between each poll. Second, most products average several samples to arrive at the aggregate data used in longer-interval displays. Figure 1 shows one-minute samples, while Figure 2 shows the same data plotted at 10-minute resolution. The peaks are significantly lower in the 10-minute plot because of averaging.
The 95th Percentile level, shown below in Figure 3 as horizontal blue (Receive) and red (Transmit) lines, displays more useful data. When calculated over 24 hours, the 95th Percentile shows the minimum utilization of the busiest 72 minutes of the day. Since 72 minutes is just a bit over an hour, I like to think of it as the minimum utilization during the "busy hour." In Figure 3, the Transmit 95th Percentile utilization is 25%, and because the reporting interval is 12 hours, it shows the minimum utilization for the busiest 36 minutes of the day.
Regardless of the type of measurement, utilization is not a good indication of congestion. Fortunately, there is a simple way to identify link congestion: look for interface output drops (sometimes called "discards"). These occur when the network device has no free buffers in the output interface queue. Because packets awaiting transmission occupy all the buffers, the device must drop the new packet. Since a few drops are part of normal network operation, use the network management system (NMS) "Top-N" reports to show the interfaces that have the highest drop counts. A second check is whether these drop counts are greater than 0.0001% of link bandwidth. This is the point at which packet loss causes TCP performance to degrade. The combination of the Top-N drops and drop counts greater than 0.0001% identifies interfaces that are candidates for bandwidth upgrades.
Now we need to determine whether other tools, such as quality-of-service (QoS) mechanisms, can handle the congestion. Is there any important or time-sensitive traffic to prioritize? Conversely, can any packets be dropped (i.e., much less important or less time-sensitive data)? You need to analyze the traffic with a tool that can show which applications are running, so that you can prioritize the important applications and de-prioritize the least important ones. At one end of the tool spectrum is Wireshark, a basic packet capture tool that can perform application analysis (refer to the Sharkfest '12 presentation, "Application Performance Analysis"). Other tools include Riverbed's SteelCentral AppResponse for application performance analysis, and any of several types of flow data analysis tools.
At one customer, we were tasked to investigate slow application performance at a remote site.
Our traffic analysis showed that entertainment traffic from Pandora.com, Akamai Networks, and Limelight Networks accounted for more than 50% of the 40-Mbps capacity available (Pandora is a streaming music site, while Akamai and Limelight are content providers that deliver things like movies). We de-prioritized the entertainment traffic into a low-priority queue and gave higher priority to voice calls and business applications. This worked well at this site.
What About Buffering?
But couldn't the excess packets simply be buffered and transmitted when the congestion subsides? Well, that depends. If the congestion is of very short duration, buffering may work. However, do not use large amounts of buffering to try to control and reduce drops on heavily utilized links, especially if the links have low latency. TCP will retransmit any unacknowledged packets after twice the round-trip time (2 * RTT) because it thinks the packets have been dropped. My rule of thumb is to not use more buffers than the bandwidth-delay product of the link, in bytes, divided by the typical packet size (BW * RTT / typical-packet-size). It is important to identify a good packet size for use in this calculation. Voice packets are about 220 bytes long, while file transfers use the maximum packet size (normally 1,500 bytes, but up to 9,000 bytes if jumbo frames are allowed). A value of 500 bytes is normally a reasonable starting point.
A good example may help you understand. Let's assume a 1Gbps link (125MBps) between two nearby data centers, with a normal RTT of 2 milliseconds (0.002 seconds). Most of the applications involved file transfers, so the typical packet size was close to 1,500 bytes. Plugging into our equation: 125,000,000 bytes/second * 0.002 seconds / 1,500 bytes gives a ceiling of roughly 167 packets of buffering.
If you configure an oversubscribed interface with too many buffers, it confuses the TCP retransmit algorithm. When queues build, the TCP retransmit timer expires and another copy of transmitted data is sent. This effectively reduces the link's "goodput" because it sends two copies of the same packet. So too much buffering actually hurts more than it helps.
When to Add More Bandwidth
If some of the traffic in our example could be put into a low-priority queue, then this might be a suitable solution. However, what if there isn't any low-priority traffic? That's when it is necessary to add more bandwidth. But how much bandwidth? Doubling the available bandwidth is typically a good starting point. I've not seen any calculations that would let us compute the required amount precisely; it may be possible to estimate it using the Mathis Equation.
We encountered this situation at one customer. The link was dropping a lot of packets and was running at high utilization levels during working hours (this was a case where the NMS average utilization plots did show useful data). None of the applications were low priority, so we couldn't use QoS to prioritize the traffic. Adding bandwidth was the only recourse. Fortunately, the customer had already started the process of getting a link upgrade. Based on drop counts, we were able to determine that, at a minimum, the company needed to double link capacity.
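To make the two numeric rules of thumb easy to reuse, here is a short sketch (the helper names are mine, not the author's, and the drop threshold is interpreted as a fraction of forwarded packets):

```python
def max_buffer_packets(link_bps: float, rtt_s: float, pkt_bytes: int) -> int:
    """Rule of thumb: buffer no more than the bandwidth-delay product
    in bytes divided by the typical packet size (BW * RTT / pkt size)."""
    bdp_bytes = (link_bps / 8) * rtt_s   # bits/sec -> bytes/sec, then * RTT
    return round(bdp_bytes / pkt_bytes)

def drops_excessive(drops: int, total_pkts: int) -> bool:
    """Flag an interface once drops exceed 0.0001% of forwarded packets,
    the point at which TCP performance starts to degrade."""
    return total_pkts > 0 and drops / total_pkts > 0.000001

# The article's example: 1 Gbps link, 2 ms RTT, 1,500-byte packets.
print(max_buffer_packets(1_000_000_000, 0.002, 1500))   # -> 167 packets
```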
Begin working on the Top-10 or Top-20 interfaces, focusing on those interfaces that show drop rates greater than 0.0001% of the link capacity (remember to convert interface speed from bits/second to packets/second, so you'll need a rough idea of packet size). Analyze the traffic on the links with the most drops to determine the application mix. Identify any applications that must have high priority and those applications that can be given a low priority (i.e., that can be dropped when congestion occurs). If low-priority applications are consuming a significant amount of bandwidth, use QoS to force those packets to be dropped when congestion occurs. Check the QoS queue drop counts to determine if additional buffering is needed, but never add so much buffering that it confuses TCP. If all applications are important and you can't select some traffic to drop using QoS, then you need more bandwidth. In this case, doubling the existing link capacity is a good start.
<urn:uuid:38888228-4159-40b9-85b9-5ca023bc7f87>
CC-MAIN-2022-40
https://www.nojitter.com/recognizing-need-more-bandwidth
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00358.warc.gz
en
0.932057
1,593
2.59375
3
Geoengineering is a term that refers to various theoretical ideas for altering the Earth's energy balance to combat climate change. An international team of atmospheric scientists has investigated, for the first time, the possibility of using a "cocktail" of geoengineering tools to reduce changes in both temperature and precipitation caused by atmospheric greenhouse gases.
Solar geoengineering aims to cool the planet by deflecting some of the Sun's incoming rays, for example by dispersing light-scattering particles in the upper atmosphere, which would mimic the cooling effect of major volcanic eruptions. However, climate-modeling studies have shown that while scattering sunlight reduces the warming caused by greenhouse gases in the atmosphere, it would also push rainfall and other types of precipitation below optimal levels.
Another approach involves thinning high cirrus clouds, which are involved in regulating the amount of heat that escapes from the planet to outer space. This reduces warming but would not correct the increase in precipitation caused by global warming.
As scientist Ken Caldeira of the Carnegie Institution for Science put it, one method reduces rain too much, and the other reduces rain too little. That is where the theoretical cocktail shaker gets deployed.
The researchers used models to simulate what happens if sunlight-scattering particles are deployed at the same time as the cirrus clouds are thinned. Their simulations showed that when both methods are deployed in concert, warming is reduced to pre-industrial levels, as desired, and on a global level rainfall also stays at pre-industrial levels.
But even though the global average climate was largely restored, substantial differences remained locally, with some areas getting much wetter and other areas getting much drier.
Caldeira said the same amount of rain fell around the globe in the models, but it fell in different places, which creates a big mismatch between what our economic infrastructure expects and what it will get.
More information: [Geophysical research letters]
<urn:uuid:735931d1-c860-421d-be28-33a2e93d3fa3>
CC-MAIN-2022-40
https://areflect.com/2017/07/25/cocktail-geoengineering-may-reduce-the-climate-changes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00558.warc.gz
en
0.925766
394
4.03125
4
A technology developed by Microsoft for embedding objects created in one program into another. An example of OLE is inserting Excel spreadsheets into Word documents. The method supports two modes: • Full embedding, where the link with the data source is lost after saving the receiver object. • Dynamic object linking, where a change in the data in the original object results in an update of the embedded container that includes a link to the source.
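A short sketch of both modes driven from Python via COM automation on Windows (pywin32); the file paths are placeholders and error handling is omitted:

```python
import win32com.client  # pip install pywin32; requires Windows with Office installed

word = win32com.client.Dispatch("Word.Application")
doc = word.Documents.Add()

# Full embedding: a copy of the workbook is stored inside the document,
# so the connection to the source file is not kept.
doc.InlineShapes.AddOLEObject(
    ClassType="Excel.Sheet",
    FileName=r"C:\data\budget.xlsx",  # placeholder path
    LinkToFile=False,
)

# Dynamic linking: the document stores a link, so changes to the
# source workbook update the embedded view.
doc.InlineShapes.AddOLEObject(
    ClassType="Excel.Sheet",
    FileName=r"C:\data\budget.xlsx",
    LinkToFile=True,
)

doc.SaveAs(r"C:\data\report.docx")
word.Quit()
```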
<urn:uuid:873518eb-b4ed-4267-8a56-04d93b8aa684>
CC-MAIN-2022-40
https://encyclopedia.kaspersky.com/glossary/object-linking-and-embedding-ole/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00558.warc.gz
en
0.902426
89
2.796875
3
Some hacks require intense technical know-how, but the majority of data breaches are relatively low-tech and low-effort. Even monumental hacks like the DNC email attack were the result of simple spear-phishing campaigns. Just two days after sending a phishy login email to Clinton campaign chairman John Podesta, Russian hackers had gained access to a treasure trove of over 50,000 emails.
The Damage Done
Phishing sounds like a relatively harmless practice, but it's the technique behind some of the most ruthless data breaches, including attacks on J.P. Morgan Chase, Target, eBay, Anthem, Sony Pictures, the White House and the Pentagon. These attacks have hit industries as diverse as retail and banking, and the hackers behind them are motivated by profit, state secrets, wire fraud and even terrorism.
How It Starts
Spear-phishing emails are designed to look harmless. Attackers engineer them to resemble emails that could come from a trusted source — a friend or coworker. But once they get the keys to the kingdom, hackers gain access to a wealth of sensitive information like financial data, email communication and more. And while the emails that targets receive appear benign, successful spear-phishers are far more sophisticated than spammers. They research their targets to craft messages that will convince them to click. This process involves probing their social networks and the rosters on their company websites.
Phishing for Profit
By most accounts, these socially engineered attacks are quite successful. A recent survey found that 38% of cyber-attacks in the last year came from phishing. Even more alarming, 91% of advanced persistent threat (APT) attacks start with a spear-phishing email. And 94% of victims are infected through a malicious attachment, like a remote access trojan that activates when the subject opens an attached file.
So, how do you keep your company safe from phishing in a world where even government entities and Fortune 500s can fall victim? Check out these 12 tried-and-true ways to stay safe from phishing scams.
Know Your Attacker
While always socially engineered, phishing comes in two forms — malicious emails that get you to click on a malware-infested attachment, or a link to a fake website that convinces you to enter your username and password. The more you know about how hackers attack, the safer you'll be.
Watch Your URLs
Most hackers are too lazy to write malicious code that can fool antivirus software. As a result, fraudsters who want to make a quick buck turn to phishing scams. In most instances, hackers create replicas of secure websites. Fooled by legitimate-looking corporate logos, users submit usernames and passwords to these fake sites, and the hackers have a field day with their data. To avoid this unfortunate situation, type web addresses in directly, and look out for strange or suspicious URLs that could indicate you've reached a fake website. And always make sure you're on a website with an HTTPS address.
A staggering 100,000 new phishing attacks are reported every month. But those who receive regular anti-phishing education are much more likely to spot and avoid threats. Educate your employees through mock phishing scenarios and develop security policies dedicated to phishing awareness and the creation of complex passwords.
Beware of Sender
Whenever you get a suspicious email, it's important to scrutinize the email address. Does it look strange or unfamiliar?
Apple would never send a corporate email from firstname.lastname@example.org, and that means you shouldn't click on it either.
Think Before You Click
Smart scammers prey on familiarity and emotion, so if something feels off, don't click. A healthy skepticism about clicking on links and downloading attachments can save you — and your data — lots of trouble. If you get a suspicious email from a company that requires immediate action, visit the company's website and log in directly.
Upgrade Your Toolbar
Available on most browsers, anti-phishing toolbars run checks on the websites you're browsing and compare them to known phishing sites. These free tools will alert you whenever you stumble on a malicious site. Pop-up blockers can also help you thwart phishing attacks. If you do encounter a pop-up, carefully click the "x" at the top of the window. Clicking anywhere else could take you to a malicious site.
Protect Your Personal Info
Avoid sharing personal or financial information on the internet that could be used in socially engineered phishing attacks. And never enter confidential information into a web link you receive via email. When logging in, always visit the website directly.
Protect Your Resources
To stay safe in the event of an attack, keep all corporate and personal files encrypted and up to date. These protections are especially important for employees who are telecommuting.
Get Smart Software
Updated operating systems and antivirus programs offer the best protection against malware. When current, these programs protect your devices by guarding against the latest viruses and phishing attacks. They also scan every file that enters your system, preventing anything malicious from damaging it.
Hackers can operate undetected for a while. That's why it's important to stay on top of your accounts by regularly checking your bank statements and account information, and updating your password on a monthly basis.
Create a Firewall
For maximum protection, install a desktop firewall to keep intruders from infiltrating your software, and a network firewall to keep fraudsters out of your devices and your network. With these tools in place, you can dramatically cut down on hacking threats.
Safeguard Your Data
Sometimes even security experts fall victim to sophisticated phishing scams. That's why it's important to safeguard your most precious data. With two-factor authentication, hackers can't breach the additional security safeguards and infiltrate your accounts. Additionally, use a password manager to store complex, hard-to-hack passwords.
Better With Bluefin
While Bluefin can't protect you from phishing scams, our advanced technology ensures that credit and debit cards are encrypted the moment they enter a payment system. To find out how our P2PE and tokenization services protect your organization from a data breach, contact a Bluefin representative today.
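For readers who want to automate part of the "Watch Your URLs" advice above, here is a hedged Python sketch that flags links whose visible text suggests one domain while the underlying href points somewhere else; the sample HTML and the heuristic are illustrative only:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML e-mail body."""
    def __init__(self):
        super().__init__()
        self.links, self._href = [], None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href:
            self.links.append((self._href, data.strip()))
            self._href = None

# Made-up sample: the text claims apple.com, but the link goes elsewhere.
html = '<a href="http://secure-login.example.ru/apple">apple.com/account</a>'
auditor = LinkAuditor()
auditor.feed(html)

for href, text in auditor.links:
    host = urlparse(href).netloc
    if host and (host not in text or not href.startswith("https://")):
        print(f"Suspicious: text says '{text}' but the link goes to {host}")
```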
<urn:uuid:c8f24bcd-b48c-4e03-8878-b7f34e9432ab>
CC-MAIN-2022-40
https://www.bluefin.com/bluefin-news/protecting-your-company-from-phishing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00558.warc.gz
en
0.912525
1,342
2.703125
3
The terms "cybersecurity" and "environmental protection" aren't typically used together. You might think of trees, clean air, or endangered species when you hear the phrase "environmental protection." You might think of hackers, email scams, or identity theft when you hear the term cybersecurity. So, what is the link between cybersecurity and environmental protection? There's a lot more to it than meets the eye.
Infrastructure is where cybersecurity poses the most serious danger to the environment. Take water infrastructure, for example. Most municipal drinking water supplies in industrialised economies come from utility districts, which rely on huge infrastructure to capture, treat, and distribute drinking water. Municipal water is then transported away via wastewater infrastructure, where it is treated before being returned to natural systems.
The drinking water and wastewater systems have one thing in common: they both require a lot of infrastructure. Pipelines, huge treatment facilities, and distribution networks are required to treat municipal water at both the consumption and disposal ends of the spectrum. Command and control centres link all of this infrastructure together, and those centres all run on interconnected computer networks, which are exposed to a variety of security risks ranging from external hacking to malicious insiders.
The moral of the story is that cybersecurity isn't just a worry for the digital world. Secure digital and information systems are equally crucial for the physical world, which is very real and very concrete. The health of the environment (and of the public) depends increasingly on cyber command and control systems run by computer networks. That means a cybersecurity failure could result in massive pollution events and critical infrastructure failure. Furthermore, cybercriminals compromising massive infrastructure would take a greater psychological toll on the public (and on social sentiment) than the now-commonplace data breaches affecting things like credit card accounts. Infrastructure hacks that harm the environment or public health would cost far more than the physical damage alone.
This tutorial aims to give a broad overview of cybersecurity's role in environmental protection. It will primarily focus on the role that the command and control elements of cyber-physical systems play in protecting environmental health. However, the lessons learned at the infrastructure level can easily be transferred to other networks and systems that contribute to environmental and public health protection.
The Environment and Cybersecurity
There are 153,000 public drinking water infrastructure systems and 16,000 public wastewater districts in the United States alone, according to the Cybersecurity and Infrastructure Security Agency (CISA). Around 80% of residents in the United States get their drinking water from a public water system, while about 75% rely on municipal wastewater services. The CISA list of National Critical Functions includes environmental services such as drinking water and wastewater.
National Critical Functions (NCFs) are defined by the CISA as "government and private-sector functions so important to the United States that their disruption, corruption, or dysfunction would have a crippling effect on security, national economic security, national public health or safety, or any combination thereof." The list of NCFs is divided into four major categories:
- Connect — information networks and the internet, communications and broadcasting, and telecommunications and navigation services.
- Distribute — logistics and supply chains.
- Manage — key services including election management, sensitive documents and information, infrastructure, capital markets, medical and health facilities, public safety and community health, and hazardous materials and wastewater.
- Supply — the distribution of fuel and energy, food, critical materials, housing, and drinking water.
According to the CISA's National Critical Infrastructure classification, water supply and wastewater management are two of the four key types of infrastructure deemed vital to the continuous safe operation of local, regional, and national government. For this reason alone, environmental infrastructure such as water treatment plants is a desirable target for cybercriminals, including ransomware operators, disgruntled employees, and terrorists.
The magnitude of a possible infrastructure attack is similarly enormous. With the same amount of effort that it takes to hack one account or system, a cybercriminal could potentially affect the daily lives of millions of people through social engineering or insider attacks.
It's worth noting that this type of critical infrastructure classification and designation isn't restricted to the United States. The European Commission (the EU's executive arm) also maintains a list of vital infrastructure, which includes drinking water and wastewater management as two of the most crucial systems to protect against attack or disruption.
Another parallel between environmental protection and sustainability on the one hand and cybersecurity on the other is that both are frequently viewed as subject to the "tragedy of the commons." Like regulating the ocean or developing comprehensive climate change policy, cyberspace is seen as vast and poorly defined in terms of boundaries and responsibilities. Just as most legal jurisdictions do not deal proactively with concerns like carbon emissions, sea level rise, or ocean acidification, most entities (in this case companies, organisations, and people) take only a reactive or defensive posture when it comes to cybersecurity. At the moment, there isn't much in the way of a proactive cybersecurity police force acting in the public interest, and the laws and procedures governing cybersecurity best practices are proving as difficult to establish as environmental regulation and enforcement.
Cybersecurity Challenges to Environmental Protection Infrastructure: A Case Study
The first case study of a cybersecurity breach with potential impacts on the environment and public health was published in 2016. In some ways this case, in which authorities withheld numerous details to protect the investigation into the breach, serves as an ideal example, mainly because it could have happened anywhere.
The attack was carried out by a group of hackers who were able to gain access to the backend of a network that controlled a drinking water treatment plant. To mask identifying characteristics, the plant was given the name Kemuri Water Company in press reports (such as this one in Infosecurity Magazine).
Here's how the incident unfolded. Employees first noticed strange behaviour in the programmable logic controllers of the company's system. The controllers, which are driven by computer programs, are responsible for, among other things, releasing predetermined amounts of chemicals at specified moments during the drinking water treatment process. The attack also hampered the rate of drinking water flow from the plant.
After some digital forensics by Verizon Solutions, which handled portions of the water treatment plant's networking, it was revealed that the hackers (apparently with connections to Syria, based on the IP addresses used) had gained access to the credit card and billing information of 2.5 million ratepayers. The attackers appeared to be after money and personal information, and it was unclear from the subsequent investigation whether they were even aware of what else their access allowed them to manipulate.
The second case study of cybersecurity hacking with environmental consequences resembles the first in certain ways. Once again, most of the details are shrouded in obscurity, although a few media accounts provide glimpses into the incident.
According to media reports, a group of hackers purportedly based in Russia infiltrated the computer networks managing drinking water infrastructure in two American communities in 2011. The first incident took place in an undisclosed city in Illinois, the second in Houston, Texas.
To summarise the cyberattack: a hacker took control of a pump that distributes drinking water via a pipeline and repeatedly turned one of the pump's valves on and off, causing the pump to break. Following the FBI and Department of Homeland Security's investigation and public comments, the hacker revealed that he or she (or they) had also gained access to the South Houston Water and Sewer Department's system, which was protected by a three-letter password.
It is no accident that both of the above case studies involved hackers based outside the United States, but insiders pose a threat as well: after being fired, disgruntled employees of energy infrastructure companies, including a nuclear reactor in Texas and offshore oil rigs in California, have hacked into proprietary systems to deliberately cause disruptions. All of this is to say that infrastructure operators must account for and defend against multiple kinds of cybersecurity threats.
What Makes Cybersecurity Challenging in the Environmental Protection and Environmental Health Field?
Developing cybersecurity best practices in the environmental sector is difficult for a variety of reasons. To begin with, as previously stated, cybersecurity and environmental protection are not commonly thought of together. Second, although critical infrastructure such as water and electricity systems is vital, it was historically not exposed to cyberattacks; as more of this infrastructure becomes connected, however, the number of cyber attack surfaces continues to rise. Finally, bad actors have increasingly targeted environmental services and essential infrastructure as a way to multiply the effects of an attack by damaging social sentiment and public trust.
One of the most difficult aspects of adopting cybersecurity in the environmental domain is the need for a regulatory framework that is comprehensive and complete yet also tactical and surgical in nature. Individual environmental infrastructure operators should have considerable room, ideally within regulation or rules, to respond to specialized dangers and immediate incidents. As in other areas of environmental regulation, coming up with generally agreed-upon cybersecurity policies is difficult. The problem is exacerbated by the fact that different drinking water and wastewater utilities (as well as other infrastructure and environmental service providers) operate their systems using different types of technology and computer networks. In other words, cybersecurity policy and best-practice advice for infrastructure operators must be both specific and general in order to be effective and influential. Finding a happy medium is a difficult undertaking.
While some of the higher-level organisational and policy elements may appear out of reach for local drinking water and wastewater treatment plant operators, there are a few fairly basic things that can be done to help protect environmental infrastructure against cyber attacks. Some basic ideas are included in the Water Information Sharing and Analysis Center's list of 15 Fundamentals for Water and Wastewater Utilities (more information about this organisation can be found below):
- Regularly assess the risks.
- Enforce user controls (and password best practices).
- Restrict physical access to digital infrastructure.
- Create cybersecurity policies and procedures.
- Prepare for cyber-attacks and emergencies.
The full list of recommendations is available on the Water Information Sharing and Analysis Center's (WaterISAC) website.
Cybersecurity Solutions for the Environmental Field
Understanding all of the vulnerabilities faced by environmental and infrastructure service providers is the first step in designing cybersecurity solutions for the sector. The good news is that a number of specialised organisations are emerging that are familiar with, and capable of dealing with, the rise in cyberattack activity, especially as it relates to environmental infrastructure. Here are a few organisations that currently report on and investigate environmentally sensitive cyberattacks:
The Water Information Sharing and Analysis Center (WaterISAC) is a non-profit organisation situated in Washington, DC, that collaborates with the Environmental Protection Agency. The 2002 Bioterrorism Act established WaterISAC as an official information sharing and operations body. WaterISAC collects data on verified and suspected cyber incidents from drinking water and wastewater treatment infrastructure operators.
The Cybersecurity and Infrastructure Security Agency (CISA) was established as a new governmental agency to address the growing threat of cyberattacks on infrastructure. The agency offers a number of cybersecurity resources, as well as standards for reporting cyber incidents.
The American Water Works Association is a Denver-based nonprofit organisation dedicated to the water industry. The organisation offers a variety of resources on cybersecurity protocol and best practices.
Preparing for and averting cyber attacks and cyber catastrophes will only grow more crucial in the long run.
After a cyberattack on water infrastructure in two American cities by hackers linked to Russia, Lani Kass, a former adviser to the US Joint Chiefs of Staff on security issues, told the BBC that everyone needed to do a better job of understanding cybersecurity and the vulnerabilities of critical infrastructure. "The going in notion is always that it's simply an occurrence or happenstance," she was quoted as saying in the news report. "And it's difficult — if not impossible — to establish a pattern or connect the dots if each instance is seen in isolation. We were caught off guard on 9/11 because we failed to connect the dots."
Additional Resources and Reading
American Water Works Association — Water sector cybersecurity risk management guidance, 2019.
Cybersecurity and Infrastructure Security Agency — Assessments: Cyber resilience review, 2020.
Institute for Security and Development Policy — Climate change, environmental threats, and cybersecurity in the European High North (Sandra Cassotta, 2020).
WaterISAC — 15 cybersecurity fundamentals for water and wastewater utilities: Best practices to reduce exploitable weaknesses and attacks, 2019.
<urn:uuid:f8099412-8229-44a4-967f-55cc4d107739>
CC-MAIN-2022-40
https://cybersguards.com/cybersecurity-in-the-environmental-protection-field/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00558.warc.gz
en
0.949009
2,645
3.015625
3
One of the latest attempts at wireless power transmission is through sound. Californian startup uBeam has developed a technology that broadcasts power through ultrasound. This refers to sound above the range of human hearing, which nominally starts at 20,000Hz. The technology uses a speaker-like system to transmit acoustic energy over the air that gets converted to electrical power at the device via ultrasonic transducers. "uBeam can safely transmit watts-level power within a couple of feet, and milliwatt-level of power at tens of feet away," the company says uBeam says that the technology can be used to provide constant power for industrial IoT applications. It also plans to offer the technology for smart homes and consumer electronics. The company claims that "ultrasound is inherently safe" and has been used in various applications, such as parking sensors and cleaning equipment, for over 50 years. The company says it is not limited by FCC power regulations because of its use of acoustic power rather than electromagnetic energy, like rivals Powercast and Wi-Charge. This means uBeam can be used in airplanes, cars and hospitals, the company says. The company has raised over $31 million in funding so far. uBeam first started in January 2012. The company claims to have scored "more than 125 granted patents" for its technology so far. The company is partnering up to build the technology into devices. "uBeam is licensing the wireless energy technology, and we work with our partners to build it into their products," uBeam says. The company released initial reference designs for customers in March this year. - Powercast's Power-Over-Air System Takes Off - Wi-Charge Readies Infrared Wireless Power System for IoT Devices — Dan Jones, Mobile Editor, Light Reading
<urn:uuid:e4df76ae-9708-4e60-9ea6-61a5e613604b>
CC-MAIN-2022-40
https://www.lightreading.com/iot/industrial-iot/startup-ubeam-uses-ultrasound-for-over-the-air-power/d/d-id/753615?piddl_msgorder=asc
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00558.warc.gz
en
0.953371
377
2.890625
3
Whenever a new virus threat appears, there is a period in which users are exposed to infection. This is because the virus has time to spread during the gap between the moment the virus appears and the moment the laboratories of antivirus companies provide their customers with an update to neutralize the new code. This time period is usually short (a few hours in the most difficult cases), but the authors of today's viruses and worms, aware of the existence of this unprotected period, have managed to ensure that their creations spread far more quickly. If we focus on the most rapid threats to have appeared to date, such as Sasser, Blaster or SQLSlammer, all of these managed to cause damage within a matter of minutes, well before any virus laboratory had time to react.
To alleviate this problem, some antivirus companies tried to implement automatic systems for detecting new viruses, allowing the user to send in suspicious files and automatically generating a disinfection routine for the suspected virus. The major drawback of this approach is that the computer user needs to recognize that a file is suspicious in order to send it, and this is a lot to ask of users with limited IT knowledge.
One way of detecting unknown threats has been to use heuristics to analyze the internal code of a program in order to detect possible malicious instructions. This system works well when the code implements actions that are directly harmful to the system, such as overwriting the boot sectors of a hard disk. Unfortunately, today's threats don't use instructions as obvious as "format c:". Virus creators are well aware that even the most basic heuristic engines will quickly detect this sort of attack. This is why they use techniques that allow their creations to go unnoticed by the classical detection methods. For example, SQLSlammer and Sasser entered computers through instructions sent directly over TCP/IP. Exploiting vulnerabilities (in SQL Server and Windows, respectively), both SQLSlammer and Sasser entered computers without arousing suspicion. Because they did not arrive in a file sent by e-mail or on a disk, and did not use a potentially harmful instruction, classic antivirus programs did not detect them.
What possible solution can there be to this problem? In principle, once a malicious code has entered the system it must perform some kind of action in order to reproduce itself, such as exploiting a vulnerability that causes the system's security to fail. By using a dedicated process to monitor the basic elements of the system, it is possible to detect anomalies. So, for example, a system that monitors the number of outgoing e-mails would identify an increase in activity if a worm were resending copies of itself on a massive scale. Faced with a sharp rise in the level of e-mail activity, there would be no doubt that something strange was happening, and it would be sufficient to look for the process generating these e-mails to find a probable e-mail worm.
In the same way, by monitoring certain actions within vital elements of the system it is possible to avoid some of the typical problems associated with viruses. To return to the example of SQLSlammer, the only way to stop it is through deep inspection of all the TCP/IP packets in a communication, both incoming and outgoing. But in this instance we must go much further than that.
If a virus of this type is capable of attacking in a few minutes, it is not enough just to look for a simple pattern that matches a virus signature; the reason for the transmission of the packet and its contents must be analyzed in order to identify it as a dangerous action. The analysis must be performed on TCP, UDP and ICMP traffic in order to detect attacks at levels above that of the network. With a system for detecting potential threats like the one described, it is possible to avoid denial-of-service attacks, port scanning, direct hacker attacks, IP spoofing, MAC spoofing, etc.
However, technology that performs this detection process cannot on its own offer users or network administrators an adequate level of protection against viruses and intruders. Classic antivirus protection should not be neglected, as the protection it provides is highly effective against known threats, even if it needs to be complemented with detection systems that go beyond simple scanning to extend to the analysis of processes being executed. The arrival of these technologies is imminent, and this will certainly boost the level of protection that all Internet users in 2004 need in the face of new, unknown threats.
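The outgoing-mail monitor described above can be prototyped in a few lines. This is a hedged sketch with simulated counts and an arbitrary spike threshold, not a description of any shipping antivirus product:

```python
from collections import deque

class OutboundMailMonitor:
    """Flag a process when its e-mail rate jumps far above its recent average."""
    def __init__(self, window: int = 10, factor: float = 5.0):
        self.history = deque(maxlen=window)   # per-minute send counts
        self.factor = factor                  # arbitrary spike multiplier

    def observe(self, emails_this_minute: int) -> bool:
        baseline = sum(self.history) / len(self.history) if self.history else 0.0
        anomalous = bool(self.history) and emails_this_minute > self.factor * max(baseline, 1.0)
        self.history.append(emails_this_minute)
        return anomalous

monitor = OutboundMailMonitor()
for count in [2, 3, 1, 2, 250]:   # 250 in one minute looks like a mass-mailing worm
    if monitor.observe(count):
        print(f"Alert: {count} outgoing e-mails in one minute")
```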
<urn:uuid:ba16dc52-c4d0-469b-8359-f5ae97ab33c6>
CC-MAIN-2022-40
https://it-observer.com/beyond-antivirus-software.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00758.warc.gz
en
0.951602
926
3.40625
3
To address a large number of security concerns, it is often recommended that web applications make effective use of the "principle of least privilege". The idea is that privileges should be granted only where they are actually needed. In a previous post, I suggested that Kaspersky's database compromise would not have been so bad if they had made better use of separation of privileges on their databases. The fact that the same database user apparently had access to so many SQL tables is what caused concern for some security professionals. Similarly, correct server permissions might be able to prevent a server compromise when an attacker tries to execute custom PHP or Perl scripts through a vulnerable upload script.
However, even with these precautions, a skilled attacker may be able to compromise a server through an SQL injection vulnerability. The truth is that most backend software has traditional security flaws such as buffer overflows. PHP 5.2.8 fixed various buffer overflow bugs that could allow scripts on the server to run arbitrary code in memory. Most database servers have previously issued fixes for memory corruption; in 2007, for example, MySQL issued patches for privilege escalation issues. Oracle and MSSQL have had their fair share of similar issues.
This leads us to the conclusion that web application security is a process that involves different people. In the case of a custom application, developers need to make it easy for the administrator to implement the principle of least privilege. They also need to test their code to reduce the chances that attackers will find security flaws in it. However, security does not stop there. Systems administrators need to keep the backends abreast of the latest threats. They would also do well to test their servers with security scanners (such as Acunetix) to identify system flaws and to confirm that the web applications were carefully audited. Finally, those making business decisions need to make sure that their choices do not jeopardize the security efforts of those designing and implementing their systems.
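As a concrete sketch of privilege separation at the database level, here is what the idea might look like with MySQL and the mysql-connector-python driver; the schema, account names, and passwords are placeholders, not details from the post:

```python
import mysql.connector  # pip install mysql-connector-python

# Connect as an administrative account (placeholder credentials).
admin = mysql.connector.connect(host="localhost", user="root", password="...")
cur = admin.cursor()

# The web front end only ever reads products and writes orders,
# so grant it exactly that and nothing more.
cur.execute("CREATE USER IF NOT EXISTS 'webapp'@'localhost' IDENTIFIED BY '...'")
cur.execute("GRANT SELECT ON shop.products TO 'webapp'@'localhost'")
cur.execute("GRANT SELECT, INSERT ON shop.orders TO 'webapp'@'localhost'")

# A separate reporting account gets read-only access to the schema.
cur.execute("CREATE USER IF NOT EXISTS 'reports'@'localhost' IDENTIFIED BY '...'")
cur.execute("GRANT SELECT ON shop.* TO 'reports'@'localhost'")

admin.close()
```

With this split, an SQL injection flaw in the storefront exposes only the products and orders tables, not the whole database.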
<urn:uuid:c04ac096-427a-43f9-8067-52f50955f764>
CC-MAIN-2022-40
https://www.acunetix.com/blog/articles/how-can-low-privilege-bugs-lead-to-a-server-compromise/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00758.warc.gz
en
0.961802
410
2.921875
3
This module introduces core principles that are essential to forming a security mindset, including the CIA Triad (Confidentiality / Integrity / Availability), Defense-in-Depth, Authentication, and Authorization. Cybersecurity is a broad subject, so this module provides a general overview of the different domains within security, including Application Security, Network Security, Hardware Security, Physical Security, Mobile Security, Operational Security (SecOps/OpSec), Incident Response, Identity and Access Management, Governance, Risk & Compliance (GRC), and Disaster Recovery / Business Continuity.
Networking is the process of how connections are made and how computers and systems communicate with each other, which forms the basis of cybersecurity. This module provides the foundational understanding of computer networking and covers the network protocols used for data communication, networking hardware, subnetting, networking utilities and traffic analysis.
TOOLS: Kali Linux, TCPDUMP, Netcat, Netstat, Dig, Nslookup, Whois, Packet Tracer, Nmap, Wireshark
This module focuses on how to understand, implement and manage a security program within an enterprise. Security professionals must have strong knowledge of how a company operates to implement effective security policies and procedures. You must understand who the organization's employees, customers, suppliers, and competitors are and how digital information is created, accessed, and shared. Students will learn about the various compliance standards and security frameworks that are most used in the industry.
TOOLS: NIST Cyber Security Framework, CIS Top 18 Framework
OSINT & Social Engineering
Open Source Intelligence (OSINT) is a tradecraft used for conducting reconnaissance and gathering information on an adversary or target to gain insights into people or organizations. Students will learn how to gather information about a victim and craft a phishing campaign to compromise the victim's organization. Knowledge of these simple offensive techniques will help students understand how to craft security awareness campaigns and defend against phishing attacks.
TOOLS: Kali Linux, Google Hacking Database, VPN, TOR Browser, Whois, DNS records, Traceroute, GoPhish, Email Header Analysis, Shodan, theHarvester, OSINT Framework
An enterprise cannot properly defend its information unless it understands who it is defending against. This module discusses the current threat landscape and Advanced Persistent Threats (APTs), and dives into where threats are coming from and what is motivating nation-state and non-nation-state threat actors.
TOOLS: Data Breach Reports, MITRE ATT&CK Framework
Python has become increasingly popular in the offensive and defensive cyber communities. Tool developers and hackers were once Python's primary users, but with the rise of analysis-driven and proactive cyber activities, it is now a staple in the cybersecurity industry. In this self-paced module, students will learn the basics of scripting with Python to solve common and emerging cybersecurity challenges.
TOOLS: Python, Jupyter Lab, Git, PIP, VirtualENV
The ability to prevent and detect cyber-attacks as they occur in real time is the epicenter of defending against cybercrime. Good defense starts with thoughtful network architecture design that leverages Cloud and on-prem technologies. In this module, students will learn about network architecture and design, virtual private cloud (VPC) configuration, endpoint hardening and deployment, and secure two-factor authentication.
TOOLS: Kali Linux, AWS, Terraform, ScoutSuite, DUO, Lynis, Windows Security Policies, OpenVPN
Security Operations: Detection
Cybersecurity should always be proactive rather than reactive. It is much more difficult and expensive to address security after a system has been deployed. In this module, students will learn best practices for network threat modelling, prevention, and continuous monitoring and mitigation.
TOOLS: Kali Linux, IPtables, Snort, SIEM Dashboards, ELK Stack
A cybercriminal cannot do as much damage to an enterprise if they are unable to read the stolen data. This module covers the core concepts of encryption (Boolean logic, modular arithmetic, hashing) and how it is used within secure protocols (SSL, TLS, SSH). It focuses on how to implement and manage encryption policies within an enterprise (signatures, key management, PKI). Students will learn the importance of understanding the vulnerabilities and misconfigurations that commonly go wrong during implementation. At the end of the module, students use brute force, rainbow tables and various other hacker tools to crack passwords from hashed data.
TOOLS: OpenSSL, CertBot, Nginx, MD5 / SHA, Hydra, Hashcat
AppSec & Offensive Security
How do we know if all the policies, procedures, firewalls, ACLs, or intrusion detection and prevention systems are working unless we test them? This module focuses on how to review security programs and perform various security vulnerability assessments throughout an enterprise. Students will embody the mindset of a hacker and perform penetration tests where they mimic real-world attacks to identify methods to circumvent the security features of an application, system, or network. A successful penetration tester has in-depth knowledge of how networks, systems, and applications are defended. This module allows students to test everything they have learned thus far. Students participate in a competitive red team / blue team exercise to showcase their skills in a team environment.
TOOLS: Nmap, Metasploit, Nikto, Nessus, Burp Suite, Veracode, Shodan, Discover scripts, DNSExfiltrator, Qualys
Threat Hunting & Incident Response
There are two types of companies: companies that know they have been breached and companies that do not know they have been breached. One-third of U.S. businesses were breached last year, and nearly 75% were unaware of how the incident occurred. In this module, students learn why it is vital for companies to respond properly after a breach and to have a process for performing forensic analysis to learn how the breach occurred and understand how it affects the company.
TOOLS: NetworkMiner, Windows Event Logs, Volatility, BinText, Incident Response tabletop exercises
During the final portion of the bootcamp, students apply what they have learned to a real-world environment by serving as a security apprentice to complete a security assessment for a non-profit organization. Students will work with their small group to conduct a vulnerability assessment, analyze the results, recommend appropriate remediation measures and develop a report for executive management, thereby gaining valuable experience that will aid them in their job search. Students will follow the Evolve Security services methodology, used by our cybersecurity engineers, to conduct the assessment and learn what it is like to work directly with a client.
In addition to the technical cybersecurity training, students will learn the skills and strategy to confidently navigate their job search with our support.
Job preparation training is integrated throughout the bootcamp and available after graduation. Through career coaching, cybersecurity resume prep, mock interviews, networking strategies, alumni connections and employer partners, students land jobs that launch their cybersecurity careers. Students are encouraged to take advantage of virtual and in-person EvolveSec Meetup events to connect with industry professionals and expand their network. We know what it takes to get a job in cybersecurity and support our students throughout their job search process. Curious? Read testimonials and learn what it takes to get started. Students are expected to spend a total of 20 hours per week on Bootcamp studies, including in-class and individual work. The learning experience involves watching pre-recorded lectures and lab tutorials outside of class, which allows more time in class for discussion and hands-on labs, resulting in a deeper and more practical understanding of cybersecurity. At the conclusion of each module, students complete verbal competency assessments to ensure they fully grasp the concepts and are prepared to communicate their skills in a job interview.
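As a small taste of the hands-on work in the Python scripting and encryption modules above, here is a self-contained sketch (the password is a dummy value) showing why salted, iterated hashes resist the brute-force and rainbow-table attacks students practice with Hydra and Hashcat:

```python
import hashlib, os

password = b"correct horse battery staple"  # dummy value

# Fast, unsalted hashes -- what rainbow tables and Hashcat attack easily.
print(hashlib.md5(password).hexdigest())
print(hashlib.sha256(password).hexdigest())

# Salted, iterated key derivation: a unique salt defeats precomputed
# tables, and 100,000 iterations make brute force far more expensive.
salt = os.urandom(16)
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
print(salt.hex(), derived.hex())
```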
<urn:uuid:a73777c1-21f3-45ed-b198-53b7cb6b59a3>
CC-MAIN-2022-40
https://www.evolvesecurity.com/academy/curriculum
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00758.warc.gz
en
0.917112
1,678
2.671875
3
There has been a lot of misinformation spread recently about the COVID-19 pandemic. Not only have we had to wade through numerous conspiracy theories, but we've also had to navigate misinformation that comes from pandemic-related phishing and scam attempts. It's easy to get caught up in these types of disinformation campaigns. When dealing with a relatively unfamiliar virus, we're learning on our feet, which means the information is constantly evolving. That makes it tough to decipher real info from hoaxes and conspiracies. Need guidance? This online game can help you spot COVID-19 conspiracies. In fact, there are a couple of COVID-related hoaxes and conspiracy theories going around as we speak. These are more than just your average disinformation campaigns, though. They could also be dangerous to your gadgets, and they are spreading on popular apps. Here's how you can avoid falling for these fake campaigns.

Watch out for these fast-spreading COVID-19 hoaxes
Two different hoaxes are spreading through popular apps right now. The first is a COVID-related hoax using WhatsApp video messaging to spread the fake news that people could be hit by a "hack" that will take over their phones.

The WhatsApp COVID hoax
Here's how it works. Victims are sent messages through WhatsApp warning they're going to receive a video file called "Argentina is doing it." While vague, that impending video file is supposedly related to Argentina's response to COVID-19. The message warns that if the video is opened or viewed, it will take over your phone in 10 seconds. That video is never received, though. It's unclear whether the video even exists, but what is clear is that the messages are just a hoax. Snopes even debunked this type of hoax back in July 2020, but the fake campaign is still making the rounds. There are several variations of the message, but they all mention Argentina. Some of these messages also tie in CNN, stating that the national news outlet has reported on the malware scheme or video, to try to give the message credibility. Here's one recent example of this type of hoax message: "They are going to start circulating a video on WhatsApp that shows how the Covid19 curve is flattening in Argentina," reads one example. "The file is called 'Argentina is doing it,' do not open it or see it, it hacks your phone in 10 seconds and it cannot be stopped in any way. Pass the information on to your family and friends." According to Snopes, this hoax can be chalked up to a high-tech equivalent of the chain letters that used to be passed around. There is no threat to your device if you receive it. However, if it goes viral, it could turn into a malware distribution tool over time.

The COVID vaccine conspiracy theory
The other hoax making the rounds is a diagram with a message telling people that there's a 5G chip hidden in the new COVID-19 vaccine. This hoax has gone viral, and if you see it being shared, you need to know the message is fake. This new conspiracy theory is based on a couple of other conspiracy theories making the rounds. The first is that the coronavirus vaccine will secretly implant people with microchip tracking devices, which is obviously not true. The next is about the supposed dangers of 5G broadband cellular networks. Tap or click here for a deep dive into 5G conspiracy theories. When those two are combined, you get this: a wild conspiracy theory that's spreading on all social media platforms and warns people not to get the COVID vaccine.
If they do, it says, they'll be secretly injected with a COVID 5G chip that will track their every move. The message even offers an image of the "COVID 5G Chip Diagram" to try to add credibility to the unfounded information. The only problem? That's not a COVID 5G microchip diagram. It's actually a diagram for a guitar pedal, specifically a schematic of the circuit for a BOSS MT-2 Metal Zone electric guitar pedal. Despite the obvious issues with this message, some are still buying into it. The hoax has already spread like wildfire on Twitter and Facebook.

What to do if you see either of these messages
If you come across the chip diagram described above on social media, you can rest assured that there is no truth to it. There is no COVID 5G microchip being inserted into your arm when you get vaccinated. It's false information, and it's a little wacky, even by conspiracy theory standards. There's no immediate danger in receiving this message, but you may want to flag it on Twitter or Facebook so the social media platforms can handle it. Don't share it, though. Others may not know it's a hoax. If you receive the Argentina message on WhatsApp, it's probably best to delete it. You can take steps to protect yourself from being hacked, but there's almost no likelihood that you'll be hacked via that message. To protect yourself from being hacked via WhatsApp messages, go into your settings and turn off the option to automatically download files, photos, and videos sent to your device. You'll have to choose what to download after changing this setting, but it's worth the hassle. And, as with anything else, don't click on links from unknown parties. Don't click on any unexpected links from people you know, either. Check with the sender first to make sure it's something they actually sent. There's always a chance they were hacked, which could be why you're receiving unsolicited downloads to your device.
<urn:uuid:717ccf4e-79bb-4e49-b41f-197b1f7ad9ce>
CC-MAIN-2022-40
https://www.komando.com/security-privacy/covide-vaccine-hoaxes/772627/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00758.warc.gz
en
0.950021
1,237
2.765625
3
Over the years, the advancement of technology has brought about numerous solutions in the healthcare industry through the introduction of innovative new technologies and devices. Nowadays, the application of spatial computing is becoming essential in the healthcare sector, especially in hospitals, as it shows high potential for promoting better medical practices and services.

What are the contributions of spatial computing in healthcare?
At its core, spatial computing represents the integration of traditional computing with 3D location awareness, giving it the ability to generate virtual outputs and display them on top of the real environment. In healthcare, spatial computing not only brings medical research and virtual assistant roles to a new level but also helps prevent the spread of COVID-19.

Spatial computing in medical research
Continuous medical research aims to find better treatments for, or prevention of, diseases. Spatial computing allows researchers to view and better understand the subject of their studies in a 3D environment, for example, the study of 3D tumours inside a virtual reality (VR) program by Cancer Research UK.

Spatial computing and virtual assistants
One of the most beneficial virtual assistants powered by spatial computing is NuEyes, a pair of AR glasses that serves as a visual prosthetic for individuals with severe vision disability. Equipped with a camera, the AR glasses project an image of the real world to the user with better focus.

Spatial computing against the COVID-19 pandemic
The COVID-19 pandemic has changed the way we live. To prevent the spread of COVID-19, we ought to practice social distancing as much as possible. However, certain medical procedures, such as medical imaging, require physical meetings that may lead to the spread of the disease. Applications of spatial computing such as ProjectDR promote social distancing by allowing medical images like MRI and CT scan data to be displayed directly on the patient's body, providing live feedback to the physician.

To date, the contribution of spatial computing in healthcare is still new to the sector. However, look out for its contributions, not only in healthcare but also in other industries, since it is believed to bring more promising results as the technology advances.
<urn:uuid:1ea9a678-1e68-43ed-85c0-10bfcba756c2>
CC-MAIN-2022-40
https://www.e-spincorp.com/contribution-of-spatial-computing-in-healthcare/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00758.warc.gz
en
0.943601
502
2.890625
3
When you see this term, SoD, with a little O, what does it mean? It can mean two things: Separation of Duties or Segregation of Duties. They actually have the same meaning: splitting a task into parts so that more than one person is required to complete it. The principle of least privilege means workers are given access only to the information and resources that are necessary for a legitimate purpose. In both cases, the idea is to allow the right people to have the access that they need to do their jobs, but not enough access to bypass controls. Controls are a big idea that we will discuss in a moment.

Alice, Bob, and Ordering PCs
As an example, let's take a person who is in accounts payable. Their job is to approve and print checks to suppliers for goods. We also have a role in the system for people who manage vendors and are allowed to create new vendor records in the system and enter invoices. Finally, we have people who buy from vendors, such as the IT or marketing departments. Let's look at Alice, who is in the IT department, and Bob, who is in the finance department. Alice wants to buy some PCs from ABC company. If Bob is in the vendor management role, he can create a new vendor and enter purchase orders and invoices for ABC company so that Alice can order her PCs. And if Bob is also in the accounts payable role, he would be authorized to approve payment to ABC company for Alice's PCs. This doesn't sound like a problem, right? In fact, it streamlines the process for acquiring new PCs. But if Bob is not as honest as he should be, it is possible for him to create a new vendor called BobCo, enter purchase orders and invoices for BobCo, then approve payment. This situation violates the separation of duties rule and circumvents the financial controls that the organization should have in place. And this is why Separation of Duties is such a big deal. And of course, neither Alice nor Bob needs access to the payroll system, nor do they need administrative access to whatever application and database are used to support the purchasing and payment systems.

This is Really Big!
Separation of duties could be a bigger issue than you think it is. In its annual report published in March of 2017, power and robotics firm ABB said losses from the fraud at its South Korean unit, discovered the previous month, would total $73 million. How could this happen? Managers failed to maintain sufficient segregation of duties in the treasury unit of its subsidiary in South Korea and did not provide enough oversight of local treasury activities, ABB CEO Ulrich Spiesshofer and Chief Financial Officer Eric Elzvik said. Think about all of the people and their roles in your organization and how complicated it is to understand what all of them have access to, and the possibilities for accidental or intentional fraudulent behavior. According to EY, a consultancy, organizations must gain an understanding of the scope of sensitive transactions and conflicts that drive their key business processes. These are also the transactions that pose the greatest fraud risk to the organization should someone possess excessive access. Access thresholds must be determined based on the risk and impact to the organization for each potential SoD conflict. The output of the definition phase is a matrix of potential conflicts, including the corresponding risk statement related to each conflict.
The risk statement answers the question, "Why do we care about this transaction combination?" and demonstrates what could go wrong should someone have too much access or authority. The risk statement might say, "A user could create a fictitious vendor or make unauthorized changes to vendor master data, initiate purchases to this vendor and issue payment to this vendor." In this case, the "vendor" might be the fraudulent employee with an excessive and inappropriate level of access in the system. Once the matrix is created, it can be used to assess and flag existing violations so that they can be investigated, or it can be used to prevent access from being granted in the first place. The assessment can be done manually through the use of system access reports and spreadsheets, or the matrix can be loaded into automated systems which will do the work for you. Automated identity and access management software can be used to limit SoD violations from the time a worker hires on until they leave or retire. SoD and least privilege are just two examples of organizational controls which are put in place to assure achievement of an organization's objectives in operational effectiveness and efficiency, reliable financial reporting, and compliance with laws, regulations and policies. And in fact, most financial laws such as SOX, J-SOX, and Basel II directly address financial controls which mandate separation of duties. While the European Union's GDPR isn't prescriptive, proper implementation of SoD and least privilege can help keep system and database administrators out of user data.

We Can Help
Micro Focus can help you easily implement SoD and least privilege without disrupting your business. Remember that risk matrix that I talked about earlier? Once the matrix is created, it can be used to drive Micro Focus Identity Governance and Administration (IGA) software. IGA implements access modeling and transparency by automating the detection of SoD violations and excessive access so that management can evaluate the risk and take appropriate action. IGA will also help enforce SoD and least privilege when new employees are brought on board or when their responsibilities change.

To Sum Up
In summary, Separation of Duties is part of compliance, and it is good management practice. Micro Focus can make life easier for both IT and people managers by automating a worker's access from the time they hire on until they leave or retire. But what if you already have another Identity Manager in place? Micro Focus IGA will work with most of them, allowing you to help secure your environment from fraud without a disruptive rip-and-replace effort. Separation of Duties and least privilege can go a long way toward helping your organization achieve and maintain the security of your data and comply with many government and industry regulations. And Micro Focus software and services can help you get the most out of both, allowing you to implement management best practices, be more secure, and meet compliance rules.
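To make the matrix idea concrete, here is a minimal Python sketch of flagging SoD violations from a role-conflict matrix. The role names, conflict pairs, and assignments are hypothetical illustrations, not Micro Focus IGA's actual data model or API:

```python
# Hypothetical SoD conflict matrix: pairs of roles one person must never
# hold together, each mapped to its risk statement.
CONFLICTS = {
    frozenset({"vendor_management", "accounts_payable"}):
        "User could create a fictitious vendor and approve payment to it.",
    frozenset({"purchasing", "accounts_payable"}):
        "User could initiate purchases and approve their own payments.",
}

# Example access assignments, as might be extracted from access reports.
ASSIGNMENTS = {
    "alice": {"purchasing"},
    "bob": {"vendor_management", "accounts_payable"},
}

def flag_violations(assignments, conflicts):
    """Yield (user, conflicting roles, risk statement) for each violation."""
    for user, roles in assignments.items():
        for pair, risk in conflicts.items():
            if pair <= roles:  # the user holds both conflicting roles
                yield user, sorted(pair), risk

for user, pair, risk in flag_violations(ASSIGNMENTS, CONFLICTS):
    print(f"{user}: {' + '.join(pair)} -> {risk}")
# Output: bob: accounts_payable + vendor_management -> User could create ...
```

The same check, run at provisioning time instead of audit time, is what prevents conflicting access from being granted in the first place.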
<urn:uuid:b8762ae6-1f78-4777-bfcf-551179850460>
CC-MAIN-2022-40
https://blog.microfocus.com/separation-of-duties-and-least-privilege/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00758.warc.gz
en
0.949501
1,305
3.109375
3
A new report from Trend Micro has revealed that the use of ransomware by cyber attackers increased by 752 per cent in 2016. In the last year alone, the security software company estimates that Locky, Goldeneye and other malware designed to extort a ransom from their victims earned attackers $1 billion. Although ransomware did the most damage to businesses and individuals in 2016, the threat of falling victim to this type of malicious code was underscored by the recent WannaCry attacks and by the Eternal Rocks worm, which has no kill switch and continues to infect users. For those unfamiliar with the history of ransomware and how it rose to become the threat it is today, Trend Micro's report, Ransomware: Past, Present and Future, sheds some light on how it began in Russia between 2005 and 2006, along with an explanation of the business model behind ransomware-as-a-service (RaaS). RaaS has provided criminals who are unfamiliar with computer hacking a way to launch their own attacks by hiring code and renting the command and control infrastructure needed to pull off a successful cyber attack. Organisations are often targeted by ransomware attacks over individuals because they need access to the files that have been encrypted by attackers in order to operate, and so will be more likely to pay the ransom needed to unlock their documents. Although many businesses were financially affected by the WannaCry attacks, the attacks did at least help to spread information about ransomware and inform the public that such a threat is only a click away.
Image Credit: Nicescene / Shutterstock
<urn:uuid:4b3bbe07-9b18-46ce-8fd3-5cc3d2a76072>
CC-MAIN-2022-40
https://www.itproportal.com/news/ransomware-attacks-skyrocketed-in-2016/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00758.warc.gz
en
0.964688
322
2.703125
3
With cyber risks rated as one of the top concerns for businesses worldwide, ensuring your security system is protected against cyberattack is vital. A vulnerability within your system could expose your data, allow unauthorised access into your buildings or provide an entry point for hackers to enter your systems. Through the practice of logging Common Vulnerabilities and Exposures (CVEs), security vendors can help ensure customers are aware of any potential risks within their systems and enable them to access information and tools to mitigate the risks. If a vulnerability is discovered, logging a CVE is considered best practice to ensure customers are fully aware of cybersecurity risks; however, only a handful of physical security and video management vendors have published CVEs. CVE is an industry-standard list of common identifiers for publicly known cybersecurity vulnerabilities. CVE provides a single unique identifier for each vulnerability or exposure, which allows quick and accurate access to information, tools, or services designed to fix an issue, across multiple sources. CVE information is free and publicly available. The CVE database is widely used by IT product and service vendors, but there are many potential reasons why security vendors may not consider the need to behave like IT vendors when it comes to their critical enterprise software systems. Traditionally, facilities departments have been responsible for overseeing physical security systems within organisations; however, as security technologies have evolved and integrated systems have become commonplace, IT departments are becoming increasingly involved in managing security system requirements and updates. It is also possible some security vendors are concerned about the poor perception created by publicly announcing their vulnerabilities. Most security vendors will be fixing and releasing updates in response to vulnerabilities within their systems, although some will be releasing these without publicising them, out of fear their customers will view the information negatively and believe they are operating a weak security system. The reality is that most security systems will present vulnerabilities. Sneaking out system fixes leaves customers with no way of knowing that their version of an application has vulnerabilities which could compromise the security of their system. Logging a CVE ensures this information is easily accessible and that customers have a chance to upgrade to mitigate any risks that could leave them vulnerable to attack.

Responsible disclosure process
If a person discovers a vulnerability within a system, the normal process is to report it directly to the vendor and keep the issue confidential for a period of 90 days. This allows the vendor an opportunity to fix the problem or issue a patch before the vulnerability becomes public information. At the end of the 90 days, the vendor or the person who discovered the vulnerability can log a CVE. Anyone can log a vulnerability or exposure within an application via CVE.mitre.org. Organisations can subscribe to CVE feeds to receive updates on vulnerabilities. If a security vendor logs an issue, they will usually simultaneously inform their customers and provide information on how to mitigate the risk. But emails can get lost or go unopened, and customers are unlikely to regularly check a vendor's website for security advisory updates.
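To illustrate what consuming CVE data can look like in practice, here is a hedged Python sketch that polls the public NVD CVE API for entries matching a product keyword. The keyword is hypothetical, and the endpoint and JSON field names follow the published NVD API 2.0 schema, so verify them against the current NVD documentation before relying on this:

```python
# Poll the NVD CVE API (v2.0) for recent entries matching a keyword.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(keyword, limit=5):
    """Return (CVE id, English description) pairs matching the keyword."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    results = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        desc = next((d["value"] for d in cve.get("descriptions", [])
                     if d.get("lang") == "en"), "")
        results.append((cve["id"], desc))
    return results

# Hypothetical keyword; substitute your own vendor or product name.
for cve_id, desc in recent_cves("video management system"):
    print(cve_id, "-", desc[:80])
```

A scheduled job running a query like this against the products an organisation actually deploys is one simple way to turn the CVE list into timely, actionable alerts.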
CVE feeds can help ensure customers are informed in a timely manner, giving them a chance to act before an issue becomes a major security concern. It is in a security vendor's best interests to log CVEs for any known vulnerabilities to ensure their customers can stay up to date and protected against emerging cyber risks.
By Steve Bell, Chief Technology Officer at Gallagher
<urn:uuid:403a7207-dc7c-48a4-8609-5e05c8e3cb90>
CC-MAIN-2022-40
https://internationalsecurityjournal.com/exclusive-take-common-vulnerabilities-and-exposures-seriously/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00758.warc.gz
en
0.945038
671
2.6875
3
Tags are used to group assets, e.g. Database servers or Datacenter location 1. An asset can have any number of tags. Examples of what tags can be used for include:
- Type of software
- Physical location
You can build a tag hierarchy by using the parent tag function. By doing this you can group tags.

Static and dynamic tags
A static tag is a tag that you manually apply to one or more assets. A dynamic tag is applied to and removed from assets based on a rule. These rules are found under Asset Manager > Tags > Tag rules. For example, you can apply a tag based on the operating system, or based on an open port.

Business risk and tags
You can set a business risk for each tag. If an asset has more than one tag, the tag with the highest business risk will be applied. If a business risk is set directly on the asset, that business risk will be the effective one for that asset, regardless of the business risk of any tags applied to the asset.
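A minimal Python sketch of the resolution rule just described follows. The numeric risk scale, field names, and rule predicate are hypothetical illustrations, not the product's actual implementation:

```python
def effective_business_risk(asset_risk, tag_risks):
    """Resolve an asset's business risk per the rules above.

    asset_risk: risk set directly on the asset, or None.
    tag_risks: risks of the tags applied to the asset (may be empty).
    """
    if asset_risk is not None:
        return asset_risk        # risk on the asset always wins
    if tag_risks:
        return max(tag_risks)    # otherwise the highest tag risk applies
    return None                  # no business risk defined anywhere

# A dynamic-tag rule can be thought of as a predicate over asset facts:
def rule_database_server(asset):
    return 3306 in asset.get("open_ports", []) or \
           "mysql" in asset.get("software", [])

asset = {"open_ports": [22, 3306], "software": ["mysql"]}
print(rule_database_server(asset))            # True -> tag gets applied
print(effective_business_risk(None, [2, 4]))  # 4 (highest tag risk)
print(effective_business_risk(5, [2, 4]))     # 5 (asset risk overrides)
```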
<urn:uuid:f989be77-a0b8-461b-9759-4ef4ba97d123>
CC-MAIN-2022-40
https://support.holmsecurity.com/hc/en-us/articles/212841669-What-is-a-tag-and-how-does-it-work-
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00758.warc.gz
en
0.891975
210
2.609375
3
Cryptojacking: Mitigating the Impact
How to Deal With the Growing Threat of Browser-Based Cryptocurrency Mining

Cryptojacking, the infiltration of malware to enable browser-based mining of cryptocurrencies on infected websites, is on the rise. What can be done to minimize the impact of these intrusions, which can lead to poor device performance, high electricity use and even potential damage from overheating? Among the key steps that can be taken are: using browser extensions that block mining scripts, adopting the browser isolation model and carefully monitoring endpoint devices' use of resources (see: How To Shield Browsers From Bad Guys).

Cryptocurrency mining refers to solving computationally intensive mathematical tasks, which are used to verify the blockchain, or public ledger, of transactions. As an incentive, anyone who mines cryptocurrency has a chance of getting some cryptocurrency back as a reward. Some miners hack into computer systems and install malware to get extra computing power. Coinhive's website lists a number of legitimate use cases for mining Monero digital currency in the browser, including, for example, creating a new revenue stream that enables an ad-free experience. But in most cases, cryptocurrency mining is unsanctioned, and those whose computing power is being used are unaware of the intrusion until they see a slowdown in performance or a significant increase in electricity consumption. Since September 2017, about 50,000 websites have been hit with mining malware, with over 80 percent of these unsanctioned intrusions using Coinhive's script, according to the Bad Packets Report. In February, for example, a single cryptojacking attack in the U.K. infected over 5,000 websites. Although many cryptojacking incidents are browser-based, in March, Apple removed a Monero-mining calendar app from its Mac App Store after errors in the software caused the mining program to run indefinitely. The app also used significantly more than the planned 10 percent to 20 percent of a Mac's computing power.

A Victimless Crime?
The damage caused by cryptomining is nowhere near as egregious as that caused by other forms of malware. "Cryptojacking is not a victimless crime, but it does come close to that because it does not steal your data, spy on you or stay persistent on your computer," says Nick Bilogorskiy, cybersecurity strategist at Juniper Networks. "It is a relatively lightweight attack and uses the spare capacity of your CPU and your power grid, so as long as you are not paying extra for those things, there is no damage to you and virtually no cost to cryptojacking." Troy Mursch, a security researcher at Bad Packets Report, notes: "Affected users will notice their device slowing down due to the high CPU usage in addition to higher electricity bills. This process also generates a lot of heat, and we've seen physical damage in some cases with mobile devices."

What's the Fix?
One way to diminish the impact of cryptojacking is to use browser extensions to block cryptojacking scripts, although some of these could be too heavy-handed. There is also an inherent flaw in using browser extensions: they rely on blacklisting and are therefore only effective for sites where there is prior awareness of cryptojacking malware infection. "It is hard to avoid visiting cryptojacking sites completely as any website can be hijacked and embedded with a mining library," Bilogorskiy says. "Because this is a blacklisting approach, it is unlikely to hold long term."
Browser isolation, a cybersecurity model that physically isolates an internet user's browsing activity away from their local networks and infrastructure, can also play a role in diminishing the impact of cryptojacking malware. "Because the cryptojacking software is executing away from the end user's machine, various other execution throttling strategies become possible without affecting the end user's browser," says Kowsik Guruswamy, CTO at Menlo Security, which provides browser isolation technology.

Fighting the Malware
Beyond browser-based protection, there are broader ways to address the cryptojacking problem. "Standard anti-malware strategies apply," says David Houlding, director of healthcare privacy and security at Intel Health and Life Sciences. Those strategies include, for example: safe browsing; not installing questionable software; using anti-malware software; whitelisting executables, especially on servers that have more computational resources and therefore are more appealing targets; and hardening and timely patching of systems to close vulnerabilities. "Another very important tactic is monitoring endpoint devices' resource usage, especially CPU, and remediating as needed," Houlding adds.

A Long-Term Threat?
With the advent of the Coinhive script at the peak of the 2017 cryptocurrency gold rush, the code provided yet another "get rich quick" scheme in decentralized mining. But illicit earnings reported from cryptojacking attacks generally have been minimal. For instance, a cryptojacking campaign operator using Coinhive on 11,000 websites for three months made just $7.69, according to Bad Packets Report. And in the U.K. attack cited earlier, despite infecting over 5,000 sites, the perpetrator only managed to mine about $24 worth of Monero before the operation was shut down, according to The Guardian. Despite recent declines in the value of certain cryptocurrencies, including Bitcoin and Ethereum, decentralized mining is expected to persist. "I don't think cryptojacking will die out in the short term - quite the opposite actually," says Juniper's Bilogorskiy. "Our team has seen a trend of attackers switching away from malvertising and instead embracing cryptojacking. We are only in the beginning of the cryptojacking cycle. Its popularity will go up and down, correlating with Monero and Electroneum prices, which are quite volatile." Any device with a processor is a potential candidate for the activity. And the rise in IoT connected devices opens up a whole new swath of targets. "The browser is a transient session and therefore one way, but not the only way - and not the best way long term - of cryptojacking," Houlding says. "Installing cryptojacking malware on devices is a major issue."
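As a small, hedged illustration of the endpoint resource-monitoring tactic Houlding describes, here is a Python sketch using the psutil library to flag processes with sustained high CPU usage. The threshold, sample count, and interval are illustrative tuning knobs, not vendor guidance:

```python
import time
import psutil

CPU_THRESHOLD = 80.0  # percent of one CPU; tune for your environment
SAMPLES = 5           # consecutive hot samples before flagging
INTERVAL = 2.0        # seconds between samples

# Prime cpu_percent: the first call for a process always returns 0.0.
for proc in psutil.process_iter():
    try:
        proc.cpu_percent(interval=None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

hot_counts = {}
for _ in range(SAMPLES):
    time.sleep(INTERVAL)
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            if proc.cpu_percent(interval=None) > CPU_THRESHOLD:
                key = (proc.info["pid"], proc.info["name"])
                hot_counts[key] = hot_counts.get(key, 0) + 1
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

for (pid, name), hits in hot_counts.items():
    if hits == SAMPLES:  # hot on every sample: worth investigating
        print(f"Sustained high CPU: pid={pid} name={name!r}")
```

A persistently hot, unfamiliar process, especially a browser tab or an unsigned binary, is exactly the signature cryptojacking tends to leave.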
<urn:uuid:a74b7b63-b277-49c3-b8b8-893cb1f2c7c4>
CC-MAIN-2022-40
https://www.inforisktoday.asia/cryptojacking-mitigating-impact-a-10762
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00758.warc.gz
en
0.927248
1,443
2.609375
3
Published On June 07, 2018

Analog security breaches, also known as non-digital security breaches, are still common and can wreak havoc on organizations. Modern-day security breaches are generally linked to genius hackers who can code their way into vulnerable systems and pilfer vital information. However, analog security breaches remain common. In fact, according to the University of Texas at Austin's Identity Theft Assessment and Prediction Report, 53% of documented identity theft cases from 2006 through 2016 came from an analog source. While it might seem like an old-school method of stealing valuable information, dumpster diving still happens. Documents are considered public property once tossed into a dumpster. Therefore, dumpster divers pose a great risk, as organizations generally dispose of documents that contain several levels of confidential information. Dumpster divers typically strike when the iron is hot, meaning they know when fresh documents, shredded or non-shredded, have been tossed. They have a general idea of when the documents are thrown away and when they can dive into the dumpster without being caught. In most cases, they are looking for personally identifiable information (PII) such as names, addresses, emails, identification numbers, fingerprints, dates of birth, genetic information, credit/debit card numbers, telephone numbers and login information. But documents are not the only items that divers target. They also look for sticky notes or pads that contain passcodes, usernames or other information that can help them access confidential records. Additionally, phone lists, calendars and organization charts can be used in social engineering techniques to gain access to business networks. Dumpster divers also target recycling plants. Recycling plants contain not only shredded and non-shredded paper documents but also electronic devices that may still hold accessible information, including computers, laptops, smartphones, tablets and iPads. These devices likely have traces of stored information, allowing thieves to gain access. Garbage dumps also have these items lying around. Beyond simply breaking into an office and stealing a laptop, social engineering attacks are also common when it comes to analog security breaches. These attacks normally involve psychological manipulation. In other words, the identity thief uses certain tactics to trick unsuspecting individuals into giving him or her confidential information. In many cases, the attacker incites fear via phone, email or other electronic communication channels. Phishing is the most common form of social engineering. This is when the attacker sends emails that use misleading content and links in order to con the recipient into clicking on a malicious link or downloading a malicious file. Examples of phishing emails include:
Bank Link: Attackers will send an email with a fake link to your bank, with the goal of having you type in your bank user ID and password.
Dropbox Link: Attackers will send an email with a fake Dropbox password reset. When clicked, it leads to a page stating the browser is out of date and needs an update.
Facebook Message Link: When a person in the public eye, usually a celebrity, dies, a fake Facebook message appears inviting users to click a link or button to view a video of the celebrity stating his or her final words.
Cyberattacks have become a dangerous threat, but it is imperative that business leaders do not discount the impact of analog security breaches. In addition to prioritizing defenses against digital breaches, organizations should consider working with a trusted third-party vendor that can securely and efficiently dispose of their physical and electronic assets.
<urn:uuid:24b9aba5-69c8-4041-abc5-76c735a62835>
CC-MAIN-2022-40
https://www.ironmountain.com/blogs/2018/analog-security-breaches-why-they-are-still-a-threat-to-your-organization
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00758.warc.gz
en
0.942137
725
3.015625
3
With small business cyber crime on the rise, business owners need to do everything they can to protect themselves and their data. Here's how encryption can help. Encryption is a way to secure your data, either while it is stored on a system or device, such as a hard drive or smartphone, or while it is in transit, such as being transmitted across networks. Encryption comes from the Greek word "kryptos," meaning hidden or secret. When data is encrypted, it is transformed so only the intended parties, those holding a secret key, can read it. This is done automatically with the help of encryption technology, which uses an algorithm called a cipher to "disguise" your data and allows people with the right key to decrypt, or unscramble, the information and view the plain text. (For a more in-depth description of how encryption works, review this article from MakeUseOf.) Encryption is used routinely in the digital realm to keep businesses and customers secure. For example, encryption protects your financial information at the ATM, or when you are making an online purchase from a site using SSL. For small businesses, encryption is an underutilized form of protection. When your information is not encrypted, you make a hacker's job easier. Should they infiltrate your network, they will be able to easily use the plain-text information they steal. However, if your data is encrypted, they won't be able to interpret it, or you will have at least made it much more challenging for them to do so. (Cyber criminals can take steps to decrypt data, but it requires tools, expertise, and time, so you're very likely deterring all but the most persistent ones.)

The Role of Encryption in Healthcare Cyber Security
Cyber criminals target the healthcare industry more frequently than any other sector. IBM's 2016 Cyber Security Intelligence Index, a survey of IBM's Security Services clients, found that companies storing patient data experience 36 percent more security threats than organizations in other verticals. These companies are targeted frequently because of the high-value customer data they possess. People's personal health and financial information are prime targets for thieves, who use them for identity theft or ransomware attacks. While many businesses use some form of encryption to protect data in transit, too few use the strategy to protect data at rest. Healthcare data encryption is especially critical, particularly considering the increased role portable technology devices like laptops, mobile phones, and flash drives play in business operations and the rise in data security threats. A security breach isn't just bad news for your clients whose information has been compromised; it is also bad news for your organization. According to the HIPAA Breach Notification Rule, organizations must "provide notification following a breach of unsecured protected health information." If the breach affects more than 500 individuals, the organization also has to inform the media. That is certainly not the kind of press anyone is looking for. Here is where encryption comes in. The incident is not considered a breach if the individual's information is protected and the business can prove that the data has a low probability of being compromised. This is assessed using a variety of risk factors.
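As a concrete illustration of encrypting data at rest, here is a minimal Python sketch using the cryptography package's Fernet recipe. The tool choice and the sample record are illustrative assumptions, not something HIPAA or this article prescribes:

```python
# Symmetric, authenticated encryption of a record at rest using Fernet.
# In production the key must live in a secure key management system,
# never on disk next to the data it protects.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the secret key; guard it carefully
cipher = Fernet(key)

record = b"patient: Jane Doe, DOB 1980-01-01"  # hypothetical PHI
token = cipher.encrypt(record)  # ciphertext, safe to store on disk
print(token)                    # unreadable without the key

# Only a holder of the key can recover the plain text:
print(cipher.decrypt(token))    # b'patient: Jane Doe, DOB 1980-01-01'
```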
Encryption is cited as one of the technologies and methodologies for "rendering protected health information unusable, unreadable, or indecipherable to unauthorized individuals." In short, encryption can protect your customers and your company in the event of a security breach. (Businesses should read and understand HIPAA rules in their entirety and work with their legal counsel to understand their ramifications.) According to the latest healthcare cyber security report by Redspin, Breach Report 2016: Protected Health Information (PHI), there was a 320 percent increase in the number of providers victimized by hackers in 2016 compared to the previous year. Most of these attacks targeted smaller offices. This annual report routinely includes recommendations for reducing vulnerabilities, and year after year, encryption makes the list. The latest iteration acknowledges the growing role laptops, smartphones, and flash drives play in companies' day-to-day operations and, in light of this, describes encrypting data "at rest and in motion" as a "sure-fire, but still often neglected, way to avoid the breach report."1 Encryption is a valuable protective measure for all small businesses, regardless of industry segment. It is a proven way to help protect your valuable data and should be part of your small business's approach to data security. Do you need assistance with small business data encryption? Anderson Technologies, a team of cyber security specialists in St. Louis, has extensive experience working with small businesses to keep their organizations secure. To learn more, call 314.394.3001 or email email@example.com today.
1 "Breach Report 2016: Protected Health Information (PHI)," February 2017, by Cynergistek and Redspin, pg. 18
<urn:uuid:5ce8f9b0-d965-4a8c-b9de-0d0a202f630d>
CC-MAIN-2022-40
https://andersontech.com/encryption-small-business-owners-secret-weapon/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00158.warc.gz
en
0.932295
1,041
3.015625
3
The Evolution of Computer Vision
You can easily find computer vision technology in everyday products, from game consoles that can recognize your gestures to cell phone cameras that can automatically focus on people. It is impacting many areas of our lives. In fact, computer vision has a long history in commercial and government use. Optical sensors that can sense light waves in various spectrum ranges are deployed in many applications: quality assurance in manufacturing, remote sensing for environmental management, and high-resolution cameras that collect intelligence over battlefields. Some of these sensors are stationary, while others are attached to moving objects such as satellites, drones and vehicles. In the past, many of these computer vision applications were limited to closed platforms. But when combined with IP connectivity technologies, they create a new set of applications that were not possible before. Computer vision, coupled with IP connectivity, advanced data analytics and artificial intelligence, will be catalysts for each other, giving rise to revolutionary leaps in Internet of Things (IoT) innovations and applications.

Advancements in Multiple Fields Driving Computer Vision

Environment Designed for Vision
Sight or vision is the most developed of the five human senses. We use it daily to recognize our friends, detect obstacles in our path, complete tasks and learn new things. We design our physical surroundings for our sense of vision. There are street signs and signal lights to help us get from one place to another. Stores have signs to help us locate them. Computer and television screens display information and entertainment that we consume. Given the importance of sight, it's not a big leap to extend it to computers and automation.

What is Computer Vision?
Computer vision starts with the technology that captures and stores an image, or set of images, and then transforms those images into information that can be further acted upon. It comprises several technologies working together (Figure 1). Computer vision engineering is an interdisciplinary field requiring cross-functional and systems expertise in a number of these technologies. As an example, Microsoft Kinect uses 3D computer graphics algorithms to enable computer vision to analyze and understand three-dimensional scenes. It allows game developers to merge real-time full-body motion capture with artificial 3D environments. Besides gaming, this opens new possibilities in areas like robotics, virtual reality (VR) and augmented reality (AR) applications. Sensor technology advancements are also happening rapidly at many levels beyond conventional camera sensors. Some recent examples include:
- Infrared sensors and lasers that combine to sense depth and distance, one of the critical enablers of self-driving cars and 3D mapping applications
- Nonintrusive sensors that track vital signs of medical patients without physical contact
- High-frequency cameras that can capture subtle movements not perceivable by human eyes to help athletes analyze their gaits
- Ultra-low-power and low-cost vision sensors that can be deployed anywhere for a long period of time

Computer Vision Gets Smart
The surveillance industry is one of the early adopters of image processing techniques and video analytics. Video analytics is a specialized application of computer vision that focuses on finding patterns in hours of video footage.
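As a small, hedged illustration of this kind of pattern-finding, here is a background-subtraction sketch in Python with OpenCV; it exemplifies the handcrafted, pre-deep-learning approach discussed next. The file name and the motion threshold are hypothetical:

```python
# Flag frames containing motion by background subtraction.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical footage
back_sub = cv2.createBackgroundSubtractorMOG2(history=500)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = back_sub.apply(frame)  # nonzero pixels = foreground/motion
    # Flag the frame if more than 1% of pixels are moving (tunable).
    if cv2.countNonZero(mask) > 0.01 * mask.size:
        print(f"Motion detected at frame {frame_idx}")
    frame_idx += 1

cap.release()
```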
The ability to automatically detect and identify predefined patterns in real-world situations represents a huge market opportunity with hundreds of applications. The first video analytics tools used handcrafted algorithms that identify specific features in images and videos. They were accurate in laboratory settings and simulation environments, but performance quickly dropped when the input data, such as lighting conditions and camera views, deviated from design assumptions. Researchers and engineers spent many years developing and tuning algorithms, or coming up with new ones, to deal with different conditions. However, cameras and video recorders using those algorithms are still not robust enough. Despite incremental progress over the years, poor real-world performance limited the usefulness and adoption of the technology.

Deep Learning Breakthrough
In recent years, the emergence of deep learning algorithms has reinvigorated computer vision. Deep learning uses Artificial Neural Network (ANN) algorithms, which mimic the neurons of the human brain. Starting in the early 2010s, computer performance, accelerated by graphics processing units (GPUs), grew powerful enough for researchers to realize the capabilities of complex ANNs. In addition, partly thanks to video sites and prevalent IoT devices, researchers now have large, diverse libraries of video and image data with which to train their neural networks. In 2012, a version of the Deep Neural Network (DNN), called the Convolutional Neural Network (CNN), demonstrated a huge leap in accuracy. That development drove renewed interest and excitement in the field of computer vision engineering. Now, in applications requiring image classification and facial recognition, deep learning algorithms even exceed their human counterparts. With deep learning, we are entering an era of cognitive technology where computer vision and deep learning integrate to address high-level, complex problems once the domain of the human brain (Figure 2). We are just scratching the surface of what is possible. These systems will continue to improve with faster processors, more advanced machine learning algorithms and deeper integration with edge devices. Computer vision is set to revolutionize IoT. Other interesting applications include:
- Agricultural drones that monitor the health of crops (http://www.slantrange.com/) (Figure 3)
- Transportation infrastructure management (http://www.vivacitylabs.com/)
- UAV drone inspections (http://industrialskyworks.com/drone-inspections-services/)
- Next-generation home security cameras (https://buddyguard.io/)
These are just a few examples of how computer vision can greatly increase productivity in many fields. We are entering the next phase of IoT evolution. In the first phase, the focus was on connecting devices, aggregating data and building up big data platforms. In the second phase, the focus will shift to making "things" more intelligent through technologies like computer vision and deep learning, generating more actionable data. There are many problems to overcome in making the technology more practical and economical for the masses:
- Embedded platforms need to integrate deep neural network designs. There are difficult design decisions to be made around power consumption, cost, accuracy, and flexibility.
- The industry needs standardization to allow smart devices and systems to communicate with each other and share metadata.
- Systems are no longer passive collectors of data. They need to act upon the data with minimal human intervention, and to learn and improvise by themselves. The whole software/firmware update process takes on new meaning in the machine learning era.
- Hackers could exploit new security vulnerabilities in computer vision and AI. Designers need to take that into account.
In this post, we gave a brief introduction to computer vision and how it is becoming a critical component of many connected devices and applications. Most of all, we predicted its imminent explosive growth and listed some of the hurdles to practical applications. In the next series of posts, we will explore new frameworks, best practices and design methodologies to overcome some of these challenges.
<urn:uuid:eb4db91d-708b-43b9-9ade-8886848b2d50>
CC-MAIN-2022-40
https://www.iotforall.com/computer-vision-iot
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00158.warc.gz
en
0.925423
1,444
3.25
3
Network & Communication Cables That Power Your Internet
Quick Guide to Major Types of Network Cables & Specifications

Network cables are used to connect one network device to another; common types include twisted pair cable, coaxial cable, fiber optic cable and more. The first three are the categories most often referred to. The following sections introduce the basic information, specifications and applications of these network cables.

Twisted Pair Network Cable Types & Specifications
A twisted pair network cable is a type of wiring in which two conductors (usually copper) of a single circuit are twisted together. By twisting two insulated copper wires together at a certain density, the electric waves radiated by each wire in transmission are offset by the electric waves emitted from the other wire, effectively reducing signal interference. Twisted pair cabling is often used in data networks for short- and medium-length connections because of its relatively lower cost compared to optical fiber and coaxial cable.

Twisted Pair Network Cable Types & Standards
According to frequency and signal-to-noise ratio, twisted pair network cables can be divided into Cat 3, Cat 4, Cat 5, Cat 5e, Cat 6, Cat 6a, Cat 7, Cat 7a and Cat 8 Ethernet/copper cables. Cat is short for Category, and all of them are used for short-distance transmission. The detailed specifications of the twisted pair network cable types are listed in the chart below.

|Category|Typical Construction|Max Bandwidth|Transmission Speeds|Applications|
|Cat 3|UTP|16 MHz|10Mbps|10BASE-T and 100BASE-T4 Ethernet|
|Cat 4|UTP|20 MHz|16Mbps|16Mbit/s Token Ring|
|Cat 5|UTP|100 MHz|10-100Mbps|100BASE-TX & 1000BASE-T Ethernet|
|Cat 5e|UTP|100 MHz|1000Mbps-1Gbps|100BASE-TX & 1000BASE-T Ethernet|
|Cat 6|STP|250 MHz|10Gbps (55m)|10GBASE-T Ethernet|
|Cat 6a|STP|500 MHz|10Gbps (100m)|10GBASE-T Ethernet|
|Cat 7|STP|600 MHz|100Gbps (15m)|10GBASE-T Ethernet or POTS/CATV/1000BASE-T over single cable|
|Cat 7a|STP|1000 MHz|100Gbps (15m)|10GBASE-T Ethernet or POTS/CATV/1000BASE-T over single cable|
|Cat 8|STP|2000 MHz|40Gbps (30m)|40GBASE-T Ethernet or POTS/CATV/1000BASE-T over single cable|

Note: For more specific information on how to buy the right copper cable, please refer to the Best Ethernet Cable Buying Guide.

T568A and T568B are two basic wiring standards used by twisted pair network cables. They are telecommunications standards from TIA and EIA that specify the pin arrangements for connectors (often RJ45) on UTP or STP network cables. The number 568 refers to the order in which the wires within the twisted pair cables are terminated and attached to the connector. The only difference between T568A and T568B is that the orange and green pairs are interchanged.

Shielded or Unshielded Twisted Pair Network Cable?
Shielded and unshielded twisted pair cables are the two common twisted pair network cable types in networking, described as STP and UTP cable respectively. UTP refers to cable that lacks metallic shielding around its copper wires; instead, the balanced signal transmission of the twisted pairs helps minimize electronic interference. It is therefore mostly used for short-distance transmission in indoor telephone applications, computer networks, and even video applications.
As for STP cable, the wires are enclosed in a shield that functions as a grounding mechanism to provide greater protection from electromagnetic interference and radio frequency interference, allowing it to carry data at faster speeds. Therefore, STP twisted pair network cable is often used in high-end applications that require high bandwidth, and in outdoor environments.

What Is Coaxial Network Cable?
Coaxial cable is a type of network cable that has an inner conductor surrounded by a tubular insulating layer, surrounded in turn by a tubular conducting shield. The inner conductor and the outer shield share a geometric axis. Many coaxial cables also have an insulating outer sheath or jacket. Coaxial cable is used as a transmission line for radio frequency (RF) signals. Its applications include feedlines connecting radio transmitters and receivers with their antennas, computer network connections, digital audio, and the distribution of cable television signals. Coaxial cable has an obvious advantage over other types of radio transmission line: in a good coaxial cable, the electromagnetic field carrying the signal exists only in the space between the inner conductor and the outer conducting shield. For this reason, coaxial cables can be installed next to metal objects without the power losses that occur in other types of radio transmission line.

What Is Fiber Optic Cable?
Optical fiber cabling is an excellent transmission medium thanks to its high data capacity and the long distances it supports. It has a fiber/glass core within a rubber outer coating and uses beams of light rather than electrical signals to relay data. Fiber optic cables can run for distances measured in kilometers, with transmission speeds from 10 Mbps up to 100 Gbps or higher, because light doesn't diminish over distance the way electrical signals do. Generally, fiber optic cable consists of single mode and multimode fibers; the difference is the core diameter. The former fiber core is 9/125µm, while the latter can be 62.5/125µm or 50/125µm. Both multimode fiber (MMF) and single mode fiber (SMF) can be used for high-speed transmission; MMF is often used for short reach, while SMF is used for long reach.

Twisted pair, coaxial and fiber optic cables are the three major network cable types in communication systems. They have different constructions, speeds, bandwidths and applications, and all of them benefit both our daily life and network construction work. For more detailed differences, please check here: Fiber Optic Cable vs Twisted Pair Cable vs Coaxial Cable.
<urn:uuid:2efb6e8b-3d1d-4476-bdcf-51004c0aee0e>
CC-MAIN-2022-40
https://community.fs.com/blog/network-communication-cables-that-power-your-internet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00158.warc.gz
en
0.891499
1,369
3.3125
3
Published on Friday, Jul 01 2022, by Ahmad El Hajj

Cybersecurity and artificial intelligence (AI) are two very trending topics at the moment. AI has been a pivotal element modifying business strategies, improving decision-making processes, and triggering automation in every industry in the world. The recent AI sentience debate is a clear indicator of how serious and advanced AI is becoming. Cybersecurity is the other important element of today's technological world. With an increasing reliance on data and the move to online services that require an individual's biometrics, security is essential in preventing data theft and associated cybercrimes. AI has undeniably improved cybersecurity practices by allowing real-time analysis of internet traffic to discover possible threats early and take defensive actions. This important learning process, however, hides several disadvantages of AI in cybersecurity. The touted advantages of AI in cybersecurity are real and very useful, but the increasing adoption of AI solutions for security is also causing problems at different levels.

When it comes to maturity in technology, hackers are the best at it. These individuals sit behind computer screens logging data and running advanced analytics to identify any loophole or vulnerability they can use to their benefit. The use of AI in cybersecurity is thus a double-edged sword: it is a race over who can develop the better algorithm for the data circulating online. In this sense, the use of AI is also a big threat to security. Another issue is that while a company is analyzing and learning from data to discover threats, a hacker may concurrently be analyzing the company's cyber-defense mechanisms and policies to find "open doors" that will take them into the system to complete the intended attack.

AI algorithms depend on the analysis of large volumes of data, a key requirement for the developed algorithms to produce accurate outputs. The data a company deals with contains normal traffic related to daily transactions and activities, but also sensitive information about clients, including their biometrics and personal information. What happens to our data once it goes to the AI agent, though, is another matter. Protecting the data is key when AI is used for cybersecurity reasons. The secrecy of clients' data should not be compromised for any reason.

The field of cybersecurity is constantly evolving, with ingenious attacks and threats emerging every now and then. Browser-in-the-browser attacks and increasingly advanced ransomware attacks have been notable examples in 2022. In order to discover such attacks, the AI algorithm needs data for proper training. The increasingly dynamic environment, with threats emerging and evolving, will lead to a surge in the volume of data required, and that data may not be readily available when a fast response to an attack is needed. Both AI's ability to keep track of the exponential growth in data and the availability of data for the algorithm to produce results are significant disadvantages of this approach to cybersecurity.

With the drive towards more and more automation, it is questionable whether automation can be applied to cybersecurity practices as well. AI can certainly assist in processing and learning from data and producing insights; a minimal sketch of the kind of anomaly flagging involved follows below.
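Here is a hedged Python sketch of that kind of assistance, using scikit-learn's IsolationForest to flag anomalous traffic windows. The features and numbers are fabricated for illustration; a real deployment would extract features from live flow records:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes sent, packet count, distinct destination ports]
# per one-minute window. Synthetic "normal" baseline for illustration.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[50_000, 400, 3],
                            scale=[5_000, 40, 1],
                            size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

new_windows = np.array([
    [52_000, 410, 3],       # looks like business as usual
    [900_000, 9_000, 120],  # burst to many ports: possible scan/exfil
])
print(model.predict(new_windows))  # 1 = normal, -1 = flagged anomaly
```

A flagged window is only a lead, not a verdict.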
However, the real decision maker in such a sensitive area, where no errors can be tolerated, is the cybersecurity expert. The only way for AI to replace cybersecurity experts is for it to become sentient or developed enough to think and act like humans do. There is still a long way to go before that materializes. Explainable or interpretable AI is a key intermediate step towards this target: first, we need to understand how AI produces its results. Proper cybersecurity practice requires reducing bias while optimizing the performance of the algorithm.

The adoption of AI will certainly cause major shifts in the cybersecurity job market, as is the case in other industries, but probably at a smaller scale. The level of skill and experience needed to thwart cyberattacks will safeguard the need for security experts to provide the final decision regarding suspicious data patterns. On the other hand, the incorporation of AI will call for new skilled workers who can manage and optimize the performance of the algorithms. Another alternative would be for the existing workforce to be upskilled and retrained to handle the new analysis tools.

As data becomes the basic unit for decision making, AI has invaded all industries and businesses, including cybersecurity. Companies are starting to incorporate learning algorithms into their services in order to manage the different security threats more intelligently. However, the role of AI in cybersecurity should be considered with enough judgment: the addition of AI increases the complexity of the data management process, notably in terms of data privacy and the continuous need for more data.
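To make the traffic-analysis point above concrete, here is a minimal, hedged sketch of the kind of learning-based anomaly detection the article alludes to. It is illustrative only: the flow features, numbers, and thresholds are invented for the example, and a production system would need far richer features and continuous retraining.

```python
# Illustrative sketch only: learning-based anomaly detection over network
# flow features, in the spirit of the AI-for-cybersecurity discussion above.
# Feature values are synthetic; nothing here models a real network.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend historical "normal" traffic: [bytes_sent, duration_s, dest_ports_touched]
normal = np.column_stack([
    rng.normal(5_000, 1_000, 1_000),   # typical payload sizes
    rng.normal(2.0, 0.5, 1_000),       # typical flow durations
    rng.integers(1, 3, 1_000),         # few destination ports per flow
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New flows to score: one ordinary, one that sprays many ports (scan-like).
new_flows = np.array([
    [5_200, 2.1, 1],
    [900, 0.1, 250],
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "anomalous" if label == -1 else "normal"
    print(f"flow {flow.tolist()} -> {status}")
```

The same sketch also illustrates the article's caveat: a model trained only on yesterday's traffic has no notion of genuinely novel attack patterns.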
<urn:uuid:74f03504-79e8-4bc7-9591-e8069046d976>
CC-MAIN-2022-40
https://insidetelecom.com/disadvantages-of-ai-in-cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00359.warc.gz
en
0.947525
1,112
3.046875
3
Schools now have access to technology that uses robots to stream content onto their classroom computers or TVs. This could radically change how teachers relay information and give them access to new kinds of material. Music teachers could use a robot to share the experience of a live concert with their students. Science teachers could show their students what a working lab looks like.

Second-grader attends class – as a robot

Devon Carrow is a second-grader whose life-threatening allergies don't allow him to attend school. But for the past year, reported the Associated Press, he's been able to experience elementary school for the first time: through a 4-foot-tall robot. Devon attends classes by sitting in front of his home computer's camera. He controls the robot using a mouse and keyboard, with his teacher helping him navigate from class to class. At the top of the robot – about where a head might be – is a screen that shows Devon's face, and above that is a camera that wirelessly streams video to his computer. He can hear through the robot's speakers (his teacher wears a microphone), and if he wants to speak, he activates a light on the robot. Devon told AP that going to school is like playing a video game. And the other kids? They don't think it's that weird either. AP pointed out that 7-year-olds are used to video games, avatars and remote-controlled toys, so the robot isn't far outside their realm of experience. Most students don't even acknowledge that Devon isn't actually there. "In the classroom, the kids are like, 'Devon, come over, we're doing Legos. Show us your Legos,'" teacher Dawn Voelker told AP. The robot is part of Devon's special education plan. The school district paid about $6,000 for the robot and pays an additional $100 in monthly fees. The robot, known as VGo, was introduced in 2011 by VGo Communications, based out of Nashua, New Hampshire. Currently, a handful of students across the country use the robot to attend school. VGo is also being eyed by medical and business professionals.

VGo broadens learning at the Pittsburgh Zoo

VGo has also been used to teach students at the Pittsburgh Zoo. Last September, the robot arrived and was given a giraffe-patterned coat of paint. The Verizon Foundation, which donated VGo, wrote that so far the robot has been used to enhance zoo education and outreach programs by broadcasting learning to classroom computers. One program where VGo has made an impact is the Zoo's Sea Turtle Second Chance program. In the past, students had to travel to the Zoo to meet the turtles and learn from scientists. Now, aquarium staff can broadcast lessons and show turtles live while students are in their classrooms. How would you react to VGo being used in your classroom or office? Do you think this technology will improve education? Please share your thoughts below!
<urn:uuid:9ab82940-d892-407c-8f5f-dbda862ec111>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/robot-helps-students-learn-from-a-distance
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00359.warc.gz
en
0.979026
616
3.453125
3
Artificial intelligence is often touted as one of the most groundbreaking technological innovations of the decade, and many are turning to AI-based systems to confront the pandemic and tackle changing workplaces. At the epicentre of this unfolding global crisis, the healthcare industry has been hit the hardest; however, with AI finding applications in disease detection and prediction, patient triaging, administrative processes, and more, the future of the healthcare sector is promising.
- AI's role and application in the healthcare industry
- How AI can help the healthcare industry tackle the pandemic and other challenges
- AI's contribution to healthcare IT management
- How AI can accelerate digital transformation in the healthcare sector
<urn:uuid:7655ff2e-2102-4269-a9e0-95405f71985a>
CC-MAIN-2022-40
https://insights.manageengine.com/artificial-intelligence/leveraging-ai-to-transform-the-healthcare-industry/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00359.warc.gz
en
0.93741
143
2.515625
3
Establishing a proper routine helps to inculcate good habits from early childhood. Bit Guardian Parental Control allows a parent to set a schedule for children. Kids thrive with structure; parents must get them accustomed to routines from an early age. Whether kids are waking up in the morning, having meals, playing outdoors, or getting ready for bed, they need to have a routine in place. Changes can provide a learning opportunity, but children take a while to adapt to anything that disturbs their schedule, and the fear of the unknown makes them stressed. Kids don't have a lot of control over their lives, but routines give them a sense of organization, stability, and comfort. Apart from emotional benefits, a routine provides health benefits as well. Assure your children the safety and security that they deserve with phone monitoring apps.

What can a daily routine for a child look like?
- Bedtimes, bath times, mealtimes, and nap times
- The time they take to get ready in the morning
- Household chores such as dusting, vacuuming, cleaning, gardening, and cooking
- Family time, which must include 'cuddling time', particularly when they are young
- Playtime, comprising outdoor activities in the sunlight
- Restricted screen time

10 reasons a routine is essential for your child's development

Helps your child get accustomed to day-to-day activities

Have you heard of a circadian rhythm? It is a biological process that regulates the sleep-wake cycle and repeats roughly every 24 hours. Yes, it is a body clock that functions optimally when we follow a routine. A well-organized routine helps your child with many day-to-day basics, such as:
- The ability to take naps at definite times
- Going to bed and waking up at a fixed time
- An inclination to eat nutritious, full meals regularly, which supports a healthy immune system
- Playing outdoors, which paves the path to a well-built adult

Strengthens the parent-child relationship

Children learn a lot by observing their elders' behaviors and their surroundings. That's how they begin to understand what is expected of them. They begin understanding the family's values, interests, and culture. For example, the child might notice that, due to professional commitments or other reasons, the family can't have lunch together, but everyone tries to be present for dinner. Following a routine also saves time, which the family can use to make their bond stronger. Even if your child is young, they soon pick up on family traditions. Never let this bond be severed; use an app install blocker to keep your child from falling into the trap of devices.

Limits disagreements, as kids know what to expect

When a child is used to a brushing and bathing routine before and after bedtime, you don't have to persuade or scold them every day. If a child is accustomed to picking up the toys after playing or clearing the table after meals, they do not consider those tasks arduous. As a parent, you don't have to keep ordering them to complete tasks or finish meals. You become more like a friend to them rather than a dictator, which certainly helps when they become techno-savvy teenagers. To monitor your kids in the digital world, use child monitoring apps.

Maintains a peaceful environment at home

Everyone knows what to expect and when, so stress and anxiety are reduced. The child feels responsible when their opinions are asked for and valued. They turn out to be happier and more co-operative, sparing you the need to yell.
Kids who have a peaceful environment at home grow into more emotionally intelligent adults who exude consideration and self-discipline. Overall, you may form a bond that will carry your relationship with your kids through their adult years and beyond.

Instils confidence and independence in your child

With a routine, a child soon becomes independent and starts taking pride in knowing what they are supposed to do. Whether in the family or at school, they feel confident to go ahead and take charge. You can't help but believe that a leader is in the making. Empowered by independence, children are less likely to rebel and fight back. Parental control apps are a blessing to parents who are anxious about their child's safety in the digitally advanced universe.

Develops positive and healthy habits

Routine teaches and creates boundaries, while a lack of routine gets in the way of respecting boundaries. If you allow them to play only after they are done with their schoolwork, they might at times miss out on playing for not taking their homework seriously. It helps them realize that there are consequences to their behavior. They might be allowed limited screen time only after they finish their nutritious meals. If you find your child getting addicted to devices, use a screen time control app to manage their screen time.

The child begins to identify priorities

A routine helps you stay on track. It might involve trivial tasks such as remembering to pay the bills every month or the sequence in which you pursue ordinary jobs. Some days you might be occupied with professional commitments, but you still do not let important details slip from your mind, so the household runs smoothly even with everybody busy with their tasks. A constantly changing world demands that parents keep an eye on their children at all times with parental controls.

Anticipation of a favorite ritual keeps them going through mundane tasks

If you, as a family, follow a particular ritual on weekends, such as going to church, planning play dates for your children, or going to the park, the child remains excited in anticipation. While going through these activities, the child gets a sense of safety and contentment. The feeling that the family loves them boosts their confidence, and they spend the weekdays merrily following the routine.

Excites the child for special family rituals

A child is always ready for cuddling and pouring out a jumble of feelings to their parents. Try to make bedtimes as comfy as possible and converse as much as you can with your child. These tiny moments play a huge role in developing a happy, satisfied, and confident child. The family that eats together and spends quality time together stays together! Trust your child implicitly, but make them understand the necessity of using a child tracking app for their security.

Pacifies a child during times of change

Changes invariably have stressful impacts on a child's life. Events like parents going through a divorce or separation, a change of city, neighborhood, or school, or a family about to expand challenge a child's sense of security. It helps if a child is prepared in advance with positive suggestions. For example, if the kid is changing schools, you can say, "You are going to this amazing new school, where you will make friends for life and have wonderful teachers to help you with your education." Comforting your child with affirmative words about the change certainly helps.
How can you make a routine flexible without unsettling the kids?

Children are capable of handling change if it is expected and occurs in the context of a familiar routine. Neither change nor routine should be imposed without sensitivity. There are times when rules need to be broken, like staying up late to see an eclipse or leaving the dinner dishes in the sink to play charades. Structure need not be oppressive. Establishing and maintaining a routine has many benefits, no doubt, but it's essential to remain flexible. There are situations in life that demand setting the rules aside. If you stubbornly follow a schedule at all times, the benefits will be reduced, and children may feel oppressed by it rather than freed by it.

How to set a child's routine with parental controls

Bit Guardian Parental Control is a kid safety app that offers various functionalities, addressing all the safety-related concerns of parents. It has a feature called 'Time Schedule,' which allows you to set bedtime. During those hours, the kid's phone remains locked, denying them access to all the apps. For games, social media, education apps, or any other apps, you can decide the days and times when kids can use each one. It takes long to understand that time is a luxury that no one can buy. Scheduling is the art of planning your activities so that you can achieve your goals and set priorities according to the time you have available. With a screen time control app, help your child realize the effect that following a schedule has on accomplishing essential tasks.
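As a rough illustration of how a 'Time Schedule' style feature might decide whether a device should be locked, here is a hedged Python sketch. It is not Bit Guardian's actual implementation; the window format, values, and function names are invented for the example.

```python
# Hypothetical sketch of a bedtime/app schedule check; not the actual
# Bit Guardian implementation. Windows may cross midnight (e.g. 21:00-07:00).
from datetime import time

BEDTIME_WINDOWS = {
    # weekday index (0 = Monday): (lock_start, lock_end)
    0: (time(21, 0), time(7, 0)),
    5: (time(22, 30), time(8, 0)),  # later bedtime on Saturday
}

def device_locked(weekday: int, now: time) -> bool:
    """Return True if the child's phone should be locked right now."""
    window = BEDTIME_WINDOWS.get(weekday)
    if window is None:
        return False
    start, end = window
    if start <= end:                      # window within a single day
        return start <= now <= end
    return now >= start or now <= end     # window crosses midnight

print(device_locked(0, time(23, 15)))  # True: past Monday bedtime
print(device_locked(0, time(17, 0)))   # False: afternoon play time
```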
<urn:uuid:be156fc0-4de1-4e3b-b502-8d41bbe09e76>
CC-MAIN-2022-40
https://blog.bit-guardian.com/childs-routine-with-parental-controls-apps/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00359.warc.gz
en
0.959945
1,822
3.09375
3
More and more, developers are becoming aware of the threats posed by malicious code, SQL injection in particular, and of the risks of leaving code vulnerable to such attacks. However, while SQL injection is the most popular type of code injection attack, there are several others that can be just as dangerous to your applications and your data, including LDAP injection and XPath injection. While these may not be as well-known to developers, they are already in the hands of hackers, and they should be of concern. In addition, much of the common wisdom concerning remediation of malicious code injection attacks is inadequate or inaccurate. Following these flawed recommendations will not improve the security of your application; it will only leave you with a false sense of security until the next time your application is compromised and your data is stolen, erased, or tampered with. It is important for developers to acquaint themselves with all the code injection types that exist, as well as with the proper ways to fix any vulnerabilities to malicious code.

The Basic Premise of All Code Injection Types

Many people mistakenly think that they are safe from malicious code injection attacks because they have firewalls or SSL encryption. However, while a firewall can protect you from network-level attacks, and while SSL encryption can protect you from an outside user intercepting data between two points, neither of these options offers any real protection from code injection attacks. All code injection attacks work on the same principle: a hacker piggybacks malicious code onto good code through an input field in the application. Therefore, the protection has to come from the code within the application itself.

There are many motives a hacker using malicious code injection may have. They may wish to access a website or database that was intended only for a certain set of users. They may also wish to access a database in order to steal sensitive information such as social security numbers and credit cards. Other hackers may wish to tamper with a database, lowering prices, for example, so they can steal items from an e-commerce site with ease. And once an attacker has gained access to a database by using malicious code, he may even be able to delete it completely, causing chaos for the business that has been attacked.
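To illustrate the shared premise, here is a hedged Python sketch contrasting a query built by string concatenation (injectable) with a parameterized query. It uses Python's built-in sqlite3 module purely as an example; the table and values are invented.

```python
# Illustrative only: the classic injectable pattern vs. a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled input field

# VULNERABLE: the input is spliced into the SQL text itself, so the OR
# clause becomes part of the query and matches every row in the table.
bad_sql = f"SELECT * FROM users WHERE name = '{user_input}'"
print("concatenated:", conn.execute(bad_sql).fetchall())

# SAFER: the driver passes the value out-of-band; the quote and OR are
# treated as literal characters in a name, matching nothing.
good_sql = "SELECT * FROM users WHERE name = ?"
print("parameterized:", conn.execute(good_sql, (user_input,)).fetchall())
```

The same keep-input-as-data principle applies to LDAP and XPath queries: user input should never be allowed to become query syntax.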
<urn:uuid:32da9abb-1556-4426-8db0-9f219dae2afa>
CC-MAIN-2022-40
https://it-observer.com/malicious-code-injection-not-just-sql.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00359.warc.gz
en
0.95645
450
2.859375
3
Bluecasing is the act of finding devices to intrude on or steal from via Bluetooth (general scanning is sometimes known as War Nibbling). For those who do not know, Bluetooth is a wireless networking technology that is geared more towards PANs (Personal Area Networks), while Wi-Fi (802.11a/b/g etc.) is geared more towards LANs. An oversimplified way to look at it is that Bluetooth is meant to be a wireless replacement for some of the functions USB fulfills, and Wi-Fi is more of a wireless replacement for Ethernet. Many high-end phones, laptops, PDAs, car stereos and other electronics are being shipped with Bluetooth capability so they can communicate, either by sending audio or digital data to each other. For example, your PDA tells your phone to dial a number and send the output to your headset or car stereo.
<urn:uuid:7b79048e-e267-44df-bc60-361f6af25c7e>
CC-MAIN-2022-40
https://it-observer.com/bluecasing-war-nibbling-bluetooth-petty-theft.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00359.warc.gz
en
0.932707
181
2.703125
3
Establishing a program baseline is much easier said than done. To examine the challenges projects face, we need a common perspective on what constitutes a baseline. Below are three common definitions of a project baseline:
- "The approved version of a work product that can be changed only through formal change control procedures and is used as a basis for comparison." – Project Manager Book of Knowledge
- "A project's baseline is defined as the original scope, cost and schedule." – Various Sources
- "The integration of planning, scheduling, budgeting, work authorization, and cost accumulation management processes provides the capability for establishing the performance measurement baseline (PMB)." – ANSI/EIA-748-A Intent Guide

What it boils down to is that planning is such a crucial element of the successful execution of a project that the elements that make up the plan should be preserved. Now that we have a common reference for a baseline, let's explore how…

…baselines are similar to King Tut.
- King Tut died untimely and unexpectedly, and his tomb was prepared hastily. Some project plans can likewise be described as developed quickly, or the project came at the most inconvenient or unexpected time.
- After King Tut's death, a series of rituals began to ensure the preservation of his three souls: Ka, Ba, and Akh. Most projects are also believed to have three souls: cost, schedule, and performance, with preservation in the form of a baseline.
- King Tut's burial rituals are estimated to have taken place over 70 days. Most projects take 30 to 90 days to develop a baseline.
- King Tut's burial rituals were followed by an elaborate funeral procession conducted by the new Pharaoh, the royal family, the vizier, the generals and court dignitaries, with mourners, priests, and slaves looking on. A project baseline review is conducted by significant stakeholders and is often observed by the actual project members in what constitutes a formal meeting for the organization.
- After King Tut's funeral he quickly faded from public consciousness until, some three millennia later, archaeologists discovered the nearly intact tomb and placed many of the artifacts on exhibit for the public to see. Similarly, in many projects, after the baseline is established (approved) it is filed away and forgotten once the project is underway. More often than not, when a project is not meeting stakeholder expectations, someone is sure to drag out the "baseline" and use its artifacts to address the project's shortcomings.
Google use a route planner to present multiple routing and traveling options, coupled with estimated time to travel and traffic status to aid the traveler in making an informed decision on how to get from one location to the next. When a project understands its picture (scope and constraints) it provides the map by which alternatives can be developed to determine the best route to complete the project. If a traveler encounters unexpected delays on the chosen route, Google Maps provides the traveler the ability to obtain information that will help them to decide to stay on current route, take a detour route, reroute, change mode of transportation or abandon the trip.Similarly, a baseline provides the ability for a project to collect information about its current state and compare it against expectations. This will provide the information needed to make an informed decision on how the project should proceed: continue with plan, re-plan, re-baseline, or terminate.
<urn:uuid:436595fb-7b40-4715-b5e0-21b2b7a210c4>
CC-MAIN-2022-40
https://caskgov.com/how-baselines-are-like-ancient-burial-rituals-google-maps-and-cliches/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00359.warc.gz
en
0.942242
896
2.703125
3
The ongoing rise in the global average temperature near the surface of the Earth is evidence of the dramatic change in global climate patterns. This is caused by the increasing concentrations of greenhouse gases produced by human activities: deforestation, industrial processes, and certain agricultural practices that emit gases into the atmosphere. Although the greenhouse effect is natural and necessary to support life on earth, an increase in these gases will change the Earth's climate and affect the overall health of the planet. Many places have experienced increased changes in weather, resulting in intense rain, severe heat waves, floods, and droughts that could affect your company's ability to stay up and running. Disaster recovery planning can help businesses prepare in case a natural disaster unexpectedly strikes.

A disaster recovery plan will help your company prepare for significant damage from the effects of climate change. A disaster recovery plan involves a series of procedures and a set of policies that enable the continuation of vital technology infrastructure following a natural or human-induced disaster. When a company focuses on its technology systems, it will be able to react efficiently to any natural or man-made threat.

Common Strategies for Disaster Recovery:
- Backups made to tape and sent off-site at regular intervals.
- Backups made to disk on-site and automatically copied to off-site disk, or made directly to off-site disk.
- Replication of data to an off-site location, which overcomes the need to restore the data.
- Hybrid Cloud solutions that replicate both on-site and to off-site data centers.
- Private Cloud solutions which replicate the management data into the storage domains that are part of the private cloud setup.
(A rough sketch of the first two strategies appears below.)

Help your company withstand the problems arising from greenhouse gas emissions and the risks associated with climate change by properly implementing a disaster recovery plan. If you have any questions or concerns, allow us to help: Hill Country Tech Guys Your HOME for IT Support & Consulting.
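As a rough, hedged sketch of the first two strategies listed above (regular backups staged locally, with an integrity digest, ready to be copied off-site), consider the Python example below. The paths and schedule are invented placeholders; a real deployment would use dedicated backup tooling.

```python
# Hypothetical sketch: timestamped local backup plus an integrity digest,
# ready to be shipped off-site. Paths are placeholders, not real systems.
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

SOURCE = Path("data")            # the directory we protect
BACKUP_ROOT = Path("backups")    # local staging before the off-site copy

def make_backup() -> Path:
    """Archive SOURCE into a timestamped tarball and record its SHA-256."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = shutil.make_archive(str(BACKUP_ROOT / stamp), "gztar", SOURCE)
    digest = hashlib.sha256(Path(archive).read_bytes()).hexdigest()
    Path(archive + ".sha256").write_text(digest + "\n")
    return Path(archive)

if __name__ == "__main__":
    SOURCE.mkdir(exist_ok=True)
    BACKUP_ROOT.mkdir(exist_ok=True)
    print("wrote", make_backup())  # next step: replicate to off-site storage
```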
<urn:uuid:3e39f4c0-c12e-480d-bbbc-b98744323af5>
CC-MAIN-2022-40
https://hctechguys.com/overcome-climate-change-disaster-recovery-plan/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00359.warc.gz
en
0.918476
389
3.046875
3
Test design concerns making the decisions on (1) what to and what not to test, (2) how to stimulate the system and with what data values, and (3) how the system should react and respond to the stimuli. It is a separate task from test execution and is done before executing the tests against the system. Test design techniques, in turn, are used to identify the test scenarios from which the test cases are created. There are numerous effective and efficient test design techniques for identifying those test scenarios that are applied in the industry. The purpose of this short blog post is to give an overview of some of the most well-known techniques and how you can apply them using tools like Conformiq Designer and Conformiq Creator.

These techniques can be divided into static and dynamic techniques; in this post we focus on dynamic techniques only. The dynamic techniques can be broadly divided into two categories called black-box and white-box testing techniques, where the "color of the box" refers to the visibility that the test has into the internals of the SUT. Also check out a post about Black-box vs. White-box Coverage at https://www.conformiq.com/2013/11/black-box-vs-white-box-coverage/.

The techniques listed below all have their strengths and weaknesses; they are good at finding certain types of defects while they easily miss other types. It is really up to the end user to decide what is important from the testing point of view and then to design test assets by applying a combination of techniques. For any real system, carrying this out manually or even with limited automation easily becomes prohibitively difficult. In addition, while the techniques listed below have been shown to be an effective way of finding software faults, it is crucial to keep in mind that they are only a part of the whole picture, and a technique by itself has only limited value and use. Even if we design an "optimized" set of test inputs for stimulating the system under test by applying a certain test design technique, we also need to be able to understand how the system should react to each input. This is part of the "test oracle problem", which when solved manually is a huge endeavor, takes considerable time, and is a very error-prone process.

The beauty of the Conformiq test generation platform is that it can apply those techniques automatically and also fully automate test oracle generation, saving a lot of time and increasing the quality of testing at the same time. This means that the Conformiq test generation platform – the engine core of both Conformiq Designer and Creator – will automatically design and create test cases with input combinations used as stimuli to the system under test plus an exact expected response from the SUT. There is no need to do additional (manual) test design.

Black-box testing techniques

The black-box approach assumes no internal details of the system and judges the quality and correctness of the system against an external reference such as the system specification. The following is a list of the major types of test design techniques.

Equivalence class partitioning

The idea of equivalence class partitioning is to divide all possible inputs to the system into "equivalence classes", i.e. sets of inputs that should produce "analogous" results and "work the same". By applying the equivalence class partitioning technique we test only a few conditions (maybe even one) from each partition, as it is assumed that all the conditions in a given partition will trigger the same behavior in the SUT. If that condition is working properly, then all the other conditions within the partition are assumed to work properly, meaning that we can dramatically narrow down the number of test scenarios that we need to execute. Similarly, if that one condition happens to trigger an error in the SUT, then it is also assumed that all the other conditions will have the same effect. For more information about equivalence class partitioning, check out this post https://www.conformiq.com/2013/12/equivalence-class-partitioning-and-boundary-value-analysis-as-black-box-test-design-heuristics/.

Boundary value analysis

Boundary value analysis or BVA is a refinement of the equivalence class partitioning method that applies when the equivalence classes involve numbers and there are decision boundaries between them. These decision boundaries are places where the behavior of the system changes. What makes boundary value analysis an interesting method is that it is widely recognized that values on the boundaries cause more errors in the system than values at other locations. Therefore we should always be checking the boundaries, since if the system fails, it is likely to fail on these decision boundaries. For more information about BVA, check out this post https://www.conformiq.com/2013/12/equivalence-class-partitioning-and-boundary-value-analysis-as-black-box-test-design-heuristics/.
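As a hedged illustration of these two heuristics, the sketch below derives test values for a single numeric input with a hypothetical valid range of 1–100. The range and helper names are invented for the example; they are not taken from the Conformiq posts linked above.

```python
# Illustrative sketch: equivalence partitions and boundary values for one
# numeric input whose hypothetical valid range is 1..100.
LOW, HIGH = 1, 100

def equivalence_representatives():
    """One representative per partition: below range, in range, above range."""
    return {"below": LOW - 10, "valid": (LOW + HIGH) // 2, "above": HIGH + 10}

def boundary_values():
    """Classic BVA picks: each boundary plus its immediate neighbors."""
    return sorted({LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1})

print(equivalence_representatives())  # {'below': -9, 'valid': 50, 'above': 110}
print(boundary_values())              # [0, 1, 2, 99, 100, 101]
```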
Decision or cause-effect tables

Decision tables are used to represent how combinations of inputs result in different actions taken, and thus they are a testing technique focused more on business logic and rules. While the equivalence class partitioning and boundary value methods mentioned above are extremely useful for narrowing down the number of inputs and apply to a specific condition or input, decision tables are used to define the system action given a combination of these inputs. They are a systematic way of describing complex business rules, and they help the tester to understand and see the effect of combinations of different inputs. This way they help the tester to identify a subset of the combinations to test, but decision tables themselves do not provide an efficient way of selecting the actual combinations. Therefore, applying decision tables in testing may well result in an inefficient and poor test result.

Use case testing

Use case testing is a method that utilizes specification assets called Use Cases, which are descriptions of a particular use of the system by the end user. Use Cases can be used to design and capture test cases that exercise the whole end-to-end system. Use Cases are described in terms of the end user and what the user does with the system; they do not describe the inputs that should be used to stimulate the system nor the (exact) expected response. However, Use Cases may be very valuable in the test design process for establishing an understanding of the type of test scenarios that should be included in the testing work.

Combinatorial testing (all-pairs, n-wise, etc.)

The basis for combinatorial testing is the interaction principle, which states that most software failures are induced by single-factor faults or by a combinatorial effect – that is, an interaction – of two factors, with progressively fewer failures induced by interactions between more factors. Therefore, if a significant part of the faults in the system under test can be induced by a combination of N or fewer factors, then testing all N-way combinations of values provides a high rate of fault detection. This is also what has been empirically observed over the years. Combinatorial testing is applied to input parameters, and tools for combinatorial testing then attempt to optimize the number of tests you need to run. For more information about combinatorial testing, check out this post https://www.conformiq.com/2014/10/combinatorial-test-data-generation-with-conformiq/.
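A minimal, hedged sketch of the idea: enumerating all combinations of three example parameters versus checking which two-factor pairs a smaller suite covers. The parameters are invented, and real pairwise tools use far smarter selection than a hand-picked suite.

```python
# Illustrative sketch: exhaustive combinations vs. pair coverage of a small
# hand-picked suite. Parameters are invented; real tools optimize selection.
from itertools import combinations, product

PARAMS = {
    "browser": ["Firefox", "Chrome"],
    "os": ["Linux", "Windows", "macOS"],
    "locale": ["en", "fi"],
}

names = list(PARAMS)
exhaustive = list(product(*PARAMS.values()))
print("exhaustive tests:", len(exhaustive))  # 2 * 3 * 2 = 12

def covered_pairs(suite):
    """All (parameter, value) pairs of two factors that a suite exercises."""
    pairs = set()
    for test in suite:
        for (i, a), (j, b) in combinations(enumerate(test), 2):
            pairs.add(((names[i], a), (names[j], b)))
    return pairs

all_pairs = covered_pairs(exhaustive)
suite = [  # a hand-picked 6-test suite
    ("Firefox", "Linux", "en"), ("Firefox", "Windows", "fi"),
    ("Firefox", "macOS", "en"), ("Chrome", "Linux", "fi"),
    ("Chrome", "Windows", "en"), ("Chrome", "macOS", "fi"),
]
print(f"6 tests cover {len(covered_pairs(suite))} of {len(all_pairs)} pairs")
```

Here six tests cover all sixteen two-factor pairs that the twelve exhaustive tests cover, which is the economy the interaction principle promises.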
Classification tree method

The classification-tree method is an approach to partition testing that uses a descriptive tree-like notation. The basic idea is to separate the inputs of the SUT into different classes that directly reflect the relevant test scenarios or classifications, followed by test case selection that combines classes from the different classifications. The selection of classes typically follows techniques such as equivalence class partitioning and boundary value analysis. All the classifications together form the classification tree. In the next step, the test cases are composed by selecting one class from every classification of the classification tree. The selection of test cases can be done manually, but there are tools that automate and optimize this process.

White-box testing techniques

The white-box approach assumes that the tester has full visibility into the box being tested and how the system internally operates. White-box methods are based on the structure of the implementation, and the applied techniques are therefore also referred to as structure-based techniques.

Statement or line coverage

Statement coverage identifies the statements or code lines executed by a test suite and then calculates the percentage of executed statements versus those that are not executed.

Decision or branch coverage

Decision or branch coverage answers the question: has each branch of a control flow construct been executed, or, in different terms, has each possible outcome of a decision been executed? For example, with an "if" statement this means: do we have a test that exercises the "then" branch and a test that exercises the "else" branch?

Condition or predicate coverage

Condition or predicate coverage answers the question: has each Boolean sub-expression been evaluated for both true and false, where a condition is a Boolean sub-expression that cannot be broken down into a simpler Boolean expression?

More advanced condition / decision coverage

There are, in addition, numerous more advanced and exhaustive ways to test and assess the completeness of testing in terms of condition and decision coverage heuristics, such as MC/DC (Modified Condition / Decision Coverage), which requires that each condition be shown to affect the decision outcome separately, and MCC (Multiple Condition Coverage), which requires that all combinations of conditions inside decisions are tested. For more information about various condition and decision coverage options, check out this post https://www.conformiq.com/2014/11/conditional-coverage-heuristics/.
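As a hedged illustration of the gap between decision and condition coverage, consider this small invented predicate and two test sets. The function is purely an example and is not tied to any Conformiq artifact.

```python
# Illustrative sketch: two tests can achieve decision coverage of the "if"
# below while leaving one atomic condition never evaluated as True.
def grant_discount(is_member: bool, total: float) -> bool:
    if is_member or total > 100:   # one decision, two atomic conditions
        return True
    return False

# Decision coverage: the decision evaluates to True once and False once...
decision_tests = [(True, 50), (False, 50)]
# ...yet "total > 100" was False in both tests, so condition coverage
# additionally needs a case where that sub-expression is True:
condition_tests = decision_tests + [(False, 150)]

for name, suite in [("decision", decision_tests), ("condition", condition_tests)]:
    outcomes = [grant_discount(m, t) for m, t in suite]
    print(name, "suite outcomes:", outcomes)
```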
Path testing

With path testing we aim to cover every control path at least once, which also includes exercising all the iterations/loops that the code might have zero, one, and multiple times. If the code includes unbounded iteration, the number of tests needed for testing the code is also unbounded, which naturally is not practical; therefore, for path testing in practice we fix some upper bound for these constructs to limit the number of test cases to a practical size.

Putting it all together

The basic idea is that a test design technique helps us to identify a "good" subset of test cases from infinitely many more. Each of the techniques listed above focuses on certain specific characteristics of the system and has its own strengths and weaknesses. Focusing only on one particular test design technique is therefore not enough, as each technique is good at identifying only certain types of issues with the implementation and is hopeless at finding others. It is really up to you to select the best set of techniques for identifying the test cases that meet your testing requirements and goals.

When it comes to the classification of test design techniques discussed in this post, it is also worth noting that black-box and white-box test design are not mutually exclusive, and even though I have categorized structural coverage aspects under white-box test design techniques, it does not mean that you could not apply those techniques when doing black-box design. What we need to remember is that black-box techniques assume no visibility into the tested system, meaning that we cannot apply those design techniques to white-box aspects. What we can do, however, when we conduct model-based black-box test design, is to apply the techniques classified under white-box techniques to the model rather than to the actual implementation. By applying structure-based test design techniques to black-box test design from models, we can improve the quality of the testing, because we increase the coverage of the testing and therefore the chances of finding defects in the implementation that singular test design techniques might overlook.

In addition, it is important to note that a test suite designed using a black-box approach (even when also applying white-box test design techniques on the model) with demonstrably good coverage of the functional specification (whether implicit or explicit) does not necessarily yield a high white-box coverage measurement. This is because there can be disproportionate amounts of code in the system that are either unused or dead code, or that are related to unspecified functionality. It is known that good coverage of functional requirements does not always lead to high white-box coverage due to the aforementioned issues. On the other hand, good white-box coverage can be of only limited value if the tests are derived from the code itself. The key point of black-box testing is that the system is judged against an independent reference. For more information, refer to this article https://www.conformiq.com/2013/11/black-box-vs-white-box-coverage/.

Testing techniques with the Conformiq test generation platform

As I mentioned earlier in this writing, an additional and very big problem with the test design techniques mentioned in this blog post is that they are typically explained with simple examples, while in the real world these methods need to be applied, for example in the case of equivalence class partitioning, in relation to every equivalence class of the reachable system states of the application that we are testing. For real-world systems, carrying this out manually easily becomes prohibitively difficult or too time consuming to be practical.
A major benefit of the Conformiq test generation platform is that it implements the selected set of test design techniques automatically across the whole system specification (as encoded in a model). The tool is able to generate test suites that cover the aspects of all the user-selected techniques in combination (and, if not, produce a report of what failed to be covered during test generation) while at the same time being of reasonable size, with full coverage and traceability information. By using an automated test design tool, you are relieved from doing test design by hand, a time-consuming and error-prone process for comprehensive coverage that is also very complicated, if not nearly impossible, for larger systems. The combination of coverage methods is a unique value proposition of the Conformiq test generation platform that separates it from other model-based testing approaches and tools available in the market. To learn more about test design, I encourage you to check out a white paper on the topic: https://www.conformiq.com/wp-content/uploads/2015/02/Why-Automate-Test-Design.pdf.
<urn:uuid:11a52fde-37ca-44b3-a69a-0928d4812e2b>
CC-MAIN-2022-40
https://www.conformiq.com/2015/06/test-design-techniques/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00359.warc.gz
en
0.92721
2,963
3
3
Big Data Management can connect to clusters that run different Hadoop distributions. Hadoop is an open-source software framework that enables distributed processing of large data sets across clusters of machines. You might also need to use third-party software clients to set up and manage your Hadoop cluster. Big Data Management can connect to Hadoop as a data source and push job processing to the Hadoop cluster. It can also connect to HDFS, which enables high performance access to files across the cluster. It can connect to Hive, which is a data warehouse that connects to HDFS and uses SQL-like queries to run MapReduce jobs on Hadoop, or YARN, which can manage Hadoop clusters more efficiently. It can also connect to NoSQL databases such as HBase, which is a database comprising key-value pairs on Hadoop that performs operations in real-time. The Data Integration Service pushes mapping and profiling jobs to the Blaze, Spark, or Hive engine in the Hadoop environment.
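For a concrete feel of what "pushing work to the cluster" looks like from client code, here is a hedged PySpark sketch that reads a file from HDFS and runs a Hive-style SQL query. The host names, paths, and table are placeholders, and this is generic Spark usage rather than Informatica-specific code.

```python
# Hypothetical sketch of client-side Hadoop access via PySpark; the
# namenode host, paths, and "events" table are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hadoop-connectivity-sketch")
    .enableHiveSupport()          # lets spark.sql() reach the Hive metastore
    .getOrCreate()
)

# Read a file directly from HDFS...
logs = spark.read.text("hdfs://namenode:8020/data/events.log")
print("lines in HDFS file:", logs.count())

# ...and run a SQL-like query whose execution is distributed over YARN.
top = spark.sql(
    "SELECT user_id, COUNT(*) AS n FROM events "
    "GROUP BY user_id ORDER BY n DESC LIMIT 10"
)
top.show()

spark.stop()
```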
<urn:uuid:0400356b-8a03-4087-8a46-7c2791916852>
CC-MAIN-2022-40
https://docs.informatica.com/data-engineering/data-engineering-integration/10-2-hotfix-1/big-data-management-user-guide/introduction-to-informatica-big-data-management/big-data-management-component-architecture/hadoop-environment.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00359.warc.gz
en
0.928183
212
2.65625
3
AUTHENTICATION VS. AUTHORIZATION

What is authentication? Authentication is the process of validating that a user is who they claim to be. For example, verifying an employee's identity or user ID to log in to a system.

What is authorization? Authorization is the process of giving a user specific access. For example, determining what resources or facilities a user will be able to access.

What's the difference between authentication and authorization? Even though gaining access to resources requires both authentication and authorization, they are two distinct steps in the process. Authentication is the key; authorization is whether that key gives you permission to access something. Authentication is initiated by the user, whereas authorization is determined by a policy and granted by the application, system, or resource being accessed.

What comes first – authentication or authorization? Authentication of the user must occur before authorization is granted for specific access. Both are integral to the process of a user gaining access to a specific application, system, or resource.

What are common authentication methods?
- Transparent Authentication
- Physical Form Factor Authentication
- Non-Physical Form Factor Authentication

Transparent authenticators validate users without requiring day-to-day involvement:
- Digital Certificates
- Device Authentication

Physical form factors are tangible devices that users carry and use when authenticating:
- One-Time Passcode (OTP) Tokens
- Display Card
- Grid Authentication
- One-Time Passcode List

Non-physical form factors are methods of verifying user identities without requiring them to carry an additional physical device:
- Knowledge-Based Authentication
- Out-of-Band Authentication
- Mobile Smart Credentials
- SMS Soft Tokens

What are common approaches to authorization? The ultimate approach to authorization is a zero trust framework based on the concept of least privileged access. The common types of authorization that can be used to build such a framework are:
- Token-based, where a user is granted a token that provides specific privileges and access to data.
- Role-based Access Control (RBAC), where users are divided into roles or groups with specific kinds of access and restrictions.
- Access control lists (ACL), where only certain users on a list may be granted access to a particular application, system, or resource.

How can Entrust help me with authentication and authorization? Entrust's identity and access management (IAM) platform provides user authentication and authorization for an unparalleled number of use cases, including workforce, consumer and citizen.
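To make the authorization approaches above concrete, here is a hedged Python sketch of a role-based access control check. The role and permission names are invented for illustration and are not Entrust's model.

```python
# Hypothetical RBAC sketch: roles map to permission sets; authorization is
# a lookup performed after authentication has established who the user is.
ROLE_PERMISSIONS = {
    "viewer": {"report:read"},
    "editor": {"report:read", "report:write"},
    "admin": {"report:read", "report:write", "user:manage"},
}

USER_ROLES = {"alice": {"editor"}, "bob": {"viewer"}}

def is_authorized(user: str, permission: str) -> bool:
    """Least-privilege check: allow only if some role grants the permission."""
    roles = USER_ROLES.get(user, set())
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(is_authorized("alice", "report:write"))   # True
print(is_authorized("bob", "report:write"))     # False
print(is_authorized("mallory", "report:read"))  # False: unknown users get nothing
```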
<urn:uuid:dd7a92eb-2f68-4c18-8903-d50c3dc0cf73>
CC-MAIN-2022-40
https://www.entrust.com/resources/faq/authentication-vs-authorization
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00359.warc.gz
en
0.868514
528
3.453125
3
LA IT support experts will always emphasize protecting your data from malware attacks. The global cost of online crime will be $61 million by 2021. What that tells you is that cybercrime is on the rise, and if it is not mitigated in good time, millions of businesses will crumble. Ransomware is one of the most common malware attacks.

What is Ransomware?

You must be wondering what ransomware is. It is a virus that encrypts data, making it impossible for the user to access it. The idea is to solicit a ransom fee from the owner of the data. The data system gets decrypted, and the user regains access to the data, only after paying the ransom.

What is Ransomware as a Service (RaaS)?

While talking to your LA IT support provider, you may have come across the term Software-as-a-Service (SaaS). RaaS operates under more or less the same model. Ransomware-as-a-Service is a subscription-based model created to help even novice hackers launch a ransomware attack. The packages have eliminated the need for hackers to code malware themselves. The motivation behind this kind of service is that the return on investment (ROI) is better than that of mere data theft or breach. Look at it this way: if a hacker gets hold of data, he or she needs to find someone willing to buy the data, and it might take time before they see any returns. Ransomware, on the other hand, is a sure bet, as most businesses will pay to regain access.

How to Protect your Business from Ransomware

The availability of RaaS has made it even easier to launch ransomware attacks, and such attacks are increasing by the day. There is, therefore, a need for businesses to provide an extra layer of security for their data. There are several ways to guard against ransomware (a small integrity-check sketch follows below):
- Data backup – With backups, you will not have to pay a ransom for your business data to be released. If you have backups, all you need is a data restore in case of an attack.
- Use a firewall or a good antivirus – Having a firewall will go a long way in protecting your data from such attacks.
- Updating software – Ignoring software updates creates weak points, which make it easy for cybercriminals to launch their attacks. Ensure that your application software and operating systems are up to date.

Organizations need to be on the lookout for attacks from cybercriminals, and especially ransomware attacks. LA IT support companies like Advanced Networks have in-depth knowledge of cybersecurity that will come in handy in combating data security threats. Consider including us in your data security plan. Contact us today.
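One small, hedged illustration of the backup-centric defenses above is verifying that watched (or restored) files still match known-good hashes, since ransomware encryption changes file contents. The paths are placeholders, and this is a generic technique rather than any specific vendor's feature.

```python
# Hypothetical sketch: record SHA-256 digests of important files, then later
# flag any file whose contents changed (as mass encryption would cause).
import hashlib
import json
from pathlib import Path

BASELINE = Path("baseline.json")
WATCHED = [Path("docs/invoice.txt"), Path("docs/contract.txt")]  # placeholders

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_baseline() -> None:
    """Run after a clean backup to capture known-good digests."""
    BASELINE.write_text(json.dumps({str(p): digest(p) for p in WATCHED}))

def changed_files() -> list[str]:
    """Return watched files whose contents no longer match the baseline."""
    known = json.loads(BASELINE.read_text())
    return [p for p, h in known.items() if digest(Path(p)) != h]

# Usage: call record_baseline() after a clean backup; alert on changed_files().
```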
<urn:uuid:e2afdf6e-5c2a-4c67-bf07-b6434c998db8>
CC-MAIN-2022-40
https://adv-networks.com/la-it-support-things-need-know-about-ransomware-as-a-service-raas/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00559.warc.gz
en
0.949436
576
2.640625
3
After four years of development, the Energy Exascale Earth System Model (E3SM) will be unveiled today and released to the broader scientific community this month. The E3SM project is supported by the Department of Energy's Office of Science through its Biological and Environmental Research office. The E3SM release will include model code and documentation, as well as output from an initial set of benchmark simulations.

The Earth, with its myriad interactions of atmosphere, oceans, land and ice components, presents an extraordinarily complex system for investigation. Earth system simulation involves solving approximations of physical, chemical and biological governing equations on spatial grids at resolutions that are as fine in scale as computing resources will allow. The full press release is on the LLNL website.

The E3SM project will reliably simulate aspects of earth system variability and project decadal changes that will critically impact the U.S. energy sector in the near future. These critical factors include a) regional air/water temperatures, which can strain energy grids; b) water availability, which affects power plant operations; c) extreme water-cycle events (e.g. floods and droughts), which impact infrastructure and bio-energy; and d) sea-level rise and coastal flooding, which threaten coastal infrastructure.

The goal of the project is to develop an earth system model (ESM) that has not been possible because of limitations in current computing technologies. Meeting this goal will require advances on three frontiers: 1) better resolving earth system processes through a strategic combination of developing new processes in the model, increased model resolution and enhanced computational performance; 2) representing more realistically the two-way interactions between human activities and natural processes, especially where these interactions affect U.S. energy needs; and 3) ensemble modeling to quantify uncertainty of model simulations and projections.

"The quality and quantity of observations really makes us constrain the models," said David Bader, Lawrence Livermore National Laboratory (LLNL) scientist and lead of the E3SM project. "With the new system, we'll be able to more realistically simulate the present, which gives us more confidence to simulate the future."

Simulating atmospheric and oceanic fluid dynamics with fine spatial resolution is especially challenging for ESMs. The E3SM project is positioned at the forefront of this research challenge, acting on behalf of an international ESM effort. Increasing the number of earth system days simulated per day of computing time is a prerequisite for achieving the E3SM project goal. It is also important for E3SM to effectively use the diverse computer architectures that the DOE Advanced Scientific Computing Research (ASCR) Office procures, to be prepared for the uncertain future of next-generation machines. A long-term aim of the E3SM project is to use the exascale machines to be procured over the next five years.

The development of the E3SM is proceeding in tandem with the Exascale Computing Initiative (ECI). An exascale computer refers to a computing system capable of carrying out a billion billion (10^18) calculations per second. This represents a thousand-fold increase in performance over that of the most advanced computers from a decade ago. "This model adds a much more complete representation between interactions of the energy system and the earth system," Bader said.
“The increase in computing power allows us to add more detail to processes and interactions that results in more accurate and useful simulations than previous models.” To address the diverse critical factors impacting the U.S. energy sector, the E3SM project is dedicated to answering three overarching scientific questions that drive its numerical experimentation initiatives: - Water Cycle: How does the hydrological cycle interact with the rest of the human-earth system on local to global scales to determine water availability and water cycle extremes? - Biogeochemistry: How do biogeochemical cycles interact with other earth system components to influence the energy sector? - Cryosphere Systems: How do rapid changes in cryosphere (continental and ocean ice) systems evolve with the earth system, and contribute to sea-level rise and increased coastal vulnerability?
<urn:uuid:b4c68043-c6f8-4e39-85cd-86650eff5a65>
CC-MAIN-2022-40
https://www.hpcwire.com/2018/04/23/new-exascale-system-for-earth-simulation-introduced/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00559.warc.gz
en
0.898962
849
2.734375
3
Typically, digital currencies are private monetary values that exist only in ledgers. That is, a digital currency, offered and administered by a private organization, bypasses the conventional banking system and governmental currencies, allowing monetary worth to change ownership for the purchase of goods and services. Because these currencies have no physical aspect to protect, their protection and integrity are safeguarded by heavy cryptographic means, and for that they are referred to as cryptocurrencies. Parallel to the soft cryptocurrencies, that is, currencies that exist only in database ledgers, there also exist hard cryptocurrencies, that is, currencies that have a physical representation. An example of such a currency is Peercoin. While the most well-known cryptocurrency is Bitcoin with its Marketplace, there are numerous other such currencies publicly or privately available. The use of cryptocurrencies allows anonymity in the transaction, thus offering privacy over the performed transaction.

The Blockchain Concept

Physical monetary units, such as paper money, are numbered to indicate their uniqueness and are made relatively difficult to duplicate. Similarly, metal monetary units, such as gold and silver coins, have their own intrinsic value, and their duplication is impractical. Digital currency units, on the other hand, are traced through their use as they change owners. This tracing is implemented via a highly secure open ledger (a database) that records all the transactions digital currency units go through. Each transaction is referred to as a block, and the ledger keeps track of each and every transaction that takes place within a particular cryptocurrency system. Each transaction, that is, each use of the digital money, has a timestamp and a link to where that digital money came from. Cryptocurrency management organizations claim that "By design, blockchains are inherently resistant to modification of the data"*. Because the blocks are linked to each other (they are chained) the concept is called blockchain. In a general sense, the concept of creating a blockchain, that is, the recording of a sequence of events permanently stored in a database, is applicable to many areas, including the medical field and the military one.

There are numerous organizations that issue cryptocurrencies; it is believed that there are over seven hundred digital currency issuing organizations. OneCoin claims that "Regulatory challenges related to cryptocurrencies are mainly linked to the anonymity of transactions and the decentralization of financial dealings." Monero claims that it "… is a secure, private, untraceable currency." Many such organizations employ external auditing supervision in order to establish a reputation for maintaining integrity and, above all, privacy.

Authorities are primarily concerned over two main issues regarding the cryptocurrency industry, if we may call it that. The first is the possibility that criminal or illegal activities are concealed behind the anonymous transactions. It is believed that cryptocurrencies "… have been quite a debatable vehicle for payments due to the poor regulations and high attractiveness for international criminal activities." The US Federal Bureau of Investigation is very concerned over the use of cryptocurrencies for criminal activities, having made many arrests for such crimes.
The second concerns cryptocurrency integrity, whereby the minting of the respective currency should correspond to the available conventional monetary assets. If unregulated, cryptocurrencies may become insolvent, unable to convert crypto money into conventional money (dollars, euros, etc.). Addressing such possibilities, insurance companies offer coverage of cryptocurrency losses due to cybercrime, such as theft, hacking, or insolvency.

There are numerous cryptocurrencies, and new ones pop up continuously. The most visible to the public is Bitcoin, the most well-known of the cryptocurrencies. According to the Bitcoin organization, “… the supply of Bitcoin is mathematically limited to twenty-one million bitcoins and that can never be changed,” and the Bitcoin network is unstoppable and uncensorable. The last claim concerns crime prevention agencies worldwide, as it reads as a clear invitation to illegal activities. This statement is not unique to Bitcoin but applies to all digital currencies that maintain uncensorable accounts. A law firm specializing in cyber matters states: “This digital currency (the Bitcoin) that is processed via the Internet is not subject to oversight by a central issuing authority.” Cryptocurrencies might not be subject to oversight by a central issuing authority, and they may be uncensorable, but there is a US law on the books, covered in Chapter Nine, the Communications Assistance for Law Enforcement Act of 1994, that obliges telecom companies “… pursuant to a court order or other lawful authorization, to intercept all of the subscriber’s wire and electronic communications….” It is hard to believe that cryptocurrencies are above this law.

In the cryptocurrency context, the wallet is an account that is anonymously maintained by the cryptocurrency-issuing organization or by a third party. Such accounts may be uni-currency or multi-currency. Using the wallet, one can make direct person-to-person transactions, pay invoices, and request or send payments by email. The cryptocurrency wallet may also be a hardware device that holds the cryptocurrency account password. Such a device interfaces via a USB connection, and it may hold passwords for other accesses as well.

Cybercrime in the Cryptocurrencies Domain

Undoubtedly, the use of digital currencies provides privacy and anonymity not offered by conventional funds-transfer institutions. However, this privacy, and especially the offered anonymity, has made cryptocurrencies a haven for cybercriminals. One type of cybercrime harbored by the anonymity of cryptocurrencies is cyber extortion: the demand for ransom for the prevention of future attacks, or for the restoration of already-locked data. Ransomware, as it is named in cyberspace, is illegal software that installs itself on Web-connected computers and can encrypt or delete directories or files; the encrypted data can be restored only through a password or through remote deactivation by the cybercriminals. Such software is available for purchase on the Dark Web, a term referring to websites that deal in illegal activities and are not indexed by search engines. As for the mode of payment, cryptocurrencies are used, thus concealing both the seller and the buyer.
According to the FBI, “… earlier ransomware scams involved having victims pay ransom with pre-paid cards, victims are now increasingly asked to pay with Bitcoin,…”* Through ransomware, criminals access PCs with the ability to view directories and to selectively damage, delete, or encrypt files, or even change passwords. Often, such malware is installed and the ransom is demanded without the malware being activated. However, there are ransomware scanning and removal tools that can provide a defense in many ransomware cases. Should one fall victim to such cyber extortion, always notify the cognizant authorities, who may be able to remove the criminal’s lock on the attacked PC’s data. Either way, one should always record at least the cryptocurrency account number where the ransom is to be paid, for subsequent investigation by the cognizant authorities. Besides anti-virus software that examines all access attempts to a PC and rejects the illegal ones, there is a practice that can make ransomware less harmful: backing up all files onto storage that is kept disconnected from the Web and write-protected except during the backup itself.

Cryptocurrencies are treated as commodities that are bought and sold in a variety of ways. Similar to stock exchanges, there are cryptocurrency exchanges where one may trade units of cryptocurrencies for other cryptocurrencies or for traditional currencies (dollars, euros, etc.). One may buy cryptocurrencies using one’s own bank account, credit cards, ATMs, or PayPal. However, such purchases are not totally anonymous, since those accounts carry the customer’s identity. Of course, one may also purchase cryptocurrency from the wallet of an existing cryptocurrency holder. Either way, while the identity of the cryptocurrency purchaser might be traceable, the use of the cryptocurrency is claimed to be untraceable.
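To make the chaining idea described above concrete, here is a minimal, illustrative sketch of a hash-linked ledger in Python. It is a teaching toy, not how any real cryptocurrency is implemented; the field names and transaction format are assumptions chosen for clarity.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def new_block(transactions, prev_hash: str) -> dict:
    # Each block records a timestamp and a link (hash) to its predecessor.
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }

# Build a tiny chain: a genesis block plus two transaction blocks.
chain = [new_block([], prev_hash="0" * 64)]
chain.append(new_block([{"from": "alice", "to": "bob", "amount": 5}],
                       prev_hash=block_hash(chain[-1])))
chain.append(new_block([{"from": "bob", "to": "carol", "amount": 2}],
                       prev_hash=block_hash(chain[-1])))

def chain_is_valid(chain) -> bool:
    # Tampering with any earlier block changes its hash and
    # breaks the link stored in every later block.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_valid(chain))          # True
chain[1]["transactions"][0]["amount"] = 500
print(chain_is_valid(chain))          # False: the tampering is detected
```

Real systems add consensus mechanisms (such as proof of work) and digital signatures on transactions; this sketch only shows why a chain of blocks is "inherently resistant to modification": altering any past block breaks every link after it.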
<urn:uuid:2e5c2e42-dc7b-4aa6-86ea-fefcfeb7cbb5>
CC-MAIN-2022-40
https://cybersecuritymag.com/cybersecurity-and-digital-currencies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00559.warc.gz
en
0.942747
1,648
3.546875
4
In digital transformation, companies leverage new tools and technology to improve workflows, develop smart products and processes, achieve greater customer engagement, and create a more efficient operation. It can happen incrementally, with small changes to modernize legacy systems at different levels of the organization, or through a broad shift in the culture and operating model of the entire business. In either case, companies typically pursue one or, depending on the specific goals and digital maturity of an organization, a combination of the 4 main types of digital transformation outlined in this article.

Table of Contents
- What Is Digital Transformation?
- The Benefits of Digital Transformation
- The Types of Digital Transformation

What Is Digital Transformation?

A digital transformation strategy is a plan for creating new processes and practices, or modifying existing ones, through the use of technology so that the business's operations become more efficient. Companies attempt a digital transformation strategy to evolve along with the advancement of technology and to continue meeting the market's ever-changing needs. Many long-established companies adjust their operations to the digital age by adopting new processes at a slower pace, changing a smaller number of functions each time. Others attempt to make the transition rapidly and all at once. The full integration of technology into a company's culture and workflow is called digital transformation.

The Benefits of Digital Transformation

There are many benefits for companies that go through a digital transformation. By incorporating advanced technology, a company can improve internal processes, lower labor costs, and increase productivity. This is because the use of software, applications, automation, digital tools, and data collection allows the business to utilize staff members better, manage a greater number of projects, and meet shorter deadlines, resulting in better products and services.

Of course, one of the most important benefits of digitization for a company is that it allows for a better customer experience. The COVID-19 pandemic forced many businesses to digitize their services and the way they communicate with their customers. While the circumstances around their digital transformation strategies were not favorable, the leap these businesses made will prove to be a great achievement for many of them in the long run.

Our society is generally fast-paced, and one of the most highly valued commodities today is time. People are looking for businesses that provide easily accessible products and quick services. Therefore, companies that have augmented their business models to offer their services and interact with customers online are often preferred over those that have not. For example, many people might prefer a clothing store with an online store option over one requiring customers to shop only in person. Digitization gives customers more options, increasing customer satisfaction.

Another great benefit of digitization is that, through online interactions with its customers, a company can collect data that informs its future marketing and sales approach. Through the collection and study of data, a company can monitor its performance and make more strategic decisions. Moreover, innovation allows companies to surpass competitors that have not yet made the leap in digital transformation.
The Types of Digital Transformation

Let's take a look at the four types of digital transformation.

Business Process Transformation

Business process transformation refers to a company's use of technology to update some of its internal processes. As mentioned earlier, these types of technological improvements can significantly benefit a company by minimizing costs, accelerating the supply chain, and improving the quality of the provided services or products. For example, if a company decides to use Robotic Process Automation (RPA) to automate a routine task previously completed manually by employees, that would be part of transforming an internal business process. In that case, the company would be able to lower labor costs, as employees would no longer have to be involved in that task. Moreover, the productivity of the employees who previously completed the job would increase, as they could now work on less tedious tasks.

Business Model Transformation

Business model transformation attempts to change the traditional way the industry operates. When a company goes through a business model transformation, it employs technology that enables it to offer services or products differently than most other competing companies in the same field. For example, Uber changed the way people look at transportation today. It solved a vital need: people needed to travel from place to place but either did not drive themselves, found public transportation time-consuming and unreliable, or deemed taxis too expensive. Uber's winning element was that it provided the service differently from any competitor at the time. Uber connected clients who needed transportation with drivers, and the overall service was significantly less expensive. Today, many other companies, like Lyft, follow Uber's business model and offer similar services.

Related: Digital Transformation Industries Report (Across 15 Different Sectors)

Domain Transformation

Domain transformation refers to a company utilizing new technology to expand its offerings. Many companies choose to grow by entering new fields and offering a wider net of services or products that do not necessarily all belong to the same market. For example, Facebook started as a strictly social media platform. However, due to the popularity of video-on-demand streaming services, Facebook Watch was launched in 2017. Since its debut, Facebook Watch has developed a lot of original content, including scripted TV series, talk shows, sports shows, documentaries, and much more. Its competitors include other popular streaming services like Netflix, Amazon Prime Video, Disney+, etc.

Cultural/Organizational Transformation

This type of digital transformation journey relates to transforming a business's culture. Total digital transformation cannot take place without the support of the organization's workforce. A company that enters such a process needs to shift its practices, goals, and overall way of thinking towards supporting digital innovation and embracing more forward-thinking attitudes and emerging technology. Changing a company's culture usually needs to start at the top, with staff members following the example of their leaders. In some cases, staffing may also need to be reconfigured, with new talent recruited to help the company succeed in its new direction. In other companies, however, the change in mindset happens organically. Digital transformation is a kind of evolution that is currently taking place in many businesses and organizations.
Digital transformation is kickstarted by introducing new and emerging technology, which allows companies to expand in many different ways and offers deeper insight into the organization and its operational processes. There are four different types of digital transformation: process transformation, business model transformation, domain transformation, and cultural/organizational transformation. Each type refers to a different aspect of the business transformation process and allows a company to advance itself and its business strategy in various ways.
<urn:uuid:8babbe29-19fa-47c9-9a1f-8903b7f0aaa7>
CC-MAIN-2022-40
https://digitaldirections.com/types-of-digital-transformation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00559.warc.gz
en
0.947376
1,385
2.671875
3
"I got a Mac because I'm sick of computer viruses." It seems like only yesterday that computer viruses were simply a part of life. It never felt like a question of if you going to be infected, but rather of when you were going to become infected next. While one of the leading causes of computer virus infections was the abundance of file-sharing sites like Napster, Kazaa, and Limewire, the software seemed to be to blame as well. When the first iMacs came on the scene in the late ‘90s with its iOS, a huge value proposition was that Macintosh computers were immune to PC viruses. We all assumed that the chance of a Mac getting a virus was about as good as getting a cold from a dog. These days, Apple computers and the iOS system have taken the world by storm...but you don’t hear about them being “virus-proof” nearly as much anymore. Why is that? Yes, Macs can get viruses...but they don’t as much. If you’ve owned a Mac and have been somewhat loose with your software updates, you can probably attest to the fact that Macs can get viruses. Despite this fact, Macs don’t seem to not get viruses as often. The keyword here is “seem” to not receive viruses — and for a number of reasons. Macs don’t run Windows. One of the reasons why Macs feel so efficient is because the software was designed for the Mac hardware. This same operating system (iOS) makes the computer less likely to get a virus because most viruses are designed for the Windows OS. It’s kind of like trying to insult someone who speaks a different language. Windows OS is favored by businesses. The aim of most viruses is lucrative targets — businesses, government organizations, etc. For these, Windows is the favored operating system. Windows is very flexible in that it can run on a variety of computers. It’s very affordable in comparison to iOS, again, also because of its hardware options as well as subscription plans. However, Windows’ prevalence makes it much more susceptible to attack. Viruses themselves aren’t as prevalent. Don’t get us wrong — computer viruses still exist. Still, viruses are simply the more “infectious” side of malware in general. Malware has been evolving to become more lucrative for the malware developer and user. While a virus is understood to be highly “contagious” to other computer systems, the greater score is a targeted attack. Spearphishing is beginning to replace broader phishing efforts. If a virus was trying to injure with a shotgun at distance, spearphishing is a trained sniper. Apple doesn’t talk about virus protection as much. If you’ve ever owned a Windows-running PC, you’re used to be bombarded by requests to update your virus protection software. The level of which viruses are discussed can make it feel like the next super-virus is waiting in the wings to pounce on your system. The main reason for this is to scare you into buying more virus protection. Apple, on the other hand, doesn’t bring up the virus protection nearly as much. While you may update your iOS in order to receive the latest iTunes features, virus protection may be included in the upgrade — Apple just doesn’t feel like frightening you every time you open your laptop. So, while Macs can still become infected with computer viruses, they aren't nearly as common simply due to the way in which the majority of Macs are used—for one's personal life. However, as Macs become more prevalent among working professionals, instances of Mac viruses will increase. 
Remaining vigilant against cyber threats is important—no matter what operating system you use.
<urn:uuid:b97cafdb-9889-462c-bde0-437ac01e5265>
CC-MAIN-2022-40
https://www.jdyoung.com/resource-center/posts/view/187/yes-macs-can-get-computer-viruses-jd-young
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00559.warc.gz
en
0.973159
806
2.765625
3
With the rise of high-profile cyber attacks and the increasing threat of data breaches, companies are looking to new solutions for addressing, mitigating and preventing security concerns. Now, more than ever, organizations are setting their sights on artificial intelligence for cybersecurity applications. Capabilities such as rapid data collection and processing, real-time visibility, and enhanced analysis position AI technology as a crucial component in performing breach investigations, identifying vulnerabilities and delivering better insights from collected data. As the technology's security uses continue to expand, the concept of AIOps is gaining traction as the latest innovation in the information technology and cybersecurity space. AIOps – or artificial intelligence for IT operations – is the term used to describe the application of AI technologies like machine learning, natural language processing and others in traditional IT operations. According to Gartner, “AIOps combines big data and machine learning to automate IT operations processes, including event correlation, anomaly detection and causality determination.” Integrated with IT, AI technology can bring a myriad of benefits to an organization's critical operations. Here are the top 3 areas in which AIOps can impact IT:

1. Data collection and processing. Commercial companies and federal agencies process massive amounts of data on a daily basis. AI can help to sift through this data to eliminate redundancies (which can account for up to 99 percent of an organization's data), select elements that indicate a problem and quickly identify potential threats. Additionally, machine learning can capture background information through the data collection process and help to provide greater context in handling future cyber incidents.

2. Correlation and root cause analysis. The AIOps platform can correlate information across multiple data sources to track patterns and give organizations a comprehensive view of the IT environment, from infrastructure and network to applications and storage, in the cloud and in physical assets. This correlation helps organizations to find relationships between data elements and categorize them for further analysis. AIOps can also perform root cause analysis, or inference, to identify the source of issues and prevent them from occurring again.

3. Automation. Because the AIOps platform automates the above processes, IT personnel and employees have the flexibility and freedom to shift their focus to larger, more complex tasks within organizations. Automation can help improve operational efficiency and promote cross-collaboration between IT specialists and disparate departments. It also speeds up the diagnosis of an issue and the resolution time needed to address it.

Learn more about the role AI can play in cybersecurity at the Applying AI to Data for Cyber Hygiene and National Security Forum hosted by ExecutiveBiz Events on March 10. This in-person forum features Stephen Wallace, chief technology officer and emerging technology director for the Defense Information Systems Agency, as the keynote speaker, as well as industry experts from Skyepoint Decisions, NT Concepts and TIBCO. Register now for the Applying AI to Data for Cyber Hygiene and National Security Forum on March 10!
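As a concrete illustration of the anomaly-detection capability Gartner mentions, the toy sketch below flags unusual spikes in a stream of per-minute error counts using a rolling z-score. It is not any vendor's algorithm; the window size, threshold, and sample data are all assumptions.

```python
import statistics

def find_anomalies(counts, window=10, threshold=3.0):
    """Flag points that sit more than `threshold` standard
    deviations above the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid divide-by-zero
        z = (counts[i] - mean) / stdev
        if z > threshold:
            anomalies.append((i, counts[i], round(z, 1)))
    return anomalies

# Hypothetical per-minute error counts from a log pipeline.
errors = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5, 5, 4, 48, 5, 6]
print(find_anomalies(errors))  # [(12, 48, ...)] -> minute 12 is anomalous
```

A real AIOps platform would run checks like this continuously across many correlated data sources, then feed the flagged events into root cause analysis.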
<urn:uuid:2ef800a2-358e-4ec6-be12-53b0bc96f99e>
CC-MAIN-2022-40
https://blog.executivebiz.com/2022/02/3-major-benefits-of-aiops-in-cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00759.warc.gz
en
0.922956
595
2.59375
3
Using Let's Encrypt Free Certs with your Linux Servers

February 16, 2021

Part 2 of our Blog series on certificates focuses on a practical matter: using the free Let's Encrypt certificates to secure servers that may not be publicly available, but still need better security than self-signed certs can give you. As we explained in our last blog on this subject, to use HTTPS encryption with certificates, you can choose from a number of options:
- a self-signed certificate
- a cert from a private Certificate Authority (CA); in this case, you or your company run the CA, not a trivial task!
- a certificate signed by a Root CA you trust

GroundWork supports any of these (or even two at once on the same server). What you choose to use depends on a lot of things, like your tolerance for trust failure reports in your browser from self-signed or private CA certificates. Basically, only root-signed certs are trusted by browsers out-of-the-box, so unless you want to deal with users reporting and complaining about those failures, and explaining how to explicitly trust the certs you use, it's best to use certs signed by a Root CA.

Which one? What kind?

Ok, so you want to use a root-signed certificate, but which one? What kind? Well, again there's a lot to choose from. You can buy a cert from one of a number of CA providers, and the type you get determines the trust level the browser reports, usually with the lock symbol in the address line. Have you ever noticed that sometimes it's green? That's because the domain owner used extended validation. Most of the time, you can use a "standard" validation cert that is verified to belong to you using a DNS TXT record you add to your domain. This makes the little lock symbol turn white on the address bar. The way this works is that the CA gives you a coded string to put in the TXT record, and then verifies it by doing a DNS lookup. Similarly, you can add a coded string to a URL on the domain, validating that it is under your control. Fees for standard validation certs range from free (for Let's Encrypt) to a few tens of dollars a year.

Some CAs offer extended validation procedures, which allow them to certify to a high degree of trust that you exist, and are in fact the owner of the domain(s) you ask the cert to be valid for. Typically you need to provide extensive documentation, affidavits, business licenses, etc. for this type of cert, and fees can be over $1000 a year. Not for everyone, but you do get a green lock symbol, which should make visitors confident that the site owners are who they say they are. Also, you can get a cert valid for one domain name, several at once, or even all subdomains for a given name at one level of the namespace, a so-called "wildcard" cert.

Got all that? Ok, let's start making it easier instead of harder to understand. The Let's Encrypt project is designed to democratize encryption by making the standard validation type of cert free. We like democracy, and encryption, and we like free, so we decided to give you an example of how you can generate and use these free certs with GroundWork, and how to automate that process. Here we go:

Automatic DNS cert validation

DNS allows you to map a domain in your external zone file to an internal address, if you want to. So, let's say you have a GroundWork server which isn't available publicly. The DNS record for a public name for it can happily resolve to a private IP address. This makes the server accessible to you and your co-workers or anyone on your internal network or VPN using the public name, but not to anyone else.
Since servers check the name you access them by against the cert, you can use a cert for the public name on an internally resolved IP address. The GroundWork server never has to be publicly accessible, but it can still use a Root CA signed cert. How do you validate the cert? Well, the public nature of DNS means you can use DNS validation. The way you do this will depend on exactly how your DNS provider works. You can do it by hand (see Generating Let's Encrypt SSL Certificates), but that's inconvenient for the long term, since Let's Encrypt certs expire every 90 days and you need to repeat the process. You will want to automate this. Fortunately, you can.

Use an API

Here at GroundWork, we use Gandi.net (https://gandi.net) for our public DNS, and there's a convenient Gandi DNS plugin for certbot. Your provider might be different, so check to see if you can manipulate the DNS with an API. Odds are you can. There's a list of such providers in the article "DNS providers who easily integrate with Let's Encrypt DNS validation." For a lot more detailed information about using this method, see the Challenge Types page on the Let's Encrypt site.

1, 2, 3 Go

Here are the steps to validate your cert with DNS and Gandi's API. This example works on Ubuntu; the steps might be a little different if you use another Linux distro.

First, install certbot and the Gandi API plugin (the plugin needs pip). Then create a configuration file holding your Gandi API key and lock down its permissions:

```
sudo vi /etc/letsencrypt/gandi.ini
# Live dns v5 api key
# optional organization id, remove it if not used
sudo chmod 400 /etc/letsencrypt/gandi.ini
```

Running certbot with this plugin plugs in the DNS TXT record for the domain that Let's Encrypt checks for. Once it's validated, you will have your key and cert files on disk, ready to be installed into your GroundWork server once it's running. For that last step, never leave the private key lying around where unprivileged users can see it.

Make sure to go back into your DNS every so often and clear out the old TXT records this process creates. It's ok to have a few there, but it might cause problems if there are too many.

You could easily write a shell script that uses the steps above to renew the cert and load it. That script can be added to the GroundWork server host crontab and run once a week; your Let's Encrypt cert will get renewed as soon as it's close to expiration, and you'll never have to touch it again.

We hope this information is useful and practical. Please let us know if you have any questions about applying these steps to your GroundWork server at GroundWork Support, or email us at firstname.lastname@example.org. Happy browsing!
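As a sketch of the renewal script suggested above, here is roughly what a cron-driven version might look like. Treat it as an assumption-laden outline rather than a drop-in script: the authenticator name and flags vary between versions of the Gandi certbot plugin, and the domain and copy destinations below are placeholders you would replace with your own.

```bash
#!/usr/bin/env bash
# Hypothetical weekly renewal script -- verify names and paths before use.
set -euo pipefail

DOMAIN="monitor.example.com"            # placeholder public name
GANDI_INI="/etc/letsencrypt/gandi.ini"  # API key file from the steps above

# Ask certbot to (re)issue the cert via the Gandi DNS plugin.
# The authenticator/flag names depend on the plugin version installed.
certbot certonly \
  --authenticator dns-gandi \
  --dns-gandi-credentials "$GANDI_INI" \
  -d "$DOMAIN" \
  --non-interactive --agree-tos --keep-until-expiring

# Copy the resulting files to wherever your server expects them
# (placeholder paths), then restrict access to the private key.
LIVE="/etc/letsencrypt/live/$DOMAIN"
cp "$LIVE/fullchain.pem" /usr/local/groundwork/certs/
cp "$LIVE/privkey.pem"   /usr/local/groundwork/certs/
chmod 600 /usr/local/groundwork/certs/privkey.pem
```

Added to the host crontab to run weekly, a script along these lines keeps the cert renewed without any manual steps.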
<urn:uuid:6867636e-623b-41b4-81b6-0ecb85c9ceca>
CC-MAIN-2022-40
https://www.gwos.com/using-lets-encrypt-free-certs-with-your-linux-servers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00759.warc.gz
en
0.909342
2,153
2.71875
3
By Calvin Fellows, Writer. Topic: Synthetic Fraud

Identity theft affects millions of people every year, and it's a significant problem for U.S. citizens. According to a Javelin Strategy study, the total number of identity theft victims in 2018 was around 14.4 million. The widespread use of the internet and computers has created opportunities for fraudsters and digital thieves. While there are quality antivirus software programs for individuals and companies, hackers constantly develop new ways of outsmarting security systems. These are some of the important facts to know about this new-age crime. Completely protecting yourself against identity theft is very difficult, but learning about it can help you avoid these problems.

Table of Contents
- Identity Theft Affects Millions of People Every Year in the U.S.
- Identity Theft Happens Regularly
- Social Security Numbers Are a Primary Target
- Recovery from an Identity Theft Case Takes Time
- Children Are Also Targets
- Data Breaches Are a Major Cause of Identity Theft
- Identity Theft Increases During an Economic Crisis
- It's Difficult to Protect Yourself Against Identity Theft Completely
- Identity Theft Related Tax Fraud is Real
- Acting Quickly is Key
- How to Protect Yourself From Identity Theft

Identity Theft Affects Millions of People Every Year in the U.S.

The Federal Trade Commission receives millions of identity theft reports every year. There was a dramatic increase in identity theft and fraud reports from 2019 to 2020. In 2019, the FTC received around 3.3 million reports, and in 2020 the number went up to 4.8 million. The increase was primarily due to the COVID-19 pandemic. Stimulus payment checks were an easy target for online criminals. Nearly one-third of identity theft complaints in 2020 were scams involving government benefits.

Identity Theft Happens Regularly

Identity theft occurs more often than people think. According to a Javelin Strategy and Research study, during 2013 there was an identity theft case every 2 seconds. Data breaches and cyberattacks also occur frequently and are on the rise. They can potentially affect millions of people. In 2016, a business fell victim to a ransomware attack every 40 seconds, according to a cybercrime report by Cybersecurity Ventures.

Social Security Numbers Are a Primary Target

Identity theft criminals want your SSN because it's a crucial and private piece of information, and you can't easily replace it. With your SSN, thieves can access other information about you, such as your date of birth, employment, bank accounts, and credit files. Sometimes, you may not find out your SSN has been compromised until a credit application is turned down or you start getting calls from creditors. Read Also: How Does SSN Identity Theft Work?

Recovery from an Identity Theft Case Takes Time

One of the most surprising identity theft facts is how long it can take victims to recover. If you notice fraudulent activity in your accounts soon enough, the time to recover from the consequences of identity theft can be very short. However, according to a study from the SANS Institute, it might take as much as 600 hours, on average, to recover from a severe case of identity theft.

Children Are Also Targets

Adults are not the only people that identity theft criminals target. Identity theft schemes focusing on children are common. Children are given an SSN at birth but generally don't use it until they turn 18 and go to college.
Because children have spotless credit reports, it's easy for criminals to use a child's SSN to open credit cards or take out loans. In addition, the parents probably won't check the credit reports for years, giving thieves a lot of time.

Data Breaches Are a Major Cause of Identity Theft

A data breach happens when a hacker accesses a company's information or database illegally. Companies like Twitter, Facebook, and Yahoo have all experienced data breaches in recent years. When hackers breach the security systems of a major social media or financial company, they can steal information from millions of people very quickly. In 2019, there were 1,473 data breaches in the U.S., which impacted roughly 300 million people. Read More: Things To Do In Case Of A Data Breach.

Identity Theft Increases During an Economic Crisis

An interesting identity theft fact is that there's more online crime during crises. For example, during the 2008-2009 financial crisis, there was a 33% increase in internet fraud. The same is true of the coronavirus pandemic. The total number of yearly identity theft cases before the pandemic was close to 3 million, but in 2020 the Federal Trade Commission received 4.8 million identity theft and fraud complaints. In times of economic crisis, businesses tend to cut down on costs. Online security is often seen as a non-essential investment, and companies stop paying for it. Usually, these are companies that have never experienced a cyberattack. Hackers know there are opportunities during difficult economic times, and they take advantage.

It's Difficult to Protect Yourself Against Identity Theft Completely

Even if you are very careful with your personal information, all kinds of companies and government agencies can suffer cyberattacks. For example, in 2017, Equifax, the largest credit bureau in the U.S., suffered a data breach that affected 145 million Americans. The breach exposed personal data, including Social Security Numbers.

Identity Theft Related Tax Fraud is Real

Even though tax fraud isn't one of the most common types of identity theft crime, it's still a significant problem. This is when someone uses your personal information, including your SSN, to file a tax return in your name. Thieves usually file tax returns early in the season and have the refunds electronically deposited into their own accounts. In 2020, more than 89,000 Americans were victims of tax fraud, and during that year tax fraud accounted for 7.3% of all types of identity theft.

Acting Quickly is Key

The ordeal might begin when you notice transactions on your bank account that aren't yours. Or maybe a company that has your information announces that its systems have been breached. When you learn you are an identity theft victim, you should act as quickly as possible. First, change your passwords and use different passwords for all your accounts. Then, call your bank and block all accounts and credit cards from further transactions. Once you have taken these two steps to protect yourself, you can file a report with the FTC.

How to Protect Yourself From Identity Theft

There are many things you can do to protect yourself from identity theft. You should be careful with your personal information at all times. For example, don't throw confidential documents with your SSN in the trash; it's better to shred financial or medical records. Always be careful about what you do online. Public Wi-Fi isn't a secure network, and you shouldn't log into your bank accounts or other sensitive websites while using it.
Also, be wary of online scams. The internet is full of fake and fraudulent websites, so you must remain vigilant at all times.
<urn:uuid:c62844da-bc7b-4e79-9f78-5cbf7cc5f929>
CC-MAIN-2022-40
https://www.homesecurityheroes.com/identity-theft-facts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00759.warc.gz
en
0.942217
1,485
2.828125
3
How do wireless printers work?

Printers are still essential in the modern office, but wireless printers are perhaps the most useful.

Printers have been at the heart of the office, and home office, for many years now, and they've evolved significantly in that time. In previous years, you'd need to ensure multiple cables were connected before you could print: one to the power source, a cable to the computer itself, or, for an older network-connected printer, an Ethernet cable directly into the router. Thankfully, the advent of wireless printing has streamlined things significantly since then.

The question of how a wireless printer works can be answered in a multitude of ways because there are so many different technologies that facilitate wireless printing. Indeed, the wireless technology shift brought printers with it too, and being able to send files, photos, and other documents across the network to the printer itself made the task much easier for pretty much everyone who has ever needed to print anything.

No two configurations are the same, though, and the wireless technologies that allow users to print documents wherever there is a network can differ depending on the manufacturers of the printer and the device from which you want to print. Some technologies, like Apple's AirPrint, are only available on certain models of printer, so it's worthwhile understanding your printing needs and wants before making the final decision to purchase the best wireless printer for your business.

What is a wireless printer?

Wireless printers are somewhat self-explanatory, in that they are printers that allow users to print documents without having to physically link a computer to the printer itself. These are sometimes known as Wi-Fi printers because it's common, especially in corporate offices, to use Wi-Fi as the wireless technology through which documents are sent to the printer.

Many offices, libraries, schools, and other shared workspaces will often have one or two printers on each floor of the building. No matter where a user is in the building, they will be able to select their nearest printer and send the documents they need to it, as long as they are connected to the same Wi-Fi network as the printer. It's a highly convenient way of allowing large groups of people to print from the same device. It also means workers in a busy office won't have to fight over USB cables, awkwardly holding a laptop, for example, within the cable's reach while waiting for the documents to print.

The best printers, often found in industrial offices, come equipped with touchscreen user interfaces (UIs) and ship with technology that prevents employees from accessing others' print jobs. They can be accessed by wireless keycards with a tap, other kinds of access fobs, or sometimes just a normal password.

How do wireless printers work?

How each wireless printer works will, of course, vary between models and manufacturers, but the underlying principles remain largely the same across the board. Typically, a wireless printer will need to be paired to the target Wi-Fi network. This is the most common technology used to print wirelessly, although there are others. Many offices use multiple wireless networks and reserve some networks for certain functions, so it's important to connect printers to the network that the staff use day-to-day - that includes being on the same network frequency, too.
Setup may also require users to install drivers or software on their individual machines, but this is relatively easy and quick to do; the manufacturer should provide instructions on how to do this, if necessary. From there (and if installed correctly), the printer should appear in every printing window a user brings up. For example, pressing Ctrl + P on Windows, or CMD + P on macOS, are the usual keyboard shortcuts to print from whatever application a user is running. Among the numerous options that appear in the resulting window, the printer may appear in a dropdown list marked Destination or similar. In other cases, printers may appear as different icons in a graphical user interface (GUI), like how folders can be viewed as small, medium, and large icons in Windows File Explorer, for example.

After the correct printer is selected and all other settings are tailored to the printing project at hand, all that's left to do is click Print, and the documents should be sent over the Wi-Fi network and received by the printer. The wireless printer may print the documents immediately, or send them to a queue. Printing queues are common in offices and prevent unauthorised staff from viewing certain documents. They also prevent people from accidentally taking printed work that isn't theirs away from the printer. In these cases, the company should set the users up with a way of accessing the printing queue, usually via a password or wireless keycard. From there, the user selects their print job and 'releases' it from the queue, starting the actual printing process.

If using Wi-Fi in a large building, ensure the printer is placed in a convenient location for all users and, most importantly, in a location that allows it to receive a strong Wi-Fi signal, because without one there will be no wireless printing.

Other types of wireless printing

Wi-Fi is the most common wireless standard over which wireless printing is carried out, but other technologies exist that may be even faster and more convenient for any given user or business. Bluetooth printers exist and can be used by a multitude of devices to print documents, much like how Wi-Fi-enabled devices can. The downside to Bluetooth is that the maximum range is likely to be considerably shorter than Wi-Fi, making it a sub-optimal choice, and potentially an entirely unworkable one, for large offices.

Near-field communication (NFC) can also be used to connect a device to a printer directly, requiring just a tap between the two devices. The process should be familiar, as it's the same technology that's used in modern smartphones to accomplish phone-based payments. This can be convenient in some use cases, but in most professional settings it will be difficult and laborious to haul a laptop over to a printer, somewhat eliminating the wireless benefit of printing.

Cloud printing is also a fast-growing possibility, and if the user or business takes the time to link compatible services with compatible printers, it can be a big time-saver. Some large tech and cloud companies now have dedicated print services, so instead of printing over a direct Wi-Fi connection to the wireless printer, the user can send a printing job to a cloud printing service - usually linked to an account with whatever company is offering the service - which then sends it to the wireless printer. This can be useful in some use cases, but Wi-Fi is most likely the best choice for most.
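For a feel of what "sending a job over the network" looks like at the lowest level, many network printers also accept raw jobs on TCP port 9100 (the JetDirect convention). The sketch below is a bare-bones illustration rather than a replacement for proper drivers; the printer's IP address is a placeholder, and not every printer has this port enabled.

```python
import socket

PRINTER_ADDR = ("192.168.1.50", 9100)  # placeholder IP; port 9100 = raw/JetDirect

def send_raw_job(data: bytes) -> None:
    """Open a TCP connection to the printer and stream the job bytes."""
    with socket.create_connection(PRINTER_ADDR, timeout=10) as sock:
        sock.sendall(data)

# Plain text followed by a form feed; many printers will print it as-is.
send_raw_job(b"Hello from the network!\f")
```

Real drivers wrap documents in a page description language such as PCL or PostScript before sending them, and discovery protocols such as mDNS/Bonjour handle finding the printer on the network in the first place.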
<urn:uuid:dc2b14df-d3a8-4607-a0a7-2ea5def33cbe>
CC-MAIN-2022-40
https://www.itpro.com/hardware/368802/how-do-wireless-printers-work
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00759.warc.gz
en
0.943041
1,493
3.015625
3
Cyber attacks have become an issue of growing concern for institutions across a variety of industries. With so much of everyday life conducted online, it's no wonder a new breed of hackers is intent on stealing information. How can you be sure your business is protected? In 2018, a number of high-profile companies have already experienced data breaches. Now they are left to deal with the repercussions of a dip in consumer trust, along with penalties, fines and perhaps even lawsuits. The Meltdown, Spectre, Heartbleed, and Shellshock vulnerabilities of recent years have proven that there is no one-size-fits-all solution to this growing problem. The time for businesses to act is now. Man-in-the-middle attacks, distributed denial of service attacks, and session cookie tampering have all played a role in recent data breaches, leading to the conclusion that businesses must do more to prepare themselves against a range of attacks. According to CSO, cybercrime damage is expected to cost over $6 trillion annually by the year 2021. Software firm Rapid7, intent on cracking down on cyber attacks, conducted hundreds of penetration tests over the past 10 months to determine how well networks can combat cyber threats. The study, named "Under the Hoodie 2018," is filled with interesting data that sheds light on some of the most common cyber targets and what businesses can do to arm themselves.

What Is A Penetration Test?

A penetration test, or pentest, is a simulated cyber attack conducted to determine exploitable vulnerabilities in any given computer system. Pen tests can involve the attempted breach of a variety of application systems, including APIs, front- and backend servers, and others. These tests are designed to uncover network vulnerabilities that may make a company susceptible to breaches. Studies of this nature are vital for pinpointing which types of network misconfiguration open the door to hackers, and how user credentials are being used. The insights provided by pen tests can help businesses create a plan of action against attacks, allowing them to fine-tune their security policies and find solutions to fix vulnerabilities before they're impacted.

What Are The Stages Of Pen Testing?

Pen testing is typically divided into five stages. The first involves planning and reconnaissance, which means defining the goals of a test and clearly outlining the systems and testing methods that will be addressed. Gathering data is another important part of this stage, as it allows the test conductors to more clearly understand a target and the potential vulnerabilities to be encountered. The second stage involves scanning and static analysis, which means inspecting an application's code to determine its behaviors. Dynamic analysis, also part of the second stage, involves inspecting this code in a running state, offering a real-time view into its performance. A pen test's third phase most often includes gaining access to a network by way of web application attacks to uncover a specific target's vulnerabilities. It is then the duty of the tester to attempt to exploit these by escalating privileges, intercepting traffic, stealing data, or doing other damage. Maintaining access, the fourth stage of a pen test, involves determining how a specific vulnerability can be used to present a persistent threat. Often, persistent threats are used to steal sensitive data from an organization over a period of months. Finally comes the analysis of the collected data.
The tester will compile a report that details which specific vulnerabilities were exploited, what type of data was accessed, and the amount of time the tester was able to maintain access to the system while remaining undetected. All of this information combined paints a clear picture of what a business can do to protect itself against similar attacks in the future.

What Were The Results?

Rapid7 conducted 268 pen tests across a wide range of industries, 251 of which involved live production networks likely to hold real and confidential data. In 59% of these 268 tests, the simulated hackers attacked from outside the target network, which would most likely be the case for the majority of today's businesses. The study helped gather a world of insight into the everyday user's online security habits, or lack thereof. One interesting finding concerned password patterns. The findings suggest that the majority of users choose passwords of the minimum required length and tend to use numbers at the end of the password. The most common password used? "Password1." According to a popular password hacking website, it would take hackers 0.29 milliseconds to crack this password. Overall, the study concluded that Rapid7 testers exploited at least one in-production vulnerability in nearly 85% of all engagements. For internally based penetration tests, in which the pen tester had local network access, that number rose to 96%. This means that success rates are significantly higher for penetration testers when they have access to internal LANs or WLANs. This type of information is imperative in giving businesses a leg up in preparing their defense against cyber attacks.
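To illustrate the kind of reconnaissance and scanning performed in the early pen test stages described above, here is a minimal TCP connect scan in Python. It is a teaching sketch only (professional testers use purpose-built tools such as Nmap), the target address is a placeholder, and you should only ever scan systems you are explicitly authorized to test.

```python
import socket

TARGET = "127.0.0.1"                      # placeholder: scan only with permission
COMMON_PORTS = [22, 80, 110, 143, 443, 3306, 3389, 8080]

def scan(host: str, ports) -> list:
    """Return the ports that accept a TCP connection (i.e., appear open)."""
    open_ports = []
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)               # keep the scan fast
        try:
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
        finally:
            sock.close()
    return open_ports

print(f"Open ports on {TARGET}: {scan(TARGET, COMMON_PORTS)}")
```

A connect scan like this is noisy and easily logged, which is exactly why it suits the authorized, simulated attacks described in this article.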
<urn:uuid:8e43f443-a0e3-45c3-9dcc-146212475f89>
CC-MAIN-2022-40
https://www.intelice.com/simulated-attacks-reveal-how-easily-corporate-networks-fall-prey-to-hackers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00759.warc.gz
en
0.948948
1,013
2.515625
3
Phishing remains among the top security threats to any organization, as this attack vector is often where a hacker first attempts to steal credentials and access victim networks, with the end goal of stealing secrets, deploying ransomware, or carrying out other malicious activities. Since the start of the COVID-19 pandemic, phishing attempts have grown considerably, with nearly every sector reporting increases above 66%, according to August 2021 research from Sophos. Those attacks are growing in sophistication, with hackers using current events and pop culture to lure victims into clicking on malicious links and giving up their information. The COVID-19 pandemic was a popular social engineering lure that hackers used in phishing attacks, and it remains the most common theme of phishing emails. According to data from cybersecurity firm Positive Technologies on the 10 most popular phishing topics in 2021, pandemic-related topics remain the most used by hackers. One common lure was an employee vaccination poll seemingly sent by HR, which asks for corporate credentials via a fake authentication form. Other pandemic-related phishing attempts exploited the benefits given to vaccinated employees in certain countries, with fake government websites appearing to offer vaccination QR codes. While the pandemic dominated phishing email topics, pop culture was also a consistent theme, with hackers mimicking services such as Netflix or creating fake merchandise websites selling products affiliated with popular TV shows or movies. Sporting events such as the FIFA World Cup and Olympic Games were also popular phishing topics, with some such attacks appearing a year in advance of those events. Positive Technologies' report also lists dating services as a phishing attack theme, as more people signed up for dating apps during the pandemic. Attackers create fake dating profiles and send would-be dates malicious links. In another recent trend, preying on the rising popularity of consumer-level investment platforms, hackers have created fake websites that imitate well-known companies. Other tactics used by phishing attackers are well known, including mimicking corporate communications, banks, mail services, travel and vacation offers, and subscription services.
<urn:uuid:193aed9b-35a7-42fb-89a8-fc566b485ed3>
CC-MAIN-2022-40
https://mytechdecisions.com/it-infrastructure/these-were-the-top-phishing-topics-in-2021/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00759.warc.gz
en
0.957242
413
2.71875
3
As of 2022, the recent spike in cyberattacks around the world has made many organizations more attentive to cyber risk and IT threats, so it's no secret that cyber security has become more important than ever. In fact, security teams are doing their best to stay ahead of the curve, and many new, modern security solutions like oxeye.io have been developed as a result. Additionally, the way companies look at cyber security has changed, and it has found its place among the top priorities of many organizations. This has led numerous companies to change their methodology and ways of operation. Namely, DevSecOps is one of the latest frameworks that requires accountability regarding security from all teams in the organization, rather than just the security team. This culture change means that security decisions need to be made early in the software development life cycle, through continuous collaboration between all teams.

Defining AppSec and DevSecOps

AppSec is an abbreviation for application security. It is a broad term used to define the security process throughout the entire software development life cycle. This includes identifying, patching, and remediating vulnerabilities in the app. The goal of AppSec is to find all of the security weaknesses of the application as early as possible so that they can be taken care of before they become a severe problem. The benefit of AppSec is delivering a secure product on time while keeping expenses to a minimum.

On the other hand, DevSecOps refers to Development, Security, and Operations. Most experts refer to it as an upgrade to DevOps, which is a set of practices that combine software development and IT operations in order to shorten the software development life cycle and provide continuous delivery. When AppSec is added to the mix, you get DevSecOps. The main difference between AppSec and DevSecOps is that the former is a process in the cycle, and the latter refers to the methodology and culture in the organization. But, ultimately, both of them share the same goal of continuously delivering a reliable, high-quality product while keeping costs low.

What Is the Effect of AppSec and DevSecOps On Application Security?

Adopting DevSecOps essentially means integrating AppSec into your software development life cycle, and the best way to do that is through automation. There are quite a few software solutions you can find online that will allow you to achieve continuous security:

Static application security testing solutions

Commonly known as SAST, this type of toolset comprises frameworks that enable you to scan proprietary code and detect flaws that may lead to vulnerabilities. SAST tools are usually used during the code, build, and development phases.

Software composition analysis

Usually referred to as SCA, software composition analysis is a solution that scans the source code of a third-party app or any open-source components used in the life cycle. In addition to detecting vulnerabilities, SCA tools also provide visibility into licensing, which by itself can be a whole other security risk. They can be easily integrated into the CI/CD pipeline and continuously detect vulnerabilities from the build to the pre-production phase.

Dynamic application security testing

This is a software framework that can be used to find vulnerabilities in a website or web app, though the app has to be running, as in production.
Software solutions like this give you a chance to identify vulnerabilities by mimicking a hacker before a real hacker gets the opportunity to exploit them.

Why These Tools Are Worth Looking At

All of these types of tools can have a massive impact on your organization's security, and you need to find the one that suits you the most. They will allow you to detect and eliminate threats sooner rather than later, which is the least painful way to do it. With them, you can test the findings and identify all major and minor vulnerabilities, allowing you to report the details to the security team in due time. If you understand the importance of cyber security, then you'll probably also understand how beneficial AppSec and DevSecOps can be for your business. Effective integration of security into the software development life cycle can make a great difference. Identifying vulnerabilities throughout the entire process is the right way to go, as it will allow you to quickly deal with minor issues rather than face huge problems before release.
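To give a feel for what a SAST-style tool does, here is a deliberately tiny static scanner that flags a few risky Python calls in source files. Real SAST tools parse code far more deeply than this line-by-line pattern match; the rules below are illustrative assumptions, not a complete or authoritative rule set.

```python
import re
import sys
from pathlib import Path

# Toy rules: patterns that often warrant a security review in Python code.
RULES = {
    r"\beval\(": "eval() can execute arbitrary code",
    r"\bpickle\.loads\(": "unpickling untrusted data can execute code",
    r"\bsubprocess\..*shell=True": "shell=True enables command injection",
}

def scan_file(path: Path) -> list:
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, reason in RULES.items():
            if re.search(pattern, line):
                findings.append(f"{path}:{lineno}: {reason}")
    return findings

if __name__ == "__main__":
    # Usage: python toy_sast.py src/  -- exits nonzero if anything is flagged,
    # so it can gate a CI pipeline step.
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    hits = [f for p in root.rglob("*.py") for f in scan_file(p)]
    print("\n".join(hits) or "No findings.")
    sys.exit(1 if hits else 0)
```

Wiring a check like this, or a real SAST tool, into the CI/CD pipeline is what shifting security left looks like in practice: findings surface at build time instead of after release.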
<urn:uuid:809d4a4a-d0a3-47b6-972c-637fe9d3b6ab>
CC-MAIN-2022-40
https://gbhackers.com/explaining-the-difference-between-appsec-and-devsecops-how-they-impact-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00159.warc.gz
en
0.960865
876
2.53125
3
One of the challenges in today's modern office is keeping up with mobile device security. Being able to work fluidly from a computer or phone has been shown to improve productivity, but keeping those devices from leaking too much data isn't always straightforward.

Malicious mobile apps can be a major problem for data security, HIPAA compliance, and malware threats, and they tend to fall into two categories:
- The ones that purposely plant malware
- The ones that access and gather too much personal data, which can then be compromised

87% of successful phishing attacks on mobile devices happen outside of email (apps, etc.)

TikTok is the second variety: a mobile app that isn't malware, but that does collect quite a lot of personal data, including the names of other apps and files on your smartphone. The big danger is that TikTok is owned by a company based in China, called ByteDance, which leads many to worry that the Chinese government could demand all that personal data being collected from millions of user devices and use it for nefarious ends.

When dealing with data security on company devices, you need to think beyond antivirus/anti-malware protections and take a hard look at the apps installed on the device to see whether they're safe or accessing too much sensitive information.

Considerations About the TikTok App

Here are a few of the considerations when deciding whether or not to ban the use of TikTok on mobile devices used for your company's work processes.

TikTok Data Is Stored Outside China

The company that owns TikTok states that its data is not subject to Chinese law because it's stored on servers outside the country. ByteDance states that user data is stored on servers located in the U.S. and Singapore.

The Company That Owns TikTok Is Based in China

ByteDance is based in Beijing, China, so the company itself is subject to Chinese laws, even if its data is stored outside the country. This worries many that it could still be required to turn over user data that could then be exploited by the Chinese government.

TikTok Collects a Lot of Device & User Data

Exactly how much data it can collect from your device does depend upon a few settings, i.e. whether or not you allow it to track GPS location. But the types of data collected are quite concerning. Here are some of the types of data collected by the TikTok app and in the control of ByteDance:
- Any information you send through the platform, including messages
- Your contacts/phone book on your mobile device
- Your registration & profile information (name, phone, email, password, etc.)
- Payment information when used for a purchase
- Your device carrier, model, time zone setting, and IP address
- The apps and file names on your device
- Keystroke patterns or rhythms
- Location data, including information based on your SIM card

Government Agencies Have Banned the App

The armed forces (U.S. Air Force, Navy, etc.) have banned the use of TikTok on their devices due to security concerns. A complete ban of the app is also currently pending. The White House has threatened a ban on both TikTok and WeChat (another app owned by a Chinese-based company) unless the apps are sold to a U.S. company.

TikTok's Privacy Policy Allows Data Sharing

TikTok's privacy policy states that user information may be shared with:
- Service providers and business partners
- Within their corporate group (parent, subsidiary or affiliates)
- In connection with a sale, merger, or other business transfer

Further, there is a risk involved if ByteDance were to receive an order from the Chinese government to turn over user data.
The policy states: “We may disclose your information to respond to subpoenas, court orders, legal process, law enforcement requests, legal claims, or government inquiries, and to protect and defend the rights, interests, safety, and security of TikTok Inc., the Platform, our affiliates, users, or the public.” Items You Can Opt-Out Of On TikTok There are certain types of information shared that can be opted out of on TikTok. The company states that users can limit the data TikTok collects in the following ways: - Disable cookies in your browser or on your device - You can manage advertising preferences in the app for some third parties - Use any device operating system features that limit certain types of targeted advertising - Unsubscribe to marketing emails from TikTok - Turn off GPS location functionality on your mobile device Ensure Mobile Devices Aren’t Leaving Your Company at Risk Mobile devices now make up about 60% of a company’s endpoints. Make sure yours are secure and protected by working with C Solutions. Schedule a free technology consultation today! Call 407-536-8381 or reach us online.
<urn:uuid:1547f4cc-9fd5-499d-883b-0eb7ae842b23>
CC-MAIN-2022-40
https://csolutionsit.com/safe-tiktok-company-devices/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00159.warc.gz
en
0.936623
1,112
2.546875
3
Competing security companies can unite to fight fraudsters and still remain competitive More data = more accuracy. This is generally a well-understood truth in any data-applicable field, from economics to medicine to machine learning. Fortunately, we live in an era awash in data; the abundance of data underpins the success of businesses across various industries — except in cybersecurity, where there is actually a dearth of actionable data. Data poverty in cybersecurity is a profound problem for businesses and consumers. It means cybersecurity systems lack access to the grade-A data needed to effectively prevent the next enormous data breach. It means security systems lack the data insights to effectively identify insidious hacker patterns. What’s the solution to the data poverty in cybersecurity? Data coopetition. Data Coopetition: a Proven Model in Many Industries Data coopetition is data sharing between businesses, even rival ones, to achieve a common goal. It’s an existing practice in several industries, for example: - Casinos exchange intelligence on card counters. - Rival software companies exchange data to make more money and improve the customer experience. - Adtech businesses collaborate to deliver more effective brand campaigns. - Healthcare providers exchange patient data, enabling doctors to accurately diagnose diseases and save lives. - The FS-ISAC (Financial Services — Information Sharing and Analysis Center) is a consortium of 15,000 businesses in the financial sector collaborating to safeguard their respective institutions and customers from cybercrime. Yet data coopetition is woefully absent in cybersecurity, the very mitigators of cybercrime. Why? The simple answer is competitive advantage: cybersecurity companies are reluctant to share data for fear of giving their competitors an advantage. However, as exemplified by other industries, data sharing isn’t a zero-sum game; actionable data can be exchanged in a secure and privacy-compliant way with the right solution. Data Democratization on a Privacy-Compliant Basis No cybersecurity vendor is data-rich. Sharing timely and functional behavioral data on cyberthreats benefits all vendors working to neutralize bad actors. For example, companies can exchange data on attack reports, which provide the code used to defend against an attack. Another example is sharing datasets of typical user behavior, such as how often they mistype their passwords, in order to identify anomalies in user patterns that indicate unauthorized access. How can cybersecurity companies share data in a secure way that doesn’t compromise competitive advantage? Deduce was founded to address this data democratization opportunity. We created a data collective through which companies can share information about users’ security-related behavior and logins. In exchange for sharing data with the platform, companies get access to Deduce’s repository of identity data from over 150,000 websites, which is used to better detect suspicious activity and alert their users. It’s Time to Shift the Cybersecurity Mindset To protect businesses and their customers from fast-evolving fraudsters, the industry must shift its mindset — we have no other choice. Companies gain no competitive advantage by hoarding cybersecurity data. Even worse, it leaves everyone vulnerable. We need to think more along the lines of other industries that already realize the value of shared data. By adopting a collective intelligence model, cybersecurity companies have a fighting chance against hackers. 
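To make the shared-baseline idea above concrete, here is a minimal Python sketch of the kind of behavioral anomaly check a pooled dataset enables: flagging a login whose failed-attempt count is far outside the norm established by pooled history. The field values and the 3-sigma threshold are invented for illustration and are not Deduce's actual method.

```python
# Sketch: flag a login whose failed-attempt count is anomalous relative to a
# baseline pooled across contributing companies. Data and the 3-sigma cutoff
# are illustrative assumptions, not a real vendor's algorithm.
from statistics import mean, stdev

pooled_failed_logins = [0, 1, 0, 2, 1, 0, 0, 1, 3, 0, 1, 0]  # shared history

def is_anomalous(failed_attempts: int, history: list[int],
                 z_cutoff: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return failed_attempts > mu
    return (failed_attempts - mu) / sigma > z_cutoff  # standard z-score test

print(is_anomalous(9, pooled_failed_logins))  # True  -> alert the user
print(is_anomalous(1, pooled_failed_logins))  # False -> normal behavior
```

The point of the sketch is the data, not the math: a single vendor's history may be too sparse to estimate the baseline reliably, which is exactly the data-poverty problem collective intelligence addresses.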
<urn:uuid:efd5f931-b850-4aa4-b7a8-c5ea9b85de1b>
CC-MAIN-2022-40
https://www.deduce.com/coopetition-is-the-panacea-for-data-poverty-in-cybersecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00159.warc.gz
en
0.920342
697
2.78125
3
The State Line Generating Plant opened in 1929 as an engineering and architectural marvel. The creation of the plant almost 100 years ago brought the distribution of energy needed to meet the region's demands, just as Digital Crossroad Data Center is the leading data center in the Chicagoland area today.

Thomas Edison himself personally directed the design of the generators and power-generating systems, attending many design meetings in the still-standing Archway Building. It was under Edison's direction that Samuel Insull, then CEO of the newly formed holding company Commonwealth Edison Co., made the commitment to provide the resources to make Hammond, Indiana the nation's premiere power generation site.

The architectural design of the State Line Generating Plant was a masterpiece by the legendary architectural firm Graham, Anderson, Probst & White, the same firm that designed the Wrigley Building, the Field Museum, the Shedd Aquarium and the Chicago Civic Opera House. In its ranking of the Landmarks of Engineering History, the American Society of Mechanical Engineers (ASME) has ranked the State Line Generating Plant as its 24th most significant American engineering landmark. The art-deco style Archway Building, which was the entryway into the plant, became an iconic international symbol of the new ways to generate electricity and was used in advertising campaigns by electric companies throughout the world. At the time of its closing in 2012, it was one of the oldest continually serving urban energy generating plants in the world.

The opening of the State Line Generating Plant was the culmination of Edison's vision for mass electricity distribution, and the legendary Graham, Anderson, Probst & White art deco design remains part of this enduring legacy to this day. Accordingly, the site is perfect for housing today's most sophisticated data centers. The site was also excellent for electricity generation and power distribution about a hundred years ago: turbines require cooling, and Lake Michigan's water is the perfect coolant; power generation required constant coal deliveries, and rail was the best transportation method.

State Line Generating Plant: "25 Years – The World's Largest Turbine Generator"

The plant's Unit 1 generator was the best in the world at the time, producing 208,000 kilowatts of energy, 30% more than the next leading unit. The commissioner of this energy project was Commonwealth Edison, founded in the early 1900's as an energy utility giant and the incumbent provider of energy in Chicago, Illinois. For over 25 years, the State Line Generating Plant housed the world's largest electricity-producing generator and supplied energy to more than a million homes in Northwest Indiana, the Chicagoland area, the suburbs, and rural farmlands.

The construction of the plant took place during the electricity boom, from 1921 to 1929, as electrification of homes and businesses was soaring, and it delivered a period of economic and industrial flourishing to the region. Its contributions to Chicago and the surrounding communities cannot be denied; they would not have become what they are today without it. While the plant allowed for economic growth, it also produced large amounts of pollution, causing damage to the environment and the health of the public, and its decommissioning and demolition in 2012 caused short-term job loss. Nevertheless, one should not overlook the outsized impact the plant had in its era.
The site is now renewing itself, preparing for new technology and industry to take over.

Edison's Construction of a 1920's Large Electric Power Plant

State Line began as an idea that united Thomas Edison's power distribution patents with Samuel Insull's understanding of the growing need for electricity and energy in northwest Indiana, South Chicago, and the surrounding region. The early 1900's were defined by years of exponential growth in the economy, industry, and energy of the United States. The Roaring 20's posed new problems, as well as opportunities, for the country. There was a new tide of consumerism and a need for electricity. Many more Americans were moving to cities as urbanization grew rapidly during this time, leaving the majority of Americans living in metropolitan areas rather than on farms. Chicago's urban population grew from 2,701,705 to 3,376,438 between 1920 and 1930, and with those people came the demand for an expanded energy network. In hand with the economic boom and rapid growth of industry in the early-to-mid 20's, Americans had more money in their pockets and new inventions to spend it on. The new lifestyle enjoyed by middle-class Americans demanded an increase in the supply of energy and electricity.

Electrification Needed to Support a Growing Population, and Site Selection

During the electricity boom, underrepresented communities lagged in access to energy. Around 1.3 million Northwest Indiana and Chicagoland homes needed electricity in the late 1920's, including rural and working-class neighborhoods. The city's increasing industrial needs also required more and more electricity output. State Line was a wonder, a "champ" whose generating capacity was incomparable to that of any other plant at the time. The need for electricity was evident, and the State Line Generating Plant was the largest element of the solution to this growing necessity. Commonwealth Edison, at the behest of Thomas Edison, planned, developed, constructed, and operated the project, clearly seeing the opportunity present in the area's energy deficit. The site's optimal placement on the shores of Lake Michigan, close to Chicago but on Indiana land, made it a great location for construction. The city and surrounding area needed electricity, and the power plant needed a cooling source for its high-capacity generators, making the site in Northwest Indiana on Lake Michigan the picture-perfect solution.

People and Companies Involved in the Early Stages and Plant Commissioning

The foresight of Commonwealth Edison in heading the operation of a new, ambitious energy plant is not too surprising, given the company's founding and history. Commonwealth Edison Co. arose from Samuel Insull's merging of energy utilities, its parent company having been founded by Thomas Edison. Edison established the Western Edison Light Co. in Chicago in 1882, and five years later the company was renamed Chicago Edison Co. Another five years later, in 1892, Samuel Insull became the president of Chicago Edison Co. (Encyclopedia of Chicago). After assuming the head position at Chicago Edison Co., Insull incorporated another electrical organization, Commonwealth Electric Light & Power Co., into the company in 1897. Insull's unification of the two companies was not official until 1907, when the well-known name of the company, Commonwealth Edison Co. (ComEd), was finally established.
The energy plant increased the output of ComEd, which already had a large stake in the city's energy network. ComEd held a franchise with Chicago, and its operations served around 500,000 consumers in the area, generating approximately 40 million dollars in annual revenue in the 1920's. Insull astutely observed the increasing energy needs of industries and consumers in Chicagoland, and commissioned the building of the State Line plant.

Samuel Insull – Visionary, Entrepreneur and Facilitator of Edison's Dream

Commonwealth Edison was the dream of Thomas Edison, but it would not have taken on the construction of this ambitious new power plant were it not for its ingenious and determined president, Samuel Insull. In 1892, Insull left his position on the executive board at General Electric, founded by Thomas Edison, and joined Chicago Edison, which later became Commonwealth Edison. Samuel Insull designed and built the modern power grid, consolidating many smaller producers of energy into fewer, larger industrial power producers (History of Electricity, IER). These new industrial power producers made it more economical to serve more users. Insull was able to provide electricity to many more customers at a cheaper price by famously implementing economies of scale. He was also able to expand the main energy network outside city limits, providing rural communities with much-needed electricity. Insull's design abilities and ideas for efficiency allowed for fast and low-cost electricity. The cost of electricity per kilowatt-hour decreased from 4.5 US dollars in the early 1900's to about one dollar in 1918. After this time, the price became fairly consistent into the 1930's and remained quite low, around one US dollar per kilowatt-hour (IER). Insull knew the commissioning of a new plant like State Line would afford the increased energy output that was much needed in the area and enable more consumers to be serviced at a lower price. Without those efforts, many would not have received the power needed to run their daily lives.

Construction of the State Line Generating Plant

Graham, Anderson, Probst & White – Legendary Design Considerations

ComEd hired the Chicago-based architectural firm Graham, Anderson, Probst & White (GAPW) to design its new State Line plant. The firm had a distinguished legacy in the city, having designed or refurbished many buildings throughout Chicago; to name a few major projects: the Civic Opera House, Field Museum, Wrigley Building, Merchandise Mart, and Shedd Aquarium. The firm carried out many projects from 1912 to 1936, including the State Line plant. By the 1920's, the company had moved away from its earlier designs and styles, such as the classical style found in its Beaux-Arts buildings. It decided to create the State Line plant in a newer and sleeker design: the Art Deco style. According to the book written on the architectural projects taken on by GAPW from 1912 until 1936, State Line cost around 4.65 million dollars to design and construct. Many well-known architectural masterpieces were designed in the Art Deco style, including the Chrysler Building and the Empire State Building in New York City. This artistic style embodied modernity and the qualities of man-made machines within architectural design, using man-made materials like steel, concrete, and opaque glass, as well as incorporating simple, streamlined geometric figures (Art Deco, Britannica).
Sargent and Lundy – World-Class Engineering Innovations

Sargent and Lundy formed their engineering consulting firm in 1891, which coincided with the growing business of electricity. The electric power industry and the consulting company have had a close relationship since the firm's inception. The first project taken on by the firm was an energy power station, Harrison Street Station, completed in 1892 and commissioned by Samuel Insull through the Chicago Edison company. The engineering consulting work of Sargent and Lundy in the construction of power stations like the Harrison Street Station and the State Line plant created reliable energy generation in Chicago and the surrounding communities. Both plants lived up to Samuel Insull's enthusiastic expectations of energy generation and capacity, allowing consumers to receive electricity at a lower price. Sargent and Lundy's creativity and efficiency roughly halved the amount of coal needed per kilowatt-hour, an approach still in use to this day. Their innovations in coal-consumption procedures helped make the State Line Generating Plant the most innovative design of its time.

Post-Construction – Benefits to the Community When State Line Goes Online

Around 1.3 million consumer households in the Chicago area were serviced by energy producers in the late 1920's, with the city having a maximum energy generation capacity of around 1,300 megawatts. ComEd had a large holding and extensive reach in the city; according to Burnham's Manual of Chicago Securities, the company essentially had "a monopoly on electric lighting and power business of Chicago, including the furnishing of all the power required by the elevated and surface railways." After State Line began its operations, the energy generation capacity of the area increased by more than 15%, to approximately 1,518 megawatts. The plant was completed at a pivotal time, supporting the electricity boom, the Industrial Age, and the region through the Great Depression of the 1930's. Increased electricity access and energy production created more Chicagoland jobs, improved the quality of life for many residents, and bolstered the industrialization and economies of both Illinois and Indiana. State Line became a behemoth of energy production and an architectural giant for its age.

The Great Depression – Impact of the Economic Downturn

The industrialized world suffered the worst economic downturn in history during the Great Depression, beginning after the stock market crash of 1929. The impact on the economy and everyday life steadily worsened over the next decade and lasted until the late 1930's. Every community across the country faced damaging effects, with large cities like Chicago unable to ward off unemployment, homelessness, and food scarcity. The areas most affected during this time were farmlands and rural regions of the country, and they were in desperate need of electricity. The State Line Generating Plant again became a national topic as President Roosevelt acknowledged the energy deficit in these areas, calling to prioritize rural electrification. The Rural Electrification Administration was then created to help farmers and rural populations connect to local or nearby power plants (Rural Electrification). This initiative changed the perspective on energy access in the country, as people started to believe it was a right rather than a commodity, which prompted a substantial national increase in energy generation. Chicago was severely impacted by the Great Depression as manufacturing jobs were lost.
The State Line Generating Plant survived through these devastating times, still providing electricity to consumers and jobs to residents of the region. While the plant itself did not suffer such damaging consequences, Samuel Insull's ambitious innovation and investing were met with an unpredictable and unforgiving economy. Insull's corporate and financial methods were controversial, his late-1920's businesses relying on "highly leveraged junk bonds," and their collapse resulted in an indictment for fraud. Although Insull was acquitted, the scandal forced his departure from ComEd, and the ensuing bankruptcy destroyed him financially. Nonetheless, the company and its operations maintained a strong presence in the area, even in the face of worsening economic conditions in the city and the nation. The State Line Generating Plant and ComEd kept operations running throughout the Great Depression, remained strong during World War II, and continued to expand afterwards.

World War II – Impact of Economic Prosperity on Owners and Operation

Not only did the State Line plant and ComEd survive the Great Depression, but the company and the plant's operations thrived during and after World War II (WWII). The nation saw a serious revitalization and bounced back from the Great Depression, and Northwest Indiana was at the center of steel and wartime materials development. The region's industrial sector and production were drastically ramped up to keep pace with wartime efforts. By 1941, the Great Depression was ending, and the nation focused on industrial output to win the war. This was an extraordinary industrial expansion and economic breakthrough: 17 million new civilian jobs were created, industrial productivity increased by 96 percent, and corporate profits after taxes doubled.

Power plants and the energy sector were essential to the industrial process, providing the energy required to maintain the production and manufacturing levels needed to support the U.S. and Allied Powers in WWII. To increase its energy output and further expand as planned, ComEd decided to add another unit to the State Line plant. Unit 2 was fully operational in 1938, enabling energy production for manufacturing and industry in Chicagoland. Chicago was a main power hub during the war, second only to Detroit in the value of commodities produced for the war effort. The economy and quality of life for many Americans improved as a result of increased employment opportunities in industrial production through the duration of the war. The war's end also brought the end of the Great Depression and its devastating effects on the nation.

The Post-War Era – Impact of Economic Prosperity on Owners and Operation

The end of World War II in 1945 brought a newly revitalized economy, closing out the lingering effects of the Great Depression. Economic prosperity continued in the U.S. after WWII, further increasing the quality of life for Americans, expanding the middle class, and consolidating the victorious nation's power and influence in the global economy. The energy sector continued to grow as more inventions came to market and increased production and population required more electricity. From the late 1940's through the early 1960's, the baby boom increased the nation's population by around 76.4 million. After suffering through the Great Depression and war, the newfound economic security of the times prompted young Americans to get married and have children.
Middle-class America was growing, producing, and consuming, with optimism for sustained economic stability, which increased the need for energy generation.

Unit Expansions in 1938, 1955, and 1963

After WWII, ComEd (which would decades later become a subsidiary of Exelon) was granted a 42-year franchise by Chicago, becoming the largest electric utility in Illinois and the sole electric provider in Chicago. ComEd added two new units to bulk up its generation capacity. With the war over and the economy up and running again, the company and its plants were in full swing. In 1955, only ten years after the war and seventeen years after its second unit expansion, ComEd ordered the addition of another unit at the State Line plant. This newest unit, Unit 3, had a generation capacity of 197 megawatts, which it retained until the plant's decommissioning in 2012. Another unit, Unit 4, was operational just eight years later, in 1963, with a generation capacity of 318 megawatts.

The Creation of the Environmental Protection Agency ("EPA") and the Clean Air Act

In December 1970, President Nixon created the Environmental Protection Agency (EPA) after increased pressure to form an organizational body to monitor and protect the environment and human health. That same month, Congress passed the Clean Air Act, which allowed the EPA to set and enforce national air quality, auto emission, and anti-pollution standards. The EPA, along with the later-formed Department of Energy, was to regulate and limit aspects of the energy sector, especially the generation of energy from pollution-causing sources like coal. Coal-fired power plants like State Line were among the energy stations most affected by this act, as State Line's operations continued to release toxic chemicals into the air, such as nitrogen oxide, sulfur dioxide, and toxic mercury, as well as greenhouse gases. The Clean Air Act forced the plant to begin shutting down the record-setting Unit 1 and Unit 2 in the late 1970's. While Units 1 and 2 were closed in the late 1970's, the remaining units, Units 3 and 4, continued generating energy for decades, even while not meeting EPA standards, up until the plant's closure in 2012.

2000–2010 – State Line: Environmental Justice and Its End of Life

ComEd sold the State Line plant to Mirant Corp. in 1997 for $68 million. Just five years later, Mirant sold the plant to Dominion Resources for $128 million. The plant had been violating the standards set in the Clean Air Act for some time; its old equipment was never replaced with machinery that would have enabled it to meet the act's anti-pollution standards. By 2010, employment was down to about 100 employees and the threat of closure seemed imminent. Seeing that the future of coal was in serious question, the owners refused to upgrade the facilities: Dominion Resources decided against retrofitting the State Line Generating Plant with Clean Air Act pollution controls and would instead shut the plant down in the three-year period between 2012 and 2014. The State Line Generating Plant was the largest polluter in the region and the cause of Chicago's inability to comply with the standards set forth in the Clean Air Act. The old plant was clearly the region's largest emitter of carbon dioxide as well as other toxic chemicals and gases.
The plant underwent multiple renovations and expansions over the years; however, because its equipment was never updated to meet the health and pollution standards enforced by the EPA, the plant's pollution contributed to health-related diseases and deaths from toxic chemical exposure in multiple smaller cities surrounding it.

Decommissioning – Basic Facts and Details

When Dominion Resources decided in 2012 against complying with the strict EPA coal regulations and against upgrading the equipment necessary to continue operating the facility, the company announced the possibility of closing the plant within two years. Thanks to the highly skilled local labor force, the decommissioning occurred much more quickly. While the environment and the health of the public were better off, the 100 employees who operated the plant, most near retirement age, were left looking for employment. Additionally, the city lost the tax revenue paid by the plant, which at the time of the decommissioning was the city's largest taxpayer. Digital Crossroad Data Center will therefore instill back into the community what it lost years ago, raising the site from its downfall and carrying that legacy of innovation into a digital age in which State Line will once again be transformative.
<urn:uuid:db4eef34-5230-43ae-ac03-ea1aad578ef6>
CC-MAIN-2022-40
https://digitalcrossroad.com/blog/the-state-line-generating-plant-story/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00159.warc.gz
en
0.961729
4,348
3.09375
3
How Pittsburgh Became the First U.S. City with Self-Driving Taxis

Pittsburgh has 446 bridges, likely more than any other city in the world. The city's patchwork of streets isn't much of a grid, and some of its traffic lights have been in use since the 1950s. Making matters worse, Pittsburgh's weather isn't as accommodating as that of California, Nevada, or Florida, other states that have allowed driverless cars on their roads. And sometimes, the only way to make a left-hand turn in the city is to make a "Pittsburgh left": gunning the gas to turn as soon as the light changes green, overtaking the car at the front of the intersection that is going straight.

Yet Uber, with a plethora of possible options, chose Pennsylvania's second-largest city to launch its self-driving cab service. "If they can do it in Pittsburgh, they can do it anywhere," said Mayor Bill Peduto at a press event at the Washington Post's headquarters. But the reason Uber chose Pittsburgh has nothing to do with these challenges. "It is because of Carnegie Mellon University," Peduto explained.

After the collapse of the steel industry decimated the city's economy, CMU took a leading role in laying the groundwork for the city's current high-tech economy and transformed it into a robotics leader. In the late seventies and early eighties, "the economic heart of the entire region was ripped out," Peduto said. "There were more people who never came back [to Pittsburgh] than New Orleans lost after [Hurricane] Katrina. The unemployment rate was 19 percent." But things began to improve after Carnegie Mellon established a groundbreaking robotics program in 1979. Nine years later, it would offer the world's first robotics Ph.D. program.

The city is also a pioneer in autonomous vehicles. "By 1994, before there was a smartphone, they had already driven a car from Pittsburgh to D.C. and back without the driver ever using hands," the mayor added.

The Washington Post's Brian Fung interviews Pittsburgh mayor Bill Peduto.

More recently, the city has partnered with CMU to create what Peduto calls "the world's smartest traffic signals." The traffic lights can communicate, detect real-time traffic, and adapt, making traffic move 32% more efficiently, he says. The city is also installing sensors in major areas that will be able to recognize everything from cyclists and pedestrians to people with special needs. "We are monitoring that and how it will work with traffic flow. The next logical step is working with Uber and using the sensors they have to pull all of this data together," Peduto said. "In the very near future, urban mobility will be like using Waze."

City planners already see financial benefits from Uber's Pittsburgh offices, which employ more than 600 workers, with 1,000 expected by the end of the year. While driverless cars could eventually take jobs away from professional drivers, the technology will also provide new opportunities. The city wants to help train workers to repair and maintain the driverless vehicles and to make some of the sensors that go into the cars. Such steps are needed to secure the city's future, Peduto says. "When Pittsburgh went through a crash, it wasn't because we planned to fail. It was because we failed to plan."

Still, convincing Pittsburgh citizens that self-driving cars are a good idea is a challenge. Many people see the technology as unsafe and untested and are expecting accidents involving driverless cars.
The fact that Uber required passengers to waive liability in case of injury or death has done little to assuage that fear. But the mayor pointed out that human error is to blame for the vast majority of traffic fatalities: 1.25 million globally each year. "The number is going up because of distracted driving," Peduto added. "And these are preventable deaths."

"Is there going to be an accident with a robot car? Yes, there is," Peduto said. But there were also deaths linked to other technologies that transformed the world, such as vaccination and aviation. "The greater goal is to make our streets safer in the long run. We have to begin at some point. And we can't wait for regulation to catch up with innovation. […] We let innovation take a hold and then, as we learn, we create regulation that not only makes it safer but allows innovation to continue."
<urn:uuid:06c7babe-71fe-466c-a954-677379594a04>
CC-MAIN-2022-40
https://www.iotworldtoday.com/2016/09/29/how-pittsburgh-became-first-us-city-self-driving-taxis/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00159.warc.gz
en
0.974663
959
2.828125
3
All You Need to Know About Two-Factor Authentication. Even Its Weak Spots

Looking for ways to protect your online accounts? Aside from using a strong password, HTTPS connections, and end-to-end encryption, you should also embrace two-factor authentication (2FA). Two-factor authentication was introduced by AT&T in 1998. The company patented an "automated method for alerting a customer that a transaction is being initiated and for authorizing the transaction based on a confirmation/approval by the customer thereto." If you are new to these terms, continue reading: this article covers the essential information, including the weak spots.

What do 2FA and MFA mean?

Two-factor authentication (2FA) is a verification process in which you need to take two steps to access sensitive information. Multi-factor authentication (MFA) is a more complex process that uses more than two factors to verify the authenticity of a login. Nowadays, 2FA is more common and is recommended for all online accounts to cut the risk of unauthorized access. There is even a dedicated website that lists all services that use 2FA and provides detailed information on the available authentication methods.

What is a factor?

A factor is simply a type of authentication, and below we list the most common ones.

Something you know

This is the most common authentication factor, one we use daily. It requires you to enter credentials to access your account; the best example of this factor would be the username and password.

Something you have

This factor requires you to provide an email account, phone, or another channel through which to receive a verification code. Some of the most common authentication forms are:

SMS code. It's a unique, one-time code consisting of six numbers that gets texted to your phone.

Authentication applications. These work very similarly to the SMS code: the app generates a one-time code consisting of six digits, but instead of receiving a text message, you get the code from the app. You can use one of the many authentication apps, such as Google Authenticator, that are available to download on your phone. (A sketch of how these codes are generated appears at the end of this article.)

A hardware token. It's a device, such as a YubiKey, that generates an encrypted one-time passcode. Usually, it comes as a string of six numbers that you use the same way as a text message code or a code generated by an authentication app. The added benefit of the hardware token is that it is a separate device you carry along with you.

Something you are

This factor uses your biometrics, such as a fingerprint, voice recognition, or retina scan, for login verification. It's difficult to replicate and bypass this factor, as it's based on unique biological identifiers. And this is one of the key reasons why biometrics are growing in popularity as an authentication factor.

Somewhere you are

This factor is related to your location, usually detected based on your Internet Protocol (IP) address. Some companies collect your geolocation information and flag any attempts to log in from unusual locations. For example, suppose you live in the US, but someone has tried to log in to your account from China. In this case, the service will notify you about the login attempt and may ask you to verify the new location. This authentication factor helps to identify unauthorized access at an early stage.

Why should you use two-factor authentication?

The core reason for using 2FA or MFA is the added layer of security. It doesn't mean that by adding 2FA to your account, you remove all risk of it being hacked. However, it becomes less of a target for hackers.
That's because a hacker would need to defeat not only your password but your 2FA as well. They would need to use a phishing attack or malware, or try to activate your account recovery. Next, they would need to reset your password, and only then could they try to disable your 2FA. And that's extra work that they are generally not keen to do for individual accounts.

Is 2FA as secure as it seems?

Despite the best intentions to protect online accounts, hackers are getting more creative and looking for new ways to bypass 2FA. And every method is only safe until the first successful attempt against it. SMS-based authentication was the first to be shown vulnerable. This is why, back in 2016, the National Institute of Standards and Technology (NIST), part of the US Department of Commerce, deprecated the use of SMS for 2FA. The rise of abuses like SMS phishing (or smishing) and phone porting, which cause victims to lose control over their phone numbers, is primarily to blame. However, SMS-based 2FA is still more secure than no 2FA at all, and if there are no other alternatives, it's best to enable it.

How can you improve your online account security?

Passwords are your front line for online account security, so do all you can to make them strong. Check your password strength and make sure they haven't been exposed before. If needed, replace them with secure passwords that you can generate with our online password generator. Make sure you store your passwords securely; sticky notes or Excel files on your computer are not the methods we recommend for password safekeeping. Enable 2FA on all your online accounts that support it; usually, you need to go to the account settings and look for any 2FA options under Security. And stay vigilant: hackers continuously explore and employ new hacking tactics that are much more sophisticated than ever before.
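For the curious, the six-digit codes produced by authenticator apps are typically time-based one-time passwords (TOTP, standardized in RFC 6238 on top of the HOTP algorithm of RFC 4226). Here is a minimal sketch using only the Python standard library; the Base32 secret below is a made-up example, and real apps add clock-drift tolerance and secure secret storage on top of this core.

```python
# Minimal TOTP sketch (RFC 6238): HMAC-SHA1 over a 30-second time counter,
# then "dynamic truncation" down to a 6-digit code. The secret is a made-up
# example; production code must also handle clock drift and secret storage.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # moving time factor
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same secret + same time -> same code
```

Because the server holds the same shared secret, it can compute the expected code independently; no code ever travels over SMS, which is what makes app-based 2FA resistant to smishing and phone porting.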
<urn:uuid:ac975638-5c4a-498d-ac4c-74d2b59f8572>
CC-MAIN-2022-40
https://nordpass.com/blog/two-factor-authentication/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00159.warc.gz
en
0.938281
1,173
3.109375
3
New research has suggested that financial worries may cause migraines in people who carry two specific variations in a gene that regulates our biological clock. Our biological clock is the collective name given to a range of interacting molecules that regulate our sleep-wakefulness cycle and the bodily and behavioral changes that go with it. The new research examines the link between a gene that regulates this clock and the risk of developing migraines.

Researchers from Semmelweis University in Budapest, Hungary, explain that the research was prompted by previous studies suggesting that people with mood disorders often have symptoms signaling a disruption of their circadian rhythm. Additionally, other studies pointed to a link between mood disorders and certain variations in the genes associated with the circadian rhythm, as well as to a genetic link between mood disorders and migraine. Stressors have been shown to trigger migraines by disrupting the body's rhythmicity, and all of this existing evidence made the researchers wonder whether circadian genes might also play a role in the development of migraines.

The researchers studied a total of 2,349 participants and asked them to report on whether or not they had migraines using the ID-Migraine Questionnaire. The CLOCK gene is the main genetic component of the circadian clock. Therefore, the researchers zoomed in on it, screening the participants for two single nucleotide polymorphisms, or variants, of the CLOCK gene. The participants were also asked to fill in a financial questionnaire, and the researchers defined chronic stress in relation to financial worries.

The researchers tested the effects of the CLOCK gene variants on migraine statistically by applying logistic regression models and adjusting the analysis for population, gender, and age. At first, the study found no link between the CLOCK gene variants and migraine, but when financial stress was added into the mix, the results changed: people who were having financial difficulties were 20 percent more likely to have migraines if they also carried the two CLOCK genetic variants.

This work does not show what causes migraine; there is no single cause, but it shows that both stress and genetics have an effect. The results shed light on one specific mechanism that may contribute to migraine. What it does mean is that, for many people, the stress caused by financial worries can have a physical effect. The study demonstrates how an environmental risk factor exerts its effect only in the presence of a given genetic risk factor. Such gene-environment interactions have rarely been demonstrated in migraine, making this study an exciting new lead.

More information: ECNP
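For readers curious what the adjusted interaction analysis described above might look like in code, here is a sketch using synthetic data. The variable names, effect sizes, and data are invented for illustration and do not come from the study itself; only the model form (a logistic regression with a gene-by-stress interaction term, adjusted for age and gender) mirrors the approach described.

```python
# Sketch of a gene-by-environment logistic regression: migraine status on a
# CLOCK variant, financial stress, and their interaction, adjusted for age
# and gender. All data here are synthetic; only the model form is real.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2349  # matches the study's sample size, data otherwise invented
df = pd.DataFrame({
    "clock_variant": rng.integers(0, 2, n),     # carries both risk variants
    "financial_stress": rng.integers(0, 2, n),  # reports financial worries
    "age": rng.integers(18, 65, n),
    "gender": rng.integers(0, 2, n),
})
# Build in an interaction-only effect, then simulate migraine status.
logit = 0.3 * df.clock_variant * df.financial_stress - 1.0
df["migraine"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# "a * b" in the formula expands to both main effects plus the a:b
# interaction term, which is what carries the gene-environment effect.
model = smf.logit(
    "migraine ~ clock_variant * financial_stress + age + gender", df
).fit(disp=0)
print(model.summary())
```

In such a model, a significant interaction coefficient with non-significant main effects is exactly the pattern the study reports: the variant matters only in the presence of the stressor.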
<urn:uuid:a0746994-dc35-4d64-9926-b3ab9d11863b>
CC-MAIN-2022-40
https://areflect.com/2017/09/04/gene-variations-may-lead-stress-migraines/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00359.warc.gz
en
0.942228
526
3.09375
3
By Gail Seymour

We tend to think of a database as information stored on a computer that we can access and alter, but the database is just information. It's a bit like a filing cabinet full of files in an office. Without procedures in place, we wouldn't know where to put the files in order to retrieve them when we wanted them again. Without the forms, each file would be a random collection of information, and even when we found it we wouldn't be able to make sense of it, or compare it to other files easily.

What is a Database Management System (DBMS)?

The database management system provides ways to organize, store, retrieve and interact with the data in the database. It consists of:

- A modeling language, used to define the database schema, or structure. Common database structures are hierarchical, network, relational and object-based. Models differ in how they connect related information. The most widely used, particularly in Web applications, is the relational database model, which we'll cover in more detail in another article.
- A database engine that manages the database structure and optimizes the storage of data, whether that is fields, records, files or objects, for a balance between quick retrieval and efficient use of space.
- A database query language, such as SQL, that enables developers to write programs that extract data from the database, present it to the user, and save and store changes.
- A transaction mechanism that validates data entered against allowed types before storing it, and also ensures multiple users cannot update the same information simultaneously, potentially corrupting the data.

What Are Database Management Systems Used For?

Database management systems are used to enable developers to create a database, fill it with information and create ways to query and change that information without having to worry about the technical aspects of data storage and retrieval (a small worked example appears at the end of this article). Other features of database management systems include:

- User access and security management systems that provide appropriate data access to multiple users while protecting sensitive data.
- Data backup to ensure consistent availability of data.
- Access logs, making it easier for a database manager to see how the database is being used.
- Rules enforcement to ensure only data of the prescribed type is stored in each field; for example, date fields may be set to only contain dates within a set range.
- Formulas such as counting, averaging and summing included in the DBMS make statistical analysis and representation of the data simpler.
- Performance monitoring and optimization tools may also be included to allow the user to tweak the database settings for speed and efficiency.

Many Web applications rely on a DBMS, from search engines and article directories, to social networks like Facebook and Twitter. Almost any site that offers a user registration with personal logon details rather than a single shared password will probably require a database, as will ecommerce systems, blogs and collaborative sites such as Wikis and multiple user content management systems.

Read the entire series:

Part 1: Introduction to Database Management Systems
Part 2: Relational Database Management Systems
Part 3: Introduction to Net Database Administrators

About the Author

Gail Seymour has been a website designer for more than 10 years. During that time she has won three design awards and has provided the content and copy for dozens of websites and more than 50,000 Web pages.
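To make the DBMS features listed above concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a full relational DBMS. The table and column names are invented for illustration; the point is to show a modeling language (the schema), rules enforcement (a CHECK constraint), the transaction mechanism, and query-language formulas in one small example.

```python
# Sketch of core DBMS features using Python's built-in sqlite3 module.
# Table and column names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")

# Modeling language: define the schema (relational model).
conn.execute("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        quantity INTEGER CHECK (quantity > 0)  -- rules enforcement
    )
""")

# Transaction mechanism: both inserts commit together or not at all.
try:
    with conn:  # commits on success, rolls back on error
        conn.execute("INSERT INTO orders (customer, quantity) VALUES (?, ?)",
                     ("Ada", 3))
        conn.execute("INSERT INTO orders (customer, quantity) VALUES (?, ?)",
                     ("Bob", -1))  # violates the quantity rule
except sqlite3.IntegrityError:
    print("rolled back: quantity rule violated")

# Query language: aggregate formulas like COUNT and SUM are built in.
count, total = conn.execute(
    "SELECT COUNT(*), COALESCE(SUM(quantity), 0) FROM orders"
).fetchone()
print(count, total)  # 0 0 -- the whole transaction was rolled back
```

Note how the failed second insert undoes the first one as well: that all-or-nothing behavior is the transaction guarantee the article describes, and it is what keeps concurrent users from corrupting shared data.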
<urn:uuid:2046e82f-a3e7-4390-bb55-4ce36e967dae>
CC-MAIN-2022-40
https://hostway.com/introduction-to-database-management-systems/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00359.warc.gz
en
0.899684
683
3.640625
4
According to the Aureus Analytics report, it is estimated that the world's data volume will grow at 40% per year from 2021 to 2026. Data has been recognized as a strategic asset by enterprises: data is power. Public data breaches have made data security a priority for enterprises. Data-privacy risks include inadequate technical safeguards, negligence through improper configuration, lack of encryption, third-party access, outdated security software, social engineering, social media attacks, and mobile malware. Such risks hinder data flow and have led to the need for regulatory data governance.

What is Data Governance?

The Data Governance Institute (DGI) defines data governance as "a system of decision rights and accountabilities for information-related processes, executed according to agreed-upon models which describe who can take what actions with what information, and when, under what circumstances, using what methods." Data governance comprises all the processes, policies, standards, and roles that ensure information is used efficiently to achieve the goals of an organization. It encompasses the procedures and responsibilities required to establish the quality and security of the data used by an organization.

What are the Importance and Benefits of Data Governance?

It is essential to build trust among customers for successful and continued business, and breaches of customer data can drag enterprises into losses. Data governance prevents the poor data management and non-compliance that could lead to such business losses. A robust data governance program enables organizations to discover and preserve useful information and discard outdated, trivial information. It ensures increased data centralization, leading to better decision-making and business planning. More importantly, legislation around data privacy is new and will continue to evolve as technology evolves. It is thus necessary to implement a proper data governance program to improve quality, ease compliance, and protect enterprises from being penalized for data breaches.

Who is Responsible for Data Governance?

Data governance teams in organizations typically comprise the following roles:

1. Data Owners
Individuals or teams appointed to decide who has the right to access and edit data and to determine how it can be used. They oversee and protect data domains.

2. Data Stewards
Data stewards collect, collate, and evaluate problems and issues relating to data. They ensure adherence to data policies and data standards.

3. Data Custodians
Custodians handle the technical aspects of data maintenance. They manage the safe custody, transport, and storage of data.

4. Data Governance Committee
The committee is vested with the power to approve data governance policies and standards, and it handles escalated issues.

5. Data Users
Data users are responsible for following the policies and guidelines established and outlined by management. In case of any issues, they are obliged to report them to the appropriate data owners.

Other people forming part of the data governance team include data architects, data modelers, data quality analysts, and engineers. Data editors create and maintain data, operating the data life cycle according to set standards. Executive sponsors provide sponsorship, strategic direction, and funding. Managers design, implement, and maintain master data control and governance across the enterprise. Architects oversee designs and implementation. Data analysts analyze data and identify trends.
Data strategists develop and execute plans based on the patterns found in those trends. Compliance specialists ensure adherence to the standards required by legislation.

How Can a Data Governance Team Help with Data Privacy Regulations and Compliance?

Choosing the right data governance plan and team is specific to each enterprise, depending on the business's sector, size, and culture. Data governance teams establish and implement the framework needed to meet key governance standards. This framework encompasses ownership of data assets, access rights, analytics, and security systems (see the access-rights sketch below). The governance teams ensure that the data collected, stored, and used is managed in compliance with the law and in the customers' best interest. They support corporate processes such as risk management, business-process management, mergers and acquisitions, marketing, and financial planning. With the advent of new technology and the continued expansion of data, data governance will find ever wider application. Forming a well-equipped and experienced data governance team is an enterprise's first step toward an effective and evolving data governance regime.

Data Governance Tips for Privacy Teams

Data governance teams have the key role and responsibility of formulating the basic framework of the enterprise's privacy governance program. This requires the team to be strategically placed in a manner that can effectively ensure collaboration across every part or department of the enterprise. Here are some tips and best practices in data governance that data-intelligent organizations adhere to:

- The first step in framing a governance program is to analyze the different roles, responsibilities, business terms, and data domains present in the enterprise to determine whether the program should be centralized or decentralized.
- Understand how business analysts use data, the processes surrounding it, and the flow of information within and outside the enterprise, in order to identify the compliance obligations the enterprise must follow and to develop a governance plan.
- Define control measures, such as automating workflow processes, apply them to governance structures and data domains, and report on progress.
- Centralize data using data governance technology. Useful features to prioritize include a business glossary, data-quality tooling, and clear demarcation of the responsibilities of the people involved in the data process. It is important to use technology with a user-friendly, reliable interface that simplifies and improves the efficiency of the entire process.
- Maintaining consistent communication and setting up metrics to measure data governance goals will improve the team's efficiency in executing its compliance plans.

Nowadays, almost all businesses collect and manage personal information about their customers. Data privacy risks increase operational inefficiencies, attract intervention by regulators, and result in financial penalties and an irrecoverable loss of consumer trust. Cyber security and privacy concerns mandate that data be protected. Various privacy and data governance laws, such as HIPAA, FCRA, FERPA, GLBA, ECPA, COPPA, and VPPA, have been enacted in the United States to address such data privacy risks. Data governance ensures that the data collected is kept confidential, secure, and compliant. Every enterprise needs to increase governance investments to reduce the risk of breach.
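As a concrete illustration of the access-rights layer referenced above, here is a minimal sketch of "who can take what actions with what information" encoded as a role-to-permission map per data domain. The roles echo the article's (owner, steward, user), but the domain, permissions, and structure are invented for illustration; a real implementation would live in an access-management system, not a dictionary.

```python
# Sketch: a per-domain role-to-permission map. Roles mirror the article;
# the domain name, permissions, and structure are illustrative assumptions.
POLICY = {
    "customer_data": {
        "data_owner":   {"read", "write", "grant"},  # decides access rights
        "data_steward": {"read", "write"},           # maintains quality
        "data_user":    {"read"},                    # follows the policy
    },
}

def is_allowed(role: str, domain: str, action: str) -> bool:
    """Check whether a role may perform an action on a data domain."""
    return action in POLICY.get(domain, {}).get(role, set())

assert is_allowed("data_steward", "customer_data", "write")
assert not is_allowed("data_user", "customer_data", "write")  # escalate to owner
```

Even this toy version shows why the role definitions matter: once responsibilities are explicit, access checks, audit logs, and compliance reporting can all be automated against the same policy.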
It is mandatory to build a strong foundation for data governance to implement and proactively address changes in regulation as they emerge and evolve. In the wake of the pandemic, we’ve become more digitally connected, thus more...
<urn:uuid:c0c711f8-bc7f-40c2-8e4c-5de854fac26a>
CC-MAIN-2022-40
https://secuvy.ai/blog/privacy-for-data-governance-teams/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00359.warc.gz
en
0.916334
1,359
3.234375
3
As we now approach the mid-way mark of the 2020-2021 school year, distance learning remains an instructional delivery cornerstone for most schools in one way or another. It's around this halfway point that it feels like the right time to check in and ask key questions like, 'How are we actually doing? Are students effectively learning and engaging online? After spending more time online, are students – and their data – adequately being protected?'

To try and answer these questions, as well as gain a better understanding of the realities of distance learning, we decided to ask some of those on the front lines: the parents, who may be struggling to balance remote work and online learning, or to equip their students with everything they need to learn in a virtual setting.

In a survey of adults with children at home, 3 in 4 said at least one child was engaging in some form of distance learning. Of those, one in three are using a school-issued device to learn from home, while another third of respondents said their child was relying on a personal laptop.

While many parents said the devices being used for online learning were equipped with security tools like anti-virus (AV) and firewalls for protection, only 35% of responding parents reported feeling very confident that these controls actually keep student devices and data safe. And understandably so, as we know that such security tools are only effective when they are installed, fully up-to-date, and have not been tampered with. Our data shows that less than 70 percent of education devices, on average, have an AV app that is healthy and compliant.

This is made even more troubling by the fact that 65 percent of parents reported they either aren't fully confident in their ability to recognize a phishing email or, worse, don't know what a phishing email is. Schools and students are prime targets for ransomware attacks, with phishing being the most popular and widely used attack vector. And while school administrators are working to inform parents and students of security updates and potential vulnerabilities, only 1 in 4 parent respondents consider the communication coming from their school to be enough.

Read: 4 Ways to Protect Your School from Ransomware

Before COVID-19, most school-held devices were used for some classes, but not others. In today's distance learning environment, every class requires a laptop… and they're being heavily used. Absolute's own data shows there has been a 39 percent increase in overall active daily hours since the start of the pandemic, and 70 percent of parents surveyed say their students are on their devices 4 or more hours per day.

But are students using the device for online learning? 85 percent of parents say yes, students use their devices for online learning… at least most of the time. Absolute's insights show 25% of student web usage is dedicated specifically to engaging with education and online learning tools, while 30% is spent on entertainment and online video sites.* (*Note: this includes YouTube, which is used for both educational and entertainment purposes.)

Duarte Unified School District in Duarte, California wanted to ensure their students were getting the most out of their technology tools, so they turned to Absolute Web Usage to monitor, manage, and measure their distance learning program. Read about how they were able to demonstrate ROI on technology investments and gain visibility into device, web, and application usage in this case study.
As a leader in Endpoint Resilience, Absolute can help enable smarter, more secure learning. We hold the unique, privileged position of being embedded in the firmware of devices from more than 25 leading endpoint device manufacturers, which lets us maintain an undeletable tether to every device. This digital tether, built on our patented Persistence technology, allows our customers to see all of their devices from a single pane of glass, remotely query and remediate them at scale, and even extend the power of self-healing to their mission-critical security applications. Learn more about how Absolute empowers education organizations to manage and secure their devices and data with a persistent, undeletable connection on our website.
<urn:uuid:087600c3-5929-4c78-b5c3-885079bb7a61>
CC-MAIN-2022-40
https://www.absolute.com/blog/the-parents-perspective-insights-into-distance-learning-and-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00359.warc.gz
en
0.964667
847
2.78125
3
Cybersecurity 101: What is Cybersecurity? Definition and examples

Cybersecurity is the armor for your business in the digital era, meant to protect your organization's data from attack. Since unauthorized access can be attempted both externally and from within an organization, cybersecurity is vital for protecting not just data, but also computers, software programs, and networks from attack and damage. Cybersecurity is a combination of structures, technologies, processes, and everyday practices. It is geared at protecting confidentiality and data integrity while ensuring that data remains safely accessible to genuine stakeholders.

Examples of cybersecurity measures against specific modes of attack

Technology never sleeps, and threats continue to evolve. Organizations need to continually up their game when it comes to protecting themselves in such a dynamic and malicious threat landscape. Listed below are a few crucial software categories and solutions that organizations adopt to arm themselves against cyberattacks:

- Firewalls: Firewalls comprise both software and hardware. They are designed to keep systems safe from attackers who try to access them over external as well as internal communication links.
- Web proxy protection: Malicious code can be hidden in pop-up windows or in programs that log usernames and passwords with the intention of fraud. Organizations can adopt web proxy protection solutions to shield their data from such malware and spyware.
- Anti-spam software: Since email is a common (and official) mode of communication within an organization, anti-spam software is embedded in email systems to keep them free of messages that may carry viruses or malware.
- Anti-phishing software: Web hosts protect their digital assets by means of anti-phishing software. Companies should also install anti-spyware, anti-virus, and anti-malware software on every device belonging to them, as well as on employee devices that connect to the company network.
- Managed detection and response (MDR): In the context of remote operations, many of the measures above merely react to attacks without remediating them, and they cannot prevent future attacks of the unknown variety. This is where managed detection and response comes into the picture as a viable solution for a constantly evolving threat landscape.

Standard frameworks and functions to assess cybersecurity risks and adopt best practices

The framework here describes the best possible way of preempting, responding to, resolving, and preventing cyberattacks in today's world.

- Identify potential risks, both in terms of the industry you operate in and your business unit in general. Risks can range from something as simple as clicking on a phishing link to something so complex that it goes unnoticed for months, if not longer.
- Protect your organization at both the systems and the network level. Today, when people work over multiple home networks, solutions such as BluSapphire help prevent attacks even in a remote environment.
- Detect attacks before they happen. In today's digital landscape, this isn't possible using predictive analytics alone; we need to harness the power of machine learning and AI to understand how the landscape evolves, identify potential threat agents, and prevent attacks. Indeed, the best cybersecurity solution is one that does this and shares a report on the triage measures it has taken, as opposed to siloed methods that detect at best and do nothing thereafter at worst. (A toy illustration of one detection heuristic follows this list.)
- Respond immediately when an attack does happen. Switch to a solution that can offer response and remediation in seconds, as opposed to the several days or months that are the current industry standard.
- Recover quickly: save sensitive data, protect all endpoints, and triage the attack to contain it and prevent further damage, at all times of day.
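To make the "Detect" idea concrete, here is a minimal, hedged sketch of the kind of heuristic an email gateway might apply to a link before a user clicks it. Everything in it, including the indicator list, the weights, and the `phishing_score` function, is an invented illustration rather than the API of any real product; production systems rely on machine-learning models and threat-intelligence feeds instead.

```python
import re
from urllib.parse import urlparse

# Toy indicators, invented for illustration only.
SUSPICIOUS_TLDS = {".zip", ".xyz", ".top"}
KEYWORD_PATTERN = re.compile(r"(verify|password|invoice|urgent)", re.I)

def phishing_score(url: str) -> int:
    """Return a naive risk score for a URL; higher means more suspicious."""
    host = urlparse(url).hostname or ""
    score = 0
    if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
        score += 2                                  # odd top-level domain
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host):
        score += 3                                  # raw IP instead of a name
    if host.count(".") >= 4:
        score += 1                                  # deeply nested subdomains
    if KEYWORD_PATTERN.search(url):
        score += 1                                  # credential-bait keywords
    return score

if __name__ == "__main__":
    for u in ["https://203.0.113.7/verify-password",
              "https://example.com/newsletter"]:
        print(u, "->", phishing_score(u))
```

Heuristics like this one generate false positives and miss novel lures, which is exactly the article's larger point: layered tooling plus user awareness beats any single check.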
Why should you care about cybersecurity in 2021?

Because COVID-19 wasn't the only pandemic rampant in 2020: the cybercrime pandemic took on a life of its own, too. According to a research study conducted by Deep Instinct, hundreds of millions of cyberattacks were attempted every day throughout 2020. Malware increased by a whopping 358% and ransomware by 435% compared to 2019. These are numbers that should make any organization sit up and take notice.

Over 100,000 malicious websites and 10,000 malicious files crop up every single day. Also, 87% of organizations have experienced an attempted strike on a weak spot that was already known to them, and in 46% of organizations, at least one employee downloaded a malicious mobile application.

What are the most common cyber threats your business faces?

Trends in 2021 highlight three cyber threats in particular:

- A wider attack surface. Thanks to the coronavirus, the target has only gotten bigger, with millions forced to work from home. As organizations ramped up their own digital transformation initiatives in 2020 to adapt to the changed circumstances, many more people connected to work interfaces from personal computing devices, accessing data in private and public cloud data centers and across IT and utility infrastructures, broadening the attack surface.
- Ransomware. Ransomware is a cyber weapon of choice for targeting organizations precisely because the payoff is so big: the average payout attributed to ransomware has reached a dizzying figure of nearly $234,000 per event. Typically, a piece of malware is hidden, disguised, or embedded inside a different kind of document. When unwittingly executed by the target user, the malware can encrypt the organization's data set with a secret 2048-bit encryption key, or communicate with a control server and await instructions to do further damage. Either way, the data is effectively locked up, and even the backup data is rendered inaccessible, until the company pays the ransom as instructed. Thwarting ransomware requires entrenching multiple levels of cybersecurity policies and measures across the organization, including anti-malware software, strong passwords, and secure VPNs, Wi-Fi, and routers. (A simple defensive detection sketch follows this list.)
- Critical infrastructure attacks. Critical infrastructure is in the crosshairs via Industrial Control Systems (ICS) and the convergence of Operational Technology and Information Technology, particularly on legacy systems that were never designed to withstand cyberattacks. The IoT-based supply chain for the essentials and non-essentials that housebound individuals shop for online is also an inviting target, as it presents attackers with multiple entry points.
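On the defensive side, one early-warning signal of ransomware is a sudden burst of files being rewritten with unfamiliar extensions. The sketch below illustrates that idea only; the extension list, polling interval, and alert threshold are invented for this example, and real endpoint detection products use far richer signals.

```python
import os
import time
from collections import deque

# Invented values for illustration; real EDR tools use many more signals.
SUSPECT_EXTENSIONS = {".locked", ".enc", ".crypt"}
WINDOW_SECONDS = 10
ALERT_THRESHOLD = 20          # suspect files appearing within the window

def scan_once(root: str) -> int:
    """Count files under `root` carrying extensions typical of encryption."""
    count = 0
    for _dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() in SUSPECT_EXTENSIONS:
                count += 1
    return count

def watch(root: str) -> None:
    """Poll the tree; alert if suspect files appear faster than the threshold."""
    history = deque()         # (timestamp, cumulative suspect-file count)
    while True:
        now = time.time()
        history.append((now, scan_once(root)))
        # Drop samples that have aged out of the window.
        while history and now - history[0][0] > WINDOW_SECONDS:
            history.popleft()
        if history[-1][1] - history[0][1] >= ALERT_THRESHOLD:
            print("ALERT: possible mass-encryption activity under", root)
        time.sleep(2)
```

A real deployment would also watch for entropy spikes in file contents and for deletion of backup shadow copies, but even this toy shows why rapid detection, not just prevention, belongs in the stack.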
How are cyber threats classified?

There are several approaches to classifying cyber threats, with new models being proposed all the time. A few are described below.

Approaches to cyber threat classification often fall into two main classes:

a) Classification methods based on attack techniques
b) Classification methods based on threat class or impact

In one approach, a three-dimensional model subdivides the threat space into subspaces along three orthogonal dimensions labeled motivation, localization, and agent:

- Threat motivation represents the cause behind the threat's creation and is organized into two classes: deliberate and accidental.
- Threat localization represents the origin of the threat, either internal or external.
- The threat agent is the entity that lays the threat on a specific asset of the system; it is represented by three classes: human, technological, and force majeure.

Another model is STRIDE, which is applied at the network, host, and application levels. The STRIDE acronym is formed from the first letter of each of the following categories: Spoofing Identity, Tampering with Data, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. The STRIDE model allows known threats to be characterized according to the motivation of the attacker. It is a goal-based approach that attempts to get inside the mind of the attacker by rating the threats raised against the organization.
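The two models above translate naturally into a small data structure. The sketch below encodes them in Python so that each recorded threat carries both a STRIDE category and the three-dimensional motivation/localization/agent labels; the class and field names are our own, chosen to mirror the text, not part of any standard library.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stride(Enum):
    SPOOFING_IDENTITY = auto()
    TAMPERING_WITH_DATA = auto()
    REPUDIATION = auto()
    INFORMATION_DISCLOSURE = auto()
    DENIAL_OF_SERVICE = auto()
    ELEVATION_OF_PRIVILEGE = auto()

class Motivation(Enum):      # why the threat exists
    DELIBERATE = auto()
    ACCIDENTAL = auto()

class Localization(Enum):    # where the threat originates
    INTERNAL = auto()
    EXTERNAL = auto()

class Agent(Enum):           # who or what carries the threat
    HUMAN = auto()
    TECHNOLOGICAL = auto()
    FORCE_MAJEURE = auto()

@dataclass(frozen=True)
class ThreatRecord:
    """One classified threat, tagged under both models at once."""
    description: str
    stride: Stride
    motivation: Motivation
    localization: Localization
    agent: Agent

# Example: a phishing email that steals a login.
phish = ThreatRecord(
    description="Credential theft via phishing email",
    stride=Stride.SPOOFING_IDENTITY,
    motivation=Motivation.DELIBERATE,
    localization=Localization.EXTERNAL,
    agent=Agent.HUMAN,
)
print(phish.stride.name, phish.motivation.name, phish.agent.name)
```

Tagging incidents this way makes it easy to report, for instance, how many deliberate, external, human-driven threats a team saw in a quarter, which is the practical payoff of any classification scheme.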
What can I do to identify a cyber attack?

The key to mounting a response and initiating successful damage control after a cyberattack is rapid threat detection. For this, the appropriate strategies and mechanisms must already be in place to raise timely red flags. In addition, every employee must be coached on the shared responsibility of cyber-surveillance and taught to notice the following signs:

- Computer processing has slowed down immensely
- Storage space is suddenly full, and new programs or add-ons they did not install have appeared
- Programs are opening and closing automatically
- Their security software has somehow been disabled
- Pop-ups appear rampantly
- The browser automatically redirects to other sites
- They are locked out of their system, or their password has been reset automatically

What should I do when a cyber-attack happens?

Every individual who notices any of the above signs must report them to the IT cell immediately so it can deploy security measures. Businesses are often aligned with cybersecurity partners who have the requisite know-how, tools, and solutions to contain a cyberattack.

What happens to businesses hit by cyber threats?

Depending on how devastating the cyberattack is, the fallout can be tremendous, costing the company dearly in both time and money. According to IBM, companies take about 197 days to identify a breach and another 69 days to contain it. Companies that contain a breach in 30 days or less can save more than a million dollars compared to those that take longer.

What are the current challenges in implementing cybersecurity solutions?

In an ironic twist, cybercriminals have started adopting corporate best practices to focus their attacks, selling or licensing hacking tools and commodities; ransomware is even offered as a service. This established cybercrime ecosystem makes it increasingly difficult for companies to protect themselves from cyberattack. Other challenges include:

- Cloud vulnerability, i.e. threats to sensitive company data stored in third-party cloud applications.
- AI fuzzing, which cyber attackers can use to initiate, automate, and ramp up zero-day attacks.
- Backdoors: a hacker may insert malicious instructions, mock-ups, or backdoors into a system to weaken it.

How to solve these challenges

A company's IT wing must set its security software to automatically check for updates in real time, or at least nightly, and schedule a complete scan after each update; for example, running anti-spyware at 2:30 AM and a full system scan around 3 AM. This is possible for companies with a continuous high-speed internet connection; the exact times may vary, but these daily activities are a necessity. (A sketch of what such a nightly job might look like appears at the end of this article.)

Logs must be backed up and scanned regularly for red flags. They must also be retained for at least a year, depending on the type of information they hold, or longer if necessary.

Companies must patch frequently, i.e. install newly available code updates in existing software to address new security vulnerabilities, install new drivers, or fix stability issues.

Last but not least, companies must carry insurance that covers not only the cost of fixing the damage done, but also investigation, rebuilding, and restoration activities. It should also cover productivity loss, as a data breach will indefinitely stall operations.
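What might that nightly routine look like in practice? The sketch below is one hypothetical shape for a nightly maintenance script, meant to be launched by the operating system's scheduler (cron on Linux, Task Scheduler on Windows). The scanner paths shown are stand-ins we invented for illustration; substitute whatever update and scan commands your security product actually documents.

```python
import shutil
import subprocess
import sys
from datetime import date
from pathlib import Path

# Stand-in commands; replace with your vendor's documented CLI.
UPDATE_CMD = ["/opt/scanner/update-definitions"]   # hypothetical
SCAN_CMD = ["/opt/scanner/full-scan", "/"]         # hypothetical
LOG_DIR = Path("/var/log/scanner")
ARCHIVE = Path("/backup/scan-logs")

def run(cmd):
    """Run one step; stop the job on failure so failures get noticed."""
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"step failed: {' '.join(cmd)}")

def archive_logs():
    """Keep dated copies of scan logs for the year-plus retention window."""
    dest = ARCHIVE / date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    for log in LOG_DIR.glob("*.log"):
        shutil.copy2(log, dest / log.name)

if __name__ == "__main__":
    run(UPDATE_CMD)    # e.g. 2:00 AM: refresh definitions first
    run(SCAN_CMD)      # e.g. 2:30-3:00 AM: full system scan
    archive_logs()     # preserve evidence for later red-flag review
```

Running the update step before the scan matters: a full scan against stale definitions gives false confidence, which is the same point the article makes about out-of-date AV tools.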
<urn:uuid:c67bc278-d507-498f-92d4-4a9e8ea6976f>
CC-MAIN-2022-40
https://www.blusapphire.com/blog/the-definitive-guide-to-cybersecurity-in-2021
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00359.warc.gz
en
0.946287
2,310
3.375
3
No matter where your company operates or how large it is, having a Disaster Recovery Plan (DRP) and a Business Continuity Plan (BCP) is an essential component of maintaining and growing a successful business.

Disaster recovery (DR) covers the processes, policies, and procedures for preparing for the recovery or continuation of the technology infrastructure critical to an organization after a natural or human-induced disaster. DR is a subset of business continuity. Disasters fall into one of two categories:

1) Natural disasters such as hurricanes, earthquakes, tornadoes, or floods
2) Man-made disasters such as terrorism, IT systems failure, or even malfunctioning fire sprinklers

A disaster of either category can prove devastating if your company has not prepared a contingency plan for continued operations in the face of such crises. According to the US Department of Labor, over 60% of small businesses close within two years of a major disaster. Current news events show just how catastrophic a natural disaster can be for an entire nation, or even the entire world, and corporate managers must think about protecting their business in the wake of disaster. Just as people must have emergency plans to protect their families, companies must do the same to maintain business continuity.

With businesses relying on IT infrastructure and staff to run everyday operations, it has become critical for organizations to create both a DR plan and a BC plan. A Business Continuity Plan (BCP) includes the strategy and planning for personnel, facilities, communications, and reputation protection.

The key to disaster preparedness and business continuity is preserving, and maintaining access to, mission-critical information in order to minimize business disruption. Online document management software, also referred to as online enterprise content management, can help your organization achieve this objective. It will store and protect each department's mission-critical information along with the DR and BC plans themselves, not only when an event occurs but also during the creation, collaboration, versioning, review, and final approval of those documents.

Using a document management system for business continuity and recovery offers these additional advantages:

1) Secure offsite information storage and retrieval via servers located in protected data facilities, easily accessible from anywhere in the world via the internet.
2) 24/7 operations providing uninterrupted data availability around the clock.
3) Automated backups, removing the burden on, and reliance upon, IT resources.
4) Process and security controls that keep your information safe and readily accessible while your company recovers.

To learn more about how online document management can give your business added peace of mind, please visit DocuVantage and sign up for our Four Steps to On Demand Document Management.
<urn:uuid:c86a3d97-f53f-4d34-9649-a52a0a39f59d>
CC-MAIN-2022-40
https://www.docuvantage.com/blog/document-management/bid/55525/disaster-recovery-business-continuity-as-a-function-of-document-management
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00359.warc.gz
en
0.922418
538
2.75
3