The majority of US consumers would happily share their personal data with police or healthcare providers via IoT devices, but there are limits to what they will share and with whom, according to a survey from Unisys Security. The results were released in the 2017 Unisys Security Index, a global study that polled 1,000 adults in the US during April 2017 to gauge their attitudes on a wide range of security issues. In general, Americans support IoT applications that promote safety and convenience in areas such as law enforcement and healthcare, the survey found. Consumers said they also see potential value in areas such as air travel and banking.

Who? What? Why?

Their enthusiasm depends on a number of factors, however, including why the data is being collected, by whom, and how it will be used. For example, 84 percent of Americans surveyed said they support the use of a button on their smartphone or smartwatch to notify police of their location in an emergency. There is a thin line marking how far they are prepared to share, though: only 32 percent said they support police being able to monitor data on, say, a fitness tracker at any time in order to determine their location. Similarly, the majority of respondents (78 percent) support the ability of medical devices such as pacemakers or blood sugar sensors to immediately transmit significant changes in health conditions to a patient's doctor. By contrast, only 36 percent support health insurers accessing fitness tracker data to determine a premium or reward customers for good behavior, the survey reveals.

Barriers to acceptance

Barriers to IoT acceptance arise when consumers are unable to see a compelling need for an organization to obtain their data, Unisys said. As evidence of this, nearly half (49 percent) of those who did not support using a smartwatch app from a bank or credit card company to make payments said they were most worried about the security of those transactions. Specifically, 90 percent of those respondents are concerned about hackers or malware gaining access to their financial transactions. The same is true of medical devices, with 78 percent of consumers reporting some level of concern and 51 percent "very" or "extremely" concerned about hackers or malicious intruders gaining access to IoT defibrillators, pacemakers or insulin pumps.

Convenience versus control

"Americans want to obtain the efficiencies and security benefits of the Internet of Things (IoT), but not at the expense of losing control of their personal data," said Bill Searcy, vice president, Justice, Law Enforcement and Border Security for Unisys and a former FBI deputy assistant director. "For the IoT to succeed, governments, healthcare organizations, financial institutions and other enterprises must take steps to assure the public that personal data collected from IoT devices will be secure and that privacy will be protected."
If you spend much time with teenagers, you know they use a special version of the English language. A few months ago, I was introduced to the term "on fleek." Personally, I never liked it, but by the time I worked up enough courage to use the term in a conversation, I was informed, "Alyssa, 'on fleek' is so last year. Now, we say 'lit'." (Rolling my eyes here.) While both terms can be used to describe something "awesome," I tell you this to emphasize how difficult it can be to understand another language.

In September 2016, the FFIEC released an update to their IT Examination Handbook, Information Security Booklet. As I began to read through the Booklet, at first it seemed like the FFIEC had written something completely new. From risk appetite, to information security officers, to bizarre buzzwords (e.g., taxonomy, security culture, event trees, escalation, middleware, etc.), the Booklet seemed to be written in a completely different language. But was it a different language? Well, yes and no. Yes, the FFIEC did use new and complicated terminology, but no, the Booklet is not impossible to understand. It just needs to be interpreted.

The concept of interpretation is simple: absorb a seemingly new concept, apply your existing knowledge, and present the idea in a different way. While you may not feel like you have the skills necessary to interpret guidance, I am here today to tell you: yes, you do. Interpretation never takes place in a vacuum. You have history, experience, context clues, and connections with other professionals to guide your thinking. All of these resources make you the perfect person to interpret the Booklet for yourself.

Let's use an example. In the Booklet, the FFIEC included a paragraph in section II.C.13(e) titled "Rogue or Shadow IT." While the title certainly sounds interesting, it's not commonplace terminology. If an examiner asked you, "Have you addressed Rogue or Shadow IT in your policies?" you might be inclined to say no, if you don't understand the term. When confronted with an unfamiliar expression, the first thing you should do is ask, "What is it?" and resolve to find out. Use your resources. Google it. Do whatever you need to do to find out. In this case, the FFIEC gave us a good working definition and some examples in the Booklet. In short, Shadow IT is "unsanctioned or unapproved IT resources (e.g., online storage services, unapproved mobile device applications, and unapproved devices)."

Once you crack the cipher, consider what the author wants you to do. Do you know of any personal "online storage services" your employees may use? You should; and if you don't, research some more. For the purpose of continuing this discussion, "online storage services" includes things like Dropbox or OneDrive. If only there were a place you could instruct employees not to use unapproved bank resources. Lucky for you, there is such a place, and regardless of whether you're a Teller or a CEO, you should be familiar with it: the Employee Acceptable Use Policy. Look at your policy. You likely already have some policy language about what employees can and cannot use with regard to Shadow IT, even if you didn't know the proper name for it.

So, before you begin to write up several new policies, first try interpreting the Booklet. If you can take a new concept and apply what you know, topics that appeared foreign often become familiar. More than that, the ability to interpret shows you understand the ideas presented in the Booklet, and not just the language itself. Did you know the new Booklet uses the words "should" and "may" 522 times? This leaves a lot of room for interpretation, if you ask me. As long as you do not directly disregard guidance or regulation, don't be afraid to interpret. Ask questions. Find answers. Enhance and defend your program. Think outside the box, and with a bit of confidence, your Information Security Program is going to be "lit."
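That "should"/"may" tally is easy to reproduce yourself. Here is a quick sketch (the sample text below is invented; in practice you would load the Booklet's full text from a file):

```python
import re

def count_hedging_terms(text, terms=("should", "may")):
    """Count whole-word occurrences of hedging terms, case-insensitively."""
    counts = {}
    for term in terms:
        # \b ensures whole-word matches only ("may" but not "maybe")
        counts[term] = len(re.findall(rf"\b{term}\b", text, flags=re.IGNORECASE))
    return counts

sample = "Institutions should review policies. Management may delegate. Maybe later."
print(count_hedging_terms(sample))  # {'should': 1, 'may': 1}
```

Run against the real Booklet text, a count like this is a fast way to see how much of the guidance is discretionary rather than mandatory.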
Alyssa Pugh is a Security+ certified Tandem Software Support specialist for CoNetrix. Tandem is a security and compliance software suite designed to help financial institutions develop and maintain their Information Security Programs. To learn about how CoNetrix can help you, visit our website at www.CoNetrix.com or email email@example.com.
RAID is a technology that is used to increase the performance and/or reliability of data storage. The abbreviation stands for Redundant Array of Inexpensive Disks. A RAID system consists of two or more drives working in parallel. These can be hard disks, but there is a trend to also use the technology for solid state drives. There are different RAID levels, each optimized for a specific situation. These are not standardized by an industry group or standardization committee. Below is an overview of the most popular RAID levels.

RAID level 0 – Striping

In a RAID 0 system, data are split up into blocks that get written across all the drives in the array. By using multiple disks (at least 2) at the same time, this offers superior I/O performance. This performance can be enhanced further by using multiple controllers, ideally one controller per disk.

Advantages:
- RAID 0 offers great performance, both in read and write operations. There is no overhead caused by parity controls.
- All storage capacity is used; there is no disk overhead.
- The technology is easy to implement.

Disadvantages:
- RAID 0 is not fault-tolerant. If one disk fails, all data in the RAID 0 array are lost. It should not be used on mission-critical systems.

RAID 0 is ideal for non-critical storage of data that have to be read/written at high speed, such as on a Photoshop image retouching station.

RAID level 1 – Mirroring

Data are stored twice by writing them to both the data disk (or set of data disks) and a mirror disk (or set of disks). If a disk fails, the controller uses either the data drive or the mirror drive for data recovery and continues operation. You need at least 2 disks for a RAID 1 array. RAID 1 systems are often combined with RAID 0 to improve performance; such a system is sometimes referred to by the combined number: a RAID 10 system.

Advantages:
- RAID 1 offers excellent read speed and a write speed that is comparable to that of a single disk.
- In case a disk fails, data do not have to be rebuilt; they just have to be copied to the replacement disk.
- RAID 1 is a very simple technology.

Disadvantages:
- The main disadvantage is that the effective storage capacity is only half of the total disk capacity, because all data get written twice.
- Software RAID 1 solutions do not always allow a hot swap of a failed disk (meaning it cannot be replaced while the server keeps running). Ideally a hardware controller is used.

RAID 1 is ideal for mission-critical storage, for instance for accounting systems. It is also suitable for small servers in which only two disks will be used.

RAID level 3

On RAID 3 systems, data blocks are subdivided (striped) and written in parallel on two or more drives. An additional drive stores parity information. You need at least 3 disks for a RAID 3 array. Since parity is used, a RAID 3 stripe set can withstand a single disk failure without losing data or access to data.

Advantages:
- RAID 3 provides high throughput (both read and write) for large data transfers.
- Disk failures do not significantly slow down throughput.

Disadvantages:
- This technology is fairly complex and too resource-intensive to be done in software.
- Performance is slower for random, small I/O operations.

RAID 3 is not that common in prepress.

RAID level 5

RAID 5 is the most common secure RAID level. It is similar to RAID 3 except that data are transferred to disks by independent read and write operations (not in parallel). The data chunks that are written are also larger. Instead of a dedicated parity disk, parity information is spread across all the drives. You need at least 3 disks for a RAID 5 array. A RAID 5 array can withstand a single disk failure without losing data or access to data. Although RAID 5 can be achieved in software, a hardware controller is recommended. Often extra cache memory is used on these controllers to improve the write performance.

Advantages:
- Read data transactions are very fast, while write data transactions are somewhat slower (due to the parity that has to be calculated).

Disadvantages:
- Disk failures have an effect on throughput, although this is still acceptable.
- Like RAID 3, this is complex technology.

RAID 5 is a good all-round system that combines efficient storage with excellent security and decent performance. It is ideal for file and application servers.

RAID level 10 – Combining RAID 0 & RAID 1

RAID 10 combines the advantages (and disadvantages) of RAID 0 and RAID 1 in one single system. It provides security by mirroring all data on a secondary set of disks, while using striping across each set of disks to speed up data transfers.

What about RAID levels 2, 4, 6 and 7?

These levels do exist but are not that common, at least not in prepress environments. This is just a simple introduction to RAID systems; you can find more in-depth information on the pages of Wikipedia or ACNC.

RAID is no substitute for back-up!

All RAID levels except RAID 0 offer protection from a single drive failure. A RAID 6 system even survives two disks dying simultaneously. For complete security, you do still need to back up the data from a RAID system:
- That back-up will come in handy if all drives fail simultaneously because of a power spike.
- It is a safeguard if the storage system gets stolen.
- Back-ups can be kept off-site at a different location. This can come in handy if a natural disaster or fire destroys your workplace.
- The most important reason to back up multiple generations of data is user error. If someone accidentally deletes some important data and this goes unnoticed for several hours, days or weeks, a good set of back-ups ensures you can still retrieve those files.
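The single-disk fault tolerance of RAID 3 and RAID 5 described above rests on XOR parity: the parity block is the XOR of the data blocks in a stripe, so any one missing block can be recomputed from the survivors. A toy illustration in Python (this is conceptual, not how a real controller is implemented):

```python
def xor_blocks(*blocks):
    """XOR equal-length byte blocks together (RAID parity arithmetic)."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# A three-disk RAID 5 style stripe: two data blocks plus one parity block.
data1 = b"\x01\x02\x03\x04"
data2 = b"\x10\x20\x30\x40"
parity = xor_blocks(data1, data2)

# If the disk holding data2 fails, its contents are rebuilt from the survivors.
rebuilt = xor_blocks(data1, parity)
assert rebuilt == data2
```

The same arithmetic explains why RAID 5 writes are slower than reads: every write must also update the parity block.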
Presented by FedTech

Ultrasound and remote video tools are critical to keeping International Space Station crew members healthy.

Much of what makes up a good telehealth program is the ability to deliver high-quality care advice to individuals from a distance. That's especially necessary for astronauts on board the International Space Station, since the closest medical doctor is a rocket-ship ride away, back on Earth. "Telemedicine really is our only resource," said Shannan Moynihan, deputy chief medical officer for the NASA Lyndon B. Johnson Space Center, during a March 8 session at HIMSS 2018 in Las Vegas. "Unfortunately, we don't get to go make house calls. I've tried that — it didn't fly."

Moynihan and Michelle Frieling, the department manager for flight and medical operations at KBRwyle, dissected the role of telehealth in providing treatment for astronauts, particularly noting the importance of using ultrasound technology and a video connection. "Ultrasound is a great modality for us in orbit," Moynihan said. "We don't have an X-ray machine, an MRI or anything like that, so this is our go-to piece of hardware when we're trying to get some data."

Color-Coding Helps to Train Astronauts

The crew on board is trained on a modified, color-coded ultrasound machine, but receives only about 40 to 60 hours of instruction, beginning as early as two years prior to launch, according to Frieling. The color coding in particular helps the astronauts orient themselves and get to the right button pushes faster, since it represents a universal language, she added. "Remember, you might be training a fighter pilot, you might be training an engineer, because they're going to be the ones operating this," she said. "They might be American, they might be Russian, Japanese, European or Canadian. So being able to tell somebody 'I need a purple, two, up arrow' or 'I need you to push the green three button' is much faster than expecting them to fully understand the entire keyboard."

In addition to video equipment that helps send information to doctors on the ground who guide the crew, the astronauts are also equipped with an IP phone. "A lot of times I'll just get a cellphone call and it'll be from the crew on orbit," Moynihan said. "When we first got this capability up, I wouldn't just answer it because I was screening my phone calls. Then I realized that there are certain area codes you need to answer."

All video is also kept private and routed to a secure backroom for only flight surgeons, remote guiders and, in some instances, researchers to view. "That's not video we want to go out to the public at large via NASA TV, or even to the greater Mission Control Center community," Frieling said. "We take protecting privacy very seriously."

Medicine Meets Concerns Around Microgravity

Adaptation to microgravity is one of the biggest concerns, both from a health and a training perspective. For instance, while a headache is a common complaint heard from astronauts, the cause of that headache might not be all that common, Moynihan said. "For us, we have to think about maybe … there's been some elevated CO2 exposure, and that could be the cause of the headache they have," she said. "If it's during their first couple of weeks on orbit, perhaps they have some fluid shifts going on that are causing the headache. We're thinking about the common reasons, but also the not-so-common ones."

With regard to training, the crew must learn how to properly restrain themselves and handle tools in the new environment. "If I push against you in microgravity, you're going to go in the other direction," Frieling said. "Making sure people know … how to do the things they need to do, how much extra time it's going to take, is very important."

This content is made possible by FedTech. The editorial staff of Nextgov was not involved in its preparation.
The story of the Internet and its Things may seem as star-crossed a tale as any, but it does not need to be hopeless. Although security researchers Dennis Giese and Daniel Wegemer eventually managed to hack into the Xiaomi Mi Robot vacuum cleaner, their research shows that the device is much more secure than most other smart things. In their talk at Chaos Communication Congress 34, held recently in Leipzig, the researchers explained how the device's software works and which vulnerabilities they had to use to finally crack its protection.

Hacking the Mi Robot with tinfoil

When they started their research, Giese and Wegemer were amazed to find that the Xiaomi vacuum cleaner has more powerful hardware than many smartphones: it is equipped with three ARM processors, one of which is quad core. Sounds pretty promising, right? So, for starters, Giese and Wegemer tried several obvious attack vectors to hack the system.

First, they examined a unit to see if there was a way in through the vacuum cleaner's micro USB port. That was a dead end: Xiaomi has secured this connection with some kind of authentication. After that, the researchers took the Mi Robot apart and tried to find a serial port on its motherboard. This attempt was likewise unsuccessful.

Their second hacking method was network based. The researchers tried to scan the device's network ports, but all ports were closed. Sniffing network traffic didn't help, either; the robot's communications were encrypted. At this point, I'm already rather impressed: most other IoT devices would have been hacked by now, because their creators usually don't go this far in terms of security. Our recent research on how insecure connected devices are illustrates this perfectly.

However, let's get back to the Xiaomi Mi Robot. The researchers' next attempt was to attack the vacuum cleaner's hardware. Here, they finally succeeded — by using aluminum foil to short-circuit some of the tiny contacts connecting the processor to the motherboard, causing the processor to enter a special mode that allows reading and even writing flash memory directly through the USB connection. That's how Giese and Wegemer managed to obtain the Mi Robot firmware, reverse-engineer it, and, eventually, modify and upload it to the vacuum cleaner, thereby gaining full control over the unit.

Hacking the Mi Robot wirelessly

But cracking stuff open and hacking hardware is not nearly as cool as noninvasive hacks. After reverse-engineering the device's firmware, the researchers figured out how to hack into it using nothing more than Wi-Fi — and a couple of flaws in the firmware's updating mechanism.

Xiaomi has implemented a pretty good firmware-update procedure: new software arrives over an encrypted connection, and the firmware package is encrypted as well. However, to encrypt update packages, Xiaomi used a static password — "rockrobo" (don't use weak passwords, kids). That allowed the researchers to make a properly encrypted package containing their own rigged firmware. After that, they used the security key they obtained from Xiaomi's smartphone app to send a request to the vacuum cleaner to download and install new firmware — not from Xiaomi's cloud but from their own server. And that's how they hacked the device again, this time wirelessly.

Inside the Mi Robot's firmware

Examining the firmware, Giese and Wegemer learned a couple of interesting things about Xiaomi smart devices. First, the Mi Robot firmware is basically Ubuntu Linux, which is regularly and quickly patched. Second, it uses a different superuser password for each device; there's no master password that could be used to mass-hack a whole lot of vacuum cleaners at once. And third, the system runs a firewall that blocks all ports that could be used by hackers. Again, hats off to Xiaomi: by IoT standards, this is surprisingly good protection.

The researchers also learned something disappointing about the Mi Robot, however. The device collects and uploads to the Xiaomi cloud a lot of data — several megabytes per day. Along with reasonable things such as device operation telemetry, this data includes the names and passwords of the Wi-Fi networks the device connects to, and the maps of rooms it makes with its built-in lidar sensor. Even more disturbing, this data stays in the system forever, even after a factory reset. So if someone buys a used Xiaomi vacuum cleaner on eBay and roots it, they can easily obtain all of that information.

Concluding this post, it's worth emphasizing that both of the techniques Giese and Wegemer used enabled them to hack only their own devices. The first required physical access to the vacuum cleaner. As for the second, they had to obtain the security key to make an update request, and those keys are generated every time the device is paired with the mobile app. The security keys are unique, and it's not that easy to get them if you don't have access to the smartphone that is paired with the Xiaomi device you're going to hack.

All in all, it doesn't look like the Xiaomirai is nigh. Quite the contrary: the research shows that Xiaomi puts much more effort into security than most other smart device manufacturers do, and that is a hopeful sign for our connected future. Almost everything can be hacked, but if something takes a lot of effort to hack, it's less likely that criminals will bother trying — they are usually after easy money.
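The exact packaging format Xiaomi used is not reproduced here, but the general weakness of a single static password like "rockrobo" is easy to illustrate with a toy symmetric scheme: anyone who knows the password can both decrypt legitimate packages and produce validly encrypted packages of their own. (The scheme below is invented purely for illustration; it is not Xiaomi's actual encryption.)

```python
import hashlib
from itertools import count

def keystream(password: str, length: int) -> bytes:
    """Derive a keystream by hashing the password with a block counter."""
    out = bytearray()
    for i in count():
        out.extend(hashlib.sha256(f"{password}:{i}".encode()).digest())
        if len(out) >= length:
            return bytes(out[:length])

def xor_crypt(password: str, data: bytes) -> bytes:
    """Symmetric XOR 'encryption'; applying it twice round-trips the data."""
    ks = keystream(password, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

firmware = b"legitimate firmware image"
package = xor_crypt("rockrobo", firmware)

# Anyone who knows the static password can decrypt a real package...
assert xor_crypt("rockrobo", package) == firmware
# ...or forge a validly "encrypted" package of their own.
forged = xor_crypt("rockrobo", b"attacker-controlled firmware")
```

This is exactly why per-device keys (like the Mi Robot's per-device superuser passwords) are so much safer than a single shared secret: leaking one secret should never unlock every device.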
What is Network Access Control?

Network Access Control (NAC) brings together several different security techniques to provide a unified approach to network access, be it wired or wireless. Endpoints, such as corporate laptops and mobiles, are often deployed with anti-virus software, and users undergo an authentication process to access critical resources. NAC combines these technologies: as a device connects to the network, it is able to combine user authentication with device verification. In this way, NAC can be used to limit network resources to authorised users, and to ensure that devices connecting to the network meet certain requirements, such as having the latest anti-virus software, having no known vulnerabilities, and being corporate-owned rather than personal devices.

What you'll need

NAC requires the use of wired and wireless devices that support protocols such as 802.1X to enable encryption and authentication. This includes both the switching and wireless LAN equipment and the end-user devices. The policy is then defined centrally, which means a user obtains the same level of access regardless of where they connect to the network.
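The combination of user authentication and device verification described above can be sketched as a simple policy decision. This is a conceptual illustration only (the field names and outcomes are invented); real NAC products express such policies in their own configuration languages, typically driven by 802.1X/RADIUS results and posture agents:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    user_authenticated: bool   # e.g. the user passed 802.1X authentication
    av_up_to_date: bool        # posture check: anti-virus is current
    corporate_owned: bool      # asset-inventory check: not a personal device

def nac_decision(ep: Endpoint) -> str:
    """Toy NAC policy: full access only when every check passes."""
    if not ep.user_authenticated:
        return "deny"
    if not (ep.av_up_to_date and ep.corporate_owned):
        return "quarantine"    # e.g. place the device in a remediation VLAN
    return "allow"

print(nac_decision(Endpoint(True, True, True)))   # allow
print(nac_decision(Endpoint(True, False, True)))  # quarantine
```

Because the policy is defined centrally, the same decision applies whether the endpoint arrives via a wired switch port or a wireless access point.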
Between the infamous 9/11 attacks, the 2004 Madrid train bombings, and the 2015 Paris attacks, mass transit has long been considered a serious target for large-scale terrorist attacks. All over the world, there have been big pushes to enforce new types of regulations and rules in an attempt to curb the possibility of terrorist attacks. There have also been pushes in the use of terrorist screening measures, whether through more on-site human personnel or newly implemented terrorist screening technology. Many different measures have now been put in place to make mass transit safer and better protected against potential terrorist attacks. Read on to learn more!

Improved Number of Security Forces

The first and most obvious method for reducing possible terrorist attacks is to add more security forces, whether that is security guards or something similar, like air marshals or other military personnel for flights. These personnel would then have the ability to screen passengers along with their bags, although this does come at a cost. There are financial ramifications, since hiring more people always comes with a bigger price tag, but it can also reduce perceptions of the ease and convenience of transportation. Security checkpoints can themselves become points of vulnerability and create a general feeling of fear. Equipping security personnel with transportation security technology will make their job easier, since they won't have to worry about the time it takes to perform an inspection.

Analytics and Video Surveillance

Video-based security systems are also becoming much more intelligent every year. These days, such systems are capable of facial recognition to determine whether a person is allowed to be in a particular spot. This could be a great feature for prohibiting free movement in spaces where only transit employees are supposed to be. Setting up an intelligent vehicle occupant detection system in an airport's employee parking lot, for example, can help prevent terrorists from gaining access.

Systems Designed Just for Mass Transit

As more security measures are put in place to address terrorist screening in mass transportation, we will see more and more systems designed with transit in mind. Take Gatekeeper's own Automatic Train Undercarriage System, for example. Capable of scanning either passenger trains or cargo trains, the system is intended to provide high-quality scans of every vehicle so you can ensure there are no foreign objects or potentially unsafe modifications to the vehicles.

Groundbreaking Technologies with Gatekeeper

Gatekeeper Security's suite of intelligent optical technologies provides security personnel with the tools to detect today's threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to act. From automatic under-vehicle inspection systems and automatic license plate reader systems to an on-the-move automatic vehicle occupant identifier, we offer full 360-degree vehicle scanning to ensure any threat is found. Throughout 36 countries around the globe, Gatekeeper Security's technology is trusted to help protect critical infrastructure. Follow us on Facebook and LinkedIn for updates about our technology and company.
As the global focus shifts to a sustainable way of living, companies must adapt as consumers look for carbon neutral and sustainable options for their purchases. There is a growing urgency to reduce as many greenhouse gas emissions as possible which is reflected in Scopes 1, 2 and 3. The urgency comes as the race is on to reduce greenhouse gas emissions in an effort to meet the goal of keeping the global temperature increase below 2⁰c. Companies are responsible for the majority of global emissions which means they play an essential role in the long-term sustainability of our planet. Looking at emissions in each of the 3 Scopes can have a positive impact for any organisation, ranging from saving money to better working relations with suppliers. What are Scope 3 emissions? Scope 3 emissions are the greenhouse gases produced indirectly from a company’s activities upstream and downstream. Upstream is the term given to any activities that happen before the company performs its in-house operations, another great way of looking at it is the pre-production activities. Examples of the indirect upstream activities include the sourcing of raw materials and their transportation, employee commuting, business travel and the waste generated in operations. Downstream is the term given to any activities that happen after the company performs its in-house operations, which can also be classified as post-production activities. Some examples of the indirect downstream activities are the transportation and distribution of the manufactured goods, the use of the sold products and their end-of-life cycle. One company’s Scope 3 emissions overlap with another company’s emissions and therefore by one company making a change which impacts its greenhouse gas emissions many other businesses would also benefit. The UK’s 2050 Net-Zero Target In 2019 the UK took the initiative on the fight against climate change and vowed to be net-zero by 2050. 
They were the first major economy in the world to pass laws in a bid to end its contribution to global warming. The target requires the UK to beat its previous goal of at least 80% reduction from 1990 and achieve no greenhouse gas emissions by 2050. So far, the UK has already made a 42% reduction in its greenhouse gas emissions whilst also growing its economy by 72%. Clean growth is now at the centre of the Industrial Strategy and with it there is expected to be a rise in the number of “green collar jobs” as experts estimate the total job count to hit around two million. To accompany the expected rise in “green collar jobs” the value of exports from the UK’s low carbon economy are predicted to increase to £170 billion a year by 2030. The Energy and Clean Growth Minister, Chris Skidmore, said “The UK kick-started the Industrial Revolution […] Today we’re leading the world yet again in becoming the first major economy to pass new laws to reduce emissions to net zero by 2050…” The UK’s net-zero target seemed to be the most ambitious in the world as it was recommended by the Committee on Climate Change, however many more countries have joined in this initiative since. How does reporting on Scope 3 benefit a company? Reporting on a company’s Scope 3 emissions can provide benefits which range from better relationships with stakeholders and suppliers to cost savings and good publicity. Reviewing Scope 3 can highlight areas with exceptionally high emissions and ensure the company tackles these areas; from encouraging employees to reduce commuting emissions by carpooling to sourcing much more environmentally sustainable equipment. By performing an evaluation of a company’s total emissions across all Scopes it is possible to find areas of large money saving opportunities, especially in the UK where lower carbon emissions reduces the Carbon Emissions Tax charges. 
Reporting on Scope 3 also gives the company greater insight into the operations and effectiveness of its suppliers and stakeholders, allowing a better understanding of upstream and downstream operations. Through this understanding the reporting company can pinpoint weak relationships and build on them, assess the value for money of existing connections, and make changes where it sees fit. With relations strengthened as a direct effect of Scope 3 reporting, the company can also report on its Corporate Social Responsibility much more accurately. A clearer understanding of a product's journey, from raw material to end of life, can be relayed to increasingly sustainability-focused consumers. Consumers are becoming more eco-aware and look at the entire story behind a product, so the credibility that comes with Scope 3 reporting is all but essential for a business competing in a sustainability-focused world. Reporting on Scope 3 will not only save organisations money and encourage consumers to buy sustainably produced goods; it will actively help our fight against climate change.
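The Scope 1/2/3 accounting discussed above can be sketched in a few lines. This is a hypothetical illustration only: the category names and tonnage figures are invented, not drawn from any real inventory.

```python
# Hypothetical sketch: aggregating a company's emissions inventory by scope.
# All categories and tCO2e figures below are illustrative assumptions.

def total_by_scope(inventory):
    """Sum tonnes of CO2e per scope from (scope, category, tonnes) entries."""
    totals = {}
    for scope, _category, tonnes in inventory:
        totals[scope] = totals.get(scope, 0.0) + tonnes
    return totals

inventory = [
    (1, "company vehicles", 120.0),         # direct emissions
    (2, "purchased electricity", 340.0),    # indirect, purchased energy
    (3, "purchased raw materials", 900.0),  # upstream
    (3, "employee commuting", 150.0),       # upstream
    (3, "use of sold products", 1_400.0),   # downstream
]

totals = total_by_scope(inventory)
grand_total = sum(totals.values())
print(totals)  # {1: 120.0, 2: 340.0, 3: 2450.0}
print(f"Scope 3 share: {totals[3] / grand_total:.0%}")  # Scope 3 share: 84%
```

Even with made-up numbers, the pattern is typical: Scope 3 usually dominates the total, which is why reviewing it first tends to surface the biggest reduction opportunities.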
MPLS, or Multiprotocol Label Switching, is a hybrid routing methodology that has become immensely popular in the networking community worldwide. It streamlines the switching of IP packets by operating between layer 3 and layer 2. MPLS VPNs are virtual private networks that provide connectivity by means of the MPLS infrastructure. So where does OSPF fit in with MPLS, and can using OSPF over an MPLS VPN environment make it faster? These points are discussed in the sections below.

OSPF, or Open Shortest Path First, is best understood as a routing protocol designed for IP (Internet Protocol) networks. It uses a link-state routing (LSR) algorithm and is grouped with the IGPs (interior gateway protocols) because it operates within a single AS (Autonomous System). OSPF is a popular IGP, especially in large-scale networks. As a routing protocol, OSPF is designed to calculate the shortest possible route to a given destination across the network using this algorithm, and it is considered highly reliable for calculating routes even in complex, wide-scale local area networks.

OSPF is a loop-free routing protocol; this property follows from the algorithm itself. Route changes can be propagated throughout the entire system in a very short time. The autonomous system is also divided into different areas, so routing information is summarised at area boundaries, which reduces the amount of information that has to be transmitted. Even as the network scale increases, the routing information does not expand as rapidly. OSPF also offers more reliable routing through its strict division of routing levels, and it supports both interface-based plaintext and MD5 authentication, making it a good option from the security point of view as well.
Lastly, OSPF is capable of scaling to networks of up to thousands of units.

MPLS is an excellent solution to traditional IP network issues, and its traffic engineering capability is far better than that of most other network technologies. VPNs, or virtual private networks, are private networks that use public networks to connect two or more remote sites. Rather than using dedicated connections, VPNs use virtual connections that are tunneled through public networks, usually service provider networks. MPLS VPNs are best understood as methods for using MPLS to create virtual private networks. This transport method is highly flexible and can route various types of network traffic using the MPLS setup as a backbone.

Beyond the many advantages of MPLS technology itself, MPLS VPN offers some more. MPLS VPN users see significant improvements in services such as VoIP, web conferencing, and mission-critical apps. MPLS VPN has also gained popularity in recent times as an excellent way to connect to the cloud. The setup enables connectivity not just for IP-based but also for non-IP-based WAN physical security systems. Bandwidth is better utilized because important apps are easily prioritized on the network. Most importantly, businesses employing MPLS VPN can reduce the number of hubs in their network, which directly reduces the organization's maintenance costs.

Both OSPF and MPLS VPN come with unique advantages and benefits. Deploying OSPF over MPLS VPN is one of the deployment options for MPLS VPN, and as with most MPLS VPN deployments, the customer routes must be advertised to all relevant PE routers once these routes have been installed in the receiving VRF (virtual routing and forwarding) table.
However, this is not an automatic process; as a result, some redistribution between BGP (Border Gateway Protocol) and OSPF has to be carried out. It is also important to remember that the MPLS VPN backbone does not act as a true OSPF area 0 backbone, and that adjacencies are formed only between PE routers and CE routers. Therefore, for all of the OSPF routes to be translated into VPN-IPv4 routes, MP-BGP (Multiprotocol Extensions for BGP) must be used between the PE routers. The address family in the BGP configuration can be used to redistribute VRF OSPF routes into MP-BGP.

Two different sites are within the same OSPF domain if certain criteria are met. For instance, the routes from one site to another must be intra-network routes, and both sites must run OSPF as the intra-site routing protocol. This is achieved by presenting each route as an inter-area route. The PE router should ideally operate an independent OSPF instance for every domain, and if the PE router also runs OSPF as its IGP (interior gateway protocol), that OSPF instance must be separate and independent from all the others.

When OSPF is used to connect CE and PE routers, the routing information gathered from a VPN site is placed in the separate VRF associated with the incoming interface. The PE routers attached to the VPN then use BGP to distribute VPN routes among themselves. A CE router peers with its attached PE router to learn routes to the other sites within the VPN. Under the normal BGP and OSPF interaction process, the routes originating from one site would be delivered to another site as external routes, making them indistinguishable from the actual external routes that exist in the VPN setup.
To streamline such situations, it is recommended to implement an enhanced version of the OSPF and BGP interaction process so that the routes delivered from site to site appear as inter-area routes. A route is considered an external route if it belongs to a different OSPF domain than the OSPF instance into which it is being distributed, or if it does not originate from an OSPF domain at all. A route is an inter-area route if it belongs to the same OSPF domain and the same OSPF instance into which it is being redistributed, and if it was originally advertised to the PE router as an inter-area or intra-area route.

Within an OSPF domain, the PE-CE links can belong to different areas, including area 0. However, if the PE connects to the CE through a non-zero area, the PE router acts as an ABR (Area Border Router) for that area, and the MPLS VPN backbone becomes what is known as the OSPF super backbone.

Network connectivity for offices spread across the world has become more crucial than ever. OSPF is configured as a routing protocol in the service provider network, and by enabling OSPF over MPLS VPN on all of the service provider's routers, MPLS labels are assigned based on the routes computed by OSPF. This makes routing more effective, since the MPLS VPN runs over an efficient routing protocol, and organizations can work towards designing and deploying a secure network for their enterprise-wide requirements.

OSPF over MPLS VPN is a fairly vast topic, and at CarrierBid we ensure the client's organizational requirements are taken into consideration before designing an MPLS solution. So, if you have any further questions, please feel free to contact us directly.
You can also fill in the form below and we will reach out to you for an initial free consultation; we never charge for our services.
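The shortest-path-first computation at the heart of OSPF, described above, is Dijkstra's algorithm run over the link-state database. The following is a minimal sketch of that idea; the router names and link costs are hypothetical, not from any real topology.

```python
import heapq

def spf(graph, source):
    """Dijkstra's shortest-path-first: lowest total cost from source
    to every reachable router, given {router: [(neighbor, cost), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry; a cheaper path was already found
        for neighbor, link_cost in graph.get(node, []):
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Hypothetical four-router topology; costs stand in for the values OSPF
# derives from interface bandwidth.
topology = {
    "R1": [("R2", 10), ("R3", 1)],
    "R2": [("R1", 10), ("R4", 1)],
    "R3": [("R1", 1), ("R4", 5)],
    "R4": [("R2", 1), ("R3", 5)],
}

result = spf(topology, "R1")
print(result["R2"])  # 7: the R1 -> R3 -> R4 -> R2 path beats the direct cost-10 link
```

Each OSPF router runs this computation independently over an identical link-state database, which is why the protocol converges to loop-free routes.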
In the era referred to as "Industry 4.0" or "the Fourth Industrial Revolution," two of the pillars of the technology field, automation and data transfer, are closely coupled with concerns about cybersecurity. As organizations own or use more and more information and assets, each becoming an additional node in the network, the attack surface grows exponentially. As a result, cybersecurity is transforming at an unprecedented rate. The challenges to security are more prominent than ever, with both sides, hackers and security teams, trying to stay ahead of each other. In today's hyper-connected world, cyberattacks are no longer a matter of "if" but of "when."

What is Industry 4.0?

As the use of computers and automation led the charge through the last few decades, organizations and governments focused their investments on IT infrastructure, the era referred to as "Industry 3.0." Today, the focus has shifted to new technologies such as IoT (Internet of Things), AI (artificial intelligence), machine learning and reinforcement learning, which are defining the new work culture across almost all industries. Industry 4.0 essentially blends automation with advanced AI to reduce direct human effort and resources, resulting in more efficient use of both financial and material resources. In the era of the Fourth Industrial Revolution, organizations are hyper-connected through smart devices and smart networks. This poses a very lucrative target for hackers, who can try to exploit the significantly higher number of vulnerable entry points into networks and devices. Cyberattacks on critical infrastructure and in vital industrial sectors have become more frequent and more sophisticated.

IoT – Internet of Things

The Internet of Things describes a world in which smart technologies enable objects within an intelligent network to communicate with each other and interface with humans effortlessly.
This connected world of accessibility and technology does not come without consequences, as interconnectivity implies hackability. Most of these devices are designed with little to no built-in security, making them easy targets for security breaches. This new world of convenience calls for new and revolutionary protection measures and strategies to keep networks secure.

As a concept, blockchain has been around for approximately a decade. It is a well-understood, well-defined concept that forms the backbone of the most common cryptocurrencies. Blockchain as a technology focuses on the integrity and immutability of transactions. While the technology is still evolving, it offers solutions that can compete with current centralized offerings in terms of speed, while being far more reliable in terms of capacity and trust. Inevitably, in IoT, blockchains will be used to secure infrastructure while maintaining device interoperability. Although blockchain holds immense promise and potential, it remains vulnerable to cyber threats and risks, so a robust cybersecurity program is crucial for protecting blockchain assets.

AI – Artificial Intelligence

Artificial intelligence is another technology rapidly permeating organizations and government departments. It is a branch of computer science concerned with building solutions that exhibit human-like intelligence and can carry out complex tasks independently. AI applications are built on techniques such as neural networks, machine learning, deep learning and natural language processing. Machines mimic humans only after they are trained to accomplish specific activities by processing vast amounts of data and identifying patterns in that data. The growing interest in these technologies and the value they offer is driving their adoption across many areas of software and IT.
AI has the potential to make cybersecurity more efficient and responsive to ever-increasing threats and to improve an organization's cybersecurity posture.

The Way Forward

The more connected industries become, the more vulnerable they are to cyberattacks, because there are significantly more entry points for hackers to find and exploit. Hackers can target the connected devices that generate data, the networks that carry it, the servers that host it, or the information systems that use it. This is where AI-assisted defenses come in.

As the world's first AI-powered application security testing platform, Bright helps innovative companies at the forefront of the Industry 4.0 era significantly reduce cybersecurity risk. Cybersecurity should never be an obstacle to progress. Bright's AI-powered application security solutions help organizations save time, eliminate the security-personnel bottleneck, and reduce costs and their exposure window by being secure by design. To learn more about how you can adopt these tactics in your organization and embrace the Fourth Industrial Revolution, or if you have any questions about becoming secure by design, contact us today.
The traceroute tool is one of the simplest yet most helpful tools you can use to troubleshoot network issues. It is built into virtually every operating system, so no matter what type of computer you are working on, you will have it available. Traceroute runs a connection test from one computer to another device, showing each "hop" the data takes between devices on the network.

A simple example would be to run a traceroute from your computer to Catchpoint's servers. The specific results will differ for each person, but in most cases they will show somewhere around 15-20 hops that data takes to get from your computer to Catchpoint's servers and back. The first hop would likely be your local router; from there the data takes multiple hops through your internal network, out through your internet service provider (ISP), and over the Internet before finally reaching Catchpoint's servers. Figure 1 shows an example of what you might see. Understanding how to run this tool, and what all the information displayed by a traceroute means, will help you when troubleshooting various types of problems.

How to run the Traceroute command

Running a traceroute is very simple. The first step is to bring up a command prompt on your computer; the specific method depends on your operating system. On Windows 10, for example, you can simply click the Start button and type CMD to bring up the options below. From there, click the Command Prompt app to open it.

When the command prompt has loaded, type the command tracert followed by the destination you want to test. For example, to run a test to catchpoint.com you would type tracert catchpoint.com and hit Enter. (On Linux and macOS devices, you would type traceroute catchpoint.com instead.)
Available options for the Traceroute command

In most cases, the default traceroute command gives you the information you need. There are, however, some options you can use to get more details or change how the command runs. You access these options by adding one or more option flags after the traceroute command and before the destination. On Windows-based machines, the flags start with a "/". For example: tracert /d catchpoint.com. The most commonly used options are:

- /d — Skips the attempt to resolve each hop's IP address to a domain name. This can speed up the trace and gives you a clean list of IPs at each hop, uncluttered by full domain names.
- /h — Specifies the maximum number of hops; the default is 30. Increasing this limit may be necessary for destinations that are far away. To set the maximum number of hops to 45, for example, you would type tracert /h 45 catchpoint.com.
- /w — Sets the amount of time the command waits at a hop before timing out, in milliseconds. The default is 4 seconds (4,000 milliseconds). Type /w 6000, for example, to set the timeout to 6 seconds.
- /4 or /6 — Forces the traceroute command to use only IPv4 or only IPv6 for the trace.
- /? — Brings up help information about the traceroute command.

How to read the results from a Traceroute

One of the best things about the traceroute tool is that once you learn how to read the results, you can understand the information it provides at a glance. When you look at the example results above, you will see several key pieces of information; the following table breaks them down. The first column simply tells you which hop the trace is on.
Whenever you access the Internet (or even data on an internal network), the data travels from one piece of hardware to another. These will typically be routers, but could also be switches, servers, or even computers. Each piece of hardware that the data goes through is considered a hop. The total number of hops that a connection goes through will depend on many factors, the most important of which is the physical locations where you run the command and the destination.

Round Trip Time (RTT) Results

The next three columns (Table 3) show the amount of time it took data to go from the source (typically your computer) to that hop and back, measured in milliseconds. When running the traceroute command, you are sending data to each hop three times: the first column shows the time for the first attempt, the second for the second attempt, and the third for the last attempt. When everything is running properly, the round-trip time for each attempt should be similar.

Hop Name and IP Address

The final column is where the name, IP address, or other information about the hop is displayed. The information displayed here is determined by the settings on the hop itself. Some devices are set to only display their IP addresses; others will also display the device name or other information. In some cases, the owner of the device has set it up so that it will not reveal any information at all, in which case you will simply see an asterisk (*) for that particular hop.

Common problems discovered with Traceroute

You can use this command to look for a variety of different types of network issues and determine what problems may be present based on the results displayed.

Asterisks (Timeouts) at various points

The most common issue you will see with a traceroute is a timeout response, which is represented by an asterisk (*). These happen quite frequently and for a variety of different reasons.
In the following example, you can see multiple hops have asterisks when attempting to run a traceroute to google.com. When you see an asterisk, it means one of the following things:

- Single Asterisk on a Hop: The request timed out on just one of the three attempts. This can be a sign that there is an intermittent problem at that hop.
- Three Asterisks, Then Failure: If all three attempts at a hop show asterisks and then the traceroute errors out, the hop is completely down.
- Three Asterisks, Then Success: If all three attempts at a hop fail but then the rest of the traceroute continues without an issue, that is actually not a problem at all. It simply means that (as mentioned earlier) the device at that hop is configured not to respond to pings or traceroutes, so the attempts time out.

Elevated latency after one hop

If everything looks fine for several hops but then the response times jump up significantly at one point and remain high at each hop after that, it likely means a problem either at that hop or on the connection between it and the previous one. Since the connection from you to each successive hop has to go through that one, they will all be impacted by the latency it is causing. If you can identify where that hop is located, you can work with the owner of that connection to troubleshoot the problem; the issue will most often be with their data circuit. If you do not know the owner of that connection and this latency is causing significant problems, you may be able to work with your Internet service provider to have your traffic routed around that point.

One hop of elevated latency

If you see one hop that has an elevated response time but then the rest of the hops return to normal, this is not anything to be concerned about. It simply means that the device at that hop is configured so that responding to traceroutes is a low priority, which causes this type of delay.
While there may appear to be latency at that hop in the traceroute, that slowness will not impact normal internet traffic.

Alternatives to Traceroute

There is no doubt that the traceroute command is one of the most frequently used tools when troubleshooting connectivity issues, and in many cases it will provide exactly the information you need to rule out a specific problem. If you need additional information or more complex options, you will want to turn to advanced tools such as Catchpoint's Network Observability tool. It provides the same helpful information you can get from traceroute and much more. For example, you can take advantage of DNS, CDN, and BGP monitoring to get detailed information about the connectivity between two (or more) points, and you can keep the data over time so that it can be referenced when needed. For most network administrators, help desk support staff, and other individuals who engage in this type of troubleshooting, having access to both the simplicity of traceroute and the functionality of Catchpoint's Network Observability tool is the perfect balance.

Get started with Traceroute today

Anyone who wants to troubleshoot connectivity issues over a public network will need to understand how to use the traceroute command. While it is not complex, it does take some getting used to. Taking the time to experiment with the various traceroute options and learning how to interpret the results will provide essential grounding for anyone working in the IT industry.
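The column layout described above (hop number, three RTT samples, then the host) can be turned into structured data with a few lines of parsing. This is a sketch assuming the Windows tracert output format; the sample lines are illustrative, and real output can vary slightly between systems.

```python
def parse_hop(line):
    """Parse one tracert hop line into (hop, rtts_ms, host).

    Each '*' probe becomes None, marking a timed-out attempt.
    """
    parts = line.split()
    hop = int(parts[0])
    rtts, i = [], 1
    for _ in range(3):  # tracert sends three probes per hop
        if parts[i] == "*":
            rtts.append(None)
            i += 1
        else:
            # "<1 ms" means under one millisecond; strip the "<"
            rtts.append(int(parts[i].lstrip("<")))
            i += 2  # skip the "ms" token
    host = " ".join(parts[i:])
    return hop, rtts, host

print(parse_hop("  3    12 ms    11 ms    13 ms  192.168.1.1"))
# (3, [12, 11, 13], '192.168.1.1')
print(parse_hop("  4     *        *        *     Request timed out."))
# (4, [None, None, None], 'Request timed out.')
```

With hops parsed this way, spotting the patterns discussed above, such as a single timed-out probe versus three, or a sustained RTT jump, becomes a simple comparison over the list of tuples.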
Blockchain: A Way to (Safely) Bring Down the Barriers to Data

Imagine for a moment a healthcare system where patients control who sees their data. An individual at home recovering from a procedure might sign in to her medical account through a personalized dashboard on her phone and, with a couple of swipes, give her physician consent to share her health record with her pharmacist. Or that same patient, determined to get a second opinion after a follow-up visit for imaging and labs, might give permission to an off-site provider to go into her account and take a look at her results. In this imaginary world, there would be no need to travel from hospital A to hospital B for another appointment and another round of testing; and that pharmacist, wherever he might be, wouldn't need to be part of the same integrated network as his patient's referring physician. Medical information would live in "digital medical wallets," and these wallets would be carried by the patients themselves.

If it all sounds a little far-fetched, that's because it is, at least for right now. But a few years down the road? Blockchain technology holds the promise to make it a reality.

Blockchain is the cryptographic technology developed for Bitcoin to allow secure transactions without a central clearinghouse. With blockchain, buyers and sellers connect directly, and their exchanges are recorded in blocks of data on a digital "distributed ledger" visible to everyone in the network. Transactions go through when blocks are linked together using cryptographic validation. In Bitcoin's case, the blockchain is public (transactions are anonymous), so anyone can participate and add to the ledger. That's not quite the way blockchain will work in healthcare.
For healthcare applications, it is anticipated that each person's "wallet" would employ a blockchain where information is visible only to those who have permission from that individual. Each person grants access to participants in their network, such as primary care physicians, specialists, and family members, and network participants access data only after their unique digital fingerprint has been validated. Interoperability between electronic health record systems would no longer be an issue, because the data would sit in the individual's personal data cloud.

Blockchains also establish a platform for automatically enforcing privacy regulations. As health data is shifted or linked to blockchains, organizations can track who has shared data and with whom, without revealing the data itself. Further, with blockchain technology, permissible data visibility will be better than ever, which in turn may lead to gains in value-based care. Whereas today patient data sit in disparate silos, from the EMRs of different physician practices to medical devices in consumers' homes, in the future that information will be fluid and always at the fingertips of healthcare providers. Once they've secured a patient's consent, providers will have access to a 360-degree, longitudinal view of that person's status, and insight into where they stand on the care-delivery continuum. And if a medical expert combines that visibility with analytics and cognitive computing? They'll get trustworthy, data-driven insights to help provide better patient care.

A Promising Future

The growing pool of patient data available to providers looks more like an ocean every day: biometric data collected by mobile and wearable devices; clinical data gathered in exam rooms; administrative and claims data used for billing and insurance; and genomic data sequenced and analyzed in laboratories.
Additional data sources include "social determinants" data on lifestyle-related factors like education, family structure, and socioeconomic status, along with other patient-generated data. This information is coming in from all directions and could be immensely valuable to clinicians. Unfortunately, it is not put to use consistently today and often winds up in disarray, or locked up and unreachable in unconnected technologies.

Blockchain, through its secure and decentralized data-sharing framework, will be the solution that healthcare needs: a technological fix to a high-tech problem. And the best news is, it will be cost-effective. It is expected that organizations with legacy information systems or newly purchased EMRs won't have to gut their existing infrastructures and won't have to invest in major IT undertakings. They could simply access the technology as SaaS and bring their "off-chain" records to the blockchain. Blockchain for healthcare is still a work in progress, but all signs indicate it's coming soon. When that day arrives, data sharing will change forever, as will the prospects for improving patient care.
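The core mechanism behind the "distributed ledger" described above, blocks linked by cryptographic hashes so that past records cannot be silently altered, fits in a few lines. This is a toy sketch only, not a real healthcare blockchain; the record contents and consent grants are invented for illustration.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's JSON representation."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, record):
    """Add a block that carries the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})

def verify(chain):
    """Check every link: each block must reference its predecessor's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, {"patient": "A", "grant": "pharmacist may read record"})
append(chain, {"patient": "A", "grant": "second-opinion provider may read labs"})
print(verify(chain))  # True

chain[0]["record"]["grant"] = "tampered"  # try to rewrite history...
print(verify(chain))  # False: the later block's link no longer matches
```

This is why the article can speak of tracking "who has shared data and with whom" without trusting a central clearinghouse: any retroactive edit breaks every subsequent hash link and is immediately detectable.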
Enhance Security of Digital Identity

The subject of this article is the fragile digital identity built on a weak password, a grave choke point of the cyber age. The word "password" is polysemous and context-dependent. Sometimes it is narrowly interpreted as "remembered text password" and sometimes taken broadly as "whatever we remember as secret credentials." This situation drives some people to allege that because the text password is hard to manage, the password should be removed from digital identity altogether in favour of physical tokens, biometrics and PINs. We could, however, draw a totally different conclusion from the same premise: the text password is hard to manage, so we ought to think about non-text passwords and work towards an easier-to-manage and yet more secure password system.

Do physical tokens mitigate the password headache?

We do not need to take much space to explain the security effect of authentication by a physical token: whoever holds the token is authenticated, as a cartoon published 15 years ago neatly illustrated.

Does biometrics ease the password headache?

Passwords and physical tokens can be deployed on their own, and also alongside other authenticators in the security-enhancing "multi-layer" method. Biometrics, by contrast, cannot be deployed on its own; it can only be deployed in the security-lowering "multi-entrance" method alongside a fallback measure. Biometrics used with a fallback measure (a password or PIN in most cases) provides security lower than that of the fallback measure alone, as outlined in this video. Think of houses with one door or two doors: which house is easier to sneak into?

Alleging that biometrics, which needs to rely on a password, can displace the password is no different from alleging that a baby who needs to rely on its mother can displace the mother. With so much money invested and so many products sold, it may be hard to admit that biometrics has actually brought down security.
But an alternative fact cannot displace the fact for long.

Does a PIN help with the password headache?

Some people thought of declaring that a PIN is not a password: the password should be removed, but the PIN could stay, for use on its own or as a fallback measure for biometrics. In the world where we live, however, a PIN is no more than a weak, numbers-only form of password. When the password (the superordinate, generic concept) was removed, the PIN (the subordinate, specific concept) was removed with it. Only in a parallel world can the PIN (subordinate concept) do what the password (superordinate concept) cannot, just as a paper-knife cannot do anything that the knife as such cannot do. "PIN-dependent password-less authentication" may not be a daydream for those people, but a daydream is exactly what it is.

What about a hard-to-break long password written on a memo?
- It amounts to the physical token we analysed above.
- It is hard to use multiple hard-to-break patterns without confusion.

What about ID federations such as single-sign-on services and password management tools?
- Centralization creates a single point of failure; if modestly decentralized, multiple reliable master passwords are necessary.
- They still need a reliable password as one of the factors for each scheme.

Why stick to the memory of characters and numbers?

The part of our memory for characters and numbers, which we categorize as "text memory," is just a small segment of our overall memory capacity. We have a huge capacity for non-text memories, visual, audio, tactile, gustatory and olfactory, which have supported our history over hundreds of millions of years, alongside the text memory that large parts of the human population acquired only hundreds of years ago. Why not make use of these deeply inscribed memory capacities, in particular our visual memories? The latest computers and phones are, after all, very good at handling visual images.
Among image memories, we could focus on images linked to our autobiographic memory, episodic memory in particular. Secret credentials made from episodic memory are 'panic-proof': identity authentication measures that remain practicable in a panicky situation are easily practicable in everyday life, while the reverse is not true. Our Proposition – Expanded Password System In a matrix of images, several are known to the user. We can easily find all of them right away – or rather, these known images jump out at us – and only we are able to select all of them correctly. This is the Expanded Password System. We can use both images and characters, the relation between accounts and their corresponding passwords is easy to manage, and the experience is comfortable and even fun. The idea of using pictures for passwords is not new. It has been around for more than two decades, but simple forms of pictorial passwords were not as useful as had been expected: unknown pictures that we manage to memorize afresh are still easy to forget or confuse, if not as badly as random alphanumeric characters. The Expanded Password System is new in that it offers the choice of using known images associated with our autobiographic/episodic memories. Since these images are the least subject to memory interference, the system enables us to manage dozens of unique strong passwords without reusing the same password across accounts or carrying around a memo of passwords. And handling memorable images makes us feel comfortable, relaxed and even healed. Torturous login is history. Accounts & Corresponding Passwords Being able to recall strong passwords is one thing; being able to recall which password belongs to which account is another. When a unique matrix of images is allocated to each account, that matrix itself tells you which images to pick as your password for that account. 
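As an illustration only (the image names, grid size, and function names below are invented for this sketch, not taken from any real product), the matrix-recognition step described above might look like this in code:

```python
import secrets

def build_matrix(registered, decoys, size=20):
    """Mix the user's known images (IDs of personal photos, say) with
    decoys and shuffle, so every login shows a different layout."""
    pool = list(registered) + list(decoys)[: size - len(registered)]
    secrets.SystemRandom().shuffle(pool)
    return pool

def verify(selection, registered):
    """Authentication succeeds only if the user picks exactly the
    registered images, in any order."""
    return set(selection) == set(registered)

registered = {"dog_2009", "school_trip", "grandma_kitchen"}
decoys = [f"stock_photo_{i}" for i in range(30)]
matrix = build_matrix(registered, decoys)

assert all(img in matrix for img in registered)   # all known images shown
assert verify(["grandma_kitchen", "dog_2009", "school_trip"], registered)
assert not verify(["stock_photo_0", "dog_2009"], registered)
```

A real deployment would of course transmit opaque image identifiers rather than meaningful names, and store only hashed registration data server-side.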
When using images from our episodic memories, the Expanded Password System thus frees us from the burden of managing the relation between accounts and their corresponding passwords. Hard-to-break text passwords are hard to remember, but that is not the fate of all secret credentials. It is easily possible to safely manage many high-entropy passwords with the Expanded Password System, which handles characters as images. Each image or character is represented by identifier data that can be of any length. Assume that your password is "CBA123" and that the image 'C' is identified as "X4s&eI0w", and so on. When you input CBA123, the authentication data the server receives is not the easy-to-break "CBA123" but something like "X4s&eI0wdoex7RVb%9Ub3mJvk", which could be altered automatically, periodically or at each access, where desired. So far, only text has been accepted; it was as if we had no choice but to walk up a long, steep staircase. With the Expanded Password System, we can imagine escalators and elevators provided alongside the staircase – or some of us might think of all those ladders we have for climbing in Donkey Kong. Where we want to continue using text passwords, we can opt to recall remembered passwords, although the memory ceiling is very low: most of us can manage only up to several of them. Where we want to reduce the burden of textual passwords, we can opt to recognize pictures remembered in stories; the memory ceiling is higher, so we can manage more of them. Where we choose to make use of episodic image memory, we only need to recognize unforgettable, known images; there is virtually no memory ceiling, so we can manage as many passwords as we like without any extra effort. A simple brain-monitoring approach has a security problem: the authentication data, if wiretapped by criminals, can be replayed for impersonation straight away. 
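A minimal sketch of this identifier-data idea follows. The identifier table here is randomly generated for illustration; the actual encoding, identifier length, and alphabet used by any real system are assumptions, not documented facts:

```python
import secrets
import string

# Hypothetical identifier alphabet; each selectable character (or image)
# is registered with a long random identifier, never with itself.
ALPHABET = string.ascii_letters + string.digits + "&%$#@!"

def make_identifier_table(symbols, length=8):
    """Assign every selectable symbol a fixed random identifier."""
    return {s: "".join(secrets.choice(ALPHABET) for _ in range(length))
            for s in symbols}

def to_auth_data(password, table):
    """What travels to the server: concatenated identifiers,
    not the memorable password itself."""
    return "".join(table[ch] for ch in password)

table = make_identifier_table("ABC123")
wire = to_auth_data("CBA123", table)

assert len(wire) == 6 * 8                      # high-entropy payload
assert wire == to_auth_data("CBA123", table)   # deterministic per table
assert wire != to_auth_data("ABC123", table)   # symbol order matters
```

Rotating the table periodically, as the article suggests, would change the wire data without changing what the user remembers.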
Therefore the data should be randomized into one-time, disposable form. One idea is for the authentication system to allocate random numbers or characters to the images shown to the user. The user focuses attention on the number or character assigned to each image they had registered, and the monitoring system collects the brain-generated one-time signals corresponding to those registered images. Incidentally, the channel for showing the pictures is supposed to be separate from the channel for brain-monitoring. Even if criminals intercept the data successfully, they would be unable to impersonate the user, because the intercepted data has already been disposed of. Improvised 2-factor Authentication A very strong password that is not meant to be remembered but written down on a memo should be viewed as 'what we have', definitely not 'what we remember', so it could be used as one of two factors alongside a remembered password. We could then turn a boring legacy password system into a two-factor authentication system at no cost, simply by verifying two passwords at a time: one volitionally recalled, the other physically possessed. When these two different passwords are used as two factors, we rely on the strength of the remembered password against physical theft and the strength of the physically possessed long password against brute-force attack, although the combination is not as strong against wiretapping as token-based solutions armed with PKI or one-time passwords. This configuration could be viewed as just a thought experiment, or it could actually be considered for practical application as a middle ground between single-factor authentication and a costly, heavily armored 2-factor scheme, or as a transition from the former to the latter. It goes without saying that the Expanded Password System could be brought in for generating a remembered high-entropy password. 
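The improvised two-factor check described above could be sketched as follows. This is a toy under stated assumptions: the server verifies a salted hash over both factors together, and all names and parameter choices (PBKDF2, iteration count, the sample passwords) are illustrative, not anything the article prescribes:

```python
import hashlib
import hmac
import os

def enroll(remembered, possessed, salt=None):
    """Store one salted hash binding both factors together."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", (remembered + possessed).encode(), salt, 100_000)
    return salt, digest

def verify(remembered, possessed, salt, digest):
    candidate = hashlib.pbkdf2_hmac(
        "sha256", (remembered + possessed).encode(), salt, 100_000)
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(candidate, digest)

# Factor 1: short password the user recalls ("what we remember").
# Factor 2: long random password kept on a memo or device ("what we have").
salt, digest = enroll("rosebud", "qP4$kd92!Lx0vT8&zR5m")

assert verify("rosebud", "qP4$kd92!Lx0vT8&zR5m", salt, digest)
assert not verify("rosebud", "wrong-memo-password", salt, digest)
```

Failing either factor fails the whole check, which is exactly the 'multi-layer' property the article contrasts with biometric fallback schemes.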
Fighting Threats to Security and Democracy If the digital identity platform is built without secret credentials made from our memory, we lose the necessary level of security. If secret credentials, for which our will and volition are indispensable, are removed from the digital identity platform, we will see erosions of the democracy that our ancestors won through heavy sacrifice. On this front we are not optimistic: too few people are taking the correct course towards the correct objectives, and too many, professionals, researchers, politicians and journalists included, are badly distracted and straying off course. More and more people are expected to join our efforts. By Hitoshi Kokumai Hitoshi Kokumai, President, Mnemonic Security, Inc., is the inventor of the Expanded Password System, which enables people to make use of episodic image memories for intuitive and secure identity authentication. He has been raising the issue of the wrong usage of biometrics with passwords, and the false sense of security it brings, for 16 years.
What is fibre broadband? Fibre broadband is a technology that uses fibre optic cable, as opposed to copper telephone wires, to provide an internet connection. Fibre optic cable is made of glass or plastic and can carry more data, making it faster than copper-based services such as ADSL, which has lower bandwidth and limited capacity. Full fibre broadband, or Fibre to the Premises, uses a fibre connection all the way from the exchange to the premises. This makes it a truly future-proof solution and guarantees a fast, reliable connection. Switch on the TV or radio and you'll hear any number of adverts offering "fibre broadband", but this usually isn't full fibre; it's Fibre to the Cabinet, or FTTC. This means that fibre optic cable is used from the local exchange to the green roadside cabinet, then a shorter copper cable is used from the cabinet to the premises. In these cases, the term "fibre broadband" is slightly misleading: customers often expect they are getting the latest, future-proof technology that will meet all their needs, but they're actually getting an intermediary solution that will need updating again at some point in the near future. FTTC offers speeds of up to 80Mbps, which is usually sufficient for a home user or small office, but it comes with downsides. The majority of FTTC broadband still relies on the PSTN, which is due to be switched off in 2025. This means the technology does not offer much longevity, and users will need to migrate within the next few years to an alternative, Single Order technology that does not require line rental. Furthermore, the use of copper lines means that the service is distance-dependent: the further the property is from the street-level cabinet, the poorer the connection. This is compounded by the fact that copper degrades more quickly, again having a negative impact on the broadband service provided and resulting in slower speeds than expected. 
Single Order Broadband While FTTP is the best solution for future-ready broadband connectivity, Openreach coverage was predicted to reach 5.8 million premises as of September 2021, meaning many premises are still not full-fibre ready. While FTTC has its drawbacks, we encourage our partners to offer the next best solution: Single Order Generic Ethernet Access (SOGEA). SOGEA is an FTTC solution that has been created to allow users to have a broadband connection without a landline. With SOGEA, the copper line has been maintained to ensure a more reliable connection; however, it won't last forever, so full fibre is always best where available. What does true full fibre mean? True fibre broadband is known by many names, including Full Fibre, Fibre to the Premises (FTTP) or Fibre to the Home (FTTH). It means that fibre optic cabling is used from the local exchange all the way to the end-user premises, relying on no copper-based telephone lines. Full fibre is the future of broadband, a technology guaranteed to see users through the 2025 Switch Off and beyond. FTTP is a Single Order broadband product, meaning no WLR is required as it does not utilise the traditional copper telephony network at all. A purely fibre connection is capable of achieving gigabit speeds, making it the fastest broadband technology available today. However, full fibre is not yet available to everyone. In September 2021, Ofcom reported that 24% of homes and businesses across the UK were able to access FTTP from one provider or another, up from 21% in May. Although it is not yet as widely available as FTTC, which is said to reach 96%, the network is continually growing, with large players such as Openreach accelerating their rollout to reach a total of 7.1 million premises by the end of 2021/22, and other providers following suit. 
For both business and residential customers, broadband is one of the most important services they rely on, whether it’s for entertainment such as streaming and gaming, or for essential systems that keep a company running. As our digital lives evolve, we are all becoming more aware than ever of the effects of poor connectivity, which is why FTTP is the number one solution that everyone should be looking to move to once it becomes available to them; not only will the higher speeds and lower contention transform the way they use the internet, but it will also future-proof their home or business long past the Great Switch Off.
Published Dec 12, 2020, by Karim Husami The dynamic shift brought on by the COVID-19 pandemic has made it crucial for countries to adopt emerging technological trends to facilitate daily life, work and the economy. Smart city development has accelerated in response to change and urbanization, as digital solutions pave the way toward a more liveable future. The smart city landscape is expanding with the emergence and rising adoption of connected technologies and increasing government initiatives. "The global smart cities market size is expected to grow from USD 410.8 billion in 2020 to USD 820.7 billion by 2025, at a Compound Annual Growth Rate (CAGR) of 14.8% during the forecast period," according to Research and Markets. Smart cities will proliferate as 5G/IoT-powered solutions gather valuable information, improve the overall connectivity of citizens, and enhance governance and participation. In addition, as the aging population grows, technological provisions are required to help ease the growing pressure on global healthcare systems and community-based services. Smart cities can have a significant impact on business development by engaging community business owners and municipalities. To encourage the greatest level of efficiency, companies tend to maintain a continuous flow of information with lawmakers about new policies, regulations, taxes, benefits and credit schemes that may apply to them. The exchange of data made possible by AI-powered IoT can help support long-term customer relationships and strategic partnerships. Some of the services that can be offered to citizens include payroll, medical compensation, provision of funding, pension schemes, and bank information. This method conserves resources in services that often require HR involvement or outsourcing. 
Smart government initiatives can produce reports and publish data for businesses to make better-informed decisions when forecasting. Higher levels of digital engagement encourage more business transactions, which in turn promotes economic development. In daily life, smart city sensors can be utilized for public lighting, air quality monitoring, localized parking assistance, and the watering of public parks, among other use cases. For example, the city of Santander in Spain has been recognized for its progressive smart city vision. Parking solutions were implemented in the city as early as 2013: electronic information and mobile apps keep drivers informed about parking space availability and traffic flow, which prevents traffic congestion and accidents in busy commercial spots.
The Internet of Things is typically thought of as a purely technological network. We focus on the wireless capability of devices, the importance of connections, and the ultimate functions that can be enabled. The IoT also represents a massive manufacturing effort, however, with millions of new sensors and devices being required to make it work. Here, we’ll look into how some advanced manufacturing methods are helping to meet the demands. Where 3D-printed prototypes are concerned, we need only look to healthcare — one of the busiest industries in IoT adoption and innovation. Our blog post on ‘IoT in Healthcare’ delved into how a variety of connected devices are helping people to stay healthy. And while some of those devices (such as wearable wristbands) have been around for some time, others (like ingestible sensors or some smart clothing) are still new, or in some cases still in development. And the same can be said about innumerable IoT devices across a range of other industries as well. Getting those IoT devices just right, such that they are safe, effective, and reliable, requires a very intricate design process. And while it isn’t always the solution, 3D printing can be implemented as part of that process. With its ability to manufacture a product from a precisely formed computer design, 3D printing is arguably the most exact method we currently have of testing product iterations and creating prototypes. While 3D printing is largely used to create prototypes, at least in the context of the IoT, alternative modern manufacturing options like CNC machining and injection molding can help to mass-produce final products. Fictiv breaks down machining and molding and how they differ from 3D printing, ultimately conveying that each option has its uses. But it’s often these two practices that are most helpful for fast part production and mass replication of existing designs. 
Specifically, CNC machining is the process of cutting and shaping objects via computer-driven machine processes; injection molding is the creation of a reusable mold that shapes heated material into the desired design. Both tend to be somewhat faster than 3D printing where large-scale production runs are concerned, and both can help drive the IoT forward by helping related companies meet the sheer volume of demand for new devices. The manufacturing methods discussed in the previous sections are remarkable unto themselves. However, it's also important to note that the ever-expanding list of innovative materials that can be put to use via those methods is a key factor as well. At this point, processes like 3D printing and injection molding in particular work with a range of hybrid plastics and similar alternatives that can be cheap, durable, and strong despite being light. Metal has also become an option in some cases; in fact, Space Daily wrote about nanoscale metal structures as 3D printing options just two years ago. Basically, these miniature objects can potentially be designed with more sophistication and precision than ever before. Ultimately, it seems that with each passing month we hear about a new material or hybrid that can be printed or molded, which only expands the possibilities for new IoT devices. Considering all of these examples, it becomes clear that manufacturing is actually an incredibly important aspect of IoT advancement. And as these networks continue to grow – potentially to include 31 billion devices this year, according to TechJury – how new creations are physically constructed is going to continue to be a key aspect of development. Mousumi is a Digital Marketing Executive at IoT Avenue who has helped to promote the site, along with several other sites, with her SEO expertise. 
API to make IoT connectivity simpler Two Google engineers have proposed a way for IoT devices to connect easily to web pages. The move could pave the way for simpler installation of Internet of Things (IoT) sensors. The engineers, Reilly Grant and Ken Rockot, said their WebUSB API would enable hardware manufacturers to set up and control devices from web sites. The proposal would also make connecting USB devices and complex IoT sensors easier. Today, when connecting devices, users either need the right drivers to set them up or have to log into a small web server on the device itself; WebUSB instead allows the device to point to a web page and be configured from there. "For lots of devices it does because there are standardized drivers for things like keyboards, mice, hard drives and webcams built into the operating system. What about the long tail of unusual devices or the next generation of gadgets that haven't been standardized yet? WebUSB takes 'plug and play' to the next level by connecting devices to the software that drives them across any platform by harnessing the power of web technologies," said the engineers on the WebUSB website. The engineers were quick to point out that the API will not provide a general mechanism for any web page to connect to any USB device. They said that historically, hosts and devices have trusted each other too much to let arbitrary pages connect to them. They added that there are published attacks against USB devices "that will accept unsigned firmware updates that cause them to become malicious and attack the host they are connected to; exploiting the trust relationship in both directions." According to the engineers, WebUSB could replace native code and native SDKs with cross-platform hardware support and web-ready libraries. 
API connects IoT to the net The proposed mechanism has also been designed to be backward-compatible with USB devices without needing special firmware. "For devices manufactured before this specification is adopted information about allowed origins and landing pages can also be provided out of band by being published in a public registry," the two said. The code is still a work in progress; it is unofficial and hosted at the W3C's Web Platform Incubator Community Group (WICG), and the engineers are welcoming members of the WICG to contribute to the project. Christian Smith, President and Co-Founder of TrackR, told Internet of Business that he sees WebUSB providing the standard to allow a seamless connection between USB hardware and software. "It would allow me to take a mechanical design file from Google drive, automatically download the calibration settings for a 3D printer, plug in the 3D printer, and be able to print directly from the web. WebUSB short circuits the complications to hardware and allows your USB devices to have instant access to updatable drivers, files, and printers," he said.
A proposed bill in the US – the Health Misinformation Act – seeks to make social media companies responsible for the spread of incorrect information about vaccines and other health-related claims during the pandemic, and it raises questions about the liability social media platforms have for the content posted by their users. While social media platforms are shielded from liability for the content posted by their users (under Section 230 of the Communications Decency Act (CDA)*), this bill would potentially withdraw the liability shield under certain specified circumstances**. How can enterprises move forward in this changing regulatory environment, and what role can service providers play in helping platforms adapt to the new realities and restore trust? To find out, read on. What is the impact of health misinformation? Health-related misinformation can be fatal. According to the World Health Organization (WHO), in the first three months of 2020, nearly 6,000 people globally had to be hospitalized and 800 people lost their lives due to coronavirus misinformation. In today's digital age, such an infodemic – an overabundance of typically unreliable information that spreads rapidly alongside a disease outbreak – can instill distrust and lead to the dismissal of proven public health measures, including vaccines. Case in point: data shows that as many as 99% of new coronavirus deaths in the US are among the unvaccinated population. Current health misinformation prevalence in social media Social media platforms continue to grapple with health misinformation. Here are some recent examples: - YouTube had removed more than 800,000 coronavirus misinformation-related videos from its platform from March 2020 to March 2021. 
However, as of July 22, six of the 12 anti-vaccine activists responsible for creating more than half of the anti-vaccine-related content online were still searchable and were posting videos - Facebook still struggles to prevent vaccine misinformation from being shown to its users. In a June 2021 experiment by an advocacy group on how anti-vaccine content is propagated, two newly created experimental accounts on Facebook were recommended 109 pages with anti-vaccine information in just two days - A report by the London-based think tank, the Institute for Strategic Dialogue, stated that a TikTok feature that can be used for adding another person's audio to one's video is being used to promote misleading information about COVID-19 vaccines Unfortunately, the US has been witnessing a surge in COVID-19 cases. A major portion of this rise is in parts of the US where vaccination rates are low, and misinformation regarding the vaccine is reported as a contributing factor to these low numbers. Florida, for example, is grappling with a rise in COVID cases amid vaccine hesitancy and misinformation. The connection between vaccine misinformation and low vaccination numbers in the US has been cited by US President Joe Biden, who has pointed to social media platforms for spreading falsehoods about the virus and the vaccine. Centers for Disease Control and Prevention Director Rochelle Walensky has also cautioned that COVID-19 is "becoming a pandemic of the unvaccinated." Misinformation legislation globally The Health Misinformation Act would not be the first legislation in the world to hold social media platforms responsible for misinformation on their platforms. The Trust and Safety (T&S) practices of enterprises, especially social media platforms, are increasingly encountering legal regulation. Several other countries are looking at ways to hold enterprises accountable for the content they host. 
What is the way forward for enterprises? Enterprises need to quickly adapt to regulatory changes that compel them to assume greater responsibility for the content they host. Enterprises can take the following steps to adapt to the new realities: - Setting up a dedicated war room of moderators to tackle COVID-19-related misinformation - Hiring moderators with dedicated expertise who can help them spot and identify health-related misinformation - Seeking assistance from medical professionals to train their automated Artificial Intelligence (AI)/Machine Learning (ML) moderation systems to identify and remove misinformation - Collaborating with other enterprises to identify dubious content using cross-platform information sharing, collectively tackling their common enemy – misinformation - Being agile enough to quickly adapt their policies to local content laws - Becoming aware of country-level laws and devising universal content policies, since the same kind of user-generated content (UGC) may be considered legal in one region and illegal in another What role can T&S service providers play in enabling enterprises to adjust to the new regulatory realities? Service providers who are responsible for ensuring T&S will continue to play increasingly important roles in partnering with platforms to help them adapt to regulatory changes and emerge with strengthened T&S functions. 
Here are some ways service providers can evolve their offerings to enable enterprises under these dynamic circumstances: - Proactively offer content policy consulting to help enterprises make better UGC moderation decisions - Help enterprises meet their specialized talent requirements for their T&S functions in light of the new legislations - Offer AI technology solutions that can be trained to help enterprises identify and remove health-related misinformation from their platforms Global regulations put a larger responsibility on enterprises to ensure the wellbeing of the people they connect and bring together through their platforms. Maintaining custom and agile T&S operations has become the need of the hour for organizations. To help enterprises with their T&S needs, service providers are evolving their offerings and becoming partners of choice in these trying times. The bottom line User-generated content has nuances and needs contextualization for accurate interpretation, and hence, online content moderation decisions are not always black and white. As a result, it is critical that enterprises, service providers, health experts, regulators, and all concerned civil society groups come together and collectively form policies that can help remove health-related misinformation swiftly before it reaches and influences more people. A multi-stakeholder approach, in the form of an independent review board, consisting of experts from different walks of life, can be a promising way forward for all parties involved to take on the onerous challenge of defeating health misinformation. 
Currently, under US laws, social media platforms cannot be held liable for the user-content on their platforms, based on Section 230 of the CDA, whose provisions state: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” **Under the proposed Health Misinformation Act, social media entities would be devoid of the immunity granted by Section 230 of CDA only when they are found guilty of boosting the post algorithmically (post shown more to users due to engagement versus if such health-related misinformation is shown to users chronologically). To share your thoughts on responses to misinformation in social media, contact us at [email protected].
AC vs DC Switch: Which One to Choose? To apply a switch in a network, the first step is to get the network switch powered up. There are two power types for an Ethernet switch: AC (alternating current) power and DC (direct current) power, both of which are used to increase network uptime. So for AC vs DC switches, what are the differences, and how is each used in different applications? This article gives a detailed introduction. AC Switch vs DC Switch: What Are They? An AC switch is usually equipped with a fixed AC power supply connector. To power it up, all you need to do is connect the AC switch to the power socket with proper cables such as IEC/NEMA/Z-lock power cords. PoE/PoE+ switches are typical AC switches; take the FS 8-port Gigabit Ethernet PoE+ switch as an example, which is equipped with a single power supply (as shown below). Figure 1: FS S3150-8T2FP PoE+ Switch A DC switch can be configured with an internal or external DC power supply. The external DC power supply, also known as a redundant power supply, is more popular nowadays. A DC switch often has more than one redundant power supply, since a redundant supply can both keep the switch powered and protect the other supply when one fails with shorted outputs. Take the FS 48-port Ethernet switch as an example: equipped with 2 (1+1 redundancy) hot-swappable power supplies, it can still operate while one of the power supplies is broken. Figure 2: FS S5850-48T4Q Ethernet Switch AC vs DC Switch: What Are the Differences? What a switch does in a circuit is make or break the electrical connection, and the speed at which this happens really matters. With AC power, the arc across the switch contacts extinguishes itself quickly, which is a desirable condition. With DC power, the voltage arc takes much longer to extinguish and may cause pitting of the switch contacts. However, DC power provides a smooth flow and even voltage, which is why most electronics use it, for example to store power in batteries. AC vs DC Switch, How to Choose? 
So, AC vs DC switch: how to choose? It depends on the types of power supplies available in your network. There are also switches that can take both AC and DC power supplies. However, if you initially power such a switch with a DC supply, the switch detects this and operates on DC power; any AC power supply installed in the switch is then disabled. Remember not to mix AC and DC power supplies in one switch.

What if you choose the wrong type of switch? To use an AC switch at a DC-powered location, add a power inverter (which converts DC to AC). Likewise, a DC switch can run from AC power by adding a rectifier (which converts AC to DC). In other words, choosing the wrong AC/DC switch means buying another device (a power inverter or rectifier) as a remedy.

To avoid needing that second device, FS provides a switch that can operate on both AC and DC power supplies. This FS 24-port layer 3 switch takes dual hot-swappable AC/DC power supplies for 1+1 redundancy and load sharing. The AC/DC switch also supports MLAG (Multi-Chassis Link Aggregation) for uninterrupted service with high reliability, making it ideal for large-scale campus network aggregation and small to medium-sized network cores.

Figure 3: FS S5850-24S2Q Ethernet Switch

Both AC switches and DC switches are commonly used today. AC vs DC switch: which one to choose? It depends on the power supplies available, as well as your own situation and needs. To avoid downtime and save the time and effort of remedial measures, a switch with dual AC/DC power supplies might be a good solution for your network.
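The value of the 1+1 redundancy described above can be sketched as a tiny availability check. This is an illustrative model only, not tied to any real switch API; the function name and the way the redundancy policy is encoded are assumptions for the sketch.

```python
def switch_has_power(psu_ok, min_working=1):
    """Return True if the switch stays powered.

    psu_ok: list of booleans, one per power supply (True = healthy).
    With 1+1 redundancy, one healthy supply is enough to keep the
    switch running, so min_working defaults to 1.
    """
    return sum(1 for ok in psu_ok if ok) >= min_working

# A dual-supply (1+1 redundant) switch survives a single PSU failure:
assert switch_has_power([True, False])        # one supply failed, still up
assert not switch_has_power([False, False])   # both failed, switch is down
```

A single-supply switch corresponds to a one-element list, where any supply failure takes the switch down; that is the uptime gap redundant supplies are meant to close.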
Digital sustainability is the use of technologies in everyday business applications to improve the environment. The movement has gained traction as individuals around the world seek to limit the effects that digital technology can impose on the environment. To achieve digital sustainability, organizations use digitalization and advanced analytics. Businesses adopting digital sustainability as a goal can use digital processes, tools, and forecasting models to weigh potential gains against the impact that their success might have on the environment. These same businesses can then work to mitigate any potential environmental impact of their operations, while still pairing consumers with valuable goods and services. Several forces are behind digital environmental and social initiatives, including attention to rising populations and the increasing demand for remote IT support software solutions. Ultimately, digital sustainability allows companies to use technology that protects the environment without compromising corporate success.

How Digital Sustainability Is Changing Business

Digital sustainability is changing the way that companies operate. Businesses are becoming increasingly eco-minded, aware of the effect that digital processes can have on environmental sustainability. At the same time, companies are beginning to value contributions to the environment, enabling digital sustainability strategies that account for customer and environmental needs alike.

Appeal to Consumer Attitudes

Companies must pay close attention to customer attitudes. After all, consumer opinions will often determine how they spend their money. It’s no surprise that when customers prioritize environmental conservation, companies will work to publicly prioritize sustainability. Appealing to consumer attitudes means demonstrating — as a company — that you care about the same causes your customers care about.
Sometimes, the drive to please customers will see a company take a certain political approach, or maintain a social agenda. A modern focus on sustainability and reducing one’s environmental impact means companies are working overtime to prove that they have similar interests. Digital sustainability allows companies to reflect consumer interests without harming their long-term output. Implementing digital strategies that reduce their environmental impact can improve a company’s long-term outlook. The same strategies that contribute to a net-positive approach for the environment can also help businesses expand their customer targeting. In this way, companies that deploy digital sustainability strategies can also achieve an expanded reach. Sometimes, the digital sustainability strategy that expands customer reach is a large initiative. For example, many film content companies like Netflix transitioned away from lending physical copies of DVDs and videotapes, instead offering a digitized platform where films can be accessed at any time. Regardless of a user’s proximity to a film distribution center, viewers could now enjoy access to quality online content. Replacing physical film copies with digital ones eliminated the environmental impact of physical film distribution, achieving greater digital sustainability while reaching new audiences through an online library of the same high-quality films. In other instances, companies can leverage digital sustainability structures to benefit remote teams. Many employers with a successful history of managing remote teams will opt for virtual meetings over in-person gatherings whenever possible. Remote meetings eliminate transportation costs and mitigate any effects that physical transportation might impose on the environment.
In the same way, digitized workspaces vastly reduce the need for paper, ink, staples, and other office supplies that often end up in the garbage after a single use. The use of digital resources also has a directly positive impact on business operations: coworkers can simultaneously collaborate using the same online resources, files can be shared instantly, and internal company communication is streamlined.

Strengthening Supply Chains

Companies that offer consumer products often rely on supply chains to create and distribute their goods to customers. Traditionally, companies prefer to deliver goods to customers as quickly as possible, even if this speed comes at the price of compromising the environment. Fortunately, digital sustainability offers a middle ground: companies can leverage technology to improve their supply chain processes without harming the environment, locally or elsewhere. Digital sustainability initiatives for supply chains often involve the increased use of automation. Automated technology can contribute to more effective processes while improving supply chain speed and virtually eliminating human error. Businesses that migrate their online databases to the cloud can also use fewer servers to achieve the same goals, reducing carbon emissions in the process.

How Digital Sustainability Is Affecting the Environment

Digital sustainability is quickly revolutionizing the way that many companies operate. Widely regarded as the newest pillar of business operations, sustainability has led many companies to enact digital sustainability strategies that reduce their overall environmental impact. Many companies have also begun reporting progress on their sustainability measures, even including details in public shareholder meetings.

More Resources for Consumers

Several solutions born from digital sustainability offer improved options for consumers.
Increased digitization means that customers have access to more resources at their fingertips, whether that’s bank information, entertainment, or energy-use reports.

Vulnerability to Technology Failures

It’s worth noting that digital sustainability can have negative repercussions, both for the businesses that enact these strategies and for the customers who encourage them. Increased use of technology can sometimes mean larger consequences when technology fails. Similarly, companies that now house a majority of their secure files and finances online are subject to cybersecurity threats. Digital sustainability can also yield negative effects for society. To practice digital sustainability, many companies are implementing technological solutions to the problems they face. These solutions automate certain processes, displacing the workers traditionally performing those same jobs. Business owners should seek a balance between digital sustainability and output. The challenge for any business today is finding ways to enable digital sustainability initiatives without crippling their profitability. Before fully pursuing digital sustainability, companies should consider the social impact, environmental consequences, and corporate effects that their efforts might have.

Examples of Digital Sustainability

Digital sustainability efforts can be found in a wide variety of industries today, especially in companies that offer client-facing services. Many businesses are actively investing in remote monitoring and management software, allowing them to simultaneously prioritize corporate success and environmental sustainability by streamlining IT support with the help of managed service providers. These companies can sustain remote workforces — saving the environment from the energy-use effects of an in-person office — while addressing employee IT issues through advanced IT automation, scripting, and patch management.
Digital sustainability efforts can come to define nearly any industry:
- Land use analysis technology can prevent land overuse and reduce deforestation;
- Intelligent recycling processes reduce landfill contributions and improve product reuse;
- Improved local weather forecasting leads to enhanced yearly crop outputs;
- Smart electrical grids control and reduce overall energy use in a residential or corporate setting;
- Lighting and heating fixtures respond to touchless input;
- Enhanced traffic systems and parking structures can reduce fossil fuel use and increase on-road safety.

These and other digital sustainability strategies represent a small portion of the ways that businesses are working to improve the well-being of their company, their customers, and their climate.
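As a rough, back-of-envelope illustration of the server-consolidation point above (fewer always-on servers means lower emissions), the sketch below estimates annual CO2 for a server fleet. The per-server wattage and grid carbon intensity defaults are illustrative assumptions, not measured figures.

```python
def annual_server_co2_kg(num_servers, avg_watts=350.0, kg_co2_per_kwh=0.4):
    """Estimate yearly CO2 (kg) for always-on servers.

    kWh/year = (watts / 1000) * 24 hours * 365 days * number of servers
    kg CO2   = kWh/year * grid carbon intensity (kg CO2 per kWh)
    Both defaults (350 W per server, 0.4 kg CO2/kWh) are illustrative
    assumptions, not measured data.
    """
    kwh_per_year = (avg_watts / 1000.0) * 24 * 365 * num_servers
    return kwh_per_year * kg_co2_per_kwh

# Consolidating a hypothetical 10-server on-premises footprint onto
# 4 shared cloud servers shrinks the estimated annual emissions:
saved_kg = annual_server_co2_kg(10) - annual_server_co2_kg(4)
```

The point of the model is directional, not precise: halving the number of always-on machines roughly halves the estimated emissions, which is the mechanism behind the cloud-migration claim.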
This week the European Commission presented its proposal for the Cyber Resilience Act, which aims to protect consumers and businesses from digitally connected products with inadequate cyber security features. The legislation will be mandated for all EU member states, but will also likely have global implications, given that any company selling products into the EU will have to comply. The Act was announced in September 2021 and builds on the 2020 EU Cybersecurity Strategy. The aim is to ensure that digital products, often those grouped under the ‘Internet-of-Things’ label, are more secure for those living and working in the EU, and to increase the responsibility of manufacturers to comply with minimum requirements. The new regulation will impact everything from smart speakers to cars, toys, and digitally connected factories and warehouses. Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age, said:

“We deserve to feel safe with the products we buy in the single market. Just as we can trust a toy or a fridge with a CE marking, the Cyber Resilience Act will ensure the connected objects and software we buy comply with strong cybersecurity safeguards. It will put the responsibility where it belongs, with those that place the products on the market.”

Upon announcing the Act, the European Commission said that ransomware attacks hit an organisation every 11 seconds around the globe and that the estimated global annual cost of cybercrime reached €5.5 trillion in 2021. As such, it adds, ensuring a high level of cybersecurity and reducing vulnerabilities in digital products – one of the main avenues for successful attacks – is more important than ever. The documentation also notes that a cybersecurity incident in one product can have an impact on the entire supply chain, which could lead to disruption of economic and social activities across the EU internal market.
What it means

The measures proposed today are based on the New Legislative Framework for EU product legislation and will lay down:
- rules for the placing on the market of products with digital elements to ensure their cybersecurity;
- essential requirements for the design, development and production of products with digital elements, and obligations for economic operators in relation to these products;
- essential requirements for the vulnerability handling processes put in place by manufacturers to ensure the cybersecurity of products with digital elements during the whole life cycle, and obligations for economic operators in relation to these processes (manufacturers will also have to report actively exploited vulnerabilities and incidents);
- rules on market surveillance and enforcement.

Margaritis Schinas, Vice-President for Promoting our European Way of Life, said:

“The Cyber Resilience Act is our answer to modern security threats that are now omnipresent through our digital society. The EU has pioneered in creating a cybersecurity ecosystem through rules on critical infrastructure, cybersecurity preparedness and response, and the certification of cybersecurity products. Today, we are completing this ecosystem through an Act that brings security in everyone's home, in all our businesses and in every product that is interconnected. Cybersecurity is a matter for society, no longer an industry affair.”

According to a fact sheet released by the European Commission, 90% of products will be self-assessed by manufacturers – including hard drives, games, smart speakers, etc. Some 10% of products will undergo some form of third-party assessment, due to their critical nature – these include such things as network interfaces, firewalls, CPUs, etc. Member States will appoint market surveillance authorities, which will be responsible for the enforcement of the Cyber Resilience Act obligations.
In cases of non-compliance, market surveillance authorities could require operators to bring the non-compliance to an end and eliminate the risk, to prohibit or restrict the making available of a product on the market, or to order that the product be withdrawn or recalled. Each of these authorities will be able to fine companies that don't adhere to the rules. The Cyber Resilience Act establishes maximum levels for the administrative fines that should be provided in national laws for non-compliance. The European Parliament and Council will now examine the draft Cyber Resilience Act. Once adopted, economic operators and Member States will have two years to adapt to the new requirements. Thierry Breton, Commissioner for the Internal Market, said:

“When it comes to cybersecurity, Europe is only as strong as its weakest link: be it a vulnerable Member State, or an unsafe product along the supply chain. Computers, phones, household appliances, virtual assistance devices, cars, toys… each and every one of these hundreds of million connected products is a potential entry point for a cyberattack. And yet, today most of the hardware and software products are not subject to any cyber security obligations. By introducing cybersecurity by design, the Cyber Resilience Act will help protect Europe's economy and our collective security.”

Much like GDPR, this will have ramifications far beyond the EU. Any company selling products into the EU will have to comply with the new standards that will be laid out, meaning that this new Act will likely become the reference point for global organizations looking to meet minimum security requirements for connected products.
Series: z/OS Management Facility

z/OSMF – The IBM z/OS Management Facility

This course provides the learner with a basic understanding of the z/OS Management Facility (z/OSMF). It begins with basic concepts: what z/OSMF is, why it is used, how it is configured, and first steps in logging on and using it. The course then delves further, providing the student with the skills needed to use all the z/OSMF features: problem management, configuration of WLM and TCP/IP, software management and deployment, capacity provisioning, performance monitoring, and workflow creation.
China’s rapidly growing tech economy is now facing some serious questions about the trade-offs involved in the widespread adoption of emerging technologies such as AI. In fact, China’s Ministry of Science and Technology is now leading the debate over the relative benefits and drawbacks of artificial intelligence, with at least some recognition that certain AI applications – such as facial recognition technology – might have some very negative implications for personal privacy. At the same time, other regulatory authorities within China – including the Cyberspace Administration of China – are taking a closer look at how popular consumer technologies (including mobile apps) might also be going too far when it comes to collecting, using and sharing user data.

The privacy vs. social good AI debate in China

For now, the most high-profile emerging technology within China is artificial intelligence (AI), which is being embraced much more quickly and widely than in the West. For example, Chinese law enforcement authorities are using AI-powered facial recognition technologies to crack down on crime and terrorism, while urban planners and other policymakers are embracing AI as a way to come up with more efficient healthcare, education and transportation solutions. You can think of this as the “social good” element of AI: when used properly and judiciously, with the right privacy safeguards built in, AI has the potential to transform society for the better. However, it’s not always the case that the right privacy safeguards are built into AI solutions. Take facial recognition technologies, for example. There is a very real fear that Chinese state authorities are using facial recognition technologies to silence political foes, track dissidents, and round up ethnic minorities in regions such as Xinjiang (which has a large ethnic minority population).
In fact, there are accusations and allegations in the Western media that AI-powered facial recognition technologies are giving rise to “internment camps” across China. At the same time, the concern is that the Chinese government authorities will use facial recognition technology as a way to clamp down on critics and dissidents. You can think of this as the “privacy” element of AI: if used for the wrong purposes or taken to extremes, AI could possibly have some very negative consequences for personal privacy. And make no mistake about it – these AI-powered technologies are becoming very powerful indeed. In one highly publicized example, a BBC journalist agreed to be part of a facial recognition experiment in order to see how long it took the Chinese authorities to find him and his exact whereabouts in the city of Shenzhen. As it turns out, it took only 7 minutes to find him. So you can see why some privacy experts and champions of free speech are so concerned – within a matter of a few days, it would theoretically be possible to round up, arrest, or detain all dissidents and so-called “enemies of the state.”

China creates a new AI governance committee

Given all of this uproar in the Western media and in global policymaking circles, it’s perhaps no surprise that China has started to confront the AI debate of privacy vs. social good head on. Case in point: China’s Ministry of Science and Technology recently formed a committee on AI governance that comprises some of the biggest thinkers and AI experts in the Chinese tech sector, including the former head of Google China, the CEO of facial recognition technology company Megvii, and the head of AI at e-commerce powerhouse JD.com. In June 2019, less than four months after being formed, this AI committee helped to come up with a set of 8 different principles for developing responsible AI within China.
These principles included the following: harmony and friendliness; fairness and justice; inclusivity and sharing; respect for privacy; secure and controllable; shared responsibility; open collaboration; and agile governance. Even Western critics of China’s AI sector were impressed. Don’t these principles – at least, on the surface – sound a lot like the types of principles that a similar type of AI governance committee in the West would have come up with for an AI company?

AI and the Chinese economy

If you’re a cynic, of course, it’s easy to see how the inclusion of principles such as “respect for privacy” might seem a little disingenuous. But here’s the thing: China is no longer an isolated Communist Party nation with a standalone economy; instead, China is a vibrant member of the global economy, and needs to play by a certain set of rules. China is apparently getting the message that, if it wants to be a global economic superpower, it will have to get a lot more serious about ethics and respect for privacy when it comes to emerging technologies such as AI. Importantly, China views AI as a very powerful driver of future economic growth. Unlike the first “Asian miracle” – which saw the spectacular rise of Japan and South Korea on the global stage while China remained stagnant – the next Asian miracle involving AI-powered economic growth will include China at the forefront of the boom. In fact, China has already proclaimed that it wants to be a world leader in AI by the year 2030, and that its domestic AI industry will be worth an estimated $150 billion by that time. At the same time, China has already started to protect its biggest tech companies at the forefront of AI research and development, including search engine Baidu (China’s Google), Alibaba and Tencent. To make that possible, China is looking for ways to catalyze the growth of its AI and machine learning industry.
From a Chinese perspective, any Western efforts to cap the development of AI in China are really just a way to handicap future Chinese economic growth. China is very aware that it was slow to industrialize and globalize the first time around, and is not going to make the same mistake the next time around. Thus, it’s easy to see why the Chinese authorities are slow to impose rules, regulations or laws on the use of AI and its power to make sense of Big Data: as they see it, these rules and regulations will only restrict the development of the AI industry, and potentially put Chinese companies at a competitive disadvantage with the West. This might sound a lot like paranoia, except for the fact that this same scenario seems to be playing itself out in the area of 5G networks. 5G is another emerging technology with huge economic and societal implications, and the West appears to be taking steps to undercut, handicap or neutralize the growth of China’s biggest 5G network operators, especially Huawei. The U.S. has been particularly vocal about the national security and privacy risks posed by Chinese 5G network providers, and has been encouraging its allies to stop the installation of Huawei 5G networking equipment. If this strategy proves successful for the U.S. when it comes to 5G, what’s to stop the U.S. from employing a similar type of strategy when it comes to AI?

China and the mobile apps debate

While China might claim that U.S. fears over AI being used for authoritarian purposes are overblown, the fact remains that China’s information technology companies have a very spotty history when it comes to protecting personal privacy. The easiest place to see this is with mobile apps, where even Chinese regulators – not just consumers – are starting to get very concerned about how much data these mobile apps are collecting, and how they are sharing all of these data with other third parties.
Most notably, one of China’s top Internet watchdogs – the National Computer Network Emergency Response Technical Team (CNCERT/CC) – recently came out and warned that China’s mobile apps creators are becoming too abusive in terms of how much data they are collecting. In its latest report (issued every six months), the CNCERT/CC warned that “data-hungry” apps were becoming a risk to consumers. As an example, the report pointed to food delivery apps suddenly asking for permission to access a user’s photos or videos. Why, exactly, would a food delivery app need to see photos on your camera roll? Moreover, the Cyberspace Administration of China has also gotten involved in the debate over mobile apps, warning that Chinese mobile and social media developers must curb excessive data collection. It’s not just that these apps are collecting too much data – it’s that they are collecting data completely unrelated to the purpose of the app. One concern, of course, is that they are simply collecting all this data in order to turn around and sell access to this data to third parties. But a more insidious scenario is that these mobile apps are becoming a backdoor surveillance mechanism for the Chinese state with real-time monitoring capabilities. What better way to check up on your friends and acquaintances than by perusing your camera roll? Imagine political dissidents and critical journalists opening up their entire network of friends and family to scrutiny by the state every time they order a pizza for delivery. And, of course, Chinese consumer associations are also warning of the perils of mobile apps in China. According to the Chinese Consumers Association, mobile apps collect way too much information. Of 100 mobile apps analyzed, 90 of them pushed the boundary of what is acceptable to collect from users.
This Chinese consumer body noted that apps in several key verticals – travel and hotel, cloud storage and wealth management – were particularly bad when it came to collecting personal data and personal information. In exchange for getting access to cloud storage, for example, a mobile apps user might need to give up location data, contact lists, or mobile phone numbers. As might be expected, the Western media has taken this debate over Chinese mobile apps to an entirely new level of speculation and paranoia. Media giant CNBC, for example, recently warned of the perils of using popular Chinese camera apps. If you use a popular Chinese camera app like Meitu (which enables you to beautify selfies and other photos), you might be opening up yourself to scrutiny by the Chinese state. If you use the popular app TikTok (now one of the fastest-growing mobile apps in the world), you might be unknowingly subjecting yourself to surveillance by shadowy Chinese intelligence organizations. That’s because, say Internet experts, Chinese tech companies are almost powerless to stop data requests from the Chinese government. If the Chinese state requests information about a certain user, the odds are pretty good that a Chinese tech company (especially an upstart mobile apps company) will simply turn over the information requested, no questions asked. That’s why, for instance, there is so much concern over Huawei setting up 5G networking equipment worldwide. Experts fear that Huawei is really just a “front” for the Chinese government, willing to embed back doors into any of its technology if it can be used to further Chinese national security or surveillance objectives.

The future of AI and mobile apps in China

Thus, as can be seen, AI and mobile apps are two tech areas where privacy concerns are highest. In both cases, data protection standards appear to be lower than in the United States.
And, in both cases, respect for personal data and personal privacy appears to be lower than in the West. It will be up to the Chinese government to prove that it is embracing international norms when it comes to the further development of these technologies. Certainly, it is an encouraging sign that China is starting to have a debate over personal privacy and technology. From a purely cultural perspective, China appears to value social stability much more than personal privacy, while in the West, there is probably higher value attached to personal privacy and personal freedom than to social stability. Thus, there may never be a point where the U.S. and China see eye-to-eye on the development of key technologies like AI. And, it certainly doesn’t help matters right now that the U.S. and China are embroiled in a mounting economic trade war, in which any debate over surveillance technologies or personal privacy might plausibly be used as a negotiating tool by the West. For now, at least, the burden of proof is on the Chinese authorities and Chinese tech companies. For top AI and mobile apps companies from China to gain global acceptance, they will need to show that they are fully committed to respecting personal privacy.
K-12 in a Time of Pandemic: A Changing Culture

Before COVID-19 was declared a pandemic, most schools were either on or going on spring break and considering “what if…?” scenarios. When schools were forced to close for the rest of the school year, the question became: “what now?” That every school answers that question differently is no surprise. After all, the business of education differs from school to school, district to district, and state to state. The curricula are different, educational apps are different, security and networking protocols are different, and one-to-one and collaboration initiatives are different. Everything is different. These differences made sending kids home, in a consistent way, a challenge—though not necessarily a technology challenge.

Distance learning is about more than technology

In truth, distance learning isn’t really a technology issue. We have the technology to ably support distance and online learning, no problem. From the perspective of parents, who are juggling the “newness” of work from home with the “newness” of their at-home, distance-learning kids—especially younger ones—keeping them engaged and on task is a challenge. From teachers’ perspectives, many are struggling to stay relevant in a digital age when traditional tools and traditional roles are quickly changing. From students’ perspectives, not everyone has access to technology. Even if they do have access, the devices they use and how they connect are all over the board. For example:
- Many students have smartphones, but not all do.
- Many students had access to Chromebooks/mobile devices in school, but they were only available for a specific class or within a specific classroom.
- Some students have access to a computer at home, but it is likely a shared device.
- Some 73 percent of U.S.
adults have high-speed broadband service at home, according to Pew Research, but "racial minorities, older adults, rural residents, and those with lower levels of education and income are less likely to have broadband service at home." Even if given a device to take home, kids in these homes couldn't connect.

Finally, IT teams shifted from an internally focused "safe harbor" to an externally driven "rocky sea." Connectivity had to be extended, a vast array of devices secured, and numerous applications supported—all "out in the wild." Now.

There was really only one answer, of course, and that was to go to the cloud. In the cloud, devices become inconsequential. Apps can be accessed from anywhere and with any device, as long as you have the right credentials. Read more about using the cloud to scale remote learners.

(A side benefit: K-12 IT teams now have time to do infrastructure projects with E-Rate and other funding that were previously put off due to lack of time. Our services teams have been busy helping our K-12 and higher ed clients with refreshes, server upgrades, cabling runs, new access points, connectivity within and between buildings, and more. In fact, the FCC has extended deadlines for E-Rate due to the pandemic. Read more here.)

Logicalis: Your partner in education

Logicalis can help you securely bridge the gap so that students can effectively use distance and online learning—both now during the shutdown and as your educational needs change in the future as a result. Our highly certified and skilled education advocates take the time to understand your school or district, your challenges, your goals, your people, and your vision. Then we provide vendor-neutral advice that enables you to better engage students and achieve student outcomes. Logicalis can help you:
- Build secure networks and strong identity and access management that protect both student privacy and networks.
- Configure and manage devices and select the right software platforms to ensure your educational apps perform as needed.
- Implement collaboration and videoconferencing systems that help you connect to students, extend classrooms, meet with other teachers and administrators, and more.
- Prepare for future disruptions with tested business continuity or continuity of operations plans that give kids and teachers a sense of normalcy when there's chaos around them.
- Assist IT teams by filling absences, skills, support, and other gaps, and help teachers gain the confidence needed to deliver digital education.

Mike Marchal is Director of Technical Sales for Logicalis, responsible for working with account managers and customers to provide technical solutions to business problems.
Demystifying the Blockchain: A Basic User Guide

Multiple use cases for the blockchain are currently being tested, so it's worth learning the basics.

Most people agree we do not need to know how a television works to enjoy using one. This is true of many existing and emerging technologies. Most of us happily drive cars, use mobile phones and send emails without knowing how they work. With this in mind, here is a tech-free user guide to the blockchain - the technology infrastructure behind bitcoin, and many other emerging platforms.

Q: What does the blockchain do?
A: The blockchain is software that stores and transfers value or data across the internet.

Q: What can I store and transfer using the blockchain?
A: To use the blockchain, you will need to set up an account or address (a virtual wallet). At this time, the most popular use for the blockchain is to make micro-payments with virtual currencies. For example, you can buy bitcoin with real money and then spend it on the internet using the blockchain. Authorising a payment using the blockchain is similar to using a credit card to buy something online. Instead of a 16-digit credit card number, you provide the vendor with a unique string of numbers and letters generated for each transaction. With this unique identifier, the blockchain can verify and authenticate the transaction.

Q: Can I use the blockchain to transfer real money?
A: Not yet. Some companies are using the blockchain to make international financial transfers, but most of these transactions are enabled by bitcoin or other digital currencies. Exchanging real money for bitcoin incurs fees for the sender, but the benefit is speed, security and convenience.

Q: How is transferring value or virtual currency on the blockchain different from transferring money from my bank account?
A: Depending on the amount and the destination, when you transfer money from your bank account, your bank will limit the amount you can transfer.
Most banks impose daily limits for all transactions. When you use virtual money on the blockchain, there are no limits. When you transfer value or currency from your bank account to an account with a different bank or other financial institution, the transfer can take days. When you use the blockchain, the transfer is immediate. If a transfer from your bank account puts your account into debit, your bank will charge you a fee. The blockchain will not allow a transfer in excess of your balance and so your virtual wallet will never be in debit.

Q: How is storing value using the blockchain different from keeping my money in a bank account?
A: Bank accounts and credit cards are vulnerable to attack from fraudsters and hackers. The blockchain is a more secure way to store and transfer funds, particularly if you keep a modest value in your virtual wallet. Hacking the blockchain is difficult, time-consuming and expensive. No one breaks into Fort Knox for just $500. Of course, value stored on the blockchain will not earn you interest or improve your credit rating; and the blockchain will not lend you money to buy a house or car. The blockchain does not replace your bank, but very soon banks will be using the blockchain too.

Q: How is transferring data using the blockchain different to attaching a file to an email?
A: Unlike emails with attachments, the blockchain enables the immediate transfer of data no matter how big the file. Also, there is less danger of spam or viruses and no need for firewalls or junk folders.

Q: How is storing data using the blockchain different to storing my files on my computer?
A: If you lose or break your computer or if it is attacked by a hacker or virus, you could lose that data. The blockchain resides in the cloud. Like any web-based storage, you just need your username and password to access your data from anywhere anytime.

Q: What else can I use the blockchain for?
A: Very soon the blockchain will be used for online transactions.
It will enable smart contracts, crowdfunding and auctions. It will verify the provenance of artworks and diamonds; transfer title to real estate and other assets; and store information about people, products and property. Apps for music distribution, sports betting and a new type of financial auditing are also being tested.

Q: Why is the blockchain described as "riskless"?
A: The blockchain verifies and authenticates both ends of each transaction. It will not release a purchaser's funds until it has checked that the vendor will deliver as promised.

Q: Is the blockchain safe?
A: Standards and regulations are needed so that the technology can be readily used across different organisations, industries and jurisdictions. Blockchains can be private (like an email) or public (like Facebook), so users need to know which type is being operated before joining a new blockchain. My tips for safe use of the blockchain are: keep your virtual wallet details secure; do not let an unknown third party hold virtual currency or data for you; and do not provide your online banking details to anyone. As seen in a recent attack on a crowdfunding project, the blockchain is at its most vulnerable when significant value is stored in a single address. The blockchain may be trustworthy, but the people on it might not be.

Philippa Ryan, Lecturer in Civil Practice and Commercial Equity, University of Technology Sydney. This article was originally published on The Conversation. Read the original article.
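The verification idea running through the answers above — each record is cryptographically linked to the one before it, so tampering is detectable — can be sketched in a few lines of Python. This is a toy illustration of hash-chaining, not bitcoin's actual protocol; the transaction fields are invented.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (excluding its own stored hash).
    payload = json.dumps(
        {k: block[k] for k in ("index", "prev_hash", "data")},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, data):
    # Each new block records the previous block's hash, forming the chain.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev_hash, "data": data}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    # The ledger is valid only if every hash and back-link is intact.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, {"from": "alice", "to": "bob", "amount": 5})
add_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify(chain))               # True: the ledger is intact
chain[0]["data"]["amount"] = 500   # tamper with an earlier transaction
print(verify(chain))               # False: tampering breaks the chain
```

Altering any past record changes its hash and breaks every later link, which is why rewriting blockchain history is so expensive in practice.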
A recent environmental report from the Intergovernmental Panel on Climate Change (IPCC) warns that the world faces climate hazards over the next few decades due to global warming. The IPCC, a UN organization studying climate change, published its report on February 27, 2022. Here are key takeaways from the report and what it means for businesses. The IPCC report's main warning is that the irreversible impact of climate change on nature is making it difficult for many humans to adapt. The study found that over 40 percent of the global population is highly vulnerable to the effects of climate change. The report suggests that in order to avoid massive disasters, warming will need to stay below 1.5°C. Nature is being altered as a result of unpredictable climate conditions. Rising temperatures are causing coral reefs to die and trees to dry out. Coastlines are sinking in various parts of the world, including the United States. The IPCC expects a billion people to be affected by coastal climate disasters in the next two decades. Professor Debra Roberts, an IPCC co-chair, has stated that ecosystems and species that have been around all our lives may cease to exist. She has proclaimed the 2020s to be the decade of action to face the challenges posed by climate change.

Health and Wellbeing Concerns

Climate change events such as floods and heatwaves are harming people much more severely than past projections suggested. The study indicates that severe climate conditions are affecting certain people more than others, depending on where they live. Areas with high risks of severe storms, floods and droughts are Africa, South Asia, Central America and South America. Diseases will likely spread more quickly in the coming years, according to the report. The IPCC notes the risk of mosquito-borne dengue fever spreading toward the end of the century. Risks will decrease, however, the more focus is placed on education and reducing poverty.
Furthermore, healthy ecosystems provide stronger resilience to climate change than areas where disease is widespread. A worker's health and wellbeing is now a key concern for many employers. A workforce full of ill employees isn't as productive as one whose employer promotes health and wellbeing, so it's up to employers to foster an environment with a healthy work-life balance. An example of an unhealthy workplace is one full of indoor air pollution, which is known to cause illness. Employers must also be careful not to overwork employees, which can lead to fatigue and the beginning of worker burnout. It's important for businesses to create an upbeat atmosphere that workers find enjoyable, since too much stress contributes to reduced productivity. Allowing workers to take regular breaks gives them a chance to do healthy exercise like walking or stretching. Check out the recording of our webinar "The Shift to Earth 4.0", where our panel of experts shared their knowledge and perspectives on the state of the environment and how Industry 4.0 will impact the earth.

Species Threatened by Warming

Nearly half of living organisms analyzed by the IPCC are currently migrating to higher ground or toward the poles. If warming reaches 1.5°C, up to 14 percent of species face a high risk of extinction. There is already a high extinction rate in areas identified as vulnerable biodiversity hotspots, where rising temperatures threaten to double extinction rates. Some of the ways local governments can help reduce wildlife extinction threats include passing stricter regulations on pollution to protect the ground, air, and water from toxins. The more polluted the ecosystem becomes, the more extinction rates will rise. Plastics consumed by marine life also spread toxins throughout the food chain. The decreasing bee population is a further warning signal that the same ecosystem that sustains humans is being disrupted. The honey bee population declined 40 percent just in the 2018-2019 winter season.
The same rate of loss occurred the following winter. Bee pollination generates $50 billion worth of agricultural production annually. Too little media focus has been placed on the connection between the environment and the food supply, which dramatically impacts businesses and individuals.

Facing Urgent Environmental Solutions

The clock is ticking for governments around the world to implement tighter environmental regulations and encourage a shift toward clean renewable energy sources. But the report warns that certain technological solutions could make things worse. It discourages ideas of deflecting the sun's rays or removing CO2 from the atmosphere. Instead, the IPCC emphasizes "climate resilient development." The concept of climate resilient development calls for mass adoption of sustainability as a foundation for society and business. It encompasses governments, media, businesses, educators and citizens working together to promote less harmful effects on the overall ecosystem. The concept embraces renewable energy and working toward sustainability goals on a daily basis. In other words, government alone isn't going to solve the problem of climate change and its effects. A wide-scale effort among everyone on the planet to adopt sustainable solutions is what's needed to reduce the growing risks of climate change. Media and universities need to spread the word about why sustainability is important for everyone.

How Businesses Can Prepare for Climate Change

The question as to whether climate change is real has been answered by record-breaking temperatures and increasing environmental disasters. Businesses must accept that there are major consequences to dependence on systems that make the ecology worse. Floods and fires are getting more severe in places that haven't had such problems in the past. No single business is expected to have all the answers to guard against the disastrous effects of climate change.
But there's an emerging set of principles spreading across industries known as ESG, which addresses environmental, social and governance concerns. Here are some of the important concepts under the ESG umbrella: - Clean renewable energy - Green solutions (reuse, repurpose, recycle) - Energy conservation via smart tech monitoring tools - Use of eco-friendly materials instead of hazardous materials - More efficient operational processes - Use of IoT devices to identify waste - Cybersecurity strength to protect data privacy - Software that ensures government compliance - Business continuity planning - Workplace diversity - Safe work environment - Fair treatment of employees - Open and transparent leadership Every business can benefit from digital technology to become more efficient so that they reduce waste while increasing productivity. Smart technology has been monumental in helping large manufacturers and utilities identify and resolve problems relating to production waste. It has also sped up decision-making and provided more reliable, agile and accurate processes via automation. Businesses should develop an infrastructure designed for integrating smart technology and other new advancements. The combination of automation and machine learning technology can provide your business with alerts when your system is in danger. Preparing for the worst possible climate change disaster is a step toward greater business continuity and sustainability. By storing your critical data in three different locations, which may include cloud servers, the business is less likely to be wiped out by a sudden natural disaster.
The human brain is an amazingly powerful biological machine. Its billions of neurons control everything from our bodies to our very nature. Everything we are, or could be, hides in those cells. Neuroscientists have a basic understanding of the human brain and how it works, but a lot remains a mystery, which is why some researchers have started turning to data centers, of all places, to get some answers. How can machine learning help neuroscientists understand the human brain?

What is Machine Learning?

While we're not up to true artificial intelligence — we have yet to create any learning machines that can pass the Turing Test, widely regarded as a benchmark for true artificial intelligence — our computers get smarter every single day. Machine learning is defined as a type of artificial intelligence designed to allow the programmed system to learn and grow from its experiences — much like the way our human mind works. A good example of one of these systems already in place is the software used for the Tesla autopilot. These electric cars utilize a cloud-based machine learning system. If one Tesla encounters an obstacle in the road or a new traffic pattern, it can upload that experience to the cloud, and all the other Teslas in the area can download that new information, improving their software and learning from the experience of one of their fellows. How can this kind of machine learning help neuroscientists understand the brain?

Data Management and Machine Learning

Machine learning has two things it can offer neurologists — data management and direct studies. As of right now, machine learning is used most often for data management. When paired with predictive algorithms, these programs can sift through massive amounts of data and in many cases find connections or patterns that a human analysis may have missed.
This isn't a slight against the human researchers by any stretch of the imagination, just a simple statement of fact — the human mind is a powerful tool, but it can't process data as quickly as a computer can. Not yet, anyway. These programs can be told to look for a specific type of pattern, or they can simply be released to work their magic on a data set, finding any patterns that exist in that particular set.

Understanding Our Gray Matter

Currently, our best tool for understanding how the brain works is known as functional neuroimaging. It combines an MRI with an EEG to observe the functioning of the brain in real time, depending on the activity. It's been used to monitor the brain activity of writers during the act of creation, of musicians while playing their instruments and even singers while listening to music. This gives the researchers a good idea of how the brain works under different stimuli. These machine learning programs are, in essence, creating a mind. Its neurons and pathways might be digital, but we've created these programs to mimic the growth of the human mind. We can study the simpler brains of smaller creatures — fruit flies and other larvae — to begin to understand how the brain works, but these invertebrates simply don't have the sort of processing power the human mind does. Drosophila larvae, a popular choice for these studies, have about 15,000 neurons in their simple brains. The human brain contains about 86 billion neurons, and this is where machine learning comes in. We can create programs that mimic the human mind and its 86 billion neurons. These programs can be trained to learn and grow in a way that mimics humans — or even exceeds our capabilities if all goes right. It's also simpler than studying a full-grown human mind: most human brains available for study are already fully developed, but with a machine learning program, a neurologist can study the growth of a mind from the first neurons that form.
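The idea of a software "neuron" that adjusts its connection strengths with experience can be shown with a minimal sketch: a single artificial neuron that learns the logical OR function from examples. This is a toy illustration of the principle, not any lab's actual brain model; the training rule shown is the classic perceptron update.

```python
# A single artificial "neuron": weighted inputs, a threshold, and a
# learning rule that nudges the weights whenever the output is wrong.
def train_neuron(samples, epochs=20, lr=0.1):
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - output   # learn from the mistake
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

# Truth table for OR: the neuron's "experience"
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, bias = train_neuron(samples)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 1, 1, 1]
```

Modern networks stack millions of such units and learn far richer patterns, but the core loop — predict, compare, adjust — is the same.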
Thanks to popular movies like the Terminator series, there's always the fear that artificial intelligence programs like these could take over the world and annihilate us. In reality, machine learning could change the way we do many things. Right now it's changing the face of neurology, but in the future it could change the way we study the world around us.
The Internet Society has posted a brief history of the Internet.

Internet technology makes many types of eavesdropping simple. The actual work of intercepting Internet traffic is often called sniffing. Here are some articles:
- How to wiretap your network
- Sniffing tutorial, part 1, part 2
- Wireshark: popular free software for packet sniffing

Identity theft does not rely on the Internet; a theft can arise from stealing a wallet or purse. The Identity Theft Resource Center provides explanations, examples, and suggestions regarding identity theft. The Insurance Information Institute posted a summary of identity theft statistics. Stay Safe Online offers recommendations for victims of online identity theft.

The Advanced Encryption Standard (AES) is the recommended cipher for encrypting private data. It is a US Government standard administered by the National Institute of Standards and Technology (NIST). AES is a "block cipher," and NIST provides an overview of it in its block cipher summary.

Transport Layer Security (TLS) is the modern version of Secure Sockets Layer (SSL), the technology that made secure web transactions practical in the 1990s. Microsoft published an introduction in an old TechNet article. The Internet Engineering Task Force offers TLS version 1.2 as the proposed standard.

This video (#2 in the Cryptosmith series) explains why and how encryption protects mobile purchases and other valuable transactions. Video notes: cys.me/vid/c02. Video #3 explains how public key cryptography is used to share secrets: vimeo.com/197452327. The series begins with Learning Practical Cryptography: vimeo.com/189732838.

Last revision: December 30, 2016
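To make the TLS discussion concrete, here is a minimal sketch of setting up a TLS client in Python using only the standard-library ssl module. The fetch function performs a live network request and the host is whatever server you choose, so treat it as illustrative rather than production code.

```python
import socket
import ssl

# create_default_context() enables certificate validation and hostname
# checking by default -- the protections that defeat simple sniffing.
context = ssl.create_default_context()
# Refuse the older SSL/early-TLS protocol versions.
context.minimum_version = ssl.TLSVersion.TLSv1_2

def fetch_page_securely(host):
    # Wrap an ordinary TCP socket in TLS before sending anything.
    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n")
            return tls_sock.recv(4096)
```

Because the application data is encrypted inside the TLS layer, a packet sniffer on the path sees only ciphertext, not the request or response contents.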
Data preparation is an integral part of designing enterprise software systems that use machine learning and AI. Enterprise-scale businesses and government organisations often deal with terabytes and petabytes of data. They not only need to manage the complexity of data, but use the data in the right context at the right time to make better decisions. Data preparation is the key step in cleansing data to make sense of the information using machine learning. The data needs to be formatted in a specific way for it to be leveraged by ML algorithms. The quality of the datasets is paramount to providing pertinent insights for the organisation. When dealing with large volumes of unstructured datasets, there could be issues with missing values, obsolete data, invalid formats, outliers, etc. So, for any algorithm to produce relevant, useful and contextual predictions, data preparation is a must. If data is not cleansed and validated properly, it can affect the accuracy of the system and even provide misleading insights. Here's a look at the pivotal steps for good data preparation to build more accurate systems.

1. Defining the Problem

The first step in data preparation requires defining the context in which data will be used. It needs clarity in terms of the key issues or problems that need to be addressed. For example, an organisation that is focused on improving its turnaround time for product development will need to analyse the project implementation steps. The project schedule can be broken up, and the parts that can be completed without any dependencies can be taken up in parallel. So, the model can provide relevant and contextual tasks to the team involved in the execution. The impact of each task on the project, service delivery and its quality can be assessed by mapping the relevant data. The focus needs to be well defined in terms of the outcomes an organisation wants to achieve.
In the above case, it could be improving product development time by 30% and quality by 30%. The steps involved are then mapped as data inputs for the algorithm to suggest improvement measures. By focusing on the problem and KPIs, the objectives of the system are clear. It can simplify considerations about the types of data to gather for analysis. The intended purpose and key outcomes drive the design of the machine learning model. Once the problem is well formulated, it is easier to map relevant data. The problem could be defined using some of these steps:
i) Gather data from the relevant domain or case in point.
ii) Let the data analysts and subject matter experts weigh in on the system.
iii) Select the right variables to be used as inputs and outputs for a predictive model for your problem.
iv) Review the data that is collected.
v) Summarise and visualise the data using statistical methods.
vi) Visualise the collected data using plots and charts for building predictive models.

2. Data Collection & Discovery

The process of transforming raw data into actionable data sets for algorithms and analysts requires consolidation of data. There could be many sources for business data, structured or unstructured. These could be endpoint data, existing enterprise systems, customer data, marketing data, accounting and financial data, etc. Data preparation requires mapping all the data sources as well as identification of relevant data sets. The model's ability to make practical insights depends on the data sets. It may be pointed out that adding too much irrelevant information adversely affects the accuracy of the model. To start with, a list of key performance indicators or questions that need to be answered is analysed. The relevant data sources are mapped, integrated and made accessible for analysis.

3. Data Cleansing

Data cleansing helps to streamline information for analysis.
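The consolidation described in the data collection step can be sketched as merging records from several sources on a shared key. The source names, fields and records below are invented for illustration:

```python
# Two hypothetical sources: a CRM export and a billing system, both
# keyed on customer_id.
crm = [
    {"customer_id": 1, "name": "Acme Ltd", "segment": "enterprise"},
    {"customer_id": 2, "name": "Beta Inc", "segment": "smb"},
]
billing = [
    {"customer_id": 1, "annual_spend": 120000},
    {"customer_id": 2, "annual_spend": 8000},
]

def consolidate(key, *sources):
    # Merge records that share the same key into single rows.
    merged = {}
    for source in sources:
        for record in source:
            merged.setdefault(record[key], {}).update(record)
    return list(merged.values())

dataset = consolidate("customer_id", crm, billing)
# Each row now combines CRM and billing fields for one customer.
```

In practice this merging is usually done with a data warehouse or a library such as pandas, but the principle — one integrated row per entity, drawn from all mapped sources — is the same.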
The validation techniques for data cleansing can be used to identify and eliminate inconsistencies, aberrations, outliers, invalid formats, incomplete data, etc. Once the data is cleansed, it can provide accurate answers upon analysis. There are tools that can help organisations to clean up their data and validate it before using it for machine learning. Good quality data is the backbone of an accurate machine learning model. Data preparation involves cleaning up data, validating data formats, checking for missing values, and other things that can affect data analysis. Data cleansing also involves proactively looking at outliers or one-time events in data sets. For example, identifying the correlation between online sales and lockdowns using ML models. The idea is to understand the causal relations inherent in data, but eliminate outliers that can affect the accuracy of the system. There are open source tools like OpenRefine that may be used for standardising your organisational data.

4. Data Format & Standardization

After the data set has been cleansed, it needs to be formatted and standardised. This step involves resolving issues like multiple date formats, inconsistent datatypes, irrelevant information, duplication, redundancy, multiple sources of truth, etc. After data is cleansed and formatted, some data variables may not be needed for the analysis and hence they can be deleted. Data preparation requires deletion of noise and unwanted information for building a robust automation system. The cleansing and formatting process should have a consistent and repeatable workflow. It can be used by the organisation to maintain consistency of data in future iterations too. Data is constantly added to the model in real time using the same steps. For example, marketing data could be added every month based on relevant keyword searches on the internet.

5. Data Quality

Do you trust the quality of your data? Erroneous data can lead to disastrous consequences.
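A minimal sketch of the cleansing and standardisation steps described above, using only Python's standard library. The field names, date formats and sample records are invented; real pipelines would cover many more cases:

```python
import datetime

# Date formats assumed to occur in the raw data.
DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y")

def normalise_date(value):
    # Try each known format; standardise to ISO 8601, or None if invalid.
    for fmt in DATE_FORMATS:
        try:
            return datetime.datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            pass
    return None

def cleanse(rows):
    # Drop incomplete, invalid and duplicate records.
    seen, clean = set(), []
    for row in rows:
        date = normalise_date(row.get("order_date", ""))
        amount = row.get("amount")
        if date is None or amount is None:
            continue  # missing value or unparseable format
        key = (row.get("order_id"), date)
        if key in seen:
            continue  # duplicate record
        seen.add(key)
        clean.append({"order_id": row["order_id"], "order_date": date, "amount": amount})
    return clean

raw = [
    {"order_id": 1, "order_date": "2021-03-01", "amount": 250},
    {"order_id": 1, "order_date": "01/03/2021", "amount": 250},    # duplicate in another format
    {"order_id": 2, "order_date": "Mar 02, 2021", "amount": None},  # missing value
    {"order_id": 3, "order_date": "not a date", "amount": 90},      # invalid format
]
# cleanse(raw) keeps only the single valid, de-duplicated record.
```

Note how standardising the date format first is what makes the duplicate detectable: the same order written two ways collapses to one key.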
When the data is not reliable, it can create more problems than it solves. Take, for example, an online retailer that needs to dynamically price the items on its portal: any inaccuracy in pricing may affect both sales and the retailer's reputation. Low-quality data is a deterrent to the design of a good machine learning model. Even with the best algorithms and models, the system could produce ordinary results when data quality is poor. But what makes good-quality data? The answers may vary across industries and companies. Industries like pharmaceuticals and medicine need very stringent data quality standards compared to other industries like consumer goods. An example is the Data Quality Assessment Framework adopted by the IMF:

Integrity: Statistics are collected, processed, and disseminated based on the principle of objectivity.
Methodological soundness: Statistics are created using internationally accepted guidelines, standards, or good practices.
Accuracy and reliability: Source data used to compile statistics are timely and obtained from comprehensive data collection programs that consider country-specific conditions.
Serviceability: Statistics are consistent within the dataset, over time, and with major datasets, and are revised on a regular basis. Periodicity and timeliness of statistics follow internationally accepted dissemination standards.
Accessibility: Data and metadata are presented in an understandable way; statistics are up to date and easily available. Users can get timely and knowledgeable assistance.

Some important questions to ask regarding the quality of your data:
Is the data reliable, and does it represent real-time information?
Is the data obtained from the right source?
Is the data missing or omitting something important?
Does the data provide sufficient information for you to make a decision?
Does the data represent the relationships between key variables accurately?

6. Feature Engineering & Selection

Feature engineering deals with adding or modifying attributes to improve a model's output. This is the last stage in data preparation for building a machine learning model. Feature engineering identifies the most important or relevant input variables for the model. It involves deriving new variables from the available dataset, adjusting and reworking the variables to enable models to uncover useful insights and causal relationships. The variables or predictors are tweaked to ensure better predictive performance of the system; this is known as feature engineering. The experimental approach explores different variables from the available data sets to make predictive insights. Some variables may look promising, but may not deliver the right results due to extended model training, overfitting, or low weightage relative to the predictive accuracy of the model. Many features may need to be evaluated and weighed before converging on the right model. Good data preparation delivers high-quality, trusted data that improves the predictive behaviour and accuracy of enterprise software.

Kreyon Systems provides enterprise software implementation for clients with end-to-end data lifecycle management. Our expertise is leveraged by governments and corporates for managing their data. If you have any queries, please reach out to us.
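The feature engineering step described above — deriving new variables from existing ones — can be illustrated with a small sketch. The project records and derived features below are invented; which derived variables actually help is exactly what the experimental evaluation has to decide:

```python
# Hypothetical project records, echoing the product-development example
# from the problem-definition step.
projects = [
    {"tasks_total": 40, "tasks_parallel": 10, "duration_days": 80},
    {"tasks_total": 25, "tasks_parallel": 15, "duration_days": 30},
]

def engineer_features(record):
    derived = dict(record)
    # Ratio features often carry more signal than the raw counts they
    # are built from.
    derived["parallel_ratio"] = record["tasks_parallel"] / record["tasks_total"]
    derived["days_per_task"] = record["duration_days"] / record["tasks_total"]
    return derived

features = [engineer_features(p) for p in projects]
```

Each candidate feature would then be scored against the model's predictive accuracy, and weak or redundant ones dropped, which is the selection half of the step.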
Digital Tools by Government of India in response to COVID-19

Aarogya Setu App
- The Aarogya Setu App enables people to assess for themselves the risk of catching the coronavirus infection.
- It calculates this based on people's interactions with others, using cutting-edge Bluetooth technology, algorithms, and artificial intelligence.
- Once installed on a smartphone through an easy and user-friendly process, the app detects other devices with Aarogya Setu installed that come into the proximity of that phone.
- The app can then calculate the risk of infection based on sophisticated parameters if any of these contacts tests positive.
- The Government of India has launched a WhatsApp chatbot so that citizens can get instant and authentic answers to all of their queries related to the coronavirus pandemic.
- Users have to drop a "Hi" to the number +91-9013151515, or can call the MyGov Corona Helpdesk, to get answers to pertinent queries such as the symptoms of the disease or the nearest COVID-19 testing facility.
- It is a COVID-19 tracker application created by the Union Ministry of Electronics and Information Technology in collaboration with the Ministry of Health and Family Welfare.
- This application provides users with the real-time location of infected users who have activated the 'Kavach' feature.
- This application has been developed by the centre to get direct feedback from people who have undergone coronavirus treatment in the country.

COVID-19 National Helpline
- A 24x7 National Helpline number +91-11-23978046 and toll-free number 1075 have been launched where people can access corona-related information from the government.
- The centre also has an e-mail id, email@example.com, to attend to people's queries related to the disease.
- The Defence Research and Development Organisation (DRDO) has developed an app called 'SAMPRAC' to enable tracking of people under quarantine.
- It is a software package that includes an app installed on the smartphones of infected COVID-19 patients.
- It is a server-side application used by the state authorities to track the patients.
- The system enables geofencing and AI-based automated face recognition (between the selfie taken during registration and subsequent selfies sent by the patient), and can display the information to state officials on a map that can be color-coded to depict hotspots and containment zones.
- The Survey of India (SoI) has developed an e-platform that collects geotagged information on the nation's critical infrastructure in order to help the government and public health agencies take critical decisions in response to the COVID-19 pandemic.
- The platform has geo-located information on hospitals, testing labs, quarantine camps, containment and buffer zones, as well as information on biomedical waste disposal sites.
- The mobile-based application, called SAHYOG, works as a key tool in helping community workers carry out the government's objectives of door-to-door surveys, contact tracing, and deliveries of essential items, and to create focused public awareness campaigns.
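Geofencing of the kind described above reduces to a distance check against a registered quarantine location. A minimal sketch follows; the function names, coordinates, and radius are illustrative assumptions, not taken from any actual app.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def inside_geofence(home, current, radius_km=0.2):
    """True if the current position is within radius_km of the registered home."""
    return haversine_km(*home, *current) <= radius_km

home = (28.6139, 77.2090)    # registered quarantine address (New Delhi)
nearby = (28.6141, 77.2092)  # a few tens of metres away
far = (28.7041, 77.1025)     # roughly 14 km away
print(inside_geofence(home, nearby), inside_geofence(home, far))  # True False
```

A real system would combine this check with the face-recognition selfie verification, so that the phone being inside the fence also implies the patient is.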
Google really stepped up its social responsibility game during the recent refugee crisis. The search giant set up a website and announced that it would match any donations received. With this exercise, Google raised €10 million globally for migrants and refugees. These funds went to organizations such as Doctors Without Borders, the International Rescue Committee, Save the Children, and the UN High Commissioner for Refugees. Google didn't stop there, and also created "Google Fortunetelling." A user would type a question in the search bar and Google would magically predict their future. Only when you actually try to type a question does the site come up with its own questions, such as 'Where can I find a safe place?' and 'Will I ever be reunited with my family?' The website then sends you to another page with information about the refugee crisis: OF COURSE WE CAN'T PREDICT YOUR FUTURE! But 60 million refugees ask themselves every day if they have a future at all. So we used a fake Google-site to get your attention because apparently you were interested in your own future. Please take a moment to think of their future.

Google isn't the only tech giant that wants to give back. Microsoft Research is an amazing organization. Much of what it does is pure research, and much of that is about solving the world's most vexing problems: disease, nutrition, and climate. This work is very much akin to what the Bill & Melinda Gates Foundation does, but the two organizations are entirely separate. One recent project involves using drones to catch mosquitoes to see if they carry diseases such as malaria. It is interesting to note that Bill Gates has done amazing work to combat this dreaded disease, and has spent a cool (US) $500 million to do so. The new trap is ultralight, and instead of catching everything that flies by, it focuses just on the mosquitoes, so the natural ecosystem is not disturbed. The trap is dropped off in remote areas and then retrieved by drones.
As with many Microsoft Research projects, Microsoft scientists team up with top academics in the field, in this case biologists. "That's a huge leap forward from the current system. Usually, health officials only find out about an outbreak once people are already getting sick. This means things like vaccines and health clinics may not be up and running for as long as a couple of months after a disease has begun spreading," said James Pipas, a professor of molecular biology at the University of Pittsburgh who is also working on Project Premonition. "If you know they're coming, you can prepare your response ahead of time," he said.

Microsoft Research has a long list of projects under its belt, such as using nanotechnology to administer medicines, tracking moisture to see what areas are plantable, building a worldwide telescope, studying the environment, and much more. The whole effort started in 2003 when Bill Gates formed the Science division within Microsoft Research. Microsoft did gain insight into scientific computing trends with commercial benefit, but much of the work is purely about solving critical global problems. The organization also has certain freedoms, meaning it can go a bit afield looking at things such as how life and the universe were actually created. Heading the group for over a decade is Stephen Emmott, who shared with me his insights into the group's goals and top projects. One lofty goal is to revolutionize both science and computing by blending them together. "We are at a profoundly important point in time where computer science and computing have the potential to completely revolutionize the sciences," Emmott said. Smaller and smaller devices are one form of revolution, bringing computers down to the molecular level, and their uses are manifold. A machine small enough to fit into a cell can work within the body to detect problems, and even repair the body.
These nano-computers can, for instance, find cancer and then release medicine in proper, fine-tuned amounts. This is the notion of smart drugs. Larger devices and sensors can do much the same thing for the environment: they could be scattered around desert areas to detect climate change, or to determine whether crops could be planted there. Microsoft is also keenly interested in a deeper understanding of biology, which is the mission of the Simulating Biological Systems in the Stochastic Pi Calculus project. The project endeavors to analyze how complex biological systems operate. Microsoft is using the stochastic approach (referring to the randomness of things) to create richer and more complex biological models that incorporate the randomness built into these systems' behaviors. Microsoft has been working on disease detection, prevention, and eradication for years. Emmott has a "project with my team in Cambridge and one of the world's leading mathematical biologists at Imperial College in London to build a global pandemic modeling system to predict when outbreaks of diseases will occur." Microsoft does all this with almost no fanfare, and it is unfortunate that the public rarely gets to know about such amazing projects.
Cybersecurity has become an important strategic imperative, and enterprises today need to monitor and defend their IT assets against an ever-changing cyber threat landscape. All modern enterprises need a robust and comprehensive cybersecurity program to prevent, detect, assess, and respond to cybersecurity threats and breaches. In many ways, cybersecurity is unique: much of detection and monitoring is about correlation and prediction, and can benefit from the infusion of artificial intelligence and machine learning solutions for assessment, analytics, and automation.

Copyright by www.dqindia.com

Augmenting cybersecurity with artificial intelligence and machine learning

In a hyper-connected digital world, organizations need to process humongous quantities of data originating from disparate systems to detect anomalies, locate vulnerabilities, and pre-empt threats. Unlike most manual tracking methods, AI- and ML-based systems can monitor millions of events on a daily basis and facilitate timely threat detection as well as an appropriate and quick response. AI algorithms are trained on past and current data to define the 'normal' and can identify anomalies that deviate from it. Machine learning can then recognize a threat from these patterns and can also be used to evaluate and classify malware and conduct risk analysis. An AI algorithm can track and record even the smallest anomaly, and has a faster learning curve for understanding and analyzing user behaviour. It thus reduces the workload of security teams, which can then focus on incidents that require higher cognitive performance, since the algorithms can identify and filter out false alarms. Organizations can also arrest damage at an early stage by using AI systems to reduce the mean time to detect and the mean time to respond from days to minutes.
Automation of security tasks and processes helps improve the overall security posture of an organization and transforms it from a deterministic enterprise into a cognitive one. It enables the collection and correlation of security data, the detection of existing compromises, and the generation and implementation of protections much more rapidly than is humanly possible. Automation can handle complex security processes in a time-sensitive manner while avoiding manual errors and compliance issues and reducing the load on IT resources. It also helps by triggering self-healing processes in case of an attack, facilitating quick fixes and the quarantine of affected systems. Automating mundane and routine security processes can also free up members of the security team, allowing them to focus on more strategic aspects of cybersecurity. It reduces their fatigue by sparing them the flood of daily alarms and repetitive tasks like patch management, software updates, identity management, horizon scanning, etc. […]

Read more: www.dqindia.com
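The "learn the normal, flag the deviations" idea described above can be illustrated with a deliberately simple z-score detector over event rates. This is a stand-in for the ML models the article discusses, and the numbers are invented for illustration:

```python
from statistics import mean, stdev

def flag_anomalies(history, new_values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline."""
    mu, sigma = mean(history), stdev(history)
    return [v for v in new_values if abs(v - mu) > threshold * sigma]

# Baseline: login events per minute observed over a quiet period.
baseline = [42, 39, 44, 41, 40, 43, 38, 45, 41, 42]
incoming = [40, 44, 390, 41]  # 390 suggests a brute-force burst

print(flag_anomalies(baseline, incoming))  # [390]
```

Production systems replace the single mean/deviation baseline with models that account for seasonality and per-user behaviour, but the principle, deviation from a learned normal, is the same.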
Memorial Day was established as a solemn day of remembrance for those members of the U.S. armed forces who died while serving. However, today one often hears and sees on social media well-meaning but incorrect "thank you for your service" messages that are more appropriate for Veterans Day or Armed Forces Day. In the broader culture, Memorial Day has transformed into the "official beginning of summer" or an opportunity for great savings at our favorite retail establishments. But Memorial Day is not about sun or sales, or even about celebrating service, noble as that may be. It is about honoring sacrifice. If our country is losing sight of the day's purpose, can we properly honor those in whose memory it was established? Perhaps the challenge stems from the fact that there's no clear celebratory origin story (Thanksgiving, Independence Day), fixed date (Veterans Day), or individual (Martin Luther King, Jr.) associated with Memorial Day. It's about remembering a difficult thing. Remembrance of soldiers killed in battle is an ancient tradition. Ours began in the aftermath of the Civil War, as General John Logan, the head of the Grand Army of the Republic, a Union veterans organization, established May 30 as Decoration Day, the time to decorate the graves of the war dead with flowers. Over time, the day moved beyond its Civil War focus to one commemorating those lost in any conflict. Finally, in 1971, Memorial Day was established as a federal holiday set on the last Monday in May. How can we reclaim Memorial Day's solemn purpose? There are ongoing efforts. In the early part of this century, the National Moment of Remembrance was enacted, encouraging Americans to halt their activities at 3 pm on Memorial Day to reflect on the sacrifices of the fallen. In addition, leading veterans organizations have, at times, proposed restoring Memorial Day to its original fixed date of May 30 to remind the U.S. that it's not about a three-day weekend but about remembering.
But perhaps the best way for Americans to restore the finest tradition of Memorial Day is to get involved in their local communities. Many of those communities sponsor Memorial Day parades or ceremonies. It’s an opportunity both to remember and to teach our children about the sacrifices previous generations made for them. Likewise, there are volunteer opportunities to honor fallen soldiers on Memorial Day. Many of the nation’s 155 national cemeteries host Memorial Day programs and as a component, a small American flag is placed on each grave as part of the commemoration. However, many people don’t realize that national cemeteries rely on volunteers and local civic organizations to place the flags. As we commemorate this Memorial Day, please remember the heroic people who, as Abraham Lincoln famously said, gave the “last full measure of devotion” to defend our country and the values and freedoms we hold dear. Joe Levy is Senior Director of Enterprise Sales at Dataminr. Previously, he held go-to-market leadership roles at Gavin de Becker and Adobe, and co-founded the OSINT competitive intelligence software company clearCi. Joe holds dual bachelor degrees from Florida State University and completed graduate work at the University of California, Berkeley. He served eight years in the U.S. Army Reserve as a drill sergeant and combat engineer, and is an instrument rated private pilot.
Year after year computers increase in power. Processors become faster and gain more cores; memory also speeds up and becomes more plentiful. But we're reaching the limits of what can be achieved with current technology: a real sea change is needed to take things to the next level. Many futurists have set their sights on the possibilities of quantum computing. Eschewing the binary states of 1 and 0, bits are replaced with qubits, which can be on, off, or in a superposition of both at the same time. This opens the door to much greater computing power, but also introduces more opportunities for errors to creep in. Now IBM engineers have found a new way to detect and correct errors, hopefully creating the building block on which future quantum computers may be built. In a paper published in Nature, scientists from IBM's Watson Research Center explain that quantum systems are especially susceptible to errors. In addition to the additional states, there is the risk of interference from "noise" from outside sources. Until it becomes possible to devise ways to eliminate, correct, or ignore the errors that inevitably crop up, the progress of quantum computers is going to be slow. The authors say that they "present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits". What does all of this mean? In essence, the team has managed to create a lattice between four qubits, building a quantum circuit with built-in error detection. What makes this a real step forward is that it is now possible to detect two types of error at the same time: rather than looking out for bit-flip and phase-flip errors separately, they can now be picked up together.
The team says that the method it has come up with is also scalable, meaning there is a greater chance of it being used in real-world, rather than merely theoretical, systems. There is still a good deal of work to do, but the outlook is optimistic.
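The parity-measurement idea behind syndrome qubits can be illustrated, very loosely, with the classical three-bit repetition code: parity checks reveal which bit flipped without reading the encoded value directly. This classical sketch ignores phase errors and superposition entirely; it is an analogy for intuition, not the published IBM protocol.

```python
def encode(bit):
    """Repetition code: one logical bit -> three physical bits."""
    return [bit, bit, bit]

def syndrome(bits):
    """Two parity checks, analogous to syndrome qubits: they locate an
    error without revealing the logical value itself."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Flip the single bit identified by the syndrome, if any."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

word = encode(1)       # [1, 1, 1]
word[2] ^= 1           # noise flips one bit -> [1, 1, 0]
print(syndrome(word))  # (0, 1): error located on bit 2
print(correct(word))   # [1, 1, 1]
```

Note how the syndrome (0, 1) pinpoints the flipped bit while both checks would read (0, 0) on an intact codeword: the measurement learns about the error, not the data. The quantum protocol does the analogous thing with non-demolition measurements, and additionally handles phase flips.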
Bots are the worker bees of the internet. Whether posing as customer service agents for business websites or scraping data from websites for improved search engine optimization, bots are hard at work helping build a better internet. For better or worse, these snippets of code have revolutionized the internet by automating many tasks that would be too tedious, time-consuming, or expensive to perform by human agents. In their most basic form, bots are simply software agents designed to perform an automated task on the internet. And this, of course, is as interesting and valuable for the bad guys as it is for legitimate businesses and services. The 'good' bots are an essential part of the internet; approximately 36 percent of all web traffic in 2015 was generated by good bots. At least 18 percent of all web traffic in 2015 was attributed to 'bad' bots, created especially to harm sites, steal data, or perform other malicious acts. Since we are mostly fine with what the 'good' bots are doing, let's take a deeper look at the 'bad' ones. Bad bots perform malicious acts, steal data, or damage sites or networks, for instance through distributed denial of service (DDoS) attacks, which simply means flooding the site with far more data requests than it can handle. Bad bots are also often used to scan servers, computers, or networks to find exploits that can be used to compromise them. Bad bots are mostly organized into botnets, which are controlled by so-called C&Cs (Command & Control servers). This centralization on a few C&Cs makes botnets vulnerable to takedowns: take the C&C servers offline and the botnet can no longer be controlled. This is slowly changing, with botnets now communicating via P2P, which makes them harder to detect and renders some existing security solutions obsolete. The classic differentiation between hacker bots, spam bots, and all the others has also blurred over the last few years.
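One crude but common first line of defense against the flooding behavior described above is per-client rate limiting. The sliding-window sketch below flags clients that exceed a request budget; the thresholds and IP are illustrative.

```python
from collections import defaultdict, deque

class RateLimiter:
    """Reject requests from clients exceeding max_requests per window_s seconds."""

    def __init__(self, max_requests=100, window_s=60):
        self.max_requests = max_requests
        self.window_s = window_s
        self.hits = defaultdict(deque)  # client -> timestamps of recent requests

    def allow(self, client_ip, now):
        q = self.hits[client_ip]
        while q and now - q[0] > self.window_s:
            q.popleft()                 # drop requests outside the window
        q.append(now)
        return len(q) <= self.max_requests

limiter = RateLimiter(max_requests=5, window_s=10)
verdicts = [limiter.allow("203.0.113.7", t) for t in range(8)]  # 8 hits in 8 s
print(verdicts)  # the last three requests are rejected
```

Real bot mitigation layers this with behavioural signals (user-agent anomalies, navigation patterns), since sophisticated botnets spread their traffic across many hosts precisely to stay under per-IP thresholds.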
Today, compromised hosts that function as bots are multifunctional. They can steal information from the compromised computer while hosting a phishing site and participating in a DDoS attack, and usually start spamming or harvesting email addresses from websites at the end of their lifecycle.

Two steps for more security

Now that you have an idea of what bots are, the next question you should ask yourself is this: "How can I protect my business from bad bots and other malicious attacks?"

1. The first step is to find a comprehensive solution that provides the protection your business needs to defend your critical systems against both external and internal security threats.

2. The second step is to make sure that you have a plan for what to do when your security proves not strong enough and systems have been compromised.

This is exactly where products like AbuseHQ can help. Contact us today for a demo of AbuseHQ, our revolutionary network abuse detection and containment system that can stop threats cold in their tracks, while freeing your abuse desk team for other projects.
Whether you work or live in a professionally managed building, one normally doesn't give much thought to how secure the building is, aside from good locks, security guards, and fire alarms/sprinklers. However, our residences and workplaces should address cybersecurity as well, because the IT systems managing environmental and electrical systems are susceptible to attack. Building Management Systems (BMS) or Building Automation Systems (BAS) have been around for years, but recently these solutions have been connected to the Internet for easier management and remote support. Unfortunately, most of these systems aren't designed with robust security controls, and even those with some authentication and authorization may be installed with default user IDs and passwords, or with weak, guessable ones. To complicate the situation, many system manufacturers depend on sensors and other components that may be difficult to update and patch, yet still rely on Internet connectivity to perform their functions. Some systems may have direct Internet connections while others may be connected to the corporate network. Many companies are entirely unaware that their BMS is connected to the internet, and those that are aware may not understand the implications. As more and more devices and appliances are connected to the Internet for management and support, the Internet of Things (IoT) universe expands, along with the opportunity for abuse and exploits. What are the implications of a BMS being accessed by unauthorized people?
- Lighting changes, shutting down electrical power, manipulating the physical access control system (opening or closing secured doors, monitoring or shutting down security cameras and alarms), shutting down heat or A/C or changing building temperatures, controlling elevators, disabling fire suppression systems: anything controlled by a BMS
- Using the BMS to access other components of the corporate network it is connected to

Losing control of a BMS can seriously and adversely affect security, availability, comfort, and productivity for corporate and residential tenants/owners, and the BMS can serve as an entry point to any corporate network resources it can access.

How does this happen? A BMS and its devices can be detected via scans of wired and wireless networks. Instructions for logging in, along with default IDs and passwords, can be easily found on the internet; it doesn't take technical expertise to break into a system. Websites like Shodan (https://www.shodan.io/), which scan and catalog devices across the IoT universe, can be a starting point for finding sites with a BMS. Most break-ins use guessed or stolen credentials, or default passwords.

Real world examples:
Target: millions of customers' credit card records were stolen; the point of entry was credentials to a heating and ventilation system.
In 2012, hackers illegally accessed the Internet-connected controls of a New Jersey-based company's internal heating and air-conditioning system by exploiting a backdoor in the software.
In 2013, researchers gained access to Google Australia's BMS using a default password.
In 2013, hackers broke into an unnamed state government facility and made it "unusually warm".
In 2016, IBM researchers hacked into an unnamed business through its BMS.
In 2016, a security researcher took control of a company's physical security using its internet-connected BMS.

What Can Be Done? The following are suggestions to protect a corporate BMS from being exploited.
- Companies should inventory what they currently have in place for their BMS, including a physical inventory to determine whether a standalone Digital Subscriber Line (DSL) or cable connection is attached to BMS-controlled systems. Determine whether the BMS is connected to the corporate network.
- If a company has a cybersecurity staff or function, get them involved with the evaluation and ongoing security of the BMS.
- Add cybersecurity controls to the facility budget.
- Change all default user IDs and passwords.
- Shared user IDs and passwords should not be used; every person requiring access should have their own account.
- Network access to the BMS should be behind a corporate firewall.
- Remote access should require a Virtual Private Network (VPN).
- BMS systems should be isolated from the internal corporate network through their own Virtual Local Area Network (VLAN) and a firewall.
- Choose vendors carefully, and be aware of exactly what BMS functions are accessible via online portals.
- If possible, limit access to the BMS to specific networks. If the BMS vendor requires remote access, limit access to that vendor's network.
- Be alert for patches for the BMS and its sensors.

Appendix of Real World BMS Attacks
Intruders hack industrial heating system using backdoor posted online
Tomorrow's Buildings: Help! My building has been hacked
Building automation systems are so bad IBM hacked one for free
Hacking the Doors Off: I Took Control of a Security Alarm System From 5,000 Miles Away
Researchers Hack Building Control System at Google Australia Office
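The "change all default user IDs and passwords" step above can be spot-checked automatically. The sketch below audits a device inventory against a list of known factory defaults; the device names and credential pairs are made up for illustration, and real default lists come from vendor documentation and security advisories.

```python
# Known factory defaults (illustrative; source these from vendor docs).
DEFAULT_CREDS = {
    ("admin", "admin"),
    ("admin", "1234"),
    ("root", "password"),
}

def audit_devices(inventory):
    """Return names of devices still using a known default username/password pair."""
    return [
        dev["name"]
        for dev in inventory
        if (dev["username"], dev["password"]) in DEFAULT_CREDS
    ]

inventory = [
    {"name": "hvac-controller-1", "username": "admin", "password": "admin"},
    {"name": "lighting-gw",       "username": "ops",   "password": "Xq9!t2#"},
    {"name": "elevator-panel",    "username": "root",  "password": "password"},
]
print(audit_devices(inventory))  # ['hvac-controller-1', 'elevator-panel']
```

Run against the physical inventory gathered in the first step, a check like this turns a policy ("no defaults") into something verifiable on every audit cycle.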
Duplicates: Files vs Records & Why You Need to Know the Difference

Within each database, and eventually each enterprise content management (ECM) system, businesses must manage the limits of storage. Relational databases are filled with countless records and files; unfortunately, many of those are duplicated, taking up much-needed storage space within your ECM environment. First, a quick rundown of terminology:

File Management: Daily activities involving your business' physical or digital files (e.g., capture, storage, modification, and sharing). File management focuses on:
- Organization and faster search of existing documents
- Reducing lost or misfiled documents
- Improving processes and efficiencies
- Reducing the space needed to store documents

Records Management: Policies and standards for maintaining diverse types of records, focused on:
- Creating a file inventory
- Establishing retention periods (how long to store files)
- Managing file disposition
- Developing and implementing records policies and procedures

We all understand intuitively that duplication is a significant issue in most organizations, but like many aspects of information governance, solving it is not so simple. With files, we must consider the following.

#1 Indiscriminate Deletion
A policy analyst might work on a position paper in isolation and save that document in their "section" of a shared drive or ECM. The paper is then submitted to a management committee for review or approval, creating two copies of that document: the working copy and the "official" copy. At this point, the working copy can be deleted because the copy submitted to committee takes precedence, but it is not inconceivable that the working copy has a newer system date. Indiscriminately deleting either version based on date introduces risk to the organization.
#2 Access Control
People often create copies when they want to collaborate or submit information for peer review, but not all collaborators or reviewers work in the same technical environment, whether that is a volume on a shared drive or an ECM system. In this scenario, an author emails a document to a number of peers, and they each save a copy. If we delete all duplicates across all repositories, people without access to the specific remaining copy lose their document.

The corollary of the access control scenario also arises. In some cases, everyone in an organization has access to content in a legacy system, and files are migrated into a new environment. Management may want to take this opportunity to apply access controls by segregating content into different volumes and designating access to each one. Again, indiscriminate file deletion may restrict access for those who need it in the new environment.

These same issues exist in records management, just on a larger scale. Imagine the deletion of an entire customer record with hundreds of associated files, or the inability of your team to access and collaborate on records across the enterprise. The problems associated with file management magnify at larger scales, introducing greater risks to your organization.
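Detecting duplicates by content rather than by name or timestamp sidesteps the indiscriminate-deletion trap described above: identical bytes hash identically regardless of which copy carries the newer system date. A minimal sketch (file names and contents are invented):

```python
import hashlib
from collections import defaultdict

def find_duplicates(files):
    """Group files (name -> bytes) by content hash; return groups with > 1 member."""
    groups = defaultdict(list)
    for name, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        groups[digest].append(name)
    return [sorted(names) for names in groups.values() if len(names) > 1]

files = {
    "analyst/position-paper-draft.docx": b"final wording v3",
    "committee/position-paper.docx":     b"final wording v3",  # byte-identical copy
    "analyst/notes.txt":                 b"meeting notes",
}
print(find_duplicates(files))
# [['analyst/position-paper-draft.docx', 'committee/position-paper.docx']]
```

Which copy to keep remains a policy decision (the "official" committee copy takes precedence); hashing only establishes that the copies are interchangeable, so deleting one cannot lose information, though it can still lose access, as the scenarios above show.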
Due to increasing cases of cancer, cases of chemotherapy-induced neutropenia are also on the rise. Pharmaceutical companies are developing cost-effective drugs and treatment methods that are expected to reduce the incidence of neutropenia associated with chemotherapy.

What is neutropenia?
Neutropenia is a blood disorder in which people have an abnormally low count of neutrophils in the blood. Neutrophils are a type of white blood cell that protects against the infection-causing bacteria and viruses that attack our bodies; they fight infection by destroying the disease-causing pathogens that invade the body. Neutrophils are formed in the bone marrow, a spongy tissue found inside long bones.

Signs and symptoms of neutropenia:
A person suffering from neutropenia often shows no symptoms. Most people learn they have neutropenia when they have a blood test for an unrelated reason, or when they get an infection. Neutropenia may cause tiredness and drowsiness in some people. It is common in cancer patients who have received chemotherapy as treatment, though other conditions may also cause it. In fact, in those who already have neutropenia, bacterial and viral infections can cause further complications. Neutropenic fevers are difficult to diagnose, so they are treated with antibiotics even when the cause of the infection is not known. Because neutropenia weakens the immune system, patients can fall sick quickly. The longer neutropenia goes untreated and the lower the neutrophil count falls, the more severe it becomes.

Causes of neutropenia
Since neutrophils are produced in the bone marrow, problems affecting the bone marrow may cause neutropenia; leukaemia, for example, affects the bone marrow. Radiation and chemotherapy interfere with neutrophil formation and also destroy neutrophils already in the blood.
Nutritional deficiencies can also result in neutropenia, as can bacterial infections such as tuberculosis and viral infections such as dengue, HIV, and viral hepatitis. Autoimmune diseases are another cause.

Treatment varies depending on the severity, cause, and infections associated with the neutropenia. Mild cases may not require treatment and can resolve with a good nutritional diet. There are two primary treatments for neutropenia: antibiotics, and drugs that stimulate the bone marrow to produce neutrophils. In some cases, bone marrow transplantation is also considered an option.

Scientists and researchers are trying to discover drugs that act directly on the bone marrow. Recently, new drugs have been developed that stimulate the bone marrow to produce neutrophils and help restore normal immune function. Drugs called granulocyte colony-stimulating factors are also very effective in neutropenia treatment, since they induce the formation of neutrophils (a type of granulocyte). These colony-stimulating factors are small molecules that reduce infections and the chances of hospitalization, and help keep the neutrophil count in the blood at a normal level.

In severe neutropenia, patients may develop a fever. In such cases, they are treated with antibiotics even before the exact cause of the fever is known; if the fever is caused by a bacterial infection, the antibiotics help fight it. Long-term, repeated use of antibiotics can have side effects, including the emergence of drug-resistant bacteria, diarrhea, and intestinal inflammation, and may also adversely affect the kidneys and liver.

Bone marrow transplant
A bone marrow transplant is a good option for treating neutropenia when the condition is severe and drug treatment does not work.
Granulocyte transfusion is also performed when the bone marrow does not respond to drugs or is unable to form granulocytes at all. Neutropenia is diagnosed with a blood test in which the blood cells are counted; in some cases, a bone marrow biopsy may be needed to diagnose the exact cause.

Many pharmaceutical companies are working to develop immune-stimulatory molecules that support the growth and activation of a broad range of white blood cells important in activating the body's immune response against infection. A Regenerative Medicine Advanced Therapy (RMAT) designation has been granted for a therapy designed to prevent serious bacterial and fungal infections in patients with de novo acute myeloid leukemia (AML) undergoing induction chemotherapy.

The neutropenia treatment market has been growing rapidly due to increasing demand for novel drug development and rising supply in both developing and developed countries. The market has shown substantial growth, and demand will keep increasing; the rising number of neutropenia cases over time has driven its expansion. Some drugs used in cancer treatment, such as Fulphila, have recently been approved by the FDA for the treatment of neutropenia. Febrile neutropenia is the most serious side effect of chemotherapy, so drugs such as Ziextenzo are given to cancer patients who have received chemotherapy. Adult patients with non-Hodgkin's Lymphoma (NHL) and Chronic Lymphocytic Leukaemia (CLL) are also treated with the recently developed drug TRUXIMA, which has shown good results in treating neutropenia. However, the high cost of neutropenia care and strict regulations on drug approval remain hindrances.
Free Valuable Insights: Global Neutropenia Treatment Market to reach a market size of USD 18.9 billion by 2026

Pharmaceutical companies are developing new drugs and therapies that are expected to decrease the incidence of neutropenia associated with chemotherapy, as well as reduce the chances of bacterial and fungal infections in patients. Given the high cost of neutropenia treatment, there is strong demand for cost-effective drug development in this market. As a result, manufacturers are focusing on the production of small molecules, which have lower processing costs than biologics.
Our economy builds products, sells them, and dumps them, but what happens next? With the passing of time comes the need to adapt, and the global economy is no exception to the rule: "The only thing that is constant is change." (Heraclitus of Ephesus, c. 500 BC)

Since the dawn of the industrialized age, a linear model of product creation, use, and disposal propelled the economy forward. But the depletion of the natural resources used to create new products is shifting the focus onto a new kind of industrial model: the circular economy.

So, what is the Circular Economy?
The circular economy is exactly what it sounds like: a circular pattern that continually reduces, reuses, and recycles scarce resources back into the supply chain, keeping existing materials in circulation for as long as possible. Rather than extracting new natural resources every time a product is created and disposing of it once its lifecycle is complete (aesthetically and/or functionally, depending on the product), as the linear model does, the circular model plans for reuse from the get-go. The circular economy designs and creates with the mindset that all of your materials will be reused to create another product: resourcefulness and restoration are sewn into the fabric of its structure. Prior to producing new equipment, only the commodities necessary for the creation of the end product are gathered. This method is particularly beneficial for companies dependent upon scarce resources, such as those required for the semiconductors and circuit boards in so many products, ranging from enterprise storage to the phones in our pockets. Once a product has been used to its full potential, the materials are recycled and repurposed, minimizing waste and limiting the energy required to harvest more materials. In some cases, these materials can also be upcycled, meaning they are recycled into higher-value products.
To extend the life of existing products and materials, the circular structure also supports maintenance through repairs and remarketing strategies. In other words, the cycle continually optimizes resources through the processes commonly referred to as "reduce, reuse, recycle" (in that order!).

Coming Full-Circle with Technology
Now, you might be thinking to yourself, "So, what does the circular economy have to do with the IT industry?" For starters, technology plays a huge role in monitoring the use and availability of materials and services. Information derived from technology asset management tools and special sensors provides businesses with hard data on where their materials come from, their availability, and the amount of energy it takes to produce them. The mined information also enables you and your tech administrators to identify weak spots in the production process, so you can recalibrate in a way that benefits your organization's productivity and financial standing. For example, costs that would typically be associated with mining new raw materials and managing waste could instead be redirected toward additional R&D or retained to improve profitability.

However, the world's increasing dependency on technology is generating unprecedented amounts of e-waste. Products supporting technology, like batteries, switchboards, and computers, can be created with more sustainable materials that won't just sit in a landfill but can actually be repurposed. Companies like Dell and Apple have successfully reconfigured their supply chains to include elements of the circular economy. Some of the benefits, like lower costs, are already starting to show, but others, like reduced water consumption, lower emissions, and less waste in landfills, will only be realized many years from now. But we have a feeling our children will thank us.
NAND is not limited to SSD media: the memory cell architecture resides on circuit boards, which may be housed in SSDs or embedded directly into a server or other device. Still, the majority of NAND flash is delivered via solid state drives in storage arrays, which comprise the core of enterprise nonvolatile flash memory storage.

Nor are flash and SSD interchangeable terms. NAND flash memory is a type of nonvolatile storage in which silicon memory chips persistently store data with or without an external power source. And SSDs are not limited to NAND flash: they can also house memory technologies like volatile DRAM.

In the data center, SSDs are an answer for enterprise workloads whose performance suffered on disk-based storage arrays and server storage subsystems. With the growth of hybrid and all-flash arrays, SSD storage serves intensive workloads with very high I/O performance. SSDs have the added advantage of low energy usage, which helps data centers keep their energy budgets under control.

What is an SSD?
An SSD is a storage device with no moving mechanical parts that houses flash memory and controllers. SSDs use the same external form factors as HDDs because they are marketed as hard drive replacements; using the same form factors avoids massive re-engineering of storage arrays at the factory or data center level. Since SSDs have no moving parts, they run considerably more quietly, offer faster access times, and consume less power than hard disk drives. Reliability improvements have also made SSDs as durable as disk drives.

How NAND Flash SSDs Work
SSDs store information in memory cell arrays embedded on a circuit board. The memory cells are essentially transistors with floating gates. Each transistor has two gates: the source, which admits a current, and the drain, which expels it. The memory cells act as switches to control the energy flow between the source and drain terminals.
Floating-gate (FG) transistors hold electrical charge in the memory cells whether or not the drive is connected to an external power source (over time, a powered-off SSD will leak charge). As long as there is sufficient charge in the floating gate, the data retains its integrity. Memory cells may hold one or more bits per cell. In a single-level cell (SLC), the control gate (CG) senses whether the floating gate is charged with electrons or not and, in response, records either a 0 or a 1. Multi-level cells (MLC) work in a similar way, storing more bits per cell. The SSD houses not only the interconnected memory cells and circuit boards but also adds a layer of intelligence with the flash controller. The speed and performance benefits of SSDs are highly significant.

Advantages of SSD: Why Is a Solid State Drive Better than an HDD?
When comparing SSD vs. HDD, solid state drives truly shine.
- Higher performance. Even the fastest 15K RPM hard drive cannot compete with the performance of NAND flash SSDs. NAND I/O typically achieves 1 Gb/s, while 3D NAND achieves 1.4 Gb/s, and newer developments are pushing 3D NAND to 3.0 Gb/s. The reason is physics: a hard drive whose mechanical components are in constant use will break down faster than an SSD with no mechanical parts. Instead of mechanical arms and read heads, the SSD uses electricity to service data requests. Faster performance means faster boot times, faster data movement, and higher bandwidth.
- Low energy usage. An HDD's moving mechanical parts need more energy than the tiny electrical currents shuttling through SSD memory cells. SSDs also avoid the high heat build-up that hundreds of spinning disks generate in a data center, which otherwise requires a large investment in HVAC and climate control.
- Comparable durability. SSD and HDD durability comparisons are more complicated than they might appear.
HDD mechanical parts and drive surfaces are more susceptible to environmental damage than SSDs, although new technology is shock-proofing hard drives against physical drops. SSDs cannot be powered down for long periods without charge leakage, while powered-down HDDs can last decades in environmentally controlled settings. However, SSD durability keeps improving thanks to the storage intelligence added to the controller. These technologies protect the SSD against data leakage or corruption and include error-correcting code (ECC), garbage collection, and read and write caching.

Disadvantages of SSDs: Challenges Amid the Speed
Nothing is perfect in the data storage world, and SSDs are no exception. Their disadvantages include higher expense, limited storage capacity, and a shorter write lifecycle than hard drives.
- Higher cost. SSD dollar-per-GB prices have come down considerably in the last several years, but so has HDD pricing. Still, flash drive costs have dropped enough that their higher performance becomes cost-effective. Performance is really the key: if HDDs are slowing down transactional databases and other intensive applications, then buying hard drives for affordability is a false economy.
- Lower data storage capacity. NAND SSD capacity lags behind HDDs because of NAND's memory cell write limitations. The more memory cells on a circuit, the greater the density the SSD can achieve. However, planar (2D) NAND can only hold a limited number of memory cells before the cells begin to fail. In response, researchers developed 3D NAND by stacking memory cells vertically as well as horizontally. This enables 3D NAND to achieve higher density, lower power consumption, better endurance, and faster reads and writes, at a lower cost per gigabyte.
- Shorter lifecycle than HDDs. SSDs support far fewer write cycles than HDDs before failure. The primary reason is that SSDs cannot overwrite existing blocks; they must erase blocks first and then write new data.
This erase-and-write process eventually degrades the integrity of the memory cell. Endurance differs with the number of bits per cell: single-level cell (SLC) NAND flash supports 50,000 to 100,000 write cycles, multi-level cell (MLC) generally takes up to 3,000 write cycles, eMLC (enterprise MLC) sustains up to 10,000 write cycles, triple-level cells (TLC) are low at 300 to 1,000 write cycles, and 3D NAND can achieve 1,500 to 3,000 write cycles.
- Poor archival media. Businesses want the ability to access, analyze, and monetize their data archives. With their limited number of write cycles, SSDs are not suitable for active archives and repeated analysis of the same data sets. Since the idea of an active archive is the ability to access data at will, such workloads exceed the number of write cycles the memory cells can withstand.

What is SSD Good For?
Given these advantages and disadvantages, SSDs are excellent choices for intensive enterprise workloads such as highly transactional databases, web streaming, and dense environments like VDI. Furthermore, the fast read/write speeds of SSDs allow them to handle data at the remarkably rapid pace today's businesses require. In fact, businesses can hardly go back to slower HDDs for their top-line data usage.

So, is SSD Worth It?
Although the usage cases above are SSD sweet spots in the enterprise, adopting SSDs raises the price of storage media purchases and requires more drive swaps than hard disk media. Are SSDs worth the extra time and cost? In high-performance environments, yes. Because SSD form factors are the same as HDDs, replacing disk with SSDs is not a major technology refresh. And with their higher performance and falling prices, SSDs continue to be highly competitive storage media in the data center.

SSD Benefits Comparison Chart
Among the benefits of SSD are a much lower failure rate and far faster access time.
|Differentiator||NAND Flash SSD||10K-15K RPM SAS HDD|
|Write endurance||Limited write cycles per cell||HDDs have a higher tolerance for writes.|
|Capacity||In March 2018, Nimbus packed 30 TB into a 2.5″ SSD.||Seagate offered a 16TB hard drive as of Dec. 2018.|
|Access Time||0.1 ms||5.5-8.0 ms. HDD access time is slower because multiple physical operations take time, particularly seek time and rotational latency.|
|I/O||2D NAND: 1 Gb/s; 3D NAND: 1.4 Gb/s||It is possible for disk to reach 2D NAND I/O speeds using clustered high-speed disks, but this configuration results in underutilized disk, high cost, and complex storage infrastructure along with expensive energy demands.|
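The per-cell write-cycle figures above translate into a rough drive-endurance estimate. A common back-of-the-envelope formula, assumed here and sensitive to the write-amplification guess, is total terabytes written ≈ capacity × P/E cycles ÷ write amplification:

```javascript
// Rough SSD endurance estimate from per-cell write-cycle (P/E) figures.
// Assumed formula: TBW ≈ capacity (TB) × P/E cycles / write amplification.
// The write-amplification value is a guess; real drives vary by workload.
function estimateTbw(capacityTb, peCycles, writeAmplification) {
  return (capacityTb * peCycles) / writeAmplification;
}

// e.g. a hypothetical 1 TB MLC drive (3,000 cycles) with write amplification 3:
console.log(estimateTbw(1, 3000, 3)); // 1000 (≈ 1,000 TB written over its life)
```

This is only a sketch for intuition; vendor TBW ratings account for over-provisioning and controller behavior that this formula ignores.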
The influx of new technology, coupled with ever-changing political and social landscapes, means security is having to evolve. Artificial Intelligence now offers law enforcement, security personnel and organisations a transformational method of fighting crime, maintaining public security and, significantly, finding persons of interest.

AI-powered surveillance cameras
Utilising surveillance cameras with implemented AI has revolutionised finding missing persons. Facial recognition and IREX.ai's "Searchveillance" have equipped both the public and private sector with the tools to work together collaboratively in finding persons of interest. Finding missing persons has been an underfunded and challenging issue across society, with many countries having no funding at all after the initial police investigation. Through artificial intelligence, surveillance cameras will never sleep on finding missing people: alerts can be set up for when a missing person appears under surveillance.

How it's happening
Technology like IREX.ai has delivered an AI collaborative security solution which is implemented into surveillance cameras, turning them into "smart cameras". Until now, the public and private sectors have not been able to collaborate through their existing cameras, which can now be powered by AI-backed smart video technology. With surveillance systems veering toward the cloud, an unlimited number of cameras can be connected for an organisation or city. The AI platform is helping bring about a collaborative network to monitor crowded public areas in real time, something that would previously have taken a great deal of manpower, time and cost.

Quicker response through "searchveillance"
This has become a crucial element in the fight against COVID-19. The ability to track and trace has been very effective, but this particular AI module may only just be getting started in the fight to find persons of interest.
When an individual goes missing or is abducted, every second is crucial, as is every piece of information gathered. Unfortunately, this is where human error creeps in: when a person believes they may have seen the person of interest, it can lead law enforcement and authorities in critically the wrong direction. AI-powered facial recognition helps eliminate human error with a 99.5% accuracy rate, leading authorities to definitive sightings and factual information. "Searchveillance" enables authorities to liaise with public and private sector organisations who have implemented the AI into their surveillance and run a single search with the person of interest's photo. The user instantly receives results showing if and when this person last appeared under surveillance.

Long-term missing persons
It's common knowledge that after 72 hours, statistically speaking, the chances of finding the individual quickly diminish, but that doesn't mean people stop searching. This week alone, in September 2020, US Marshals found and rescued 25 missing children in Ohio, many of whom had been missing for years. Implementing AI into surveillance cameras is being adopted more widely around the world; setting up alerts in surveillance cameras and notifying the appropriate law enforcement when a missing person appears under surveillance are extraordinary tools. The alert system works from the person's photo in the software: facial recognition allows law enforcement to receive real-time footage of the missing person and their location. It is of great assistance for law enforcement to simply receive a notification and a real-time feed of the individual they are looking for.

Security is becoming a more collaborative effort
In light of recent events throughout the world, with protests surrounding police brutality, rioting, violence and deaths, security is evolving, and it has to.
Enabling technologies and building security collaboration and communication platforms is assisting in the fight to find missing people. It's not just smart cities and smart airports providing a fishing net for finding persons of interest; it's also stadiums. The Super Bowl and other major sporting events generate some of the biggest human slavery and trafficking busts of the year. Stadiums are now harnessing the responsibility to help counteract this: they set up these alerts, utilise the AI in their cameras and collaborate with authorities, thus playing their part in finding persons of interest. IREX.ai has helped deliver the AI and the platform for collaborative security and communication. As technology grows and becomes a bigger part of our lives than we might care for, we often forget the results it can provide, such as reuniting a family.
By Mike Cobb, DriveSavers Director of Engineering

Simple mistakes can lead to horrific consequences when computers are involved. Just ask the British businessman who, in this story at least, wound up losing everything with the touch of a button. It was the "enter" button on his computer keyboard, and it was a "delete" command that brought everything tumbling down.

The story comes from Slashdot (https://slashdot.org), where the troubles of the alleged victim, Marco Marsala, recently came to light after the information was posted on Server Fault (http://serverfault.com), an online forum for server professionals. The posts have since been removed after it was learned the story was not true: it was all a hoax, attributed to a viral marketing campaign. But the central storyline is something that can really happen. We know. We've seen it. And it could happen again. To you.

Here's the fabricated story: Marsala operates a data hosting service that is responsible for safekeeping information from more than 1,000 users. So it's not just his own data that has been lost; all data from all users is gone. "All servers got deleted and the offsite backups, too," Marsala reportedly said.

There is Hope
A deleted file may be recoverable; however, in this hypothetical case, the deletion affects not just one file, not just one computer, but everything that was on any device attached to Marsala's computer, including the backup files that stored extra copies of his customers' files in a remote location!

DriveSavers engineers, who have been recovering data for over 30 years at the highest levels of success, would have a chance at recovering deleted data from a situation like this, but only if allowed access to the affected devices as soon as possible after the deletion.
In instances of logical failure, our engineers could work on the device either via our remote data recovery service or by bringing the device into our lab. Deleted files and virtual machines are not destroyed immediately; instead, the system eventually writes over those items when it needs to store new data. Once overwritten, the original files or virtual machines are lost forever. This is why data recovery engineers should be given access to the system before anything else is done to the affected computer and/or other attached devices. There can be varying degrees of data recovery success depending on factors such as the type of filesystem used; an evaluation soon after the failure provides the best idea of how successful a recovery may be.

Single Click, Global Impact
Accurate statistics on global data loss are hard to come by, but a study released last year by the Ponemon Institute estimated the cost to a moderate-size company might average $3.79 million, up considerably from just two years prior. Are you or your business relying on cloud storage as your data backup? While this option tends to be relatively safe and secure, it's always a good idea to maintain an additional backup... just in case.
What is WebSockets?
WebSockets is a bi-directional, full-duplex communications protocol initiated over HTTP. It is commonly used in modern web applications for streaming data, chat applications, and other asynchronous traffic, and it allows the client and server to send messages simultaneously over the same channel.

Types of Tests
When it comes to WebSockets, we generally encounter the following security testing methodologies:

Black Box Testing: In black box testing, an entity is tested without much knowledge of its internal structure or design. Let's find out how black box testing is conducted in the case of WebSockets.

1. Identify that the Tested Application uses WebSockets
This can be done by inspecting the client-side source code of the application for the presence of the ws:// or wss:// URI (Uniform Resource Identifier) scheme. Developer tools in browsers like Google Chrome can also be used to view the network and WebSocket communication, and ZAP's WebSocket tab can be used for the same purpose. Resources like websocket.org also offer ways to drive WebSocket tests: they let users save a piece of code locally, then run that code in the browser to open a WebSocket connection and run the required test. One can also use the browser's developer tools to exercise WebSocket requests directed at locally hosted APIs.

2. Verify the Origin Header Field
The WebSocket standard defines an Origin header field, which is generally used to differentiate between connections coming from different hosts, as well as connections established between the browser and another network client. Origin headers are added by user agents to describe the security context that caused the user agent to initiate an HTTP request, and are later used by HTTP servers to prevent and mitigate Cross-Site Request Forgery (CSRF) vulnerabilities.
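A server typically enforces this by comparing the handshake's Origin value against an allow-list. The sketch below is an assumption about how such a check might be wired into a handshake handler, not any specific framework's API, and the origins shown are hypothetical:

```javascript
// Hypothetical allow-list check run during the WebSocket handshake.
// Connections whose Origin header is absent or unrecognized should be rejected
// before the HTTP 101 upgrade is sent.
const ALLOWED_ORIGINS = new Set(["https://app.example.com"]);

function isHandshakeOriginAllowed(headers) {
  // Header names are assumed lowercase, as Node's http module normalizes them.
  const origin = headers["origin"];
  return origin !== undefined && ALLOWED_ORIGINS.has(origin);
}

console.log(isHandshakeOriginAllowed({ origin: "https://app.example.com" })); // true
console.log(isHandshakeOriginAllowed({ origin: "https://evil.example.net" })); // false
```

Note that Origin checks only defend against browser-based cross-site attacks; a non-browser client can send any Origin it likes, so this is not a substitute for authentication.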
If the Origin header is not verified during the initial WebSocket handshake, the server may accept connections from any origin, which can result in serious security vulnerabilities. Origin verification can be tested by using a WebSocket client to attempt to connect to the remote WebSocket server: if a connection is established, the Origin header is not being verified in the WebSocket handshake.

3. Integrity and Confidentiality
To maintain the confidentiality and integrity of information throughout the session, verify that the WebSocket connection uses SSL/TLS (wss://) to transport sensitive information, and check the SSL implementation itself for security issues.

4. Authentication
WebSockets do not handle authorization or authentication themselves, so the normal black-box tests should be carried out for them. If a WebSocket is opened via a page, it does not receive any kind of authorization or authentication on its own; you need to take extra steps to secure the WebSocket connection. You can apply the same authentication measures you use for your web views to your WebSocket connections as well.

5. Input Sanitization
Injection attacks are as probable over WebSockets as they are over any other mechanism, such as HTTP connections. Three of the top five website vulnerabilities, namely Cross-Site Scripting, SQL Injection, and Remote File Inclusion, can be attributed to poor input sanitization. Common pathways hackers use to tamper with user input and infiltrate WebSockets include GET requests, POST requests, and cookies. That is why, if data is coming from an external client, input sanitization must be performed before processing it. Input sanitization essentially means cleansing user input to prevent it from exploiting security loopholes in the system. It is important to understand, though, that thorough sanitization of user input is not an easy task.
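One basic sanitization measure is to HTML-escape untrusted message payloads before rendering them into the page, which blunts the XSS pathway. A minimal escaper is sketched below; for production use, a vetted library is preferable:

```javascript
// Minimal HTML escaper for untrusted WebSocket message payloads.
// Escapes the five characters with special meaning in HTML; '&' must go first
// so that already-produced entities are not double-escaped.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHtml('<script>alert("xss")</script>'));
// &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

As the surrounding text notes, escaping must match the output context; this helper covers HTML body and quoted-attribute contexts, not URLs, JavaScript strings, or SQL.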
The best approach is to focus on the context in which the user input will be used. Simple measures like sanitizing data before output, enclosing attributes within quotes, and escaping user input before including it in SQL queries can go a long way toward preventing the attacks and exploits that result from poor input sanitization.

In the case of grey-box testing, the tester has only partial knowledge of the application's structure. The major difference from black-box testing is that the pen-tester may have the API documentation in this case, which might include information related to WebSocket requests and responses.

What is the main difference between WebSocket and normal HTTP communication?
HTTP is a half-duplex, stateless protocol where the client sends a request to the server and then waits for the server's response. WebSocket, by contrast, is a full-duplex, stateful protocol that is initiated over HTTP and is long-lived: the client does not wait for the server to respond before sending again, and can send any number of messages over the open connection. A full-duplex, persistent connection means that instead of the conventional request/response cycle, the WebSocket connection stays active for as long as the application is running and allows simultaneous communication between client and server. WebSockets are preferred in applications that require low-latency communication. Although both HTTP and WebSockets have similarly sized initial handshakes, in the case of WebSockets the handshake is performed only once. Efforts are being made to improve the latency and performance of HTTP, but it is still likely that WebSockets will always have an edge in latency for client-to-server data transfer.
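Returning briefly to black-box step 1 above: identifying WebSocket usage can be automated by scanning client-side source for ws:// or wss:// URIs. A hypothetical helper (the endpoint in the example is made up):

```javascript
// Scan client-side source text for WebSocket endpoints (ws:// or wss://).
// A rough heuristic: URIs built at runtime via string concatenation will
// not be caught, so this complements, not replaces, a browser devtools check.
const WS_URI_PATTERN = /wss?:\/\/[^\s"'`)]+/g;

function findWebSocketEndpoints(source) {
  return source.match(WS_URI_PATTERN) || [];
}

// Example on a snippet of page source (hypothetical endpoint):
const page = 'var ws = new WebSocket("wss://app.example.com/chat");';
console.log(findWebSocketEndpoints(page)); // [ 'wss://app.example.com/chat' ]
```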
How the WebSocket handshake is done and related security issues
WebSocket connections are normally created using client-side JavaScript like the following:

var ws = new WebSocket("wss://normal-website.com/chat");

To establish the connection, the browser and server perform a WebSocket handshake over HTTP. The browser issues a WebSocket handshake request like the following (the Sec-WebSocket-Key shown is the sample value from the protocol specification):

GET /chat HTTP/1.1
Host: normal-website.com
Connection: keep-alive, Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==

If the server accepts the connection, it returns a WebSocket handshake response like the following:

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

At this point, the network connection remains open and can be used to send WebSocket messages in either direction.

Issues and Observations:
● The Connection and Upgrade headers in the request and response indicate that this is a WebSocket handshake.
● The Sec-WebSocket-Version request header specifies the WebSocket protocol version the client wishes to use. This is typically 13 and is not a vulnerable parameter.
● The Sec-WebSocket-Key request header contains a Base64-encoded random value, which should be freshly generated for each handshake request. This header does not uniquely identify a user and cannot be used for authorization purposes.
● The Sec-WebSocket-Accept response header contains a hash of the value submitted in the Sec-WebSocket-Key request header, concatenated with a specific string defined in the protocol specification. This is done to prevent misleading responses resulting from misconfigured servers or caching proxies.
● The wss scheme establishes a WebSocket over an encrypted TLS connection, while the ws scheme uses an unencrypted connection. A server that accepts connections over the ws protocol is vulnerable to man-in-the-middle (MITM) attacks.
● The protocol does not prescribe any particular way for servers to authenticate clients during the WebSocket handshake. The WebSocket server can use any client mechanism available to a generic HTTP server, such as cookies, HTTP authentication, or TLS authentication.
● WebSockets are not restricted by the same-origin policy.

How to Test the Security of WebSockets?
The number of tools that can test WebSocket implementations is not large. The two best-known tools for WebSocket security testing are ZAP and Burp Suite; with their help you can intercept and modify WebSocket frames with ease. You can also use the Chrome developer tools to inspect WebSocket traffic. Once you identify the most suitable tool for your purpose, the rest of the WebSocket security audit (access-rights tests, injection tests, workflow tests) is broadly similar to that for ordinary HTTP requests. Because the WebSocket protocol exposes a broad attack surface, focusing on the configuration can limit the risks considerably, just as it does for HTTP.

For a security testing tool to work properly, some crucial features must be present. Here is a list of those security testing functions:
1. Open a connection to the WebSocket server. The tool must be able to open a WebSocket connection to the server, over either an unencrypted (ws://) or encrypted (wss://) connection. It should also support manipulating the Origin header and using user-provided subprotocols.
2. Close the WebSocket connection on request. If the user or the WebSocket server requests it, the tool must be able to close the WebSocket connection.
3. Keep the connection alive and send/receive messages. The tool should be able to send and receive user-provided and server-provided messages and keep the connection alive. Message formats such as UTF-8 text and binary must be supported as well.
4. Log data and messages. The tool must be able to log all WebSocket messages while the connection is live, and persist the logs to files.
5. User input. The tool should accept user input and be able to handle and process it.
6. Print data. The tool should be able to print essential data to a user interface for informational or documentation purposes.
7. Proxy support. The tool should support the use of HTTP proxies.

How to intercept and modify WebSocket messages?

You can use Burp Proxy to intercept and modify WebSocket messages, as follows:
● Configure your browser to use Burp Suite as its proxy server.
● Browse to the application function that uses WebSockets. You can confirm that WebSockets are in use by exercising the application and looking for entries in the WebSockets history tab within Burp Proxy.
● In the Intercept tab of Burp Proxy, ensure that interception is turned on.
● When a WebSocket message is sent from the browser or server, it is displayed in the Intercept tab for you to view or modify. Press the Forward button to forward the message.
● All WebSocket requests and responses exchanged with the server so far can be reviewed in the WebSockets history.
● To replay any message, simply send it from the history to the Repeater.

Common vulnerabilities associated with WebSockets
● Missing origin verification → It is the server's responsibility to verify the Origin header in the initial HTTP WebSocket handshake. If the server does not validate it, the WebSocket server may accept connections from any origin. This could allow attackers to communicate with the WebSocket server cross-domain, enabling CSRF-like issues.
● Many chat applications that use WebSockets are vulnerable to XSS because the application does not properly sanitize user input and trusts it instead.
● Misplaced trust in HTTP headers, such as X-Forwarded-For, when making security decisions.
● Flaws in session-handling mechanisms, since the session context in which WebSocket messages are processed is generally determined by the session context of the handshake message.
● Many applications are vulnerable to IDOR and other authorization issues, as WebSocket messages carry no header that can be checked for them.
● WebSockets allow an unlimited number of connections to reach the server, which lets an attacker flood it in a denial-of-service (DoS) attack. This greatly strains the server, exhausts its resources, and severely slows down the website.
● The most widely found WebSocket vulnerability is Cross-Site WebSocket Hijacking.

1. Cross-Site WebSocket Hijacking (CSWSH)

A Cross-Site WebSocket Hijacking attack is essentially a CSRF against a WebSocket handshake. When a user is logged into victim.com in her browser and opens attacker.com in the same browser, attacker.com can try to establish a WebSocket connection to the server of victim.com. Since the user's browser automatically sends her credentials with any HTTP/HTTPS request to victim.com, the WebSocket handshake request initiated by attacker.com contains the user's legitimate credentials. This means the resulting WebSocket connection (created by attacker.com) has the same level of access as if it originated from victim.com. After the WebSocket connection is established, attacker.com can communicate directly with victim.com as the legitimate user.

2. Structure of the attack

To carry out the attack, an attacker creates a script that initiates the WebSocket connection to the victim server. She can then embed that script on a malicious page and trick a user into accessing the page. When the victim accesses the malicious page, her browser automatically includes her cookies in the WebSocket handshake request (since it is a regular HTTP request).
The malicious script crafted by the attacker now has access to a WebSocket connection created using the victim's credentials.

Impact of Cross-Site WebSocket Hijacking

Using a hijacked WebSocket connection, the attacker can achieve a great deal:

WebSocket CSRF: If the WebSocket communication is used to carry out sensitive, state-changing actions, attackers can use the connection to forge actions on behalf of the user. For example, attackers can post fake messages to a user's chat groups.

Private data retrieval: If the WebSocket communication can be used to retrieve sensitive information via a client request, attackers can initiate fake requests to retrieve sensitive data belonging to the user.

Private data leaks via server messages: Attackers can also simply listen in on server messages and passively collect the information they leak. For example, an attacker can use the connection to eavesdrop on a user's incoming notifications.

The key takeaways for WebSocket security are as follows:

1. Always thoroughly examine your WebSockets. WebSockets are often overlooked by security practitioners. Make it a regular practice to examine WebSocket traffic and other security parameters using tools like ZAP and Burp Suite, or even the browser developer tools, preferably during the penetration testing phase.

2. WebSockets have little to do with cookies. Although a cookie may accompany the request that initiates a WebSocket, that cookie has nothing to do with the WebSocket protocol itself: once the browser switches protocols to establish the connection, individual WebSocket messages carry no cookies for the server to validate. If an application uses WebSockets, you must examine its traffic and determine whether any substitute method of authentication or authorization is taking place.
Chances are that you won't find any such instance.

3. WebSockets are not affected by the same-origin policy. WebSockets are not subject to the same-origin policy or its corollary, CORS (Cross-Origin Resource Sharing). Why? Because WebSocket is an entirely different protocol from HTTP. Most security solutions assume that WebSockets follow the same rules as HTTP traffic, but that is not the case, and this becomes very important to understand while testing them.

4. For WebSockets, prefer web application penetration testing over other methods. Most typical security scanners will not detect the serious authorization and authentication flaws that may exist in WebSockets. Detecting such vulnerabilities requires knowing where to look for what, and thoroughly understanding the context. That is why penetration testing should be the go-to method for WebSockets; the black-box and grey-box approaches explained earlier will also ensure a thorough security analysis of all the mechanisms involved.

Pentesting tools for WebSocket

The WebSocket landscape is continuously evolving, and the associated security vulnerabilities are evolving along with it. One thing that needs to be understood is that not every WebSocket is a vulnerability. However, when a WebSocket vulnerability is encountered, it must be treated with high priority. Moreover, organizations dealing with WebSockets must be extra cautious and understand the importance of having all the relevant security controls in place.
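A recurring theme above is that the server must verify the Origin header during the handshake to block CSWSH. As a hedged sketch (the helper name and allow-list are illustrative assumptions, not the API of any particular framework), strict validation can be an exact-match allow-list:

```python
# Hypothetical server-side Origin check for a WebSocket handshake.
# Exact-match against an allow-list; a missing or unknown Origin is rejected.
ALLOWED_ORIGINS = {"https://normal-website.com"}

def is_allowed_origin(origin):
    """Return True only for an exact, expected Origin header value."""
    return origin in ALLOWED_ORIGINS

print(is_allowed_origin("https://normal-website.com"))  # True
print(is_allowed_origin("https://attacker.com"))        # False
print(is_allowed_origin(None))                          # False: header absent
```

Exact matching matters: substring or prefix checks can be bypassed by origins like https://normal-website.com.attacker.com. Origin checks complement, but do not replace, per-message authentication.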
Best Zero Trust Network Security Architecture

When computers and networks were in their infancy, today's progress seemed light-years away. We are now long past that naïve view of technology and the pace at which it can accelerate. As early as 1965, Moore's law predicted that the number of transistors on a computer chip would double roughly every two years. It was a prescient prediction, and ultimately even Moore himself must have been surprised by its accuracy. Indeed, computer chips have grown in complexity to the point where present-day technology would seem alien to someone from only two decades ago. The potential of computers was always immense, as illustrated by a very rudimentary version of the modern computer reliably getting humankind to the moon. How far we've come: now conversations revolve around building colonies on Mars!

Of course, we've spoken about computers as a singular entity, and they have driven our constant march forward in their own right. However, we would be remiss if we failed to mention one of the most crucial developments alongside them: computer networking. Of course, most people would understand this better as simply "the internet". The internet has been essential to our progress as a species, to the monumental leaps we've taken over the past few decades, and to the promise we see in the future. As computers grew in complexity and capability, so too did networks.

Naturally, where there is good there will always be evil. Networking allowed us to deliver incredible solutions to everyone from consumers to governments to commercial enterprises. However, it also presented risks. Security was a simple matter when networks were simple and one-dimensional, but as already discussed, computers and networks have become extremely dynamic in this day and age. This means that innovative and smart security is the need of the hour. It isn't as if security has not always been a concern.
When you consider the variety of users of the internet, you'll see that good security has always been a priority, especially for entities like the military, where the secure exchange of information can be of strategic importance. As with most ambitious goals, it takes a combined effort to get there. Just as the moon landing was a joint effort by the government, the military, the private sector, and the scientific community, network security will also require close cooperation and partnerships. Private tech companies, such as those in Silicon Valley, have developed a strong reputation for providing the innovative solutions that such a task requires. The call to action has been answered: zero-trust network architecture.

The Need for Better Security Architecture

Before we can discuss what a zero-trust network architecture is, we must first understand the circumstances that have made it necessary. With networks growing in complexity thanks to new technologies such as big data, cloud computing, IoT, and the mobile internet, the old boundaries that protected networks are quickly fading. Businesses and entities such as the armed forces have relied on strong boundaries established by closed internal networks. These boundaries ensure that the devices on the network are secure from outside influence. However, closed internal networks also impose limitations that businesses and other entities understandably wish to surpass. Utilizing big data via cloud computing makes organizations faster, leaner, more efficient, and ultimately more powerful. Given that every organization wishes to make use of these innovations, the old security perimeters are being eroded, because those perimeters rely on networks being closed off from the world. As we've seen, though, the future is headed toward broader, more open networks. Making use of the Internet of Things is another goal for numerous organizations.
Such technologies can increase the scope of what can be done with the available resources. Once again, the security challenges increase too. Under the old security perimeters, all devices are kept within the rigid boundaries of the internal network. To participate in cloud computing, big data innovations, and IoT applications, devices need to interact with those outside the network. The security perimeters that exist at the moment cannot facilitate this; neither can they let devices gain the benefit of leaving their secure bubble while still restricting external devices' access to the network. The challenge is clear, but it is further complicated by another dimension to the story: malicious parties.

The outlook for network security isn't very positive either. The frequency and complexity of attacks is rapidly increasing. Attacks on enterprise and closed networks are becoming increasingly aggressive, highly targeted, and incredibly well-organized, not to mention the internal threats posed by unauthorized data access, unintentional data theft, and user mistakes. Surprisingly, it is the internal threats that illuminate the biggest problem with current network security protocols: trust.

At the heart of the current network security architecture is trust. There is an implicit understanding that the devices on the network can be trusted. Following this line of reasoning, we can see clearly that if even one device on the network poses a security threat, the security of the entire network is compromised. The exact mechanism by which the device presents a threat is irrelevant to our discussion because, as we've already mentioned, networks are complex enough today that any small detail can create a massive liability. This brings us back to zero-trust network architecture and why so many believe it is the future.
What is Zero Trust Network Architecture?

Let us now take a closer look at zero-trust network architecture. Zero-trust architecture requires the fulfillment of five fundamentals:
- One must assume that the network is hostile at all times.
- There are constant threats on the network, both internal and external.
- Being on the network is not a sufficient criterion for trust.
- Each device and user must be authenticated and authorized at every stage.
- Security policies must be flexible and proactive, drawing on all available data sources.

These fundamentals should shed light on the foundational requirements of a zero-trust network.

Introducing Calico Enterprise Zero-Trust Network Security

As previously mentioned, the private sector has a huge part to play in helping organizations create safe and secure network environments. We've already seen that zero-trust network security architectures are best equipped to tackle the new frontier of information technology. Tigera produces the Calico Enterprise Zero-Trust Network Security platform, which takes zero-trust network security principles and implements them for organizations looking to protect themselves from the constant threats of today's online environment. Tigera recognizes better than anyone the need for zero-trust security for today's organizations. The old model, with the obvious risk posed by its implicit trust, has long become obsolete, and Tigera is committed to bringing zero-trust security to everyone. The fundamentals of zero trust have already been discussed, so let's take a deeper look at how Calico Enterprise Zero-Trust Network Security implements them to create a superior network security solution.
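Before turning to any specific platform, note that the five fundamentals above boil down to a deny-by-default decision made on every single request. A minimal sketch (all names, fields, and checks here are illustrative assumptions, not any vendor's actual policy model):

```python
# Illustrative per-request zero-trust evaluation: network location grants
# nothing; identity, device posture, and an explicit allow-list must all pass.
from dataclasses import dataclass

@dataclass
class Request:
    identity_verified: bool   # e.g. an mTLS client certificate was validated
    device_compliant: bool    # e.g. a device posture check passed
    source: str
    destination: str

# Explicit allow-list of expected flows; anything absent is denied.
ALLOWED_FLOWS = {("billing-svc", "payments-db")}

def authorize(req: Request) -> bool:
    """Deny by default: every check must pass on every request."""
    return (req.identity_verified
            and req.device_compliant
            and (req.source, req.destination) in ALLOWED_FLOWS)

print(authorize(Request(True, True, "billing-svc", "payments-db")))  # True
print(authorize(Request(True, True, "billing-svc", "users-db")))     # False
```

The design point is that the checks run per request, not once at connection time, which is exactly what distinguishes zero trust from perimeter models.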
The platform utilizes a variety of techniques, including identity verification, defense-in-depth, access limitation, data encryption, and privilege controls. Because of Kubernetes' open networking model, it is more exposed to malware than other environments: essentially any pod can connect to any other on the same network, so a cluster can easily be compromised and malware can spread quickly and undetected. The one-time authorization that is a significant feature of the old network security architecture offers insufficient protection in this environment; real-time, proactive monitoring is required. This is the fundamental requirement of zero-trust policies, wherein each device must be continuously authorized for continued access to the network. This not only protects cloud assets but also reduces liabilities and increases the overall strength of the network.

Tigera's Calico Enterprise Zero-Trust Network Security platform offers four key features that distinguish it from the competition:
- Workload Identity: First and foremost, multi-factor authentication via general metadata, network identity, and X.509 certificates applies to all microservices. Even after authentication, access is granted only to destinations the microservice has prior authorization to connect to.
- Least Privilege Access Control: The term access control is rather self-explanatory; the least-privilege part is what makes Tigera's platform unique. It begins from a foundation of no trust for the device and then grants access only as required. This applies not only to traffic between microservices but also to the flow of data into and out of the cluster. This broad approach protects the entire infrastructure stack.
- Defense in Depth: We've already explained that a foundational assumption of zero-trust networks is that some part of the network is compromised at any given moment. As such, Calico Enterprise Zero-Trust Network Security makes a determination at every connection request, depending on whether the request has been authorized at all three layers: the host, the pod, and the container. If even one layer is observed to be compromised, access is denied and you are alerted to the issue.
- Data-in-Transit Encryption: Data is especially vulnerable when it moves between microservices. Calico Enterprise protects all traffic by encrypting it with mTLS and IPsec.

Requirements of a Zero-Trust Network

There are a few requirements that a zero-trust network must fulfill.

Requirement 1: All connections must be subject to security protocols. You may think that a connection that never leaves the network does not need to be secured; however, that would violate the foundational principle of a zero-trust network.

Requirement 2: Remove single points of failure when determining a host's identity. Previous security protocols treated IP addresses and ports as sufficient proof of identity, but it is now well known that these can be spoofed. If a zero-trust network assumes it always harbors malicious parties, then the identity of a remote endpoint must be determined using several criteria, not a one-dimensional approach.

Requirement 3: Any network flow that is expected and allowed is explicitly allowed. Conversely, a connection that fails to meet this requirement is denied automatically.

Requirement 4: In the event that a workload is compromised, measures must be taken to ensure that it cannot evade security policies.
Requirement 5: Once again, operating from a position of zero trust means there is no distinction between a trusted and an untrusted network path. As such, every connection on the network must be encrypted.

Requirement Implementation by Calico Enterprise

We've discussed the requirements of a zero-trust network infrastructure. It is important to know exactly how Calico Enterprise fulfills these requirements, so let's take a deeper look.
- Multiple Enforcement Points: Any incoming request to your Kubernetes workload must pass through two separate enforcement points. The first is the host kernel, where Calico's policy is enforced at L3-L4 in the Linux kernel using iptables. If the incoming request gets through this point, it still has to get through the Envoy proxy, where policy is enforced at L3-L7 and each request is authenticated cryptographically. Multiple enforcement points ensure that a connection request has to validate its identity more than once, ensuring maximum security and minimum risk. In doing so, requirement 4 of a zero-trust network is fulfilled.
- Calico Policy Store: Allowed flows are encoded in an allow-list in the Calico data store, which aims to fulfill the third requirement of zero-trust architecture. As previously mentioned, zero trust requires a fair bit of flexibility for effective implementation, and Calico Enterprise provides plenty of it. Practically speaking, this component gives your network capabilities that legacy systems offered, such as zones, in tandem with zero-trust features like allow-lists. Crucially, these can be used simultaneously, layered on top of each other if need be, by maintaining multiple policy documents.
- Calico Control Plane: This feature aims to meet the expectations laid down by the fourth requirement of a zero-trust network.
The control plane transfers policy information to the previously highlighted enforcement points. This ensures that any connection to the cluster is authenticated and authorized at multiple entry points based on the security policies.
- Istio Citadel Identity System: Networks can be compromised through infrastructure points such as routers or links. To counter this vulnerability, Tigera Calico Enterprise works in tandem with an Istio component named Citadel. This component fulfills the second and fifth requirements of a zero-trust network by first establishing cryptographic keys that each service account must present to validate its identity, and then encrypting traffic using the same principle.

Frequently Asked Questions

Who is Zero-Trust For?

The answer has two layers. The first layer provides the obvious answer: everyone! Of course, this doesn't paint the entire picture. Almost all businesses and enterprises stand to gain from implementing the zero-trust network infrastructure provided by Tigera Calico Enterprise, though it can also be argued that it is not yet the need of the hour for many businesses. The most powerful tech companies do not fit that description, however. The critical function that such organizations serve for millions of people worldwide means they cannot take security lightly. Their success also puts them in the spotlight more than most businesses, which makes them more likely to be victims of attempted attacks; there is no shortage of willing parties that would love to carry out an attack on some of the biggest tech companies in the world. As such, zero-trust network architecture has become a pressing need rather than a flight of fancy to be acquired when the circumstances are right. There are also organizations that can no longer continue to rely on legacy network security systems at all. One of the most obvious candidates is the military.
This will not come as news to anyone, especially those closely associated with the armed forces. The military has long stressed the need for strong network security, which is only natural: the organization responsible for the security of the nation is expected to be aware of threats on all frontiers. The technological resources of many countries have now reached a point where conflict has entered a new paradigm: cyber warfare. When an entire country's critical infrastructure relies on networks, it is essential to verify the strength of those networks' security.

Those in the military are aware of the threats that cyber warfare poses. Alas, it is not enough to simply be aware of and worried about cyber warfare. As the foundational principles of zero trust highlight, hostility must be a fundamental assumption, which means the response to any threat must be proactive rather than reactive. Threats come not only in the shape of damage to infrastructure but also to businesses within the United States; a cyber-attack on a U.S.-based business is essentially an attack on the country itself and should be treated as such. Just a few years ago, Sony was attacked by what was later revealed to be foreign hackers, whose purpose was to lodge their discontent with the characterization of a foreign country in an upcoming production. Military men and women understand that the threats posed by improper network security are real and growing by the day. The attack on Sony proves as much: a foreign actor was able to infiltrate the network of a company on U.S. soil. The rules of engagement in cyber warfare are still unclear, but the need for protection could not be more evident.

Matters are further complicated when we consider the kinds of things that require a network to communicate in a military context, weapons systems being one such item. Concerns have already been raised by the U.S.
Government Accountability Office about the growing complexity of cyber threats. Furthermore, it turns out that most military branches were not adding cybersecurity standards to their contracts, meaning that third-party contractors could add further vulnerabilities to an already suspect network. One must appreciate the amount of technology required to coordinate a force the size of the United States military. Add to that a general awareness of our dependency on technology, and it becomes clear why many people are concerned about the present situation. Each device poses a threat, and each combination of devices compounds that threat and complicates the network security effort. Not only can such a network be easily compromised, but if it goes down, fixing it will also be harder, because the network is what enables the coordination in the first place.

This is why zero trust is such an essential area of inquiry. The scale at which the U.S. military operates is such that only a security infrastructure that is all-encompassing and uncompromising in its approach can secure it to any satisfactory level. This is where Tigera Calico Enterprise comes in. We've already discussed its efficacy at implementing a zero-trust network infrastructure; its flexible and comprehensive approach makes it the ideal candidate to take on the task of implementing zero-trust for branches of the military. Partnership with a private organization is not unheard of, either. The Air Force, lauded as one of the only branches of the military to be ahead of the curve on the cybersecurity frontier, recently sought help from a Silicon Valley-based tech company through a contract award, the details of which are not publicly available at the moment.
The Air Force recognizes the importance of being able to trust the users on your own network and of ensuring that no data on the network is compromised. The President of the United States has also issued an executive order mandating the implementation of zero-trust architecture for federal civilian agencies. This is a positive step in the right direction that should see more government agencies and other branches of the military seek private-sector assistance in implementing zero-trust architecture.

Benefits of Zero-Trust Architecture

Zero trust is not just an IT fad peddled for personal gain by security and networking infrastructure companies; there are concrete benefits that platforms like Tigera Calico Enterprise offer. Let's take a look at some benefits of zero-trust network architecture.

The first benefit is that the network can be opened up to various stakeholders without the added worry of additional security risks, something that wasn't possible previously. Old architectures cannot do this because they create a closed network, and as we've already seen, closed networks not only limit the range of possibilities for a business but don't even guarantee safety. Zero trust secures the network so comprehensively that organizations no longer need to worry about operating an open network: the risks are mitigated in real time while the organization enjoys the benefits of openness. One of these benefits is allowing partners to join the organization's network and access documents and files that can assist them. These partners can also utilize network resources, such as those required by retailers and suppliers. In essence, this means access without a security risk, and the business enjoys better functionality. User experience is also enhanced.
Being able to stay on one network consistently means users do not have to worry as much about security risks they may accidentally create. Not only that, but there is also no need to migrate between a corporate and a private network. Another benefit is allowing users in locations where data is normally compromised to access the data on the network. This once again highlights the best thing about zero trust: the benefits of a normal network are readily available while all the risks are eliminated.

One of the key features of zero trust is its ability to control access at various levels. This provides an added layer of security for critical assets on the network, such as enterprise applications that reside on the network and could cost the organization a great deal of money if compromised. By restricting access, such applications receive much-needed protection from malicious parties. Utilizing the Internet of Things always raises concerns that network security may be compromised; thankfully, zero-trust network architecture can create an isolated enclave for such technology, allowing it to be on the network while being unable to damage it in any significant way.

For businesses, a particularly useful feature is the ability to let SaaS applications connect with internal enterprise software. This is immense, because businesses rely heavily on SaaS applications these days. Without zero trust, allowing external applications to interact with enterprise applications could compromise them. Of course, most legacy security architectures would simply prohibit this, which is ultimately no solution, and it is refreshing to see that zero trust allows it. The biggest and most obvious benefit is that the organization will not need to develop its own applications to gain the benefit of the SaaS applications it was already using.
Thanks to zero trust, you can enjoy the functions of your SaaS applications alongside the enterprise applications on your network.

Migration to a Zero-Trust Network Infrastructure

Having looked at everything there is to know about zero-trust network architecture, there is still the question of moving to one from a traditional network architecture. The migration will come with its own set of challenges, and progress will not magically happen overnight. It is not enough to simply set the implementation of a zero-trust architecture as a goal while disregarding the environment it is being implemented in. More than anything, there needs to be an appreciation of existing network arrangements, business and organizational structures, and protocols. If the starting point is a deep understanding of the existing paradigm, it becomes clear that significant planning and coordination will be required to successfully implement a zero-trust network architecture. No one can deny the very obvious benefits that zero trust provides, but without due consideration the chances of improper implementation increase. The entire point of zero trust is that it is a carefully constructed and comprehensive system; its implementation should follow a careful procedure with clearly defined goals along the way. Let's take a look at the ideal methodology for implementing a zero-trust network architecture.

1. Clear Vision

Any organization will have various internal stakeholders. A single department, such as the security department, cannot hope to achieve the effective implementation of such a system without the support of other departments. In a military context, this means that a desire at the higher echelons of the entity does not remove the need for constant collaboration with the various stakeholders.
As discussed several times already, zero trust is not an isolated security policy decision. It is an all-encompassing security philosophy manifested as a network architecture, which means the approach to implementing it has to be a strategic goal. By making it an organization-wide goal, the decision-makers ensure that all departments and individuals are on the same page. It is only through this concerted and concentrated effort that such a fundamental organizational change can be achieved, because it will not be easy to acclimate to the new security environment. Users will experience a lot of confusion regarding the change in practices, and some may be openly hostile to the proposition as a result of unclear communication. Any organizational change is difficult to bring about; however, if a vision is clearly set and all feedback is carefully considered, the process can become a lot smoother. Failure to do so can mean the project never gets off the ground. Admittedly, this is less of a concern for the military than it is for private enterprises, where there is an expectation that decisions will be fair and democratic. If the security or IT department attempts to implement a zero-trust architecture but is unable to communicate its need clearly, there may be too much push-back. It is hence critical that those pushing for the implementation of zero trust know exactly what they are proposing and why it is essential to the future of the organization.

2. Construct a Plan

As with any strategic goal, planning is an essential component here as well. Planning ensures that all goals, challenges, and timelines are clearly understood by everyone. It also ensures that people can prepare beforehand for any changes to their daily operations. The path to full implementation of zero trust is crucial: if the plan for reaching the goal is not well thought out, the chances of failure increase.
The most important consideration is that the zero-trust network architecture should carefully account for the core business and the core product. This is due to the comprehensive nature of the architecture: any attempt at implementation means that the entire business's policies, structures, and products have to be analyzed from top to bottom. This is why planning is so crucial. Planning gives the decision-makers a chance to consider everything before starting the journey. Anyone who has worked in business knows that once a process starts, it can be very costly to go back to the drawing board. Resources have been mobilized, commitments have been made, and sometimes products need to be launched; improper planning can affect all these aspects of zero-trust network architecture implementation. Furthermore, it is not enough merely to plan. Ideally the plan will be step by step, or as detailed as possible, because when undertaking any large-scale project such as this one, it is safe to assume that things will go wrong along the way. Progress markers are important as well, to ensure that things are being tracked. If no one knows when a certain item was due to be completed, the timeline for the whole process falls apart, and the costs of such a mistake can be catastrophic.

3. Graduated Scope

This is one of the essential components of project management, especially for projects of this scale. Zero trust is an organizational philosophy, which means it will change pretty much every aspect of the organization, and its implementation will affect several organizational scenarios. This naturally means that all the changes cannot be implemented in one go. Organizations and networks are complex; two unrelated layers can have a profound impact on one another without anyone ever seeing where the connection is. This is why, when implementing such a massive change, the goal is always to proceed in graduated steps.
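A graduated rollout of this kind can be sketched as a simple gating loop. The phase names here follow the proof-of-concept, application-migration and capability-evolution split discussed in this methodology; the segment names and verification logic are invented for illustration.

```python
# Illustrative sketch: roll a zero-trust policy out in graduated batches,
# verify each batch, and only widen the scope once verification passes.
PHASES = [
    ("proof-of-concept", ["lab-segment"]),
    ("application-migration", ["hr-apps", "partner-portal"]),
    ("capability-evolution", ["all-remaining-segments"]),
]

def run_rollout(verify):
    """Advance phase by phase; stop at the first failed verification so the
    problem can be isolated to a single batch instead of the whole network."""
    completed = []
    for name, segments in PHASES:
        if not all(verify(s) for s in segments):
            return completed
        completed.append(name)
    return completed

# A verifier that fails on one segment halts the rollout after phase one;
# a verifier that always passes completes all three phases.
print(run_rollout(lambda seg: seg != "partner-portal"))
print(run_rollout(lambda seg: True))
```

The value of the gate is exactly the point made above: if everything is rolled out in unison and something goes wrong, there is no way to tell which change caused it.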
First, implement changes in a small part of the organization, see if it works, then graduate to a higher level. The same principle applies when implementing a zero-trust network architecture. It will not always be obvious, or even known, how two separate layers in the network are connected. It is hence advisable to roll out the changes, test them, and then finalize them in batches. If the changes are rolled out in unison and something goes wrong, it will be impossible to isolate exactly where the problem is arising from. A phased process for implementing a zero-trust architecture follows three main steps: first a proof of concept, then application migration, and lastly capability evolution. Here is what this looks like in practice. The first step means that the zero-trust architecture is applied to a small scenario. This should be a moderately scoped zero-trust scheme that delivers the results hoped for from the full rollout. The next step requires that you diversify the application of the zero-trust mechanism to other business areas, so that as you go further along the process, more and more use cases are tested. During this process, you will begin to notice new requirements and considerations that you may have missed during the planning stage, and you will have to optimize the process to address them.

Conclusion for the Best Zero Trust Network Security Architecture

Finally, you should now have enough information to enhance the zero-trust capabilities; this is the capability evolution phase. Zero trust must constantly evolve because the nature of the security environment demands it. This is what distinguishes zero trust and why it is essential that businesses and other organizations utilize it. For the best zero-trust network security architecture advice, please contact us today!
When does class start/end? Classes begin promptly at 9:00 am and typically end at 5:00 pm.

Rust is a modern systems programming language with the benefits of both a native and a managed programming approach. In class, learn the standard features and the programming style of Rust. In addition, Rust is a C-family language with a host of unique features, such as ownership, lifetimes, panics, patterns, and more. The many unique features and benefits are reviewed thoroughly in class. Zero-cost abstraction is an important principle of Rust; for example, there is no garbage collection. This and other aspects of zero-cost abstraction are discussed in class. Rust favors composition over inheritance and is not considered a fully object-oriented language. However, object-oriented concepts are supported using traits, including polymorphic behavior. Students will learn how to use traits to implement polymorphism, abstraction, and extensibility. Finally, the class concludes with an introduction to threads.

The audience for this course is software engineers and developers. Students should have six months of general programming experience.

In this course you will learn the following:
- Types and variables
- Ownership and lifetimes
- Transfer of control
- Functions and Methods
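As a small taste of the trait-based polymorphism covered in class, here is a sketch (the types and values are invented for illustration): a trait defines shared behavior, and trait objects provide dynamic dispatch without classical inheritance.

```rust
// A trait plays the role an interface or abstract base class would
// elsewhere: shared behavior without an inheritance hierarchy.
trait Shape {
    fn area(&self) -> f64;
}

struct Rect { w: f64, h: f64 }
struct Circle { r: f64 }

impl Shape for Rect {
    fn area(&self) -> f64 { self.w * self.h }
}

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

// Accepts any type implementing Shape: extensibility without subclassing.
// `dyn Shape` is a trait object, dispatched dynamically at runtime.
fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Rect { w: 2.0, h: 3.0 }),
        Box::new(Circle { r: 1.0 }),
    ];
    println!("total area = {:.2}", total_area(&shapes));
}
```

New shape types can be added by implementing `Shape`, with no changes to `total_area`: composition over inheritance in practice.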
Phthalates, which are used as plasticizers in plastics, can considerably increase the risk of allergies among children. This was demonstrated by UFZ researchers in conjunction with scientists from the University of Leipzig and the German Cancer Research Center (DKFZ) in a current study published in the Journal of Allergy and Clinical Immunology. According to this study, an increased risk of children developing allergic asthma exists if the mother has been particularly heavily exposed to phthalates during pregnancy and breastfeeding. The mother-child cohort from the LINA study was the starting and end point of this translational study. In our day-to-day lives, we come into contact with countless plastics containing plasticizers. These plasticizers, which also include the aforementioned phthalates, are used when processing plastics in order to make the products more flexible. Phthalates can enter our bodies through the skin, foodstuffs or respiration. “It is a well-known fact that phthalates affect our hormone system and can thereby have an adverse effect on our metabolism or fertility. But that’s not the end of it,” says UFZ environmental immunologist Dr Tobias Polte. “The results of our current study demonstrate that phthalates also interfere with the immune system and can significantly increase the risk of developing allergies.” At the outset of the study, the team of UFZ researchers examined the urine of pregnant women from the LINA (lifestyle and environmental factors and their influence on the newborn-allergy-risk) mother-child cohort study and searched for metabolites of phthalates. The concentration level determined in each case was found to correlate with the occurrence of allergic asthma among the children. “There was a clearly discernible relationship between higher concentrations of the metabolite of benzylbutylphthalate (BBP) in the mother’s urine and the presence of allergic asthma in their children,” explains Dr Irina Lehmann, who heads the LINA study. 
Researchers were able to confirm the results from the mother-child cohort in the mouse model in collaboration with colleagues from the Medical Faculty at the University of Leipzig. In this process, mice were exposed to a certain phthalate concentration during pregnancy and the lactation period, which led to comparable concentrations of the BBP metabolite in urine to those observed in heavily exposed mothers from the LINA cohort. The offspring demonstrated a clear tendency to develop allergic asthma; even the third generation continued to be affected. Among the adult mice, on the other hand, there were no increased allergic symptoms. “The time factor is therefore decisive: if the organism is exposed to phthalates during the early stages of development, this may have effects on the risk of illness for the two subsequent generations,” explains Polte. “The prenatal development process is thus clearly altered by the phthalate exposure.” Phthalates turn off regulatory genes In order to establish precisely what may have been modified, Polte and his team, in collaboration with colleagues from the German Cancer Research Center (DKFZ), took a close look at the genes of the young mice born to exposed mothers. So-called methyl groups were found in the DNA of these genes — and to a greater extent than is usually the case. In the course of this so-called epigenetic modification of the DNA, methyl groups attach themselves to a gene like a kind of padlock and thus prevent its code from being read, meaning that the associated protein cannot be produced. After the researchers treated the mice with a special substance intended to crack the methyl “locks” on the affected genes, the mice demonstrated fewer signs of allergic asthma than before. Dr Polte concludes the following: “Phthalates apparently switch off decisive genes by means of DNA methylation, causing the activity of these genes to be reduced in the young mice.” But which genes cause allergic asthma if they cannot be read? 
So-called T-helper 2 cells play a central part in the development of allergies. These are kept in check by special opponents (repressors). If a repressor gene cannot be read as a result of being blocked by methyl groups, the T-helper 2 cells that are conducive to the development of allergies are no longer sufficiently inhibited, meaning that an allergy is likely to develop. "We surmise that this connection is decisive for the development of allergic asthma caused by phthalates," says Polte. "Furthermore, in the cell experiment, we were able to demonstrate a greater formation of T-helper 2 cells from the immune cells of the offspring of exposed mother mice than from those of non-exposed animals. This enabled us to establish an increased tendency towards allergies once again."

From humans to mice and back again

In mice, the researchers were able to prove that a repressor gene that has been switched off due to DNA methylation is responsible for the development of allergic asthma. But does this mechanism also play a part in humans? In order to answer this question, the researchers consulted the LINA cohort once more. They searched for the corresponding gene among the children with allergic asthma and studied the degree of methylation and gene activity. Here, too, it became apparent that the gene was blocked by methyl groups and thus could not be read. "Thanks to our translational study approach — which led from humans via the mouse model and cellular culture back to humans again — we have been able to demonstrate that epigenetic modifications are apparently responsible for the fact that children of mothers who had a high exposure to phthalates during pregnancy and breastfeeding have an increased risk of developing allergic asthma," says Polte. "The objective of our further research will be to understand exactly how specific phthalates give rise to the methylation of genes which are relevant for the development of allergies."
IBM on Monday released IBM Q, which introduces the first commercially available quantum computing systems enabled through IBM’s cloud platform. The difference between IBM Q and IBM’s Watson is that Watson can find the answers to problems based on large amounts of data, whereas IBM Q can find answers to complex problems where the data doesn’t yet exist. Quantum computing codes information in quantum bits (qubits) instead of binary code like classical computers, which makes this new technology potentially millions of times more powerful than today’s supercomputers. The IBM Quantum Experience allows users to connect through the cloud to run algorithms and experiments, and work with tutorials and simulations that demonstrate what might be possible with quantum computing. “IBM has invested over decades to growing the field of quantum computing and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of Hybrid Cloud and director for IBM Research. “Following Watson and blockchain, we believe that quantum computing will provide the next powerful set of services delivered via the IBM Cloud platform, and promises to be the next major technology that has the potential to drive a new era of innovation across industries.” IBM Q began tackling problems in the field of chemistry by discovering the number of quantum states that a molecule can reach. IBM Q started by working with the molecule for caffeine. In the future, the goal is to work with more complex molecules to try to predict chemical properties with higher precision than possible with classical computers. “Classical computers are extraordinarily powerful and will continue to advance and underpin everything we do in business and society,” said Tom Rosamilia, senior vice president of IBM Systems. “But there are many problems that will never be penetrated by a classical computer. 
To create knowledge from much greater depths of complexity, we need a quantum computer.” Quantum computing will also focus on challenges in discovering new medicines, optimizing commercial delivery systems, finding new ways to model financial data and isolating key global risk factors to make better investments, making artificial intelligence more accurate, and enhancing private data security. IBM has worked with about 40,000 users from universities, industry, and government over the course of a year to enhance its quantum computing platform. “This breakthrough technology has the potential to achieve transformational advancements in basic science, materials development, environmental and energy research, which are central to the missions of the Department of Energy,” said Steve Binkley, deputy director of science at the Department of Energy. “The DOE National Labs have always been at the forefront of new innovation, and we look forward to working with IBM to explore applications of their new quantum systems.”
The Internet of Things (networks of uniquely identifiable endpoints, or "things," that communicate without human interaction using embedded IP connectivity) is the next industrial revolution. Estimates say there will be 24 billion IoT devices installed by 2020, and $6 trillion will be invested in IoT devices over the next 5 years. With that kind of growth and investment, protecting each of these "things" and their corresponding interactions with other components, including our networks, will be critical. So where is this growth coming from? Businesses, governments, and consumers are all using IoT ecosystems. It is estimated that consumers will have 5 billion IoT devices installed by 2020. While this is impressive, it is dwarfed by governments (an estimate of at least 7.7 billion devices installed by 2020) and businesses (at least 11.2 billion devices installed by 2020). But how secure will those devices be? An AT&T Cybersecurity survey of more than 5,000 enterprises worldwide found that 85% of enterprises are in the process of or are planning to deploy IoT devices, but only 10% feel confident that they can secure those devices against hackers. Industrial control system (ICS) is a general term that encompasses several types of control systems used in industrial production. ICSs are typically used in the electrical, water, oil, gas, and data industries. Industrial control systems worldwide are already using "smart" IoT devices and systems, and that use is growing. In the 1950s, the first analog-based supervisory control and data acquisition (SCADA) systems were developed. They were usually monolithic, isolated, and proprietary, residing on minicomputers and backup mainframe systems for added redundancy. Over time, the market saw huge growth in the number of manufacturers and vendors supporting the ICS market.
Unfortunately, because standards were still being established, this growth caused interoperability issues and added significant cost to maintaining these systems. Once standards for the applications and protocols used to control various ICS systems were established, they allowed for interoperability between different vendors, adding a level of flexibility and interaction not previously seen. Next, IP communications in the late 1980s and early 1990s propagated the concept of local area networks (LAN) and process control networks (PCN), which drove the replacement of older, aging, and limited communication links, moving from serial to Ethernet networks. As the IT revolution moved forward, these ICS LANs/PCNs were upgraded to keep up with the latest benefits in new application and control developments for SCADA-based systems. Today, in what is known as the 4th generation of the industrial evolution, the division of control between ICS and IT infrastructures has become muddled. With added interconnectivity between the very latest in IT and cloud infrastructure offerings, businesses are able to increase operational efficiencies, and as a result, increase profits while reducing costs. CEOs, CFOs, and board members are obviously thrilled with such technological advantages that they can leverage. However, the adverse impact of this next generation of industrial convergence is the cyberthreat exposure this approach brings with it. While many cybersecurity threats and incidents that occur inside industrial networks are unintentional, meaning they are due to human error or device or software failure, external threats remain the top concern. Manufacturing and energy, for example, have been the most targeted sectors in recent years, but many other segments of our critical infrastructure (water, transportation, government facilities) have seen multiple incidents of cyberattacks.
Fortinet recently commissioned Forrester Consulting to conduct a survey to explore the current state, challenges, priorities, and strategies for securing critical infrastructure. Forrester surveyed 214 U.S. organizations across all industries, focusing on companies of 1,000 or more employees, with distributed critical infrastructure sites such as hospitals, power plants, manufacturing plants, dams, government facilities, and refineries. The organizations surveyed acknowledge the importance of SCADA/ICS security. They currently undertake numerous measures to secure SCADA/ICS, and seek to increase investment in security over the next year. Fears of outside threats appear to drive this stance: 78% of respondents stated that security attacks from non-state actors drove their SCADA/ICS security strategy. These fears are justified: 77% of organizations report that their SCADA/ICS had experienced a security breach, with two-thirds of those occurring in the past year. Impacts from those breaches ranged from a reduced ability to meet compliance standards to losses of functionality and risks to employee safety. Breach points are everywhere within Industry 4.0 networks, from outside threats to inside threats, and from RTU (Remote Terminal Unit) or HMI (Human Machine Interface) exploits to breaches of air-gapped networks. You need a well-conceived, layered defense to make sure you're covering all your bases. A defense-in-depth strategy deploys application security at both the host RTU and the network level, with tightly integrated multiple detection mechanisms. Fortinet's Defense-in-Depth strategy prevents threats from entering the organization through stringent boundary controls. Relying on perimeter security, such as a traditional edge firewall, to protect your internal network is no longer enough.
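The move beyond perimeter-only security can be sketched as a default-deny allow-list between internal zones. This is a minimal, generic sketch; the zone names and rules are invented and are not part of any vendor's product.

```python
# Hedged sketch of internal segmentation: traffic between internal zones is
# checked against an explicit allow-list instead of being implicitly trusted
# just because both endpoints are inside the perimeter.
ALLOWED_FLOWS = {
    ("hmi-zone", "plc-zone"),   # operator workstations may command controllers
    ("historian", "plc-zone"),  # data collection reads from controllers
    ("corp-it", "historian"),   # reporting reads from the historian only
}

def permit(src_zone: str, dst_zone: str) -> bool:
    # Default deny: anything not explicitly allowed is dropped, which
    # contains malware that has breached one internal segment.
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# Legitimate operator traffic passes; the office network cannot reach
# the controllers directly, even from "inside".
print(permit("hmi-zone", "plc-zone"))
print(permit("corp-it", "plc-zone"))
```

Containment is the design goal: a compromise of the corporate IT zone cannot propagate straight to the control layer, because that flow was never granted.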
The Fortinet Internal Segmentation Firewall (ISFW) is designed to sit between two or more points on the internal network to allow visibility, control, and the mitigation of traffic between disparate network segments, while protecting different network segments from malicious code as it makes its way through the internal network. To truly protect ICS systems in your critical infrastructure, an approach like Fortinet's ICS Layered Defense Model is the best solution. An ATP framework allows you to detect and act on the latest, most advanced malware. A defense-in-depth approach provides you with tightly integrated, multiple layers of protection. And internal segmentation allows you to contain any malicious code that has made it past your external defenses, thereby containing a breach and limiting the damage. With the explosion of growth of IoT devices within industrial control systems, and so much at stake with critical infrastructure protection, this is an area where we need to be concentrating our most advanced cybersecurity defenses.
The fast pace of technological advancement has affected our everyday lives through constant connection to digital devices. By 2020, it is expected that an astonishing 50 billion devices will be connected to the Internet. The rapidly growing network of the Internet of Things (IoT) represents an alarming change in the digital world, with effects that pose potential risks to the majority of individuals and businesses.

What's happening now?

Cyberattacks aren't new. What is new is the grand scale of simplistic and ruinous attacks on the Internet of Things. The main principle of the IoT is connecting an array of technological devices that can be accessed and managed through the Internet. What does this mean? All compatible devices, such as your smart fridge, smart thermostat and fancy fitness gadgets, are posing a threat to security and privacy by creating entry points for cyber-attackers. The extent of damage these attacks cause differs in severity depending on numerous variables, such as the type of device, the environment, and the ability to secure strong protection software. Below is a list of the most common cyberattacks that happen amongst the digitally connected community and how their threats impact the potential of having a secure Internet of Things network.

Are Botnets a threat?

A Botnet is a combination of 'hijacked' devices in a network used to control and distribute malicious spam and malware. Most commonly, they are used in the hope of stealing personal details, exploiting online banking information and constructing phishing emails. Why does this impact the Internet of Things? The "smartification" of devices has occurred rapidly, leaving little time to develop the necessary protection required to repel cyberattacks. The problem occurs when these Internet of Things devices become part of the Botnet, becoming "Thingbots". But what happens when this network of devices starts to send spam email?
The main threat here is that it is somewhat difficult for antivirus software to detect a potential cyberattack when emails are being sent through a Botnet from numerous different network devices. This can leave the recipients at an increased level of risk. Ultimately this is their aim: to send tens of thousands of varying emails in pursuit of a network crash in order to get access to personal or company information. Installing antivirus and antispyware programmes from a trusted source is a sound defence. It is vitally important to keep all software up to date, in conjunction with the use of strong and complex passwords.

Simplicity of Identity Fraud

The most common components of identity fraud are scarily simplistic. General access to data that can be found on the internet, as well as information from social media accounts and other documents, builds your online identity. Obviously, the more details about someone's identity that can be procured through the Internet, the more devastating the attack can be. Now the IoT provides a new dimension of information from your smart fridge, thermostats, fitness trackers and other devices. Lethargy regarding identity protection through internet-connected devices is creating a breadth of easy opportunities for malicious attackers. The number of people who fell victim to identity fraud in the UK rose by a third in 2016. It is now increasingly important that you are aware of what personal information can be accessed online and of the growing need to protect your identity through protection and security software. This is just the start…

The rise of Social Engineering

Social Engineering is growing in popularity as a way of accessing individuals' confidential information: manipulation at its finest. In its simplest form, Social Engineering uses existing information about you in order to manipulate people into providing confidential data.
The most common outcome that attackers seek is password and banking information, or the ability to access company or personal computers in order to install malicious software. The most common strategy that cybercriminals use to access this information is phishing emails, which aim to encourage individuals to send confidential information via a legitimate-looking website. Phishing accounts for over 77% of all social-based attacks, with a massive 37 million people reporting these types of attacks in 2015. Simple actions can be taken to protect against the threat of social engineering:
– Always check the sender of the email
– Avoid any strange link
– Never install software from unreliable sources
– Do not give away any confidential information to strangers

Do you need help with these common cyberattacks?

Here at IntaForensics we offer many options specifically designed to help resolve attacks and support you in improving your cybersecurity. We are also able to create personalised packages for any unique requirements that your organisation may have. With this in mind, please don't hesitate to contact us.
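The advice above about strange links can be sketched as a simple heuristic check. This is illustrative only, and real phishing detection is far more involved; the example domains are invented.

```python
# Flag links whose visible text claims one domain but whose actual href
# points somewhere else -- a classic phishing-email pattern.
from urllib.parse import urlparse

def suspicious_link(display_text: str, href: str) -> bool:
    # Prepend "//" so urlparse treats bare text like "www.example.com"
    # as a network location rather than a path.
    shown = urlparse(display_text if "//" in display_text else "//" + display_text).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual

# The displayed text says one bank, the link goes elsewhere: suspicious.
print(suspicious_link("www.mybank.com", "http://mybank.example.ru/login"))
# Text and destination agree: not flagged by this heuristic.
print(suspicious_link("www.mybank.com", "https://www.mybank.com/login"))
```

A heuristic like this catches only one trick; attackers also use look-alike domains and URL shorteners, which is why layered defences and user caution still matter.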
Part 2: Mitigating the layers of digital equity

In honor of this week's ISTE conference (a premier education technology conference in San Antonio, TX), I want to discuss how schools, educators and IT can provide students with the best conditions for success. Want to create more engagement with students? Giving them access to educational resources is a great way to open their minds and fuel their desire to learn. But don't make the mistake of limiting such exposure to the classroom. Allow students to continue learning after school and on the weekend by adding additional layers of technology into their lives. More exposure in diversified environments is a key component of creating digital equity among students. When at home or out in the community, students have different levels of access to supplementary technology, such as powerful computers, printers, scanners and cameras. Additionally, their families and peers have varying levels of technological knowledge. Those with a strong understanding of technology can help the student in ways those with a limited understanding of the topic cannot. Coupled with inconsistent access to the internet at home, a significant factor in learning, many students fall victim to the absence of digital equity.

Digital equity: What it is and why it matters

Digital equity is the concept of providing more students with equal access to quality technology tools, no matter where they are. Accomplishing a state of digital equity requires reaching beyond school walls to meet students with technology, including access to innovative affordances that don't fully rely on the internet, in their home environments. There's an easy way for schools to begin addressing the issue of digital equity: let students take their devices home. Giving students constant access to the innovative tools they need is a critical component of helping them maximize their learning potential. Whether it's out in the community or at a friend's house, learning shouldn't stop.
Digital equity is the key to ensuring it doesn't. Schools are able to provide students with quality devices that support innovative technology, with or without reliable internet access. It is all of these variables together that create a powerful, interconnected learning environment for students. This overall experience, which includes access to quality content, creative affordances and activities that foster critical thinking, provides the basics needed for digital equity. Unfortunately, these often disappear when a student goes home, and the equity gap continues.

Schools should have both near- and long-term plans to mitigate the lack of high-quality internet for students without it at home and in their extended community. In the absence of ubiquitous internet coverage, the devices the school provides should be capable of delivering an equitable learning experience to all students. When schools begin to focus on providing the same technological experience for students both in and out of school, they will start solving the problem of digital equity and begin providing all students an equal opportunity to learn.

At ISTE? Stop by booth 3326 to discuss this and many more education technology topics. And look for my next post on supporting classroom management with technology.
Today, mobile networks are not limited to mobile phones for consumers and businesses; they also provide connectivity for machine-to-machine (M2M) and Internet of Things (IoT) devices. IoT, or the Internet of Things, has been around for a long time: it is a system of devices connected to the internet so they can communicate with each other or with other systems. Mobile IoT, or Cellular IoT (CIoT), is a system that uses mobile cellular networks for IoT connectivity. It employs Low Power Wide Area (LPWA) technologies like NB-IoT, LTE-M and GSM-IoT to allow low-cost, low-powered IoT devices to connect securely using the licensed spectrum of a mobile operator.

Why use a mobile network instead of a Wi-Fi network for IoT?

IoT devices can connect over any network, and they do not necessarily require a mobile cellular connection such as GSM, UMTS, LTE or 5G NR. However, there are key advantages to using a mobile network for IoT connectivity. The first and most important benefit of mobile IoT (cellular IoT) is the reach of the network. As mobile IoT utilises the existing cellular networks, it is able to leverage existing infrastructure. In addition, mobile networks use a licensed frequency spectrum, which allows them to minimise interference and provide connectivity in a highly secure manner.

Cellular IoT can currently be enabled by variants of the GSM and LTE technologies, including EC-GSM-IoT, NB-IoT and LTE-M. GSM (Global System for Mobile Communications) belongs to the second-generation (2G) mobile networks and has been around since the early 1990s. Today, the latest cellular technology is New Radio (NR), which is used by the fifth generation of mobile networks, or 5G; however, LTE (Long Term Evolution), a fourth-generation (4G) cellular technology, is the most widely available.

Why does mobile IoT require a low-powered network?
There are three key requirements for mobile IoT or cellular IoT connectivity: low power consumption, so that the battery can last for up to 10 years; low cost, so that mass deployment can take place; and wide-area coverage, so that devices can connect to the network no matter where they are placed. The standardisation of mobile IoT or cellular IoT is in line with 3GPP specifications.

The mass deployment of billions of IoT devices requires some practical considerations. The first requirement is for the connectivity technology to support low power consumption so that devices do not frequently run out of battery. The second is the cost of the device, which must be low in order to support mass deployment affordably. The third is strong coverage so that all devices can easily connect to the network no matter where they are situated. So, the key building blocks for cellular IoT (mobile IoT) are low device cost, extended network coverage and low power consumption.

Unlike our mobile phones, which spend most of their time with us, IoT devices often need to be placed in awkward locations. For instance, in the smart meter use case, gas and electricity meters are usually found indoors inside a cupboard, which is not the most accessible place for a mobile signal to reach. Therefore, the mobile signal needs to be extra strong to reach these problematic locations; as a result, mobile IoT coverage requires the cellular signal to be 20 dB stronger than average. Furthermore, to keep device costs low, the devices need to have a very low level of complexity. Finally, low power consumption allows the battery to last for several years (up to 10 years).

How important is the connectivity type for IoT devices?

IoT allows connected devices to provide information or trigger actions to facilitate many tasks that we encounter in our daily lives.
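As a rough sanity check on the 10-year battery figure mentioned above, here is a back-of-the-envelope sketch. The 5,000 mAh capacity and 50 µA average current draw are illustrative assumptions, not figures from any specification; real devices reach such low averages by spending most of their time in deep sleep.

```python
def battery_life_years(capacity_mah: float, avg_current_ma: float) -> float:
    """Idealised battery life: capacity divided by average current draw."""
    hours = capacity_mah / avg_current_ma
    return hours / (24 * 365)

# A 5,000 mAh cell at a 0.05 mA (50 uA) average draw lasts about 11 years,
# which is how aggressive duty-cycling reaches the LPWA 10-year target.
print(round(battery_life_years(5000, 0.05), 1))  # 11.4
```

The same arithmetic shows why always-on radios fail the target: at a 5 mA average draw, the same battery lasts well under two months.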
While there are many ways to look at an IoT system, the three critical components of IoT are hardware, connectivity and the software application. A simple example is a smart lighting system where a smart light bulb (hardware) is connected to a local WiFi network (connectivity) and controlled by an app (software application). Even though the actual value of an IoT system comes from the hardware and the application, connectivity remains essential. Cellular IoT is a type of IoT that uses a cellular network for connectivity. The key aspects of a good IoT connectivity technology include wide-area coverage (long range), low complexity and cost, and low power consumption to save battery life.

LPWA technologies: EC-GSM-IoT, LTE-M and NB-IoT

EC-GSM-IoT, LTE-M (LTE for Machines) and NB-IoT (Narrowband IoT) are low-power wide-area (LPWA) technologies that utilise the existing cellular network infrastructure of mobile operators to provide connectivity to IoT devices using licensed frequency spectrum.

EC-GSM-IoT: Extended Coverage GSM Internet of Things

EC-GSM-IoT is based on 2G GSM networks. It is a long-range, low-complexity, low-power technology designed to be backwards compatible with existing GSM networks. EC-GSM-IoT uses the EGPRS (Enhanced GPRS) technology within GSM EDGE networks and can work with existing GSM base stations through a software upgrade, without requiring additional hardware. EC-GSM-IoT suits IoT use cases where low data rates are needed for non-real-time scenarios, e.g. meter readings from a smart meter. It can facilitate data rates of 160 bits per second (bps) or more with a latency of around 10 seconds. Furthermore, the battery can last up to ten (10) years due to low power consumption.

LTE-M: Long Term Evolution for Machines

LTE-M is an IoT technology based on 4G LTE networks.
LTE-M differs from EC-GSM-IoT in that it is suitable for use cases where a higher data rate is required for real-time scenarios, e.g. patient monitoring. LTE-M can enable up to 1 Mbps in both uplink and downlink with a bandwidth of around 1.08 MHz. The battery life for devices that support LTE-M is about ten years. It can support voice and data, and use cases include traffic lights, parking sensors and smart cities.

NB-IoT: Narrowband IoT

NB-IoT stands for NarrowBand Internet of Things, or Narrowband IoT, and is based on 4G LTE technology. Like EC-GSM-IoT, it is designed for non-real-time use cases where a slight delay in the communication is acceptable, e.g. utility meters. NB-IoT has multiple categories: Cat NB1 can offer peak downlink data rates of up to 226.7 kbps, whereas Cat NB2 can offer peak downlink data rates of up to 282 kbps. NarrowBand IoT, as the name suggests, employs a low bandwidth of 180 kHz.

LTE-M vs NB-IoT: Which is better?

NB-IoT (Narrowband IoT) employs smaller bandwidths to enable data connections with lower bit rates and wider coverage. LTE-M (LTE for Machines) is designed for real-time communication (e.g. emergency communication, including voice), and it can deliver higher data rates than NB-IoT. The LTE-M technology can support most IoT use cases; however, the choice of technology depends on the target use cases. LTE-M offers lower latencies and higher data rates, making it a good choice for use cases that require real-time communication (e.g. emergencies). The Narrowband IoT (NB-IoT) technology uses a smaller bandwidth (hence the name) and therefore offers lower data rates but wider coverage, which makes it unsuitable for real-time communication. A typical device category for NB-IoT is sensors.

Cellular IoT (CIoT) or Mobile IoT can be seen as umbrella terms encompassing various cellular technologies that connect IoT devices to the internet through mobile data.
Mobile IoT employs various low-powered-wide-area (LPWA) technologies, including Extended Coverage GSM IoT (EC-GSM-IoT), NarrowBand IoT (NB-IoT) and Long Term Evolution for Machines (LTE-M) to connect IoT devices to the internet through GSM and LTE networks.
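To make the rate differences between these LPWA technologies concrete, the sketch below estimates the idealised time on air for a 100-byte payload at the peak rates quoted in this article. Real-world throughput is lower because of protocol overhead, repetitions and retransmissions, so treat these as best-case figures.

```python
# Peak data rates as quoted above, in bits per second (illustrative)
PEAK_RATES_BPS = {
    "EC-GSM-IoT": 160,
    "NB-IoT Cat NB1 (downlink)": 226_700,
    "LTE-M": 1_000_000,
}

def airtime_seconds(payload_bytes: int, rate_bps: float) -> float:
    """Idealised transmission time, ignoring all protocol overhead."""
    return payload_bytes * 8 / rate_bps

for tech, rate in PEAK_RATES_BPS.items():
    print(f"{tech}: {airtime_seconds(100, rate):.4f} s for 100 bytes")
```

At 160 bps, the same reading that LTE-M moves in under a millisecond takes five seconds, which is why EC-GSM-IoT and NB-IoT target delay-tolerant use cases such as meter readings.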
Mobile networks are also called cellular networks because they consist of a large number of interconnected cells. In mobile communications, cells are the most fundamental piece of the network that enables wireless connectivity. A cell is a geographical area that defines the cellular coverage zone created by the base station of a mobile network. The base station, also known as a cell tower, is equipped with transceivers that transmit and receive radio signals at licensed frequencies between the network and mobile phones. Cell towers, also known as radio base stations, are part of a mobile network owned by a mobile network operator (MNO). The radio base station is responsible for communicating with the mobile phone and is part of a mobile operator's radio network.

Cells in mobile communications work together to create a cellular network. A mobile network consists of many interconnected cells created by a large network of cell towers that mobile operators deploy throughout villages, towns and cities within a country. These cells create cellular coverage that allows SIM-enabled mobile devices to connect to the mobile network.

The concept of a cell in mobile communications

The cells in a mobile network are represented by interconnected hexagons that cover a geographical area where cellular coverage is required. In real life, the shape of a cell is determined by its range, which depends on how far the radio signals from a base station can travel before fading entirely. The hexagonal shape is just a conceptual view for cell planning and documentation purposes. In real life, cells have a certain level of overlap with neighbouring cells to allow for a handover when a user moves from one location to another. The radio signals are communicated between the base station and our mobile phones at certain frequencies and transmission power.
When a mobile phone is in an area covered by a base station, it can transmit radio signals to the base station at frequencies allocated by the base station. So basically, a cell is just a geographical range within which the communication between a base station and a mobile phone can occur through radio waves. The base stations transmit and receive the radio signals at certain frequencies within a well-defined range, and the mobile phones within this range do the same for two-way communication.

Cells are part of the radio access network (RAN)

Cells are created by the radio waves emitted by the radio base station, which is part of a mobile radio network. The radio network in mobile communications is called a radio access network, or RAN, and it is responsible for wireless connectivity through the air interface. The base stations have transceivers (transmitter + receiver) that can receive signals from the cell phone and transmit signals from the network back to the phone to enable two-way communication. The emission of radio waves from the base stations creates network coverage that the cell phones use to connect to the mobile network. The network coverage area created by the radio waves from a particular radio unit within a base station is called a cell.

The radio network that the base stations belong to is only one part of the overall mobile network. The mobile radio network connects to the mobile core network, which then connects to external networks like the PSTN and the Internet. That way, a mobile service provider is able to connect you to anyone, no matter which mobile or fixed network they are on.

A group of cells is called a cluster

A cluster in mobile networks refers to a group of interconnected cells in a specific geographical area. The term cluster is used in cell planning, where RF engineers ensure that the available frequency channels (e.g. ARFCN, UARFCN, EARFCN) are allocated to cells with minimal interference.
At a very basic level, if any two adjacent cells are allocated the same frequency channel, that can lead to co-channel interference. If any two adjacent cells are allocated adjacent frequency channels, e.g. ARFCN #1 and ARFCN #2, that can lead to another type of interference called adjacent-channel interference.

Our mobile phones always communicate with the mobile network, even in idle mode when no one is using the phone; the phone keeps the network updated about its location and presence. When we are moving, e.g. driving from home to work, we may come out of the range of one cell and move into the range of another. When that happens during an active session, e.g. during a voice call, our session (or call) gets 'handed over' from one cell to another. This way, throughout a journey, our call or data session keeps hopping from cell to cell to make sure we stay connected without dropping the call or interrupting the data session.

Frequency channels used by cells: ARFCN, UARFCN and EARFCN

ARFCN stands for Absolute Radio Frequency Channel Number, and it is a range of frequency channels available in GSM networks. UARFCN, or UTRA ARFCN, refers to the ARFCN in 3G UMTS networks, whereas EARFCN, or Evolved-UTRA ARFCN, refers to the ARFCN in 4G LTE networks. ARFCNs have numbers allocated to them, and each ARFCN represents a pair of frequencies, one for transmission and one for reception, when different frequency bands are used for uplink and downlink (Frequency Division Duplex, FDD). Have a look at our dedicated post on GSM frequencies to understand how ARFCNs are allocated. To learn more about FDD and how it is used in 4G LTE networks, check out this dedicated post on duplex schemes for LTE networks.

What is a handover between two cells?

A handover, also known as a handoff, is when a voice call or data session is transferred from one serving cell to another.
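As a concrete example of the channel numbering described above, the sketch below maps a primary GSM 900 ARFCN to its uplink/downlink carrier pair. The 890 MHz base, 200 kHz channel spacing and 45 MHz duplex separation are the standard P-GSM 900 values from 3GPP TS 45.005; other bands (E-GSM, GSM 1800, etc.) use different offsets, so treat this as illustrative.

```python
def gsm900_carrier_mhz(arfcn: int) -> tuple[float, float]:
    """Map a P-GSM 900 ARFCN (1-124) to (uplink, downlink) carriers in MHz."""
    if not 1 <= arfcn <= 124:
        raise ValueError("P-GSM 900 ARFCNs run from 1 to 124")
    uplink = 890.0 + 0.2 * arfcn   # 200 kHz channel spacing
    downlink = uplink + 45.0       # 45 MHz FDD duplex separation
    return uplink, downlink

ul, dl = gsm900_carrier_mhz(62)    # a mid-band channel
print(f"ARFCN 62: uplink {ul:.1f} MHz, downlink {dl:.1f} MHz")
```

The 45 MHz gap is the FDD pairing mentioned above: one ARFCN always names two frequencies, one in each direction.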
A handoff happens when you start a call or a data session in a certain location and then, during the session, move out of the area such that the cell that was serving you can no longer reach you. In that case, the serving cell will hand over the responsibility for your session to another nearby cell better situated to serve you. For example, if you sit on a train from London Heathrow airport to central London and start watching a YouTube video (assuming you are using mobile data and not WiFi), your data session will keep getting handed over from one cell to another as you move from one location to another.

Why are mobile phones referred to as cell phones?

A mobile phone is also called a cell phone because it works on cellular technologies like GSM, UMTS, IS-95, CDMA2000, LTE and NR. While the term cell phone is often used in the US and the term mobile phone mostly in Europe, the term cell refers to cellular technologies. A mobile phone is often referred to as a cell phone because it employs cellular technologies such as GSM, UMTS, cdmaOne, CDMA2000, LTE and NR to communicate with other phones. From a terminology viewpoint, the term cell phone is mostly used in the US and countries that follow US terminology, whereas in the UK and many parts of Europe, the term mobile phone is used for cellular phones.

Cell phones connect to a network of interlinked cells that allows them to communicate with other phones and devices on the same or other networks. Mobile cellular networks use advanced technologies to establish a connection between the cell phone and the strongest available base station used by the mobile service provider. Mobile operators constantly introduce new cellular technologies and enhancements to keep up with network traffic demand. For a user, that means having to upgrade phones from time to time.
For example, you may be someone who once bought a GSM phone and then had to upgrade to a 3G phone and then to a 4G phone. With enhancements like High-Speed Packet Access, LTE-Advanced and Advanced Pro, and now 5G NR, it can be confusing for a mobile user to know if they need to buy a new phone. As a rule of thumb, all cellular technologies are backwards compatible: if you have a 3G phone for a particular 3G technology (e.g. UMTS), the phone will also work on the 2G technology (e.g. GSM) relevant for that 3G technology. However, cellular technologies will keep evolving, and the older technologies will at some point be phased out. Have a look at this GSM vs CDMA post if you live in the US and are on CDMA technology (e.g. CDMA2000/cdmaOne). If you have a 4G phone and wonder whether it should work on 5G or not, check out this dedicated post on 4G phones on 5G networks.

Types of cells used by a mobile network

A mobile network consists of various types of cells, including macrocells, microcells, picocells and femtocells. The microcells, picocells and femtocells are collectively called small cells. Macrocells have the longest range (tens of kilometres), whereas femtocells have the shortest range (up to 10 metres). The cells are differentiated in this way based on the range they cover and the capacity they have. Macrocells are the largest cells and can cover tens of kilometres, whereas femtocells are the smallest of the cells, covering a range of up to 10 metres. Microcells are the largest of the small cells, with a range of up to 2 kilometres. Picocells are slightly larger than femtocells, with a range of up to 200 metres.
| Cell type | Cell range |
|---|---|
| Macrocells | Tens of kilometres |
| Microcells | Up to 2 kilometres |
| Picocells | Up to 200 metres |
| Femtocells | Up to 10 metres |

Table: cell ranges for macrocells, microcells, picocells and femtocells

Macrocells are the large or regular cells that provide the main mobile network coverage in your area. These cells usually have their antennas mounted at the top of tall masts on the ground, on the rooftops of high-rise buildings and in other similar locations. Macrocells have a range of tens of kilometres, and they need to be mounted at a height from which they have a (mostly) clear view of the area they are serving. These cells require dedicated sites with an adequate power supply, and usually the operator pays rental fees for these sites. Macrocells form the main layer of cellular coverage within a geographical area.

Microcells are a type of small cell: low-powered cellular base stations. They are the biggest of the small cells, with a range of up to 2 kilometres. Microcells can add capacity and coverage to the existing mobile network alongside macrocells, picocells and femtocells. Given the area they can cover, microcells can be a good solution for areas like large train stations and for temporary capacity needs at sporting events, concerts, etc.

Macrocell is the cell that provides primary cellular coverage

Macrocells are the cells responsible for providing the primary cellular coverage and are used in geographical areas where the main challenge is network coverage as opposed to capacity. When a mobile operator serves an area with a low population density, they need fewer cells per square kilometre. Macrocells are ideal for rural and sparsely populated areas such as remote villages and towns, which may have a large land area but a small population. Macrocells have high transmission and reception power, giving them a large range to provide primary network coverage to vast geographical areas.
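The nominal ranges in the table above can be expressed as a simple lookup. This sketch picks the smallest cell type whose range covers a given link distance; the cut-offs are the table's approximate figures, with the open-ended macrocell range capped at an assumed 50 km for illustration.

```python
# (cell type, nominal maximum range in metres), from the table above
CELL_TYPES = [
    ("Femtocell", 10),
    ("Picocell", 200),
    ("Microcell", 2_000),
    ("Macrocell", 50_000),  # "tens of kilometres", capped here at 50 km
]

def smallest_cell_for(distance_m: float) -> str:
    """Return the smallest cell type whose nominal range covers the distance."""
    for name, max_range in CELL_TYPES:
        if distance_m <= max_range:
            return name
    raise ValueError("Beyond the nominal range of a single cell")

print(smallest_cell_for(150))  # Picocell
```

Choosing the smallest adequate cell mirrors the planning logic described here: smaller cells mean more frequency reuse and more capacity per square kilometre.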
Macrocells are more suitable for serving rural areas, where the traffic load on the mobile network is not as high as in heavily populated cities. Macrocells are installed, operated, controlled and managed by the mobile operator and use a licensed frequency spectrum. Multiple macrocells can originate from the same base station of a cell site.

Which cells are used for densely populated areas?

Mobile operators use smaller, more targeted cells to improve network capacity and coverage in densely populated areas with more people per square kilometre trying to access the network. Small cells, including microcells, picocells and femtocells, are an extension of the primary cellular network. In densely populated areas such as central London, with thousands of people per square kilometre, there is massive demand on a mobile network to ensure that everyone can access the network, get enough bandwidth/bit rate, and experience no coverage gaps. In urban areas, many users try to access the network simultaneously for voice calls, web browsing, video streaming etc., which increases the demand for network capacity. Urban areas also have coverage challenges, with many obstacles such as large buildings, brick walls, elevators, underground train stations, interference from WiFi/WLAN signals, and reflective surfaces, to name a few.

In these densely populated areas, a mobile operator can use smaller, more targeted cells, e.g. microcells and picocells, to fill the coverage and capacity gaps. Microcells are an extension of the primary cellular network, which consists of macrocells, and are controlled and managed by the mobile operators themselves. The key considerations for deploying microcells include connectivity to the mobile core network, the frequency spectrum and the power supply.

Here are some helpful downloads

Thank you for reading this post. I hope it helped you develop a better understanding of cellular networks.
Sometimes, we need extra support, especially when preparing for a new job, studying a new topic, or buying a new phone. Whatever you are trying to do, here are some downloads that can help you: Students & fresh graduates: If you are just starting, the complexity of the cellular industry can be a bit overwhelming. But don’t worry, I have created this FREE ebook so you can familiarise yourself with the basics like 3G, 4G etc. As a next step, check out the latest edition of the same ebook with more details on 4G & 5G networks with diagrams. You can then read Mobile Networks Made Easy, which explains the network nodes, e.g., BTS, MSC, GGSN etc. Professionals: If you are an experienced professional but new to mobile communications, it may seem hard to compete with someone who has a decade of experience in the cellular industry. But not everyone who works in this industry is always up to date on the bigger picture and the challenges considering how quickly the industry evolves. The bigger picture comes from experience, which is why I’ve carefully put together a few slides to get you started in no time. So if you work in sales, marketing, product, project or any other area of business where you need a high-level view, Introduction to Mobile Communications can give you a quick start. Also, here are some templates to help you prepare your own slides on the product overview and product roadmap.
With cybercriminals releasing, on average, 74,000 new strains of malware every day, one would expect many of these variations to be copies of one another. However, several experts have recently reported that malware is actually becoming more customizable, meaning hackers can personalize their weapon of choice to exploit vulnerabilities in a specific network. In addition, it would appear that the days of brash hackers trying to make a name for themselves with a pyrotechnic network breach are long gone, as cybercriminals increasingly favor stealth measures.

The early days of malware were marked by individual programmers creating their own unique malicious code. That gave way to an era of mass-produced tools that could be easily recreated and deployed by countless hackers. Over time, cyberdefense specialists have been able to mitigate the damage caused by the latter thanks to the proliferation of networks containing information regarding threats. These databases facilitated the sharing of critical information, including how to identify and neutralize widespread malware.

The best (or worst) of both worlds

Many of today's cybercriminals have combined the advantages provided by both approaches to malware design. By integrating customizable components with their mass-produced programs, hackers can create numerous malware variants to target the vulnerabilities of specific networks. For instance, cybercriminals have customized their phishing tools to create more sophisticated and effective campaigns. Similar to the way that marketers use consumer data to create more focused digital advertisements, hackers have leveraged information gathered from online sources such as social media sites to inform their phishing emails. Instead of creating the generic and conspicuous mass emails typically associated with phishing attacks, cybercriminals can now craft personalized messages that are more difficult for users to identify as threats.
Criminals value stealthy malware

Hackers have also been developing more sophisticated stealth-based behavior in their cyberweapons. One analyst noted that traditional malware intrusions usually resulted in a telltale service disruption, but this is becoming increasingly rare. Instead, security experts are finding that the amount of time that malicious programs spend embedded in a network has increased over the years. According to a study on the state of cybersecurity, it took an average of 210 days for IT personnel to identify a network breach. At the high end of the spectrum, five percent of respondents said malware infections lasted for upwards of three years before being detected.

As much as cybersecurity experts work to create effective network-level defenses, industrious hackers are finding ways around them. Businesses and individuals alike can no longer rely on these measures alone to protect their systems. A holistic approach to cybersecurity, including the use of application control, is much more effective. Users can prevent unknown or unwanted programs from running on their machines and stop malware, including zero-day viruses, from infecting their systems.
Christie’s made the headlines in 2018 when it became the first auction house to sell a painting created by AI. The painting, named Portrait of Edmond de Belamy, ended up selling for a cool $432,500, but more importantly, it demonstrated how intelligent machines are now perfectly capable of creating artwork.

It was only a matter of time, I suppose. Thanks to AI, machines have been able to learn more and more human functions, including the ability to “see” (think facial recognition technology), speak and write (chatbots being a prime example). Learning to create is a logical step on from mastering the basic human abilities. But will intelligent machines really rival humans’ remarkable capacity for creativity and design? To answer that question, here are my top three predictions for the role of AI in art and design.

1. Machines will be used to enhance human creativity (enhance being the key word)

Until we can fully understand the brain’s creative thought processes, it’s unlikely machines will learn to replicate them. As yet, there’s still much we don’t understand about human creativity: those inspired ideas that pop into our brain seemingly out of nowhere, the “eureka!” moments of clarity that stop us in our tracks. Much of that thought process remains a mystery, which makes it difficult to replicate the same creative spark in machines. Typically, then, machines have to be “told” what to create before they can produce the desired end result. The AI painting that sold at auction? It was created by an algorithm that had been trained on 15,000 pre-20th century portraits, and was programmed to compare its own work with those paintings.

The takeaway from this is that AI will largely be used to enhance human creativity, not replicate or replace it – a process known as “co-creativity.” As an example of AI improving the creative process, IBM’s Watson AI platform was used to create the first-ever AI-generated movie trailer, for the horror film Morgan.
Watson analysed visuals, sound, and composition from hundreds of other horror movie trailers before selecting appropriate scenes from Morgan for human editors to compile into a trailer. This reduced a process that usually takes weeks down to one day.

2. AI could help to overcome the limits of human creativity

Humans may excel at making sophisticated decisions and pulling ideas seemingly out of thin air, but human creativity does have its limitations. Most notably, we’re not great at producing a vast number of possible options and ideas to choose from. In fact, as a species, we tend to get overwhelmed and less decisive the more options we’re faced with! This is a problem for creativity because, as American chemist Linus Pauling – the only person to have won two unshared Nobel Prizes – put it, “You can’t have good ideas unless you have lots of ideas.”

This is where AI can be of huge benefit. Intelligent machines have no problem coming up with infinite possible solutions and permutations, and then narrowing the field down to the most suitable options – the ones that best fit the human creative’s “vision”. In this way, machines could help us come up with new creative solutions that we couldn’t possibly have come up with on our own. For example, award-winning choreographer Wayne McGregor has collaborated with Google Arts & Culture Lab to come up with new, AI-driven choreography. An AI algorithm was trained on thousands of hours of McGregor’s videos, spanning 25 years of his career – and as a result, the programme came up with 400,000 McGregor-like sequences. In McGregor’s words, the tool “gives you all of these new possibilities you couldn’t have imagined.”

3. Generative design is one area to watch

Much like in the creative arts, the world of design will likely shift towards greater collaboration between humans and AI. This brings us to generative design – a cutting-edge field that uses intelligent software to enhance the work of human designers and engineers.
Very simply, the human designer inputs their design goals, specifications, and other requirements, and the software takes over to explore all possible designs that meet those criteria. Generative design could be utterly transformative for many industries, including architecture, construction, engineering, manufacturing, and consumer product design. In one exciting example of generative design, renowned designer Philippe Starck collaborated with software company Autodesk to create a new chair design. Starck and his team set out the overarching vision for the chair and fed the AI system questions like, “Do you know how we can rest our bodies using the least amount of material?” From there, the software came up with multiple suitable designs to choose from. The final design – an award-winning chair named “AI” – debuted at Milan Design Week in 2019.
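To make the goals-in, candidates-out loop concrete, here is a toy sketch in Python (not Autodesk's actual software): the "designer" supplies a strength goal, and a random search proposes thousands of hypothetical chair parameters, keeps the feasible ones, and ranks them by material use. The parameter names and the strength/material formulas are invented purely for illustration.

```python
import random

def generate_designs(n_candidates, min_strength, rng):
    """Toy generative-design loop: propose many candidate designs at
    random, keep those meeting the strength goal, rank by material use.
    The parameters and the strength/material formulas are invented."""
    feasible = []
    for _ in range(n_candidates):
        thickness = rng.uniform(1.0, 10.0)    # hypothetical shell thickness, mm
        ribs = rng.randint(0, 8)              # hypothetical stiffening ribs
        material = thickness * 10 + ribs * 3  # toy material-cost model
        strength = thickness * 4 + ribs * 6   # toy strength model
        if strength >= min_strength:
            feasible.append({"thickness": thickness, "ribs": ribs,
                             "material": material, "strength": strength})
    return sorted(feasible, key=lambda d: d["material"])  # least material first

rng = random.Random(42)  # seeded so the run is repeatable
designs = generate_designs(5000, min_strength=30.0, rng=rng)
print(len(designs), "feasible designs; best uses",
      round(designs[0]["material"], 1), "material units")
```

Real generative design engines use far more sophisticated solvers (physics simulation, topology optimization), but the shape of the loop is the same: many machine-generated candidates, human-specified goals.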
The future of work is up for debate between tech giants and CEOs alike. And when it comes to artificial intelligence, the fog has yet to lift. Is it due to the uncertainty around AI and its ever-growing presence in the tech industry? Or, is it due to the looming changes that the use of AI could create? Whatever the case, no one can seem to agree. Even tech giants Mark Zuckerberg and Elon Musk cannot seem to agree, creating two of the five schools of thought that exist on AI.

There are many questions revolving around the AI world adding to the opposition—and confusion. Will it take our jobs? Will it create new jobs? Will we work together? Will AI get smarter than humans? Only time will tell. For now, according to Harvard Business Review, there are five schools of thought on AI. We need to look at the opinions centered around AI to understand where you fit in—and how you’ll prepare and react to it in your workplace.

The Utopian Thought

This school of thought is all about seeing the positive effects of AI on the economy. Utopians believe that AI will bring forth a new era of extreme wealth and growth without any economic decline. HBR explains it this way, “AI and computing power will advance in the next two decades to achieve ‘the singularity’ — when machines will be able to emulate the workings of the human brain in its entirety.” Brains will be downloaded and replicated. These replicated brains will do the cognitive work while robots will do the physical work. Utopians believe that this switch in cognitive and physical skills will create a growth in economic output, doubling every three months. The main belief in this school of thought is this: with AI and robots doing all the work, humans will be able to apply their skills and talents to meaningful actions, a way to officially “do what you want” with what you have.

The Dystopian Thought

On the other hand, dystopians focus on what could be the negative effects of AI and robotics on the market and the world.
HBR calls it a “Darwinian struggle” that machines will dominate. These AI systems that are put in charge will dominate the heart of middle and high skill jobs. The positions that are left in the low-skill range will be given to robots. The result of these changes will be high unemployment rates, critically low wages and economic illness. Human productivity will go down, incomes will decrease, demand for goods and services will decrease. Our economy could go into a tailspin. Elon Musk thinks that this could be a possibility and believes that Universal Basic Income will be necessary.

The Tech-Optimist Thought

Although some companies and tech-enthusiasts believe that we are still years away from perfecting the recipe of AI, tech-optimists are focused on the optimism of the technology advancements AI may bring to the table. While companies are still learning how this type of intelligent technology can make a difference in their business, this school of thought holds that eventually, businesses will grasp the technology and take advantage of it. When they grasp the concept, a “leap in productivity” will produce gold in the industry, creating growth in the economy as well as a higher standard of living complete with “consumer surplus and the value of free apps and information”. This school of thought also states that with all these changes, jobs may be lost, and lost income will need to be combated. Investments in education and training as well as technology will be required to make this work.

The Realist Thought

Although it is always best to remain optimistic, and those in this thought are, it is also crucial to be a realist. This school of thought focuses on the realism behind AI and the changes it may make in the business world. They believe that, much like previous tech waves, the wave of AI and intelligent machines can create the productivity it promises. Companies that can implement the requirements needed for this technology will have a rapid productivity increase.
Although new jobs could be created, the tech may worsen a pattern seen in past waves: middle-skill jobs decrease while low- and high-skill jobs increase. The realists believe that these questions cannot be answered yet, due to a lack of complete research. This research will be needed to make a smart decision towards AI and machine intelligence.

Lack of Productivity Thought

It seems that what most of the thoughts can agree on is the increase of productivity. However, this thought believes that there will be a lack of productivity compared to what is expected. HBR states, “Despite the power of intelligent technologies, any gains in national productivity levels will be low. Combine that with headwinds from aging populations, income inequality, and the costs of dealing with climate change, and the United States will have near-zero GDP growth.” Those who are in this school of thought believe there isn’t anything to fuss over, just to wait and brace for stagnant growth.

Where Do You Stand?

Although these five schools of thought differ, one thing is for certain: business owners must prepare now for the future, regardless of the future of artificial intelligence. I personally go back and forth on what school of thought I’m in. Each one makes sense to me and seems like it could be a possibility. At the end of the day though, I know that regardless of AI predictions, I need to be prepared for the future.

This article was first published on Forbes. Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
If you use a telephone, there’s a high probability you’ve been spoofed – that is, received a spoof call from a phone number that looks similar to your area code and exchange (the first six digits of your phone number) or a caller ID displaying the name of a nearby town or local business. However, the person on the other end is neither a neighbor nor a legitimate business caller. The caller is typically a spammer trying to lure you into a scam.

In fact, according to a recent First Orion Scam Call Trends and Projections report, 83% of all scam calls in 2019 featured a fraudulent phone number ID, such as an area code that matches that of the call recipient’s area code (“neighbor spoofing”) or a familiar business name or number (“enterprise spoofing”). The practice of enterprise spoofing, in particular, has been fueled by the recent spate of large data breaches, giving scammers access to the personal information of millions of prospective victims. The scammer can target segments of the population associated with a business, such as a bank or large retailer, change the number in the caller ID to one associated with that business, and, if the call is answered, further legitimize themselves with knowledge of the victim’s home address, email address, etc. The result: in the first eight months of 2019, phone scammers were able to defraud the American public out of $285 million.

How Did Call Spoofing (Spoof Calls) Become So Prevalent, and What Can Be Done To Stop It?

For those wondering why call spoofing became so prevalent and what can be done to stop it, there are a number of factors that come into play. Robocalling, the practice of using a computerized auto-dialer to generate phone calls with a pre-recorded message, has been part of the telephony landscape since the early 1990s. It was traditionally used for political messaging and telemarketing phone campaigns.
However, robocalling crossed the line from an accepted practice to a disruptive nuisance when illegitimate players seized on the technology to begin delivering mass numbers of unwelcome calls. While the FCC stepped in with regulations limiting telemarketing calls to cellular phones, land-line protections were anemic at best, including fines for difficult-to-trace fraudsters and offering consumers the Do Not Call registry designed to limit the reach of telemarketers. While well-intended, these measures did little to stop robocall offenders already committed to malicious behavior.

As the volume of spam and robocalls has grown, so, too, has the practice of ignoring calls displaying unfamiliar or anonymous caller IDs. Thwarted by the decreased response, spammers began looking for other ways to connect with their would-be victims. Their new path was paved when VoIP (Voice over Internet Protocol) entered the telephony scene. VoIP allows phone calls to be transmitted through Internet-connected IP devices, completing calls through the Internet rather than through the traditional voice telephone network. Developed in 1995 as a way to reduce the cost of long-distance and international calls, VoIP capabilities were soon embraced by carriers and businesses anxious to capitalize on the higher quality, greater flexibility, and lower cost of voice over Internet.

As is often the case, this positive advance in technology came with a dark side. No longer bound by toll costs to make calls, spammers and scammers quickly discovered how easy and cheap it is to utilize VoIP and an auto-dialer to deliver mass calls to a targeted list of numbers. As Alex Quilici, CEO of YouMail, stated for Consumer Reports, “It’s become very easy and cheap to make an enormous number of calls, to the point where you don’t even need technical expertise.
If I wanted to pick a borough in New York City and hit every person with a voicemail telling them to go visit some website, I can do it for a couple of thousand bucks.” More significantly, the caller ID of these calls can now be disguised.

While caller ID spoofing has actually been available for many years to law enforcement and other specialized services requiring personal contact phone number protection, it required complex, specialized, and expensive ISDN PRI circuit connectivity with the telephone company. With the advent of VoIP came the ability to easily and cheaply create a personal caller ID embedded in the call data through readily available open source software. While a legitimate and useful tool for individuals like remote workers wishing to maintain a workplace caller ID, it is also easily exploited by those with nefarious intent.

Criminalizing Call Spoofing (The Truth In Caller ID Act)

As the practice of illegitimate call spoofing increased, Congress moved to rein it in, passing the Truth in Caller ID Act of 2009. While originally written to criminalize the act of causing “any caller identification service to transmit misleading or inaccurate caller identification information,” the final bill used more qualified language, adding that call spoofing would be deemed illegal if done “with the intent to defraud, cause harm, or wrongfully obtain anything of value.”

The Act has done little to slow the exponential growth of scam calls, as evidenced by a recent FCC report that reveals more than 60% of complaints from consumers are now related to suspected phone fraud. While consumers are encouraged to report suspected scams, efforts to trace the real source of a spoofed call are, at best, complex and usually futile, as the scammer has likely moved on to another spoofed number by the time a report is filed.
New Legislation To Stop Spoof Calls (TRACED Act And STIR/SHAKEN)

The TRACED Act, passed in December of 2019, is new legislation specifically designed to address the problem of call spoofing. It requires carriers to implement specific measures for caller ID authentication (the STIR/SHAKEN protocol) that are passed through from the source provider to the recipient provider in the form of a digital certification. Numbers that fail the authentication process are identified as potential spoof calls.

While a step in the right direction, the legislation will take time – some even say 10 years – to fully implement, as smaller carriers may not be equipped to carry out the authentication process, and more sophisticated scammers can simply move their operations overseas onto unregulated networks. Moreover, the system is only designed to flag suspected spam in the caller ID. The call still rings through, and how it is handled is left to the call recipient. The weakness with this approach is that once the phone rings, the damage caused by digital distraction is done. Clearly, until the STIR/SHAKEN protocol actually results in a reduction of voice spam, it will do little to help businesses eliminate the growing threat and distraction of unwanted calls. Companies seeking relief now may need to act on their own.

Call Spoofing Prevention Solutions For Enterprise Voice Networks

Several telephony solutions developers, like Nomorobo, have stepped up with innovative filtering systems that keep a dynamic database of known spammers and, when integrated with a voice network, block those incoming calls. However, these systems cannot detect the true origin of a call that is using a spoofed ID. One developer, Mutare, Inc., stands out for having confronted head-on the unique challenge spoofed calls create for business.
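As a brief aside on how STIR/SHAKEN results can be acted on: the framework attaches an attestation level to each verified call, "A" (full attestation), "B" (partial) or "C" (gateway). Since, as noted above, the call still rings through and handling is left to the recipient, a recipient system could key its handling off that level. The policy below is purely hypothetical; the attestation letters come from the SHAKEN framework, but the routing choices are invented for illustration.

```python
# Illustrative handling policy keyed on the STIR/SHAKEN attestation level:
# "A"  = full attestation (originating carrier vouches for caller AND number)
# "B"  = partial attestation (caller known, number ownership unverified)
# "C"  = gateway attestation (call merely entered via this carrier's gateway)
# None = no Identity header present, or verification failed
HANDLING = {
    "A": "ring",
    "B": "ring",
    "C": "challenge",   # e.g. divert to a CAPTCHA-style screening step
    None: "voicemail",
}

def route_call(attestation_level):
    """Pick a handling action; unknown values get the safest treatment."""
    return HANDLING.get(attestation_level, "voicemail")

print(route_call("A"))   # ring
print(route_call("C"))   # challenge
print(route_call(None))  # voicemail
```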
With its multiple layers of protection approach, Mutare’s Voice Traffic Filter integrates several advanced methods to identify spam and robocalls before they enter the enterprise voice network, giving administrators a set of tools that allows them flexible control over how those calls are blocked, passed, or routed before they ring an end device. Integrated into that system is Mutare’s own, unique “spoof radar” detection technology, built on a platform that combines advanced call pattern recognition, heuristics and machine learning to spot robocalls or spammers with suspect caller IDs. If the system detects an abnormal pattern of incoming calls, such as an unusual number of calls from the same source or a sudden increase in call velocity, the voice traffic filter triggers specific actions defined by the system. This includes the option to send those calls through the Voice Traffic Filter’s voice CAPTCHA system which employs a reverse Turing test to challenge the caller to enter randomized digits. Callers that pass the test are let through; those that fail are dropped. Humans pass the test easily, while robocall bots do not. “We have been building advanced applications for business voice networks for more than 30 years. We have at our disposal not only a deep knowledge of enterprise voice communication systems but also a vast source of data that allows us to anticipate where to direct our efforts so we stay ahead of the voice spam bad actors,” says Rich Quattrocchi, Vice President of Digital Transformation for Mutare. While STIR/SHAKEN is still in its early formulation, Mutare recognized early on the value of caller attestation scoring to further refine its Voice Traffic Filter spoof detection capabilities and has created an additional layer of filtering protection gleaned from that data. 
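A voice CAPTCHA of the kind just described is conceptually simple: challenge the caller with randomized digits and check the keyed-in response. The sketch below is a generic illustration of that reverse Turing test, not Mutare's implementation; the seed is fixed only so the example is repeatable.

```python
import random

def make_challenge(rng: random.Random, n_digits: int = 4) -> str:
    """Randomized digits the caller must key in via DTMF: trivial for a
    human, hard for a robocall bot that cannot understand the prompt."""
    return "".join(str(rng.randrange(10)) for _ in range(n_digits))

def caller_passes(challenge: str, dtmf_input: str) -> bool:
    return dtmf_input.strip() == challenge

rng = random.Random(7)  # seeded only so the example is repeatable
challenge = make_challenge(rng)
print("Please enter:", challenge)
print(caller_passes(challenge, challenge))  # a human keying the digits passes
print(caller_passes(challenge, ""))         # a silent bot fails
```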
“Our ability to parse out information from what is being passed along by the carriers and then apply it to our other analytic tools is going a long way in helping the filter identify a suspected spoof call right down at the individual call level and is enabling even smarter automated call filtering,” says Quattrocchi.

Mutare Advanced Spam Filter Capabilities

Mutare recently launched a service using its advanced spam filter capabilities to run a detailed voice traffic analysis for other companies looking to better understand how much voice spam is in their networks and its impact on their operations. Says Quattrocchi, “The more a business knows about the source, type and volume of spam entering their voice networks, the better prepared they will be to combat it.”

Call Spoofing Q&A

Q: What is phone number spoofing?
A: Caller ID spoofing is the process of changing the caller ID to any number other than the calling number. When a phone receives a call, the caller ID is transmitted between the first and second ring of the phone.

Q: What is neighbor spoofing?
A: Neighbor spoofing is when scammers use reliable-looking phone numbers to disguise their identities. The phone number might have a prefix with your area code or look like it belongs to a local business or even someone you know.

Q: What is enterprise spoofing?
A: Enterprise spoofing is when scammers change their caller ID to match an actual business’s phone number. For example, a scammer trying to get your banking account information or other sensitive financial information may call your cell phone and display your bank’s caller ID – for example, Citi Corp’s customer service number: 1 (888) 248-4226.

Q: Is caller ID spoofing legal?
A: Caller ID spoofing is generally legal in the United States unless done “with the intent to defraud, cause harm, or wrongfully obtain anything of value.” The relevant federal statute, the Truth in Caller ID Act of 2009, does make exceptions for certain law-enforcement purposes.
What is Multi-Factor Authentication (MFA)?

Multi-Factor Authentication, or MFA, is the requirement that users bring something tangible with them, in addition to knowing a password, when trying to log in. The security concept is called “Bring Something, Know Something.” Futuristic examples include retina scanning or even DNA blood sampling, but there are more practical ways to perform MFA. The physical device requirement could be as simple as a pre-authorized mobile phone that can receive a text. It could be a smartphone or smartwatch running a synchronized app. Multi-factor authentication solutions can be achieved by a proprietary keychain-size device that generates a unique token or a USB key that needs to be inserted into the computer a user wishes to log in to. Insistence on requiring such a physical device, in addition to entering a password, decreases the likelihood that a hacker could log in remotely equipped only with the correct username and password.

Why is MFA important?

MFA is an important security tool because logins and passwords are easily found on the dark web. Additionally, computing power has accelerated to the point where “brute force” techniques have become practical, enabling hackers to use a computer to programmatically guess passwords. In situations when MFA is required, simply knowing the password is almost useless without having access to the associated physical device. Similarly, MFA is not a substitute for complex passwords. A smart combination would be MFA plus passwords that are longer, more complex, harder to guess, and unique to each login platform. Any breach would be isolated and the damage could be mitigated.

MFA for Enterprises and Managed Cloud Services Providers (MSPs)

MFA is not unique to Amazon Web Services (AWS) or any of the other cloud vendors. Microsoft Azure, Google Cloud, other public clouds, and even on-premise data centers can all benefit from multi-factor authentication.
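As an illustration of how the synchronized-app approach mentioned above works, here is a sketch of the standard one-time-password construction (HOTP from RFC 4226, and its time-based variant TOTP from RFC 6238): server and device derive the same short code from a shared secret plus a moving counter, so a stolen password alone is useless. This is a generic illustration of the mechanism, not any particular vendor's product.

```python
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current 30-second
    interval, which is what a synchronized authenticator app displays."""
    return hotp(secret, int(time.time()) // step)

# RFC 4226 Appendix D test vectors for the shared secret "12345678901234567890"
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

A real deployment would provision the shared secret out of band (for example, via a QR code at enrollment) and accept a small window of adjacent time steps to tolerate clock drift.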
Cloud administrators must know their role and do their part in the Shared Responsibility Model: the cloud vendors are responsible for the security of the cloud and the customer is responsible for security in the cloud. That applies to passwords in general and MFA in particular, as Identity and Access Management (IAM) falls within the domain of the customer.

The value of MFA is clear. Enterprises should enable MFA for their end users and service providers should encourage their clients to do so as well. With the public cloud’s Shared Responsibility Model, it is incumbent upon each organization, and ultimately each individual, to do their part to secure their resources. A user’s identity is perhaps the most important—and weakest—link in the security chain. Multi-factor authentication can reinforce that link.
Security Bug Could Affect Nearly Every Android Device

A PDF version of this First Cut is available.

Author: Lou Latham

Issue: What are the trends impacting mobile computing?

Summary: A security research firm has found a flaw in the Android operating system that makes nearly a billion phones and tablets vulnerable to malware intrusion. The vulnerability has already been exploited. Google has created a fix, but many devices may never get it.

Event: In February 2013, security research firm Bluebox Security notified Google of a security vulnerability in the Android mobile operating system code. Google later released a fix to carriers and manufacturers. In July 2013, Symantec discovered multiple exploits of this flaw.

Bluebox discovered a coding error in Android that would allow a hacker to hide malicious code in an Android app without tripping the internal alarm that is supposed to detect such changes. Bluebox says the flaw (dubbed the “Master Key” vulnerability) makes it possible to alter an Android app without changing its cryptographic signature, which verifies that the app is legitimate. This tricks Android into reporting that an app has not been tampered with, even if it has been.

Bluebox notified Google in February 2013, and Google issued a corrective patch in March. Many phones with firmware installed after April 2013 will have the fix; the first one we know of is the Samsung Galaxy S4. However, nearly a billion older phones are still vulnerable until their firmware is updated.

On July 23, 2013, Symantec discovered the first known exploit of this flaw, in an Android app installer (.apk) file from a third-party app store. The payload was a Trojan called Android.Skullkey that takes control of the phone, sending text messages and stealing user data. Since then, additional exploits have been discovered. There is no evidence that they have done significant damage, but it is clear that the vulnerability has been successfully analyzed.
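The underlying pattern, two components disagreeing about an archive's contents, can be illustrated with a toy example. An APK is a ZIP archive, and the "Master Key" class of bugs arose because the signature verifier and the installer could end up reading different copies of a duplicated entry. The snippet below is not the Android exploit itself; it simply shows, using Python's zipfile module and an invented entry name, how one archive can carry two different payloads under a single name.

```python
import io
import zipfile

# Build an archive with TWO entries named "app.dex": a benign copy followed
# by a "malicious" one. (The entry name and payloads are invented.)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("app.dex", b"benign code")
    zf.writestr("app.dex", b"malicious code")   # duplicate names are legal ZIP

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    data = zf.read("app.dex")

print(names)   # both entries are physically present in the archive
print(data)    # this parser's name lookup returns the LAST entry's payload
```

A parser that verified the first entry while another component executed the last one would be fooled, which is the disagreement the Android fix eliminated.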
Because it doesn’t distribute Android updates directly to users (except for a small number of Nexus devices), Google can’t fix this alone. The fix is implemented in the device firmware, so each manufacturer has to build a solution for each model. Carriers also have to participate in delivering the update to customers. This means the availability of fixes for different devices will vary widely. Some devices may never be fixed. Bluebox has distributed a diagnostic tool that will tell you whether or not your device has the flaw and whether any apps on your device contain the code to exploit it. The diagnostic is in the Google Play store, called “Bluebox Security Scanner.” Google has also updated the security filter in the Play store itself, so the store will not download an app that exploits the flaw. The Amazon and GetJar stores are now similarly protected. At some point, Google will release one or more versions of Android with the fix built in, and we’ll publish the version numbers when we learn them. That doesn’t mean that any particular version will run on any particular phone, though. Android provisioning decisions are divided among Google, the device manufacturers and the wireless carriers, and governed by many factors. There’s no reason to panic about this. There are always security holes in operating systems. This isn’t the last one; there will never be a last one. However, we shouldn’t be complacent, either. Most phones run Android, so it’s a tempting target. While Symantec rates the damage potential of the current exploit as moderate, the next payload could be anything. Mobile devices affect our lives in many more ways than PCs do. An attack could blow out your phone bill with roaming charges or hidden calls; it could spoof your GPS and get you lost, or disable the phone in a variety of ways. If you BYOD, it could also get into your corporate network and cause more serious mischief. 
Users as well as enterprises benefit from robust enterprise mobile management in the workplace. Aragon has an extensive library of research on EMM (including, for paid subscribers, the recent Aragon Research Globe™ for Enterprise Mobile Management Software, 2013). - Enterprises evaluating mobile security products should ensure that all their candidate solutions include screening for this flaw. The same filter that protects the Google store can protect an enterprise app store. - Users should set their devices to reject third-party sideloads, which can occur without their knowledge. Download apps only from stores that are protected: currently these are Google, Amazon and GetJar. - Users should download and run the Bluebox Security Scanner from the Google Play store, and ensure that other scanning apps will scan for this flaw. - Users shopping for an Android device should know which models have the fix and which do not. Most devices built after mid-2013 should be safe, but not all available devices were recently built. Heavily discounted older phones may not be safe. - Enterprises with EMM capability should ensure that Android devices within their domains are set to reject app downloads from unauthorized sources, and push available patches to any device that will take them. They should also consider offering subsidies to replace devices too old to be patched. Mobile device security remains a critical area to monitor. Carriers need to be more vigilant with upgrades to devices, when it comes to major security flaws. Enterprises need to educate their users about Android settings and how they affect security, and make use of EMM software, which can improve security through advanced device management.
Network traffic analysis for IR: Statistical analysis

Introduction to statistical analysis

Statistical analysis is one of the three main categories of analysis that can be performed on network traffic data. It provides a much more detailed analysis than simple connection analysis and takes a different approach to identifying potential indicators of compromise than event-based analysis.

Statistical analysis is typically geared toward performing anomaly detection. Based on the wealth of information available to the analysis algorithm, it can make educated guesses about what should be considered “normal” versus what is “abnormal” or “anomalous.” Any deviations from the norm may be an indicator that something is going on, making statistical analysis ideally suited to helping an incident responder determine where their investigative efforts can be focused to maximize their probability of success.

Performing statistical analysis

In order to successfully and rapidly respond to a potential incident, cyberanalysts first need to know where to look for potential indicators of attack. Data science is extremely good at identifying patterns and correlations from large amounts of data. Statistical analysis uses the tools and techniques of data science. Data science is a very large field, and most incident responders don’t have the background to be a data scientist. However, even simple statistical analysis techniques can be extremely useful for incident response. Techniques like clustering and stack analysis can be easily performed by anyone and can be extremely helpful in drawing attention to data that may warrant further investigation.

Clustering is an application of unsupervised machine learning where the developer does not provide any input to the algorithm to point it toward a certain solution.
Instead, the developer provides the desired number of clusters that they believe should exist in the dataset and the algorithm generates what it thinks is the best allocation of data points to clusters. Several different clustering algorithms exist, but one of the most common ones is K-means. K-means works by randomly assigning initial cluster centers and then updating them by reassigning data points to the cluster that they best fit into, recalculating the cluster centers as the center of the points assigned to them, and repeating the process for a set number of iterations or until the clusters stabilize. Clustering is useful for incident response since it can help an analyst discover unknown relationships within a dataset. The initial random state of the cluster centers means that different runs of the algorithm can produce different results. Performing multiple runs of the algorithm (potentially with different numbers of clusters) may draw attention to data points that are anomalous and worth further investigation. An example of useful intelligence generated by traffic clustering is the image above created by TrendMicro. The researchers who generated this image took advantage of the similarity of traffic from Gh0stRAT variants to build a clustering tool that could detect them based on their C2 traffic. As shown, many of the Gh0stRAT variants’ traffic formed large clusters, making it easier to identify and investigate potential infections. However, false positives also exist inside the clusters of Gh0stRAT variants and false negatives are located outside of them. Clustering is useful for identifying data that may require more investigation, but it can’t be trusted to correctly classify something as a threat or not to miss something. Stack counting is a simple means of performing anomaly detection on one feature of a dataset. In stack counting, an analyst puts data points into bins based on their value for a certain feature. 
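Before going further into stack counting, the clustering idea can be made concrete with a minimal one-dimensional K-means over hypothetical bytes-per-connection values. For repeatability this sketch initializes centers deterministically, whereas real K-means (as noted above) seeds randomly, so repeated runs can produce different clusterings.

```python
def kmeans_1d(points, k, iters=20):
    """Minimal K-means over scalar features (e.g. bytes per connection).
    Initial centers are the first k distinct values so the demo is
    repeatable; production implementations seed randomly."""
    centers = sorted(set(points))[:k]
    clusters = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical bytes-per-connection: ordinary web requests plus a group of
# suspiciously uniform, beacon-sized transfers
traffic = [512, 480, 530, 495, 20_100, 19_900, 20_050, 505]
centers, clusters = kmeans_1d(traffic, k=2)
print([round(c) for c in centers])   # -> [504, 20017]
```

The tight cluster of ~20 KB connections stands apart from ordinary traffic, which is exactly the kind of grouping an analyst would flag for a closer look.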
These bins are then sorted based on the number of data points that they contain. Under most circumstances, benign events are common and malicious events are uncommon. Therefore, for a given feature, anything that falls into a bin with a low number of data points inside it may be worth further investigation.

The image above, generated by Sqrrl, shows the result of stack counting a collection of web traffic going to a web server, based on its destination port. As shown, the vast majority of traffic has a destination port of 80, 443 or 25 (HTTP, HTTPS and SMTP). However, four different ports each have one hit apiece. Based upon this analysis, the traffic to those four ports should receive further analysis.

An important step when performing stack counting is ensuring that identification of data points with uncommon values for the selected feature actually are worth further investigation. In the example, destination ports of traffic going to a server were used, which makes sense since the server should primarily be running applications that communicate on set ports. The use of source ports on a server or destination ports on a client machine, on the other hand, would produce meaningless results for stack counting. Since clients use a random high number port when initiating a connection to a server, the fact that only one piece of traffic uses a particular port is meaningless. When performing stack counting, it’s important to choose a feature where benign samples are expected to have values that are clustered into one or a few bins.

Conclusion: Statistical analysis for incident response

The clustering and stack counting examples in the previous section represent only a few of the simple algorithms that can be applied to incident response. Data science is extremely good at extracting patterns from and identifying anomalies in massive quantities of data, which is a common problem when starting an incident response investigation.
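To ground the stack counting technique discussed above, here is a minimal implementation: it bins hypothetical flows to a web server by destination port and surfaces the rarest bins first. The flow counts are invented for illustration.

```python
from collections import Counter

def stack_count(events, feature):
    """Bin events by one feature and sort the bins rarest-first. For a
    feature that *should* cluster (like destination port on a server),
    the rare bins are the ones worth an analyst's attention."""
    counts = Counter(e[feature] for e in events)
    return sorted(counts.items(), key=lambda kv: kv[1])

# Hypothetical flows arriving at a web server (counts are invented)
flows = ([{"dst_port": 443}] * 950 + [{"dst_port": 80}] * 400 +
         [{"dst_port": 25}] * 120 + [{"dst_port": 4444}] +
         [{"dst_port": 31337}])
for port, hits in stack_count(flows, "dst_port")[:3]:
    print(port, hits)   # the single-hit ports surface first
```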
When developing an incident response methodology, and when planning threat-hunting exercises, it's important to have processes and tools in place that allow the team to operate efficiently. Monitoring solutions are designed to deliver massive amounts of data to an analyst, but that analyst must also be able to sift through the data to separate indicators of compromise from random noise. Implementing statistical analysis solutions can help filter the data so that the most important features reach the analyst's attention first.

- Introduction to K-means Clustering, Oracle Data Science Blog
- Machine Learning to Cluster Malicious Network Flows From Gh0st RAT Variants, Trend Micro
- Four Common Threat Hunting Techniques with Sample Hunts, LinkedIn
As anyone who has ever jumped into a body of water during a summer heat wave will tell you, submerging yourself in cool (or even warm) water is a great way to cool down. This common-sense insight has not been lost on mechanical and thermal engineers charged with cooling servers, storage and network communications devices. In fact, all the way back in 1899, the first patent concerning the use of oil as a coolant and insulator for transformers was granted. Given that the electricity used by computers has a 1:1 ratio with heat, it’s not surprising that over 50 years later, as the computer industry was in its infancy, IBM patented a method for immersing computer components in dielectric fluid to cool them. Since those days, approaches to immersion cooling have continued to evolve. In this post, we’ll provide a brief overview of immersion cooling, the different methods in practice and the relative strengths and weaknesses of these methods. What Is Immersion Cooling? To date, the most common method for cooling IT hardware (from laptops to the data center) has been air. But the fans, ductwork and HVAC systems required to cool data centers with air consume a lot of space and electricity. Using fluid to cool IT hardware, such as one might find in cutting-edge gaming rigs or high performance computing clusters, requires pipes, pumps and a significant amount of space. Plus fans are still needed to deal with the heat that the ‘cold plates’ and ‘heat pipes’ can’t address. Because immersion cooling is more energy- and space-efficient, interest in the technology has steadily grown over the years, and the technology itself has evolved considerably. But what is immersion cooling? As the phrase clearly implies, immersion cooling involves immersing system components, such as a motherboard or an entire computer system, in a fluid which, ideally, has a high coefficient of heat rejection and low thermal resistance. 
The process requires the use of fluids that will not damage the IT components or degrade system function. Further, for safety reasons, these fluids need to be “dielectric,” meaning they do not conduct electricity. The dielectric fluids used for immersion cooling today fall into two categories: oils (synthetic, mineral, bio) and engineered fluids, such as 3M’s Novec or Fluorinert lines. Single-Phase Immersion Cooling vs. Two-Phase Immersion Cooling There are two basic approaches to immersion cooling. The first approach is referred to as single-phase immersion cooling. In this approach, the components are immersed in a dielectric fluid, typically an oil or engineered fluid of some kind. The heat generated by the IT components is absorbed by the fluid and then the fluid is pumped and circulated around an enclosure, chassis or tank to help remove the heat. This entails pumping out the hot oil/fluid to be cooled by a secondary air-to-liquid or liquid-to-liquid heat exchanger, and pumping cooled oil/fluid back into the immersion bath. A second approach takes advantage of the heat rejected through phase change. As with single-phase, the IT components are immersed in a dielectric fluid, but the fluid used in this case is engineered to have a boiling point which is below the temperature of heat-emitting IT components like CPUs, GPUs, ASICs, power supplies, DC/DC converters and more. In essence, when the IT gear is operating, the heat is removed by a liquid-to-gas phase change. The interesting thing about phase change is that, once the boiling point is reached, the dielectric fluid itself doesn’t get any hotter. Rather than continually raising the temperature of the dielectric fluid, the heat is rejected when the vapor gas comes into contact with a specially designed vapor-to-liquid heat exchanger that is inside of a specially designed DataTank™. Since the heat exchange happens inside the DataTank, there is no need for a secondary heat exchanger and pumping system. 
This eliminates a potential point of failure and drastically lowers the complexity and cost of the immersion cooling system. The interesting thing about 2-phase immersion cooling is that it consumes little to no energy in and of itself. The rising gas caused by the phase change inside the DataTank condenses on the tubes (heat exchangers) inside the top of the DataTank, then changes back to a liquid in the form of small droplets, which fall back into the liquid ‘bath’, as the whole process begins anew. Immersion Cooling Implementation Methods There are also two basic methods for implementing immersion cooling. The first method employs an entirely enclosed IT chassis. The customized chassis provides containment of the dielectric fluid and oftentimes less fluid is needed as a result. Part of the appeal of this method is that it allows self-contained, immersion-cooled chassis to be installed in conventional server racks. A Coolant Distribution Unit (CDU) can likewise be used to manage the coolant across multiple chassis. A major challenge of this approach is that the immersive chassis must be replaced with every IT refresh, an average of 10 complete replacements over the usable life of the IT rack. The second method employs an entirely enclosed IT tank, or DataTank. The tanks provide containment of the dielectric fluid and are typically designed to accommodate IT gear that would otherwise be mounted in 19”, 21” or OCP style racks for example. Since the DataTanks accommodate almost all types of IT gear, an IT refresh is simply out with the old and in with the new — meaning there is no need to replace the DataTank for an IT refresh. This makes the flexibility, cost and TCO highly appealing for DataTanks versus the chassis approach, whereas the chassis approach enjoys the benefit of some dielectric fluid volume savings. 
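To put the phase-change mechanism in rough numbers, a back-of-the-envelope sketch follows. The latent-heat figure is an assumed, order-of-magnitude value for an engineered dielectric fluid, not a datasheet number for any particular product:

```python
# Heat rejected by boiling = mass boiled x latent heat of vaporization.
# Since kW = kJ/s, the vapor mass flow needed to carry away the IT heat
# load is simply load / latent heat.
heat_load_kw = 100.0             # IT power draw, ~1:1 with heat output
latent_heat_kj_per_kg = 100.0    # assumed latent heat of the fluid
vapor_kg_per_s = heat_load_kw / latent_heat_kj_per_kg
print(vapor_kg_per_s)            # kg of fluid boiled off each second
```

The condenser inside the tank returns that same mass flow to the bath as droplets, which is why no external pumping loop is needed.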
Immersion Cooling and the Sustainable Data Center

As you can see from this brief description, there are several major differences between single-phase and two-phase immersion. First, single-phase relies on pumping an oil or dielectric fluid to a secondary heat exchanger and pumping system (Coolant Distribution Unit or "CDU"), which then rejects the heat to the building's primary water heat rejection loop. Some one-phase immersion systems place heat exchangers inside a specially engineered IT chassis, which reduces the need for a CDU. In either case, the oil or dielectric fluid needs to be pumped across substantially large IT heat sinks on the server boards, because the process does not benefit from the higher heat rejection capacity of phase change. Using mineral oil or synthetic petrochemicals can involve a lot of messy cleanup if you ever need to swap out IT gear, and is therefore a non-starter for some potential users. Further, most of these fluids have a flash point, meaning they are flammable, which represents a potential hazard and risk in data center operations. Finally, because single-phase immersion relies on additional pumps to circulate and cool the oil, the energy efficiency gains made by shifting from air to immersion cooling are reduced. That is not to say that one-phase immersion cooling isn't beneficial; it certainly provides dramatic energy efficiency and IT heat load densities versus air cooling.

The benefits of one-phase immersion cooling include:

- Better energy efficiency than air cooling
- About 10X heat rejection capacity vs. air cooling
- Mineral oil is less expensive than 2-phase engineered dielectric fluids
- Oils generally do not evaporate (however, 'oil blooms' are generally experienced within a 1-2 meter radius of most single-phase immersive tanks/enclosures)
- Lower CAPEX than air cooling in some cases
- Less space required versus air cooling
- Better TCO than air cooling in some cases
- Lowers or eliminates the use of water for outside heat rejection
- Quiet operation

The benefits of two-phase immersion cooling include:

- Best known efficiency in any form of cooling
- 2X (or greater) heat rejection capacity vs. one-phase
- Half the space requirement versus one-phase (no bulky heat sinks or CDUs)
- Lower CAPEX than air cooling (per kW)
- Lower TCO than air cooling (per kW)
- Waste heat can be re-used for hot water, district heating or energy generation
- Dielectric fluids are clean and make servicing or replacement of IT gear simple
- Faster builds than air-cooled data centers
- Lowers or eliminates the use of water for outside heat rejection
- Silent operation

Because immersion cooling, particularly two-phase immersion cooling, is so energy efficient, many see it playing a central role in the evolution of the sustainable data center. If you would like to learn more about that, we invite you to read our white paper, Liquid Cooling: The Key to Data Center Sustainability. If you would like more information on LiquidStack's two-phase immersion cooling systems, please get in touch.
Network News Transfer Protocol (NNTP) is the set of rules used by both clients and servers to manage the articles posted to Usenet newsgroups. NNTP was introduced as a replacement for the original UUCP-based distribution mechanism, and today NNTP servers carry the collected international network of Usenet newsgroups. Such servers are typically operated by your ISP (internet service provider), while an NNTP client may be built into a browser suite, as it was in Netscape and Opera, or installed as a separate newsreader program. NNTP is well suited to transferring Usenet news articles either between servers or between a news server and a newsreader client. It is a simple, text-based protocol, broadly similar in style to POP3 and SMTP. NNTP was first documented in RFC 977, issued in 1986; the current specification, RFC 3977, was published in October 2006 as the result of the IETF NNTP working group's efforts and obsoletes RFC 977. RFC 3977 also established a capability-labels registry to support future extensions of the protocol; to date, RFC 3977 itself defines the only extensions. To register a new capability, an extension must be published as a standards-track or experimental RFC, while labels beginning with X are reserved for private use.

The commands recognized by an NNTP server include: ARTICLE, BODY, GROUP, HEAD, HELP, LAST, LIST, NEWNEWS, NEXT, POST, QUIT, SLAVE and STAT. NNTP reply codes are grouped by their first digit:

- 1yz: Informative message.
- 2yz: Command ok.
- 3yz: Command ok so far; send the rest of it.
- 4yz: Command was recognized, but could not be performed for some reason.
- 5yz: Command not implemented, incorrect, or a serious program error occurred.
- x9z: Debugging output.

Specific reply codes include:

- 100: Help text follows.
- 199: Debug output.
- 200: Server ready, posting allowed.
- 201: Server ready, no posting allowed.
- 202: Slave status noted.
- 240: Article posted ok.
- 400: Service discontinued.

In short, NNTP specifies a protocol for the distribution, inquiry, retrieval and posting of news articles over a reliable stream (such as TCP) using a client-server model. NNTP is deliberately designed so that news articles need only be stored on one (presumably central) host, while participants on other hosts connected to the local area network (LAN) can read those articles over stream connections to the news host.
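As a small illustration of the reply-code structure above, here is a hypothetical helper that classifies an NNTP reply line by its first digit. The function and example lines are invented for illustration, not part of any standard library:

```python
# Broad meaning of an NNTP reply, keyed by the code's first digit.
FIRST_DIGIT = {
    "1": "informative message",
    "2": "command ok",
    "3": "command ok so far, send the rest of it",
    "4": "command recognized but could not be performed",
    "5": "command not implemented or in error",
}

def classify_reply(line):
    """Split a reply line like '200 server ready' into its numeric code
    and the broad class that its first digit indicates."""
    code, _, _text = line.partition(" ")
    return int(code), FIRST_DIGIT[code[0]]

print(classify_reply("200 news.example.com ready - posting allowed"))
print(classify_reply("400 service discontinued"))
```

A newsreader follows exactly this logic after each command it sends: the three-digit code decides the next step, and the trailing text is only for humans.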
Real-time voice and video communication over the internet is enabled by the Session Initiation Protocol (SIP), a standard developed by the Internet Engineering Task Force (IETF) and published as RFC 3261. SIP is a signaling protocol for establishing voice and video calls: within an IP network, one or more participants can use it to create, modify, or terminate sessions. As one of the core Voice over IP protocols, SIP is central to understanding how VoIP technology works. Before going further, it helps to understand the term "session" in a communicating network: it can be as simple as a two-way phone call, or, in the case of a multimedia conference session, an exchange among many participants. The IETF's SIP working group is currently responsible for improving and maintaining the standard, keeping to its original design goals: a text-based protocol for initiating interactive communication sessions, built on existing internet protocols and kept architecturally simple. SIP is a peer-to-peer protocol that requires only a simple but scalable core network (one able to handle a growing workload in a capable manner), with intelligence distributed to the edge of the network and embedded in the endpoints (the terminating hardware or software devices). SIP is designed to support a range of services: internet conferencing, instant messaging, IP telephony, presence, video communication, data collaboration, online gaming, application sharing and more.

SIP plays the same role for real-time unicast and multicast communication that HTTP plays for the web. In addition, SIP forking refers to splitting a single SIP call across several SIP endpoints. With this powerful feature, one incoming call can ring many endpoints simultaneously, for example your desk phone and the SIP client on your Android phone, with no forwarding rules required on either device. A typical office example: a SIP-capable system can let a secretary answer all calls to the boss's extension whenever the boss is out of the office. Such SIP telephone systems offer security, considerable cost savings, and improved user mobility and efficiency. SIP was originally developed within the IETF's MMUSIC (Multiparty Multimedia Session Control) working group, and a number of standards bodies and industry groups make considerable use of it, including the IETF PINT working group, the IMTC, ETSI TIPHON and PacketCable DCS. As an application-layer protocol, SIP is designed to be independent of the transport layer: it can run over TCP, UDP or SCTP, and it incorporates several features of HTTP and SMTP.
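To make SIP's HTTP-like, text-based format concrete, here is a sketch that parses the start line and headers of a short, hypothetical INVITE. The addresses and tags are invented, and the message is abbreviated rather than a complete RFC 3261 example:

```python
# A hypothetical, abbreviated SIP INVITE.  Real messages carry more
# mandatory headers (Max-Forwards, Call-ID, Contact, ...) and a body.
raw = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP alicepc.example.com;branch=z9hG4bKnashds8\r\n"
    "From: Alice <sip:alice@example.com>;tag=1928301774\r\n"
    "To: Bob <sip:bob@example.com>\r\n"
    "CSeq: 314159 INVITE\r\n"
    "\r\n"
)

def parse_request(message):
    """Split a SIP request into method, request-URI, version and headers."""
    head, _, body = message.partition("\r\n\r\n")
    start_line, *header_lines = head.split("\r\n")
    method, request_uri, version = start_line.split(" ", 2)
    headers = dict(line.split(": ", 1) for line in header_lines)
    return method, request_uri, version, headers

method, uri, version, headers = parse_request(raw)
```

The resemblance to HTTP is deliberate: a method and URI on the start line, then `Name: value` headers, then an optional body, which is part of why SIP endpoints are comparatively simple to implement.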
Professor of Economics, Lincoln University, New Zealand
Social Scientist, GNS Science, and Director of the Joint Centre for Disaster Research, Massey University, New Zealand
Dr. Thomas Pratt, US Geological Survey, Department of the Interior
Canterbury Department of Emergency Management

Earthquakes in the central and eastern U.S. are rare (low probability) events, but the M5.8 Virginia earthquake in 2011 reminded us that earthquakes can and will occur where least expected. Every earthquake has some consequences; fortunately, the Virginia event was centered far away from any urbanized areas where it could have caused significant damage and injuries. This was not the case for the Canterbury region of New Zealand. The low-probability earthquakes that struck in 2010 and 2011 had dire consequences: the M5.5 to M6.3 earthquakes destroyed most of the urban center of Christchurch, with losses totaling 20% of New Zealand's GDP, killed 185 people and injured over 6,000. This session spotlights the lessons learned in New Zealand. Experts who have had both personal and professional roles contending with the immediate and long-term consequences of the Canterbury earthquakes will share their real-life experiences. Session discussion will put these lessons into a U.S. context, touching on what could happen if an earthquake like Virginia's occurred closer to an unsuspecting central or eastern U.S. city, and how we might become more resilient, physically and economically, not just to the hazards posed by local earthquakes but to earthquakes worldwide.

- What worked and didn't work during the response and recovery phases of a major disaster, in an environment very similar to that of many moderate-sized US cities
- What nature can deliver and the potential impacts in a typical, long-lived earthquake sequence
- Earthquake hazard forecasts regionally and globally
How do you 'Identify'?

NIST Cybersecurity Framework: Identify

The National Institute of Standards and Technology (NIST) maintains one of the most widely adopted cybersecurity frameworks for critical infrastructure and for organisations in many other sectors. The NIST Cybersecurity Framework is an excellent basis for creating policies and procedures for managing risk, hardening networks, and responding to incidents. There's a lot of content in the Framework, which was designed to cover a lot of ground. The Cybersecurity Framework consists of three main components: the Core, Implementation Tiers, and Profiles. The Framework Core provides a set of desired cybersecurity activities and outcomes using common language that is easy to understand. The Core guides organisations in managing and reducing their cybersecurity risks in a way that complements an organisation's existing cybersecurity and risk management processes. Fortunately, the most important ideas in the Framework can be organised according to its five functions: identify, protect, detect, respond, and recover. We will tell you what you should know about all five functions. Let's start with the first function, identify.

You can't protect what you can't see. So the first function is all about identifying the important resources your organisation must protect. What assets do we have? What is the relative importance of those assets in the wider context of our business environment? Which threats do they face, and what's a manageable level of risk for your business or institution? How are we going to manage risk? How are we going to assess risk? The subjects of risk are your organisation's assets, not overlooking the supply chain.

Your organisation's assets

The NIST Cybersecurity Framework considers your organisation's assets to be both physical and software, and recommends that you establish an asset management programme for them.
Asset management is one of the main tasks of the identify function. Many organisations these days have hybrid networks: networks with both an on-premises component and a cloud services component, integrated into one hybrid network. Or perhaps your organisation only has a network on your own premises. Your network could even be almost entirely in the cloud, with only client machines on your premises. Whichever form your organisation's network takes, you must consider each server machine and networking appliance to be one of your assets if it's important to the functioning of your business. As far as cloud providers are concerned, their responsibility is their cloud infrastructure, whereas your organisation is responsible for your applications and data within that infrastructure. If your organisation's network is partly or completely in the cloud, you will have to consider this separation of responsibilities in the design of your asset management programme. Regardless of which type of network your organisation has, you have operating systems and applications which are assets. You have servers, clients, data storage, and networking devices which are assets. Your organisation's data is another crucial asset. Your organisation and security stakeholders must take inventory of all of these assets and determine which are the most important to your organisation's daily business processes. Asset management can be prioritised from there.

Your business environment

The next task of the identify function is to determine your organisation's business environment. Your business may exist as a component in a supply chain. For example, one company produces steel. The next company buys the steel and manufactures it into automobile components. The next company buys the automobile components and uses them to manufacture cars.
Then the last company buys the cars and sells them to consumers through their auto dealership. The companies within this supply chain are interdependent. A cyber incident which affects the steel producer could impact the automobile component manufacturer, which could harm the supply chain all the way up to the auto dealership. These relationships and their consequences must be fully considered in the business environment task. All organisations have objectives; legal, regulatory and contractual requirements; and a diverse business environment in which they operate, all of which need to be fully understood. Regardless of your organisation's supply chain, your organisation and security stakeholders must also consider the prioritisation of the company's mission, goals, stakeholders, and processes. That information must be used in the creation of roles, responsibilities and methodologies, and in identifying key security decision-makers.

Administrative security controls are essential to every organisation's cybersecurity. Administrative security controls are your policies and procedures, and how they're enforced. In this governance task of the identify function, you need to understand your organisation's various security policies for managing and monitoring regulatory, legal, risk, environmental and operational requirements. This task is especially important if your organisation is implementing the NIST Cybersecurity Framework after already having security policies. Have those policies and procedures been effective so far? Have they been enforceable? Have they made an impact?

So your organisation determined your assets in a previous task. Now you must determine which risks they face. Absolutely everything has some degree of risk. How could those risks affect your organisation's users, your business, your employees, your clients, and the critical IT systems and platforms you use in your everyday operations? What impact would particular cyber incidents have?
If data is lost or stolen in a breach, how would that affect your operations, your legal standing, your regulatory compliance? All of these risks have practical effects and associated price tags. Now that you have taken an inventory of your organisation's assets and determined all of their associated risks, it's time to manage them, and there is a multitude of frameworks and guidance documents to turn to, including ISO 31000, COSO, OCTAVE, EBIOS, FAIR, MEHARI, ISO 27005, CRAMM and NIST SP 800-30. Usually, but not always, a compromise must be made between usability and security, cost and benefit. For example, implementing a lot of sophisticated methods of authentication can be good for assuring the confidentiality and integrity of your network, but your users will also need to find these authentication methods usable. What are the risks to your data of making these authentication methods more or less complex? What level of risk can your organisation manage? Where is the cost benefit? Systems that are of the greatest priority to your organisation may have much less risk tolerance, while lower-priority systems may have more. You will need to identify how to manage risk throughout your network under the guidance of a cybersecurity lead with good judgement. Effective risk management should also create opportunities to relax controls to obtain business benefit while remaining within the organisation's risk appetite. Supply chain assets are as important as internal assets and should be subject to an equivalent level of appropriate risk management.

Identification is vital

You cannot protect what you do not know exists. So you've determined your organisation's assets. You've considered the role of your business environment in your organisation's supply chain and also within your business itself. You've analysed your organisation's various policies and procedures. You've assessed your organisation's risks, prioritised them and decided how to manage them.
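The prioritisation step can be made concrete with a toy risk register. This is a minimal sketch with invented assets and scores; the likelihood-times-impact formula is a common convention, not something the Framework mandates:

```python
# Toy risk register: each asset gets a 1-5 likelihood and impact score,
# and risk is scored as likelihood x impact.
assets = [
    {"name": "customer database", "likelihood": 3, "impact": 5},
    {"name": "public web server", "likelihood": 4, "impact": 3},
    {"name": "test VM",           "likelihood": 2, "impact": 1},
]
for asset in assets:
    asset["risk"] = asset["likelihood"] * asset["impact"]

# Highest-risk assets first: these warrant the least risk tolerance and
# the most attention from the security team.
register = sorted(assets, key=lambda a: a["risk"], reverse=True)
for asset in register:
    print(asset["risk"], asset["name"])
```

Even a list this simple forces the conversation the identify function asks for: which assets matter most, and where the limited security budget should go first.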
Now you’re ready to move onto the next NIST Cybersecurity Framework function, protect. Our next article will explore the Protect function.
A number of technology giants, including Google, Microsoft, Apple and Mozilla, are coming together to put an end to TLS 1.0, the Transport Layer Security standard that had its roots in 1999. First reported by Ars Technica, the unified approach sees the web giants looking to disable TLS 1.0 and 1.1 by March 2020. TLS, or Transport Layer Security, is the fundamental protocol used to secure connections on the open internet. It is a crucial component, forming connections that are authenticated and tamper-proof, as well as confidential. Apple's WebKit blog elaborates on the details, with the Secure Transport team at the world's most valuable company explaining: "Transport Layer Security (TLS) is a critical security protocol used to protect web traffic. It provides confidentiality and integrity of data in transit between clients and servers exchanging (often sensitive) information. To best safeguard this data, it is important to use modern and more secure versions of this protocol." The original TLS (1.0) was first published in January 1999 and was heavily based on Netscape's SSL 3.0 protocol. It took another seven years for TLS 1.1 to take shape, while TLS 1.2 followed quickly in 2008 with new capabilities. TLS 1.3 was most recently finalized in August. From Apple's perspective, TLS 1.2 represents some 99.6 percent of all TLS connections made from Safari, and over 94 percent of websites support TLS 1.2, according to the company. "Now is the time to make this transition. Properly configured for App Transport Security (ATS) compliance, TLS 1.2 offers security fit for the modern web," Apple said. "It is the standard on Apple platforms and represents 99.6% of TLS connections made from Safari. TLS 1.0 and 1.1 — which date back to 1999 — account for less than 0.36% of all connections." Image credit: LIFARS archive.
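On the client side, refusing the deprecated protocol versions is straightforward. Here is a sketch using Python's standard ssl module, which exposes a minimum-version knob on modern Python and OpenSSL builds:

```python
import ssl

# Build a client context that accepts TLS 1.2 and newer only, rejecting
# the TLS 1.0/1.1 connections now being retired by the browser vendors.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake made through this context (for example via
# context.wrap_socket(...)) will now fail against a peer that only
# speaks TLS 1.0 or 1.1.
```

Servers can apply the same setting to their own contexts, which is effectively what the March 2020 browser change forced site operators to do.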
The world of Artificial Intelligence (AI) may not have a more well-known figure than Elon Musk, entrepreneurial head of Tesla and SpaceX. In a 2014 interview, Musk stated: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah, he’s sure he can control the demon. Didn’t work out.” While Musk’s concerns haven’t been proven completely unfounded, AI today functions strictly within the realm it is programmed for. Becoming a “demon” with expanding abilities and intelligence out of our control is currently beyond its limited capacity. Even AI that is designed to “learn” only learns within the confines of what it has been specifically programmed for, and as driverless cars show us, learning and doing are two different things. One of the most glaring examples of AI’s current limitations is in the rise in autonomous vehicles. General Motors, Uber, Waymo, and Tesla’s Autopilot are only a few in the race to true self-driving cars. Driverless car software utilizes machine learning, and the software is far from perfect. Recognizing a bicycle and then anticipating which way it’s going to go is just too complicated to boil down to a series of instructions. Instead, programmers use machine learning to train their software. They might show it thousands of photographs of different bikes, from various angles and in many contexts. They might also show it some motorcycles or unicycles, so it learns the difference. Over time, the machine works out its own rules for interpreting what it sees. – Zachary Mider, Bloomberg But as much as a computer learns about what potential hazards might look and act like, computers still have no grasp of three basic concepts: time, space, and causality. This approach, while it does keep a self-driving car from becoming KITT or Herbie, also keeps self-driving cars from being truly 100% error free. 
Unlike human drivers, cars will never be able to fully understand consequences or anticipate randomized behavior that may be apparent from factors these cars haven’t been programmed to recognize. And what about scenarios that self-driving cars aren’t being trained for? In October 2019, testing by AAA indicated that dummy pedestrians crossing the road were hit 60 percent of the time, even in daylight and at low speeds. Such obvious limitations should, and do, give users pause. While self-driving technology may be improving in some ways, seven in ten Americans still have no interest in using—or even sharing the road with—driverless cars. Driver-assist technology, on the other hand, is far more popular. Human input and supervision, it seems, still offer something that machine learning cannot replicate. Driver-assist technology allows users to benefit from the areas where AI has excelled, such as alerting drivers when they veer from the center of a lane or back up too close to an obstacle, but humans still stay in control to make split-second decisions on the road. Functioning in Limited Capacities While driverless car technology dominates headlines, AI is being used successfully in more limited capacities. Where machine learning does excel is in analyzing data sets far too large for human processing. Using publicly available data and AI, university student Anne Dattilo discovered two exoplanets. Footage from Kepler space telescope tracked “100,000 stars in its field of view.” Dattilo’s modified AI was programmed to identify and flag stars that, due to light fluctuations, appeared to have planets. Dattilo and human colleagues confirmed the findings. Elsewhere, scientists are using AI to sift through audio recordings for the sounds of elephants in the rainforests. This data is then used to count the animals, helping to provide a more accurate picture of population and poaching rates. 
These highly focused uses of AI have proven successful in accomplishing what humans alone could not. Success in the science field, however, doesn’t seem to translate to highways or a grander intelligence.

A Constrained Approach

Some researchers believe that machine learning and AI as they exist now simply may not hold the keys to the change promised by innovators and fiction writers, and the ways AI has been used so far seem to support this.

According to skeptics like [Gary] Marcus, [professor of cognitive psychology at NYU,] deep learning is greedy, brittle, opaque, and shallow. The systems are greedy because they demand huge sets of training data. Brittle because when a neural net is given a “transfer test”—confronted with scenarios that differ from the examples used in training—it cannot contextualize the situation and frequently breaks. They are opaque because, unlike traditional programs with their formal, debuggable code, the parameters of neural networks can only be interpreted in terms of their weights within a mathematical geography. Consequently, they are black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases. Finally, they are shallow because they are programmed with little innate knowledge and possess no common sense about the world or human psychology. – Jason Pontin, Wired

While AI may provide some incredible shortcuts in daily life and populate wells of information, these limitations mean that fears of an all-powerful, all-knowing “demon” remain far from tangible. The need for oversight and limits in AI is no joke, and Elon Musk may yet prove to be a technological prophet of sorts. But the truth is, we have no reason to fear our robot overlords today.

Are you interested in the ways AI can enhance your life and business? Are IoT devices creating gaps in your carefully maintained cyber security? Contact Anderson Technologies today for enlightened solutions to all your technological troubles.
We can be reached by phone at 314.394.3001 or email at email@example.com.
Source: https://andersontech.com/ai-should-we-fear-the-demon/
Virtual machine introspection (VMI) is a term coined by Garfinkel and Rosenblum in 2003 in their paper “A Virtual Machine Introspection Based Architecture for Intrusion Detection,” in which they describe VMI as the “approach of inspecting a virtual machine from the outside for the purpose of analyzing the software running inside it.” VMI and its capabilities and techniques have evolved quite a bit since then, but the principle remains the same. Before we start a discussion of VMI, however, we will cover some of the basic concepts of virtualization.

Virtualization is a technology that makes it possible for multiple operating systems (OSs) to run concurrently on a single host system without those OSs needing to be aware of the others. The physical machine is multiplexed into several virtual machines (VMs), on top of which unmodified OSs (referred to as guest OSs) can run. Since each VM can have its own OS, multiple guest OSs can run in parallel on a single physical computer. This is how cloud computing providers provide their services, for example: they run multiple customer guest OSs on a single server.

Virtualization is often confused with emulation, as they achieve similar results. The difference is that virtualization requires support from the hardware. This leads to a generally faster approach that is more aligned with the actual capabilities of the underlying hardware. Emulation, on the other hand, is generally less performant and will lack any features of the physical hardware unless they are specifically implemented, but it does not require any special hardware, as it is implemented completely in software. While emulation has its benefits, we will concentrate on virtualization for virtual machine introspection in this blog post.

Hypervisors and VMI

While virtualization does rely on hardware support, there is a software component. This software component is referred to as the hypervisor or the virtual machine monitor (VMM).
This software layer sits on top of the hardware and exposes the VM to the guest OS. This is generally accomplished in one of two configurations. A type 1 hypervisor (or bare-metal hypervisor) runs directly on the underlying hardware; the Xen hypervisor is an example. A type 2 hypervisor (or hosted hypervisor) runs within an OS, in which case we refer to the OS within which the VMM runs as the host OS; KVM is an example. See figure 1 for a diagram of the two configurations.

The VMM assists the hardware and gains control when the guest OS performs some operation that the hardware is not designed or instructed to handle. That is, the VMM can program the hardware as to which actions of the guest OS it wants to handle and which actions it wants to let the hardware handle. For example, the VMM may program the hardware to give it control every time the guest writes to a specific register. Going forward, any time the guest OS writes to this register, the hardware hands control to the VMM. We refer to this as a trap to the VMM. This ability of the VMM is what enables VMI.

The Power of Virtual Machine Introspection

VMI allows us to take advantage of the hardware and the VMM to inspect the guest. As we learned above, the VMM can program the hardware to trap certain events. This is valuable for virtual machine introspection because it allows us to trap specific actions the guest might take and inspect the guest’s state at exactly that moment. This leads us to the next advantage: the VMM has full visibility into the entire guest state. This includes the CPUs, the memory, and any devices (e.g., network cards, hard disks, etc.). Finally, the VMI components benefit from complete isolation from the environment being inspected. This makes it difficult to detect or attack the inspecting components and reduces the observer effect on the guest environment. The observer effect describes the intrusive nature of any observer of a system.
Even if one tries to be as unobtrusive as possible, there will always be some side effects, and this effect is amplified when the observer is located within the system being monitored.

Leveraging these advantages allows us to create powerful tools for the analysis and debugging of software within the guest. For example, one can set breakpoints within the guest and trap to the VMM every time the breakpoint is executed. This is a capability that debuggers and other analysis tools have had for some time. However, by leveraging virtual machine introspection we can set the breakpoint such that it is impossible for the guest to see the breakpoint in memory or detect that a debugger is attached to the process. Occasionally software may change its behavior when it detects it is being inspected; this form of anti-analysis is common in malware or in software that attempts to protect its IP.

This is just a single example of the power of VMI; much more can be done with this technique. VMI allows us to directly inspect the virtual hardware, providing a unique view of the system that is not possible by any other means (i.e., the entire physical memory and all CPU registers are always accessible). This allows us to build powerful inspection components. For instance, we can protect memory against modification, inspect each process before it comes into context, or generate an entire control flow graph, and we can do all this while remaining completely isolated from the guest and any components that might want to thwart our efforts.

Applications for Virtual Machine Introspection

The applications of virtual machine introspection are vast. The most common use case is perhaps the analysis of potentially harmful software in a sandbox. Using VMI provides isolation from the potentially harmful malware and makes it very difficult for the malware to determine it is being analyzed.
At the same time, VMI provides access to the entire physical memory of the guest, making it very difficult for the malware to hide from the analyst. Such a VMI sandbox may aid a malware analyst by providing a platform for manual analysis or by being part of an automated analysis workflow. In the manual case, the analyst executes the suspect software and leverages the sandbox to instrument the environment to suit his or her needs; for example, the analyst might set very specific breakpoints for the sample being analyzed. VMI also lends itself to automated analysis. In this case, a sample is automatically detonated in the sandbox and VMI instruments the environment to record predetermined actions; for example, an automated run might record all files a sample opens.

VMI also has applications as an intrusion detection system (IDS) for real-time monitoring of critical systems. This can be performed without modifying the guest OS yet provides an expansive view into the behavior of a system. It allows a VMI component to record potentially malicious actions and forward them to an IDS or SIEM.

Finally, VMI can be a useful technique for kernel debugging or for debugging the interaction between several processes within an OS. It does not require any special setup or configuration of the guest OS, and interactions between multiple components can be debugged from a single instance, rather than having multiple debuggers running simultaneously.

BedRock Systems’ Products and Offerings

We leverage virtual machine introspection in several of our products and offerings. Our Active Security suite leverages VMI to help you protect and monitor your deployed critical systems in real time. For more information, see our Products page. In addition to our product offerings, we maintain an open source VMI platform called tenjint that supports both the ARM and x86 architectures.
tenjint can be leveraged to perform analysis as described above and is available here.
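The register-write trap at the heart of VMI, where the hardware hands control to the VMM whenever the guest touches a watched register, can be illustrated with a toy simulation. This is purely pedagogical Python, not hypervisor code; the class and register names are invented, and a real deployment would use a platform such as tenjint rather than anything resembling this sketch.

```python
class ToyVMM:
    """Stands in for the hypervisor: registers traps and inspects guest state."""
    def __init__(self):
        self.watched = {}    # register name -> handler invoked on write
        self.log = []

    def trap_on_write(self, reg, handler):
        self.watched[reg] = handler

class ToyGuest:
    """Stands in for the guest OS: every register write goes through the 'hardware'."""
    def __init__(self, vmm):
        self.vmm = vmm
        self.regs = {"cr3": 0, "rip": 0}

    def write_reg(self, reg, value):
        # The 'hardware' checks whether the VMM asked to trap this write.
        if reg in self.vmm.watched:
            self.vmm.watched[reg](self, reg, value)   # control passes to the VMM
        self.regs[reg] = value                        # then the write proceeds

vmm = ToyVMM()
# VMI-style policy: record every address-space switch (cr3 write) with full guest state.
vmm.trap_on_write("cr3", lambda g, r, v: vmm.log.append((r, v, dict(g.regs))))

guest = ToyGuest(vmm)
guest.write_reg("rip", 0x1000)   # not watched: handled without VMM involvement
guest.write_reg("cr3", 0x5000)   # watched: traps to the VMM first
print(vmm.log)
```

Note that the handler sees the guest state *before* the write lands, mirroring how a real VMM can inspect the exact moment an event occurs, and that the guest code itself contains no inspection logic at all.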
Source: https://bedrocksystems.com/blog/virtual-machine-introspection/
October is Cyber Safety Month - Some Basic Rules to Keep You Safe!
Playing it SAFE in a High Tech World - By Bill Alderson

Remember: the IRS, Social Security, the police, your bank, your credit card company, etc. will never send you an email or call you asking you to give out your information, follow an embedded link, send money, or provide credit card details. Every call, email, and text should be treated as suspect!

Phishing is the #1 method of stealing your information! Phishing is a cybercrime in which a target or targets (you) are contacted by email, telephone, or text message by someone posing as a legitimate institution to lure individuals into providing sensitive data such as personally identifiable information, banking and credit card details, passwords, and even medical information.

ALWAYS be suspect: ask for the caller's name and extension, then hang up and call the institution back directly using the number on your card or another reliable source. NEVER call a number (or follow a link) that they give you! Teach your children the following rules as well; this includes social media sites.

Some basic rules to follow:
1. Be careful where and what you “click.”
2. Never click on a link or any item in an email from someone you do not know.
3. Never do bank business from a link in an email – go directly to the bank's website and log into your account in their secure system.
4. Always allow your computer, cell phone, or other technology device to update automatically.
5. If you need to write passwords down, always leave out at least one character, maybe at the start or end; never keep passwords on or under your computer; and consider using a password application. (I use a center word that I never write down.)
6. Do not use the exact same password repeatedly. Change at least 2-3 characters each time, using a system only you know. (Suggestion: change passwords monthly to quarterly; never wait for a breach alert!)
7.
NEVER give anyone your username and password over the phone, and never leave this information on a sticky note on your desk!
8. If a credit card company calls you asking for information, DO NOT GIVE ANY INFORMATION. Call them back using the phone number on your card and ask whether they need anything.
9. Always take your receipts with you from restaurants – and anywhere else you use a charge card.
10. Learn to recognize phishing. Enter web links manually; never follow an embedded link, and never use one to answer questions or give out personal information!

Be suspect of every email and phone call.

Bill Alderson is the CTO and co-founder of HOPZERO, a company that limits data travel and creates a “safe house” for organizational datacenters. Alderson has worked with 75 of the Fortune 100 organizations and gained recognition for helping the Pentagon recover communications immediately following 9/11.
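One reason embedded links are so dangerous is that the text a link *shows* and the address it *goes to* can disagree. The sketch below illustrates that one telltale check; the domains are made up for illustration, and real phishing detection involves far more signals than this.

```python
from urllib.parse import urlparse

def _host(url):
    """Extract the hostname, tolerating bare domains, and drop a leading 'www.'."""
    parsed = urlparse(url if "//" in url else "https://" + url)
    host = parsed.hostname or ""
    return host[4:] if host.startswith("www.") else host

def link_looks_suspicious(display_text, href):
    """Flag a link whose visible text names one domain but whose target is another."""
    shown, actual = _host(display_text), _host(href)
    return bool(shown) and shown != actual

# The email *displays* a bank address, but the link *points* somewhere else entirely.
print(link_looks_suspicious("www.mybank.example", "https://evil.example/login"))    # True
print(link_looks_suspicious("www.mybank.example", "https://mybank.example/login"))  # False
```

Even when a check like this passes, the safest habit remains the one in rule 10: type the address yourself.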
Source: https://www.networkdatapedia.com/post/2019/10/23/october-is-cyber-safety-month-some-basic-rules-to-keep-you-safe
In the good old days, data on hard disks and live “volatile” memory were the targets of bad actors. Any usable data that people would pay to get back, and any secret information typed in while the computer was running, such as passwords and special access characters, was the golden ticket to the treasures in an organization's digital chest. Now, bad actors have taken a step further, building on a previously known but limited class of exploits called Rowhammer to access information stored on memory chips. The advantage is that this technique can expose data in memory that the attacker was never granted access to, so it can reach the types of data that were previously assumed to be fully protected by process isolation.

The new RAMBleed methodology that is making news is both interesting and a little shocking. The concept of inducing bit flips and analyzing the resulting patterns to extract specific data is quite unique. It is important to note that while a recent report released by the University of Michigan, Graz University of Technology, the University of Adelaide, and Data61 did provide important details on the process, it did not fully evaluate the impact of this type of attack on a production server. The amount of time and computing resources required to successfully extract and evaluate usable, exploitable data would, at this point in time, make this type of incident non-viable for most bad actor groups.

What does this mean to you? The most important lesson presented by these dedicated universities is that all organizations need to change the way they think about vulnerabilities. Technology is ever-changing, and exploits are now being attempted in new and previously unavailable areas of systems. Organizations need to be more vigilant in their observations of system performance degradations, memory leaks, and any other unusual patterns that make your Spockian eyebrow go up! What can you do?
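At a very high level, the "flipping bits and analyzing the patterns" idea works because whether a hammered DRAM bit flips can depend on the values of the bits physically adjacent to it, so an attacker who arranges their own memory next to a secret can infer the secret from which of *their own* bits flip. The toy model below is entirely hypothetical and ignores DRAM geometry, memory massaging, and every physical detail of the real research; it only illustrates the inference step.

```python
def hammer(secret_bits, sampling_bits):
    """Toy physical model: a sampling bit flips exactly when its secret neighbor is 1."""
    return [s ^ neighbor for s, neighbor in zip(sampling_bits, secret_bits)]

def recover_secret(before, after):
    """Attacker side: compare our own row before and after hammering; flips reveal bits."""
    return [b ^ a for b, a in zip(before, after)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]   # victim data the attacker can never read directly
sampling = [0] * len(secret)        # attacker-owned row, initialized to all zeros
flipped = hammer(secret, sampling)
print(recover_secret(sampling, flipped))   # matches the secret without ever reading it
```

The unsettling part, and the reason hardware-level attacks demand new thinking, is that the attacker only ever reads memory they legitimately own.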
Source: https://cbisecure.com/insights/rowhammers-and-rambleeds-evolving-data-threats/
What Are Web Application Vulnerabilities?

A web application vulnerability is any system flaw that an attacker can exploit to compromise a web application. Web vulnerabilities differ from other common vulnerabilities, like asset flaws or network vulnerabilities, because web applications must communicate and interact with multiple users from different networks. This accessible nature makes a web application an easy target for a hacker. Continuous security testing is critical to identify security vulnerabilities and protect your organization.

In this article:
- Common Types of Web Application Vulnerabilities
- Solutions for Preventing Web Application Vulnerabilities

Common Types of Web Application Vulnerabilities

1. SQL Injection

Many applications use Structured Query Language (SQL) to manage communications with the database. SQL injection vulnerabilities allow attackers to insert malicious SQL commands to exfiltrate, modify, or delete data, and some hackers use SQL injection to gain root access to the target system. SQL injection attacks target servers that hold critical data used by web applications or services. They are particularly dangerous when they expose critical or sensitive data, such as user credentials and personal information. The most common flaw enabling SQL injection attacks is the use of unsanitized user input: it is important to ensure that nothing in user-supplied input can be executed by the server as SQL code.

Related content: Read our guide to SQL injection

2. Cross-Site Scripting (XSS)

An XSS attack injects malicious script into pages viewed by other users, so the script executes in the victims' browsers. An XSS attack can expose user data without indicating a compromise, impacting business reputation in the long run. Attackers can steal any sensitive data sent to the infected app, and the users may remain oblivious.

Related content: Read our guide to XSS

3. Cross-Site Request Forgery (CSRF)

A CSRF attack occurs when an attacker forces the victim to perform unintended actions on the web application.
The victim first logs into the web app, which deems the user and browser trustworthy. The attacker then tricks the victim into forwarding a forged request to the web app, which executes the malicious action as if it were legitimate. The motivation for CSRF ranges from simple pranks to enabling illicit financial transactions.

Related content: Read our guide to CSRF

4. Session Fixation

A session fixation attack involves forcing a user’s session ID to a specified value. Depending on the target web application’s functionality, attackers may use various techniques to fix session ID values; examples include cross-site scripting exploits and reusing HTTP requests. First, an attacker fixes the victim’s session ID. Then, the user logs in and inadvertently exposes his or her online identity. The attacker can then hijack the victim’s identity using the fixed session ID value.

5. Local File Inclusion (LFI)

An LFI attack exploits the dynamic file inclusion mechanisms in a web application. It may occur when a web application takes user input, such as a parameter value or URL, and passes it to a file inclusion command. An attacker can use this mechanism to trick the app into including a file containing malicious code. Most web application frameworks enable file inclusion, which is useful primarily for packaging shared code into separate files that are later referenced by the application’s main modules. If a web app references a file for inclusion, it might execute the code in the file explicitly or implicitly (i.e., by calling a specific procedure). The application could be vulnerable to LFI attacks if the choice of module to load is based on elements of the HTTP request.

Related content: Read our guide to LFI

6. Security Misconfigurations

Security misconfigurations are some of the most serious web application vulnerabilities because they provide attackers with opportunities to infiltrate the application easily.
Attackers can exploit a wide range of security configuration vulnerabilities, including unchanged default configurations, unsecured data stored in the cloud, ad hoc or incomplete configurations, plaintext error messages containing sensitive information, and HTTP header misconfigurations. Security misconfigurations may be present in any operating system, library, framework, or application.

Related content: Read our guide to security misconfiguration

7. XML External Entity (XXE) Processing

An XXE attack occurs when an attacker abuses widely used features of XML parsers to gain access to remote or local files, typically resulting in denial of service (DoS). Attackers can also use XXE processing to carry out SSRF attacks, which force the web application to make malicious external requests. XXE can also enable attackers to scan ports and execute malicious code remotely.

Related content: Read our guide to XXE

8. Directory Traversal

Directory traversal attacks, or backtracking, exploit how the web application receives data from a web server. Web apps often use Access Control Lists (ACLs) to restrict user access to specific files within the root directory. A malicious actor can identify the URL format the target application uses for file requests and then inject path traversal sequences, such as ../, to escape the root directory and access restricted files.

Related content: Read our guide to directory traversal

Solutions for Preventing Web Application Vulnerabilities

The most effective way to prevent web application vulnerabilities is to test your applications for vulnerabilities and remediate them. Here are four ways of identifying critical vulnerabilities in web applications.

Static Application Security Testing (SAST) solutions scan source code for vulnerabilities and security risks. Many web applications incorporate code scanning at multiple stages of development, including while committing new code to the codebase and building new releases.
SAST is usually rules-based, and scan results can contain false positives, so the results must be carefully analyzed and filtered to identify real security issues.

Related content: Read our guide to SAST

Dynamic Application Security Testing (DAST) tests an application deployed in a staging or production environment by executing its code and checking for vulnerabilities. Automated DAST tools find vulnerabilities by sending numerous requests, including unexpected and malicious inputs, to applications, and analyzing the results to identify security vulnerabilities. Manual penetration testers usually perform similar tests to those performed by DAST tools, using tools like Burp Suite, Fiddler, and Postman.

Related content: Read our guide to DAST

Interactive Application Security Testing (IAST) solutions combine dynamic testing (similar to DAST tools) with static analysis (similar to SAST tools) to help identify and manage security risks in web applications. IAST solutions monitor application execution and gather information about functionality and performance. They identify vulnerabilities in real time by deploying agents and sensors that inspect running applications and continuously analyze all application interactions. In addition, many IAST solutions incorporate software composition analysis (SCA) to identify open source components and frameworks and discover known vulnerabilities.

Related content: Read our guide to IAST

Penetration testing is a security technique that combines human security expertise with dynamic scanning tools to find vulnerabilities in web application security mechanisms. Penetration testers operate from an attacker’s perspective: they perform reconnaissance, attempt to exploit vulnerabilities, gain unauthorized access, and demonstrate their ability to steal data or disrupt services. However, they operate ethically, without causing actual harm to the organization and within the scope of an agreement with the web application owner.
Related content: Read our guide to penetration testing
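A recurring theme above is that unsanitized user input enables injection. A minimal sketch using Python's built-in sqlite3 module shows why parameterized queries neutralize the classic SQL injection payload; the table, user, and input here are invented purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "nobody' OR '1'='1"   # a classic injection payload supplied as a "username"

# Vulnerable: user input is concatenated straight into the SQL text,
# so the quote in the payload terminates the string and the OR clause runs as SQL.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: the driver passes the input as data via a placeholder; it is never parsed as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()

print(len(vulnerable), len(safe))   # -> 1 0
```

The injected OR clause matches every row in the vulnerable query, while the parameterized query correctly matches no user literally named `nobody' OR '1'='1`. The same pattern, keeping data out of the code channel, also underlies defenses against XSS (output encoding) and LFI (never building file paths from raw request data).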
Source: https://brightsec.com/blog/web-application-vulnerabilities/
In a simpler time, securing organizational infrastructure was relatively easy: most, if not all, applications, services, and resources necessary for a user – usually an employee – to be productive were available on the network. With the right credentials, usually a username and password, users were considered authorized and trusted: they could access the network, use any application, service, or resource to which they were authorized, and see most of the other applications, services, and resources on the network. Organizations built up their perimeter security and felt safe from attacks and threats, because everything was behind the fortified walls of the network “castle.”

Over time, though, securing an organization’s applications, services, and resources became exceedingly complex and difficult. Today, the network perimeter is not easily defined or identifiable, particularly with the ever-increasing use of clouds, the work-from-anywhere movement, mobile device use, and the explosion of the Internet of Things (IoT). Staff may be comprised of employees, contractors, consultants, supply chain vendors, and more. The sheer complexity of today’s organizational infrastructure has quickly outstripped what perimeter-based network security is able to handle.

Enter Zero Trust. Zero Trust is not a new idea, but it is a security concept that is more relevant and important today than ever. A Zero Trust Architecture eliminates the model of a trusted network inside a defined perimeter. Zero Trust assumes that an attacker is already present in an environment. It also presumes that an organization-owned environment is no different from, and no more trustworthy than, any environment the organization does not own, and that an organization must never assume implicit trust. The Zero Trust maxim is “Never Trust, Always Verify.” As the concept and the desire to adopt a Zero Trust Architecture grew, so too did confusion about what exactly a Zero Trust Architecture is.
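"Never Trust, Always Verify" means that every request is judged on its own evidence — identity, device posture, entitlement — rather than on where it originates. The sketch below is a schematic illustration of that per-request mindset; all of the field names and the policy checks are invented for this example, not drawn from any standard.

```python
def authorize(request):
    """Zero Trust style: no request is trusted because of its network location."""
    checks = [
        request.get("identity_verified", False),   # strong, per-request authentication
        request.get("device_compliant", False),    # device posture attested
        request.get("resource") in request.get("entitlements", []),  # least privilege
    ]
    return all(checks)   # note: request["source"] is deliberately never consulted

# A request from *inside* the corporate network still fails without verification,
# while a fully verified request succeeds regardless of where it comes from.
insider = {"source": "corp-lan", "resource": "payroll", "entitlements": []}
verified = {"source": "home-wifi", "identity_verified": True,
            "device_compliant": True, "resource": "payroll",
            "entitlements": ["payroll"]}
print(authorize(insider), authorize(verified))   # -> False True
```

The contrast with the "castle" model is the absence of any check on the network of origin: trust is earned per request, never inherited from the perimeter.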
In an effort to aid understanding of Zero Trust Architecture, the National Institute of Standards and Technology (NIST) developed NIST Special Publication (SP) 800-207, Zero Trust Architecture. While not a deployment guide or plan, SP 800-207 describes Zero Trust for security architects and delivers a road map for the migration and deployment of the security requirements for a Zero Trust Architecture. F5 has been named as one of 18 vendors to collaborate with NIST’s NCCoE on the “Implementing a Zero Trust Architecture Project” to develop practical, interoperable approaches to designing and building Zero Trust Architectures that align with the tenets and principles documented in NIST SP 800-207, Zero Trust Architecture. The proposed example solutions will integrate commercial and open-source products together that leverage cybersecurity standards and recommended practices to showcase the robust security features of a Zero Trust Architecture applied to several common enterprise IT use cases. (Please note that NIST does not evaluate commercial products under this consortium and does not endorse any product or service used.) Additional information on this consortium can be found at https://www.nccoe.nist.gov/zerotrust. “F5 is honored and excited to announce our collaboration with National Institute of Standards and Technology’s (NIST) National Cybersecurity Center of Excellence (NCCoE) on their “Implementing a Zero Trust Architecture Project,” states Peter Kersten, Vice President, Sales - Federal. 
“We look forward to a strong collaborative effort with our partners and other leading security stalwarts that culminates in reference architectures and demonstrations of a variety of interactive, integrated design approaches for a Zero Trust Architecture that maintain the principles and tenets published in the NIST SP 800-207, Zero Trust Architecture.” F5 is joined on this project by collaborators Amazon Web Services (AWS), AppGate, Cisco, FireEye, Forescout, IBM, Ivanti, McAfee, Microsoft, Okta, Palo Alto Networks, PC Matic, Radiant Logic, SailPoint Technologies, Symantec (Broadcom), Tenable and Zscaler. The result of this project will be a NIST Cybersecurity Practice Guide, a publicly available description of the practical steps necessary to implement cybersecurity reference designs for a Zero Trust Architecture.
Source: https://www.f5.com/company/blog/f5-collaborates-with-nists-national-cybersecurity-center-of-excellence-on-implementing-a-zero-trust-architecture-project
CAMBRIDGE, United Kingdom — Future doctors at a hospital in the United Kingdom have become the first in the world to train with holographic patients. Wearing mixed-reality headsets, students can treat virtual patients using technology that mimics medical situations. Researchers at Addenbrooke’s Hospital in Cambridge developed the pioneering technology. During the simulation, medical students encounter a virtual patient with symptoms – such as being asthmatic – and must make real-time decisions about their care. National Health Service director Stephen Powis says the new technology would help train the next generation of doctors by allowing them to practice medicine in real-time. The first training module features a hologram patient with asthma, followed by scenarios of anaphylaxis, a blocked blood vessel, and pneumonia. Further modules in cardiology and neurology are currently in development. “The NHS has always been at the forefront of medical innovation, and this unique development by teams in Cambridge – to use life-like holographic patients in medical training – could enhance the learning experience of our next generation of doctors, nurses and healthcare workers, by creating new environments to practice medicine in real time, while improving access to training worldwide,” says Professor Sir Stephen Powis, the NHS national medical director, in a media release. Taking medical school to the virtual classroom The new training method rivals conventional resources for learning, such as textbooks, mannequins, and computer software. Named HoloScenarios, the mixed-reality technology is now available for license to medical institutions across the world, with developers saying it offers a cost-effective and flexible training resource. Mixed reality allows users to interact with and manipulate both physical and virtual items and environments. It is similar to the well-known and fully-immersive virtual reality (VR), which places the user entirely inside a digital world. 
“Mixed reality is increasingly recognized as a useful method of simulator training. As institutions scale procurement, the demand for platforms that offer utility and ease of mixed reality learning management is rapidly expanding,” explains Dr. Arun Gupta, a consultant anesthetist at Cambridge University Hospitals. “Our research is aimed at uncovering how such simulations can best support learning and accelerate the adoption of effective mixed reality training while informing ongoing development,” says Riikka Hofmann, professor at the University of Cambridge’s education department. “We hope that it will help guide institutions in implementing mixed reality into their curricula, in the same way institutions evaluate conventional resources, such as textbooks, manikins, models or computer software, and, ultimately, improve patient outcomes.” Addenbrooke’s Hospital developed the new mixed-reality technology in partnership with the University of Cambridge and Los Angeles-based tech company GigXR. “Empowering instructors with 360-degree preparation for clinical practice represents a milestone for GigXR that allows us to provide our customers with a library of applications that offers solutions for students from their first courses to continuing education,” concludes David King Lassman, founder of GigXR. “Our first HoloScenarios module represents a new and incredibly powerful way to use mixed reality for healthcare training, to be followed up by many more modules and new applications delivered soon.”
https://cybercoastal.com/holographic-patients-are-helping-train-student-doctors/
Understanding Big Data

What do Wal-Mart, Facebook and the Hadron Collider have in common? They are just three of many large organizations that are major consumers and processors of Big Data, a term that is becoming a greater priority for companies around the world as they struggle with a ceaseless and ever-growing ocean of information.

The short definition of Big Data is that it represents all of the data in your organization – not just one type. Data resides in all business functions: marketing, finance, operations, research & development, customer experience – everywhere – and it essentially comes in three flavors: structured, unstructured and binary. Structured data is information that is organized and indexable, and consequently is most often stored in databases or annotated documents; this could include records and files. Unstructured data refers to loose material such as emails and tweets, and according to some estimates may comprise 80% or even 90% of a company's potentially useable information. Binary data refers to photographs and other media that are generally stored in binary formats.

In a recent podcast delivered by Mike Gualtieri, Principal Analyst, Forrester Research, Inc., and Milan Vaclavik, Senior Director & Solution Lead for CenturyLink Technology Solutions, it was noted that 70% of IT decision-makers see big data analytics as a priority within one year. This makes sense. For a company to fully understand where it is going, what its customers need, and how it compares to the marketplace, it must be able to access and use all of its data quickly and comprehensively. Currently, for most companies, this data is segmented into silos, with different storage mechanisms running on different platforms overseen by different people. So in a sense, Big Data at this moment in time does not so much represent bits of information. The term better represents a concept, a problem, and a solution.
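The three flavors lend themselves to a toy illustration. The sketch below is deliberately simplified and the type-based rules are invented for the example (real classification pipelines inspect content and metadata far more carefully), but it shows the basic triage idea:

```python
def classify_record(record):
    """Toy triage of incoming records into the three broad data flavors."""
    if isinstance(record, (bytes, bytearray)):
        return "binary"        # photos and other media stored as raw bytes
    if isinstance(record, dict):
        return "structured"    # named, indexable fields, as in a database row
    return "unstructured"      # loose material: emails, tweets, free text

records = [
    {"order_id": 42, "total": 19.99},       # database-style record
    "Loved the new release! #feedback",     # a tweet
    b"\x89PNG\r\n\x1a\n",                   # the first bytes of an image file
]
print([classify_record(r) for r in records])
# ['structured', 'unstructured', 'binary']
```

In practice the boundaries are fuzzier — a PDF, say, mixes structured metadata with unstructured text — which is exactly why managing all three flavors together is hard.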
The concept highlights an awareness of just how much company-related information is out there to process, such as inventory, transactions, emails, images and software applications. The problem is in accepting the need to categorize, store and access this data at any time, without delay. The solution lies in managing all of this data through a more sophisticated approach to its storage, access and use.

What kind of data is Big Data used for? Common use cases include:
- Marketing campaign analysis
- Data refining
- Sentiment and social graph analysis
- Customer churn analysis
- Risk and fraud compliance
- Real-time recommendations and offers
- Customer experience analysis
- Predictive analytics
- Machine-generated data analysis

When a company does not employ a sufficiently robust approach to managing its data, Vaclavik says, it gives way to a biased or inaccurate view of the business. Analytics, for example (the analysis of key data), often relies on a very small percentage of the entire data pool – only 12% on average – which is a wholly inadequate basis for understanding what is going on. The idea of using cloud infrastructure for data analytics is gaining traction with IT managers tasked with the challenges of analyzing large amounts of data from diverse sources.

The key driver of big data, then, is a breakdown of the silos to allow for better cross-functional analysis. Big data specialists such as those at CenturyLink Technology Solutions seek to set up a system that has four goals:
- First, to capture and store all the data required for business functions.
- Second, to have a platform or solution to continuously integrate more data.
- Third, to allow for continuous access.
- Fourth, to allow insight, or understanding of the data itself.
If any of these layers are missing, the system does not work.
In a sense, big data represents an old problem, but one that is much larger today thanks to the increased number of devices connected to the Internet and the resultant explosion of information. It can be perceived more as an ecosystem than a new technology. Infrastructure availability, scalability and reliability are critical, and cloud is increasingly filling this need: IT managers need to focus on infrastructure that can scale elastically without being overly complex to manage and secure, and that offers high-performance computing with low latency. The cloud presents a compelling solution to this bundle of big data challenges.

Many organizations are turning to Hadoop, an open-source framework for large-scale data processing, to form the centerpiece of the big data solution. Vaclavik points out that although Hadoop is well-suited for managing big data at the data layer, three major challenges still emerge:
- The first is integration: moving into a modality in which traditional data silos are broken down.
- The second is staff skills, since applications such as Hadoop require specific skill sets in order to be maintained and run properly.
- Third is the inevitable rush on the market that occurs when a new data management platform works so well that it simultaneously increases demand for more data.

Ultimately, what Big Data comes down to is the consolidation, processing, and access to the information that drives a company. As world attention moves from gigabytes to petabytes and exabytes, the scope of operations expands exponentially, forcing an enterprise-wide big data model to keep pace. And increasingly, for managing big data and big workloads, IT is turning to cloud vendors who offer a reliable, highly available infrastructure that can scale elastically without being overly complex to manage.
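Hadoop's core programming model – map, shuffle, reduce – can be sketched in a few lines of single-machine Python. This illustrates the model only, not Hadoop itself, which distributes these phases across a cluster and adds fault tolerance:

```python
from collections import defaultdict

def map_phase(records):
    # Map: turn each raw record into (key, value) pairs.
    for line in records:
        for word in line.lower().split():
            yield word, 1

def reduce_phase(pairs):
    # Shuffle: group every value emitted under the same key...
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # ...then Reduce: aggregate each group into a final result.
    return {key: sum(values) for key, values in groups.items()}

logs = ["big data big insight", "data silos slow insight"]
print(reduce_phase(map_phase(logs)))
# {'big': 2, 'data': 2, 'insight': 2, 'silos': 1, 'slow': 1}
```

The appeal of the model is that the map and reduce functions are independent of where the data lives, which is what lets a cluster spread the same logic across many machines.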
Big Data services available through CenturyLink can be found at: http://www.centurylinktechnology.com/big-data

By Steve Prentice

Post Sponsored By CenturyLink

Steve Prentice is a project manager, writer, speaker and expert on productivity in the workplace, specifically the juncture where people and technology intersect. He is a senior writer for CloudTweaks.
https://cloudtweaks.com/2014/05/understanding-big-data/
Published on Thursday, May 28, 2020
By Mounir Jamil

As coronavirus vaccine trials take place all over the world, scientists gather data to help maximize research potential and to ensure a more efficient, effective and ethical study design. Despite rigorous vaccine efforts, however, the picture remains unclear.

In mid-May, Moderna, a US biotech firm, released the first data from a trial. Its coronavirus vaccine triggered an immune response in individuals, and protected mice from lung infections caused by the coronavirus SARS-CoV-2. The results, which were announced in a press release, were widely interpreted as positive and caused stock prices to go up. Other fast-tracked tests of coronavirus vaccines indicate that they have prevented infections in the lungs of monkeys exposed to SARS-CoV-2, but not in other parts of the body. A vaccine being developed at the University of Oxford has protected six monkeys from pneumonia; however, the animals' noses harbored as much virus as those of unvaccinated monkeys.

Moderna's coronavirus vaccine, co-developed with the US National Institute of Allergy and Infectious Diseases (NIAID) in Maryland, began safety testing on humans at the beginning of March. The vaccine consists of mRNA instructions encoding the coronavirus's spike protein; it causes human cells to churn out the foreign protein, alerting the immune system. Even though such RNA-based vaccines can be developed quickly, none have yet been licensed anywhere in the world.

In its press release, Moderna also reported that 45 participants in the study who received one or two doses of the vaccine developed a strong immune response to the virus. Researchers measured virus-recognizing antibodies in 25 of the participants and detected levels close to or even higher than those found in the blood of individuals who had fully recovered from the virus.
However, it is still not clear whether these responses are enough to protect people from the virus, because Moderna hasn't shared its data, says Peter Hotez, a vaccine scientist at Baylor College of Medicine, who adds that he is not sure this is actually a positive result. He points to an earlier May 15 bioRxiv preprint showing that most people who recovered from the virus without needing hospitalization did not produce high levels of the neutralizing antibody that prevents the virus from infecting cells. Moderna measured these potent antibodies in eight trial participants and reported that their levels were similar to those of recovered patients.

Hotez also expressed doubts about the initial results of the Oxford study, which found that monkeys produced modest levels of neutralizing antibodies after being administered only one dose of the coronavirus vaccine. He says those numbers would need to be significantly higher to afford protection. The vaccine is composed of a chimpanzee virus that has been genetically modified to produce a protein from the coronavirus. Hotez added that the coronavirus vaccine being developed by Sinovac Biotech in Beijing seems to have shown a more promising antibody response in macaque monkeys after three doses were administered.

Sarah Gilbert, an Oxford vaccinologist, co-led the study alongside Vincent Munster, a virologist at NIAID's labs in Hamilton, Montana. Gilbert mentioned that the Oxford monkeys were administered a very high dose of the virus after receiving the vaccine. This could be the reason why the vaccinated animals had just as much SARS-CoV-2 genetic material in their noses as the control animals, although the vaccinated monkeys didn't develop any signs of pneumonia. Administering high doses ensures that the animals will be infected with the virus, but it might not replicate natural infection.
Even though assessing the efficacy of a coronavirus vaccine is challenging, the most recent data are reassuring on safety, according to researchers. The monkeys vaccinated by Oxford and Sinovac did not develop an exacerbated disease post-infection, which was a key fear, because an inactivated vaccine against SARS (severe acute respiratory syndrome) had caused exacerbated disease signs in macaques.

Moderna is set to begin phase II of its trial soon, which will involve 600 participants. It aims to begin a phase III efficacy trial in July, to determine whether coronavirus vaccines are able to prevent disease in high-risk groups such as healthcare workers and people with underlying conditions. The team at Oxford has already enrolled over 1,000 participants for their UK trial. Some of the volunteers have received a placebo, allowing researchers to determine over the coming months whether the vaccine works in humans. Gilbert says that the lack of problems in the monkey study was very reassuring.
https://insidetelecom.com/coronavirus-vaccine-trials-underway-but-outcome-remains-unclear/
An outbreak of flu-like symptoms that originated from a wet market in the city of Wuhan, China has brought chaos to the world. The new coronavirus, or 2019-nCoV, was first reported on December 31st, 2019, as pneumonia. The number of infected people increased exponentially and the virus started to spread like wildfire. Authorities identified the virus a week later, on January 6th. This was followed by multiple deaths in China, and carriers were identified in several other countries too. Today there are nearly 1,360 fatalities, nearly all in China, and the total number of confirmed cases worldwide has surpassed 60,000. As the number of infections and the death toll increased, the WHO declared a global emergency.

A call for more transparency

Historically, Chinese officials have been accused of being tight-lipped and delaying the release of details and information about cases. The WHO and CDC depend on the data provided by the Chinese authorities for mapping the future course of action – in terms of allocating healthcare resources, dispatching medical personnel and imposing travel restrictions. So it's important to know the severity of the disease as it spreads. Today, in the modern age, with a plethora of data available, there has been an unprecedented increase in transparency and openness. Large tech firms like Alibaba and Baidu have helped researchers find a solution by offering their cloud platforms for free, which are used as testing platforms.

AI in the fight against coronavirus

There are a few tech firms that are helping combat the pandemic. Some of them are:

A Canadian digital health company that uses natural language processing and machine learning, which predicted that the virus would jump from Wuhan to Bangkok, Seoul, Taipei, and Tokyo. The algorithm, which has access to dozens of flight records, goes through news reports in about 65 languages, animal disease outbreaks and social media posts. It uses the data in predictive models to find a pattern.
If you are unsure whether you were near an infected individual while travelling, this Chinese internet company will help you find out. By entering your flight or train number as well as the travel date, you can find out if you had been in the company of patients affected by the coronavirus.

A high-tech start-up from China is offering its chatbot service free to medical institutions, governments, and charities. This automated caller system is used to call potentially infected patients, help track them down and test them for the virus. It provides solutions based on their responses, such as suggesting quarantine if they have been exposed to the virus but have not yet developed any symptoms.

A major part of the issue surrounding coronavirus is that it's difficult to diagnose; the symptoms are so close to those of the common flu or pneumonia. The American surveillance company is working with a Boston startup called Buoy Health, whose algorithm helps tell whether a person has really been infected by the coronavirus, or just has the seasonal flu.

Another epidemic-monitoring company tracked the virus across affected countries a week before the outbreak was made public knowledge by government officials. It can also estimate the number of people infected in an outbreak, and the resulting social and political disruption.

Artificial intelligence is not going to cure coronavirus or help contain it. But it is useful in tracking and monitoring this pandemic, and in responding to it as well. These surveillance algorithms have been around for a long time, but for the first time in a global health-related outbreak they are proving to be useful, thanks to recent developments in machine learning and the availability of data.
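The simplest version of the signal these monitoring tools hunt for – an unusual jump in disease-related chatter – can be sketched with a rolling-baseline check. This is purely illustrative (the function, data and threshold are invented for the example); the companies described above combine multilingual news processing with far richer models:

```python
from statistics import mean, stdev

def flag_spikes(daily_mentions, window=7, threshold=3.0):
    """Flag days whose mention count jumps well above the recent baseline."""
    flagged = []
    for day in range(window, len(daily_mentions)):
        baseline = daily_mentions[day - window:day]
        mu, sigma = mean(baseline), stdev(baseline)
        sigma = sigma or 1.0  # guard against a perfectly flat baseline
        if (daily_mentions[day] - mu) / sigma > threshold:
            flagged.append(day)
    return flagged

# A week of quiet chatter, then a sudden surge in outbreak-related mentions.
mentions = [2, 3, 2, 4, 3, 2, 3, 2, 40]
print(flag_spikes(mentions))  # [8]
```

Real systems must also separate genuine outbreak signals from media echo and reporting gaps, which is where the machine learning described above earns its keep.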
https://www.crayondata.com/artificial-intelligence-battle-coronavirus/
Blockchain technology and the General Data Protection Regulation (GDPR) of the European Union are without doubt two of the most trending topics of 2018. Blockchain is the technology that underpins Bitcoin and other popular cryptocurrencies, providing mechanisms for decentralized security and trust in financial transactions, i.e. without involving any trusted third party. Over the last couple of years, blockchain technology has been incorporated into a variety of non-finance applications, especially in sectors like energy, industry, and healthcare. On the other hand, GDPR is a recently approved regulation which imposes new data protection requirements covering all European citizens, and applying to enterprises handling their data anywhere in the world. As such, it affects the way enterprises handle citizens' data, including the technologies they employ to collect, process and analyze datasets. This is where blockchains and GDPR meet: blockchains store and process data in distributed ledger infrastructures, and therefore their operation and use must comply with the GDPR.

GDPR is structured around some core principles, including the need for transparency, fairness, and lawfulness in the handling and use of personal data, as well as for limiting the processing of personal data to specified, explicit, and legitimate purposes. Under GDPR, data controllers and data processors must also minimize the collection and storage of personal data to what is adequate and relevant for their intended purpose, while at the same time ensuring the security, integrity, and confidentiality of personal data. Enterprises take GDPR seriously, given that the maximum fine for serious infringements can be as high as the greater of €20 million or four percent (4%) of an organization's annual global revenue. GDPR ensures that individuals have control over their personal data, being able to access, view and change them at any time.
Moreover, citizens have the "right to be forgotten", which means that they must be able to have their data deleted whenever they want. This right creates tension between GDPR and blockchain technology, given that public blockchains are immutable: once information is in the blockchain, it cannot be altered or deleted. Also, blockchains are by definition decentralized and not under the control of a single party such as an administrator. Without an administrator with "delete rights" on the blockchain, it becomes difficult to delete an individual's personal data. Likewise, all information in a blockchain is public, which makes it accessible to anyone as a means of preventing data manipulation. As a result, there are some obvious conflicts between a blockchain's operational characteristics (i.e. decentralization, immutability, transparency) and GDPR principles.

Nevertheless, blockchain technology also presents opportunities for boosting GDPR compliance. For example, blockchains have a public/private key system, which allows participants to send and receive data pseudonymously. In particular, the private key controls access to information, while the public one makes transactions addressable without linking them to elements that can identify personal data. Furthermore, the blockchain's decentralized nature alleviates the security and reliability vulnerabilities of centralized systems, which reduces the risk of data breaches and can therefore foster GDPR compliance.

Given the above-listed challenges and opportunities, when collecting and processing personal data in a blockchain, enterprises need to seek solutions that take advantage of the benefits without compromising compliance. To this end, they have to consider the obvious implication of the right to be forgotten on the design and operation of blockchain infrastructures: personal data cannot be stored directly on the blockchain.
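One widely discussed pattern that follows from this constraint is to keep the personal data itself off-chain, under the controller's authority, and record only a salted hash on-chain; deleting the off-chain record then leaves the on-chain entry unlinkable to the person. The sketch below is illustrative only — the names are invented, and the in-memory list and dictionary are stand-ins for a real ledger and datastore, not a production design:

```python
import hashlib
import os

off_chain_store = {}  # mutable storage controlled by the data controller
chain = []            # stand-in for the immutable, append-only ledger

def register(user_id, personal_data):
    """Keep personal data off-chain; anchor only a salted hash on-chain."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    off_chain_store[user_id] = {"data": personal_data, "salt": salt}
    chain.append({"user": user_id, "commitment": digest})

def verify(user_id):
    """Check the off-chain record against its on-chain commitment."""
    record = off_chain_store.get(user_id)
    if record is None:
        return False  # data erased: the bare hash no longer identifies anyone
    digest = hashlib.sha256(record["salt"] + record["data"].encode()).hexdigest()
    return any(e["user"] == user_id and e["commitment"] == digest for e in chain)

def forget(user_id):
    """'Right to be forgotten': delete off-chain, leave the ledger untouched."""
    off_chain_store.pop(user_id, None)

register("alice", "alice@example.com")
print(verify("alice"))  # True
forget("alice")
print(verify("alice"))  # False, even though the chain itself never changed
```

Whether a salted hash still counts as personal data under GDPR is itself debated, which is one reason the legal analysis discussed below matters as much as the engineering.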
To alleviate this limitation, alternatives such as keeping personal data off-chain while anchoring only cryptographic hashes or references on-chain, or encrypting the data and destroying the decryption keys upon an erasure request, can be considered. Beyond technical solutions for GDPR compliance, there is always a need for accompanying legal consulting before declaring a blockchain solution compliant. This is because the mapping of some GDPR concepts onto blockchain technology (e.g., data controller, data processor, third parties) is open to interpretation. Hence, legal experts need to verify the compliance of any solution, considering the roles and responsibilities of the various stakeholders during the blockchain's operation.

Building on blockchain's benefits for implementing GDPR compliance, a number of GDPR-compliant blockchain-based products and services already exist. For example, the Pillar project has implemented an open-source, multi-chain wallet that provides platform services for consumers, companies, and governments. Users of the Pillar wallet lock, control and protect their data in compliance with GDPR. Another example is the LogSentinel product, which offers secure, audit-proof logging. It provides data integrity and makes it impossible to manipulate the data without detection. Moreover, it provides GDPR compliance reports along with a built-in data processing register. A third example of a blockchain-based solution for GDPR is VOLTA, which leverages KSI blockchain technology and supports governance and compliance processes for managing personally identifiable information in line with GDPR. At the heart of the solution lies a technology that allows any type of electronic activity to be independently verified without the need for trusted third-party insiders or cryptographic keys. There are many more examples of products that leverage the capabilities of the blockchain in order to provide GDPR compliance solutions. They all tap into the opportunities that we have previously presented.
Overall, despite some obvious conflicts between GDPR principles and blockchain technology properties, it is possible to implement GDPR-compliant blockchains. However, implementing them is not only a technology issue, but also requires sound legal expertise. Furthermore, the properties of blockchain technology make it well suited to implementing GDPR compliance solutions, as the products listed above demonstrate. These products are probably just the beginning: blockchain is likely to become one of the primary GDPR compliance technologies in the years to come.
https://www.itexchangeweb.com/blog/interlinking-blockchain-with-the-gdpr-norms/
The straightest path to sustainable development is circular

The circular economy can stymie environmental degradation and climate change and reduce our reliance on finite resources, while providing economic opportunities and benefits and promoting sustainable development. But what do consumers think, and are organizations doing enough? We wanted to find out.

For the latest Capgemini Research Institute report, Circular economy for a sustainable future: How organizations can empower consumers and transition to a circular economy, we surveyed nearly 8,000 consumers across the US, UK, the EU, and APAC for major consumer-facing industries, and spoke with academics, industry experts, startups, and think tanks active in the field of circular economy.

We found that while consumers are aware of the problem of waste and resource depletion and interested in participating in circular economy initiatives and mindful consumption practices, they face significant roadblocks, especially in terms of convenience, access, cost, and information. Moreover, even though they are clearly on board when it comes to closing the loop on food and plastic waste, for example, they remain reticent in other areas, especially sharing, renting, leasing, or buying second-hand. This leads to the consensus, from both consumers and organizations themselves, that organizations simply aren't doing enough.

To scale their circular economy practices, organizations have to embrace circular design principles and identify business models that aren't driven by product sales alone. By rethinking their value and supply chains and collaborating more within their ecosystems and with governments, lawmakers, academics, think tanks, suppliers, vendors, clients, and innovative startups, organizations can push their circular initiatives forward.
By leveraging emerging technologies and promoting skill building, culture change, and accountability, they can lay solid foundations for a circular mindset internally and by providing information, building trust and awareness, and shifting mindsets, they can do the same for their consumers. For more information on the circular economy, download the report. Most importantly, be well and focus on the future you want.
https://www.capgemini.com/fi-en/research/circular-economy-for-a-sustainable-future/
After demonstrating a successful GPS spoofing attack against a drone (UAV – unmanned aerial vehicle) last June, Cockrell School of Engineering Assistant Professor Todd Humphreys and his student research team have now proved that a GPS flaw and a few relatively cheap tools can be used to hijack both ships and planes.

The results of their research were demonstrated aboard the "White Rose of Drachs", a 210-foot-long, $80 million yacht, while it cruised the Mediterranean. With a laptop, an antenna, and a custom GPS spoofer that cost only $3,000 to build, the team managed to create a false GPS signal that the crew unknowingly accepted as the correct one and used for navigation, and this resulted in the ship veering way off its original course.

"Professor Humphreys and his team did a number of attacks and basically we on the bridge were absolutely unaware of any difference," the ship's captain Andrew Schofield told Fox News, adding that no alarm systems were triggered during the demonstration.

With 90 percent of the world's cargo going across the seas, the implications of their research are huge. Attackers could run ships aground, make two ships collide, or even shut down a port – all with disastrous consequences. Humphreys also noted that given the extreme similarities between the navigation systems of ships and those of commercial aircraft, the same type of attack can be mounted against planes.

Compared to last year's research on similar attacks against drones, this latest one is more complex and sophisticated. "Before we couldn't control the UAV. We could only push it off course. This time my students have designed a closed loop controller such that they can dictate the heading of this vessel even when the vessel wants to go a different direction," Humphreys says. Whether he will once again be called to testify before the US Congress about his research remains to be seen.
His drone-hijacking attacks have so far garnered more attention from both the US political establishment and the nation's armed forces than this latest research, so for the time being he is trying to spread this information far and wide, and to make the world aware that this type of attack is easy and cheap to execute. In the meantime, The Economist has published a timely and interesting piece about GPS jamming, which supports Humphreys' claims about how simple it is to disrupt the workings of satellite positioning systems.
https://www.helpnetsecurity.com/2013/07/29/hijacking-ships-and-planes-with-cheap-gps-spoofers-and-laptops/
01. The participatory design process includes which of the following techniques?
a) Discovery, evaluation, prototype
b) Surveys, testing, release
c) Testing, review, redesign
d) Analysis, design, prototype

02. Why is website navigation that is easy to understand so important?
a) It ensures that site visitors can find what they are looking for quickly and easily.
b) It ensures visitors will be directed to exactly the information you want them to see.
c) It guarantees that visitors will purchase products.
d) Because search engines will not be able to index pages within a site if the navigation is difficult to use.

03. What is an advantage of the mind mapping approach to brainstorming?
a) It better captures and represents brainstorming ideas in the way that your brain conceives of them.
b) Creating a mind map in a computer program fosters an immediate connection between the hand, brain and map contents.
c) It helps to guide your thoughts on a clear path by representing your ideas in an organized, linear fashion.
d) Computer-based mind mapping software tools generate an accurate site map.

04. During the evaluation of user feedback from usability testing of a web site project, it is determined that users typically get lost within the site, get confused by "mystery meat" graphical links and are not able to complete tasks as directed. Which of the following UI design patterns would help correct this issue?
a) Navigation wizards
b) Image zoom
c) Navigation tabs
d) Progressive disclosure

05. A confirmation message is displayed to a customer after the purchase of a product is completed on an ecommerce site. This is an example of which user interface design principle?

06. Which of the following represents an SEO strategy of using short phrases of three to five key words?
a) Word cloud
b) Long-tail keyword
c) Word salad
d) Boolean operator keywords

07. You are consulting on the development of a complex Web site for an airline maintenance company.
It will contain a large library of technical information. Maintenance technicians will use the site to understand the requirements of their jobs, the physical structure of the airplanes, and other important information. They will need to refer to different areas of the site frequently. What kind of positional awareness tool should be included to help navigation through the site?
a) Site maps
b) Deep URLs
c) Breadcrumb trails
d) Descriptive page headings

08. When usability testing results are reported, what should that report include?
a) Data collection process, original design mockups, wireframes
b) Test goals, test subject demographics, conclusions
c) Test schedule, test subject name, screen shots
d) Test subject name, original design plan, release date

09. You are designing a website for a wedding planner. She wants all of the text on her site to be displayed in a flowery, elaborate typeface, similar to a wedding invitation. You advise her to consider using a simpler, more conventional typeface. Why?
a) Script and fantasy typefaces display poorly at smaller sizes, making them hard to read.
b) Text set in a fantasy typeface must be converted to images, which will make web page layout more difficult, and slower to load.
c) Script and fantasy typefaces are not allowed on websites; they are reserved for printed documents only.
d) Special script or fantasy typefaces may not be installed on her visitors' devices, and must be downloaded from a server as a custom web font.

10. An award-winning writer has published a series of poems on her personal website. One of the writer's students has reproduced these poems in their entirety in a college newspaper, which is sold for a small fee. Which type of intellectual property law has been violated?
b) Fair Use
<urn:uuid:00634a15-7621-4bab-a499-e7fe6aaeec06>
CC-MAIN-2022-40
https://www.edusum.com/ciw/ciw-user-interface-designer-1d0-621-certification-sample-questions
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00683.warc.gz
en
0.891812
831
2.53125
3
The Convergence of Technological Superpowers

AI & IoT and how they're transforming industries

Innovative new technologies are creating increasingly sophisticated ways for us to measure and understand the world around us in ways we never thought possible. Two technologies that are enabling innovators to rethink their approaches to solving problems, their business models and entire industries are artificial intelligence (AI) and the Internet of Things (IoT). Let us take a brief look at what these technologies are before we examine examples of how their convergence is driving transformation.

What is Artificial Intelligence (AI)?

AI is a wide-ranging branch of data science aimed at developing machines with the intelligence to perform tasks that normally require human intelligence (an equally wide-ranging set of tasks). AI is becoming increasingly pervasive, revolutionising many aspects of our lives in areas such as finance, customer service, healthcare and advertising, often in places that are not immediately obvious to the user, as AI can provide a seamless experience. At a basic level, AI works by a model identifying relationships, patterns and characteristics that exist within a data set and applying those observations to perform a specific task. Examples of AI and its sub-domain machine learning (ML) are becoming increasingly common across a variety of industries, from content and purchase recommendations in online media and shopping to fraud detection in banking. The ubiquity of software and the growth of the internet have resulted in an explosion of data, known as the 'big data problem', that organisations are struggling to manage. In recent years, the growing capability of AI has led more and more enterprises to explore how it can be used to help manage this data inundation, solve existing complex problems and tackle emerging ones.

What is the Internet of Things (IoT)?
IoT refers to networks of connected sensors and devices, such as smartphones, smart watches and other electrical devices, that can remotely collect and transfer data over a network without requiring human interaction. The 'connected' nature of IoT allows smart devices to communicate by exchanging data, integrating data from different types of sensors and performing analytics to derive valuable insights. These sensors are used to collect data and measure actions, events and conditions that occur in an environment in a way that has never been possible before. IoT can be thought of as a technological extension of our nervous system that connects us to our environment, essentially plugging us into the world around us and collecting information on things we know are there but have not been able to measure consistently or effectively in real time. However, the rise of IoT isn't without its challenges, as it potentially compounds the big data problem by creating more data than organisations know what to do with or how to act on. Today, IoT is in its relative infancy but is already making a considerable impact in areas such as consumer goods, supply chain and manufacturing, and even agriculture. One can only imagine the impact of IoT on our lives in 20 years, when the types of sensors, the data they collect and the methods of analysing and acting on this data become more sophisticated. The most compelling aspect of IoT is that when everything becomes connected with everything else, the possibilities become endless. This raises the question: when you combine two revolutionary technologies such as AI and IoT, how can they work together to create new possibilities?
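In practice, the data-collection role described above often boils down to a device periodically packaging sensor readings for transfer upstream. Here is a minimal sketch; the device name, reading range and message fields are invented for illustration, and a real deployment would hand the payload to an MQTT or HTTP client rather than print it:

```python
import json
import random
import time

def read_sensor():
    # Stand-in for a real temperature probe; values are invented
    return round(20 + random.uniform(-2.0, 2.0), 2)

def make_payload(device_id):
    # A typical minimal IoT message: who measured, what, and when
    return json.dumps({
        "device": device_id,
        "temperature_c": read_sensor(),
        "ts": int(time.time()),
    })

payload = make_payload("greenhouse-7")
print(payload)  # ready to hand to a network client for upstream transfer
```

The payload deliberately stays small: low-bandwidth links (and the analytics backends on the other end) favour compact, self-describing messages.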
Synergy of AI and IoT: The Perfect Combination

If IoT is seen as a technological nervous system connecting us to our environment and enabling extensive and varied collection of data across numerous events and contexts, from the heartbeat and blood pressure of humans to the wear and tear of machinery, then AI and machine learning are the brain making sense of this influx of stimulation: identifying patterns, relationships and correlations that humans couldn't detect or interpret at the required speed without intelligent assistance and vast processing power. Enterprises are increasingly incorporating AI into IoT applications to discover insights that can inform smarter decisions, whether innovating existing products, services and processes, creating new ones, or even changing entire business models. The intersection of IoT and AI creates profound new opportunities for brands to connect the dots between data in unprecedented ways and use these insights to enhance their customer value proposition, in both B2B- and B2C-focused sectors, in a myriad of ways. One of the most prominent areas in which the impact of IoT can be seen is consumer goods and home automation. Smartphone ownership is virtually ubiquitous, and smart devices such as Alexa and Nest are increasingly making their way into our homes in the form of voice assistants, security cameras, lighting and environmental controls, alarms and smart thermostats that help us manage our environment and better understand our homes and ourselves. For example, smart thermostats can learn our temperature preferences across different times of day, and even different rooms, then adjust accordingly. This capability is even being extended to more traditional appliances such as fridges, which can be temperature-controlled remotely, create grocery lists, alert users if doors are open, and even recommend recipes based on the ingredients in the fridge.
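The thermostat behaviour described above, learning a preferred temperature per room and time of day from manual adjustments, can be sketched very simply. This is an illustrative toy (a running average per room and hour), not how any particular vendor's product actually works:

```python
from collections import defaultdict

class ThermostatModel:
    """Learns a preferred temperature per (room, hour) from user adjustments."""

    def __init__(self):
        self.history = defaultdict(list)

    def record(self, room, hour, set_temp):
        # Every manual adjustment becomes a training example
        self.history[(room, hour)].append(set_temp)

    def suggest(self, room, hour, default=20.0):
        temps = self.history.get((room, hour))
        # Predict the average of past choices, or a default if unseen
        return sum(temps) / len(temps) if temps else default

model = ThermostatModel()
model.record("bedroom", 22, 18.0)
model.record("bedroom", 22, 19.0)
print(model.suggest("bedroom", 22))  # 18.5
print(model.suggest("kitchen", 8))   # 20.0 (no data yet)
```

Production systems would use richer features (occupancy, weather, season) and a proper learning algorithm, but the loop is the same: observe, summarise, predict.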
Most of these tools and applications can be monitored and controlled via an app, giving us unprecedented insight into, and control of, our homes even when we are not there. IoT and AI are also influencing our behaviour at home through their applications in the energy and water industries. In both cases, sensors can be deployed across the respective networks to collect data on user consumption, time of day and network performance, which can be used to generate insights into usage trends and to monitor for and detect faults in parts of the network in need of repair. The benefits are not just for companies, though: customers can use these insights to make conscientious yet practical changes to their consumption behaviour, reducing wasteful energy use and bills. These technologies are even finding their way into the last place you might expect to find them: food. IoT and AI are being used in agriculture to unearth valuable insights into the optimal conditions for growing high-quality, high-yield produce and crops, monitoring factors such as soil quality, moisture levels, acidity and temperature, and enabling farmers to take smart, proactive measures to maximise quality and production. The combination of sensor technology and AI in agriculture may be the key to more sustainable food production and distribution, and to feeding the planet.

Augmenting Human Capability

So how should these two technology superpowers be combined to solve problems and make our lives better? Well, this is a question that we humans must answer. One key limitation of the metaphor of AI as a brain making sense of data generated by an IoT nervous system is that artificial intelligence lacks the agency or sentience to understand the real-world context behind data, or human goals and objectives.
This lack of understanding or reason means that whilst AI has the capability to generate valuable insights through advanced analytics that map patterns in data, it lacks the intelligence to tell us what problems we should be solving, what data we should be collecting, what patterns we should be searching for, and why. It simply performs the task it is told to do, a task allocated by humans. The implication is that AI, by itself, is not the panacea for all our problems and cannot miraculously divine valuable insights without direction. IoT might provide us with the raw material, and AI is a tool, albeit a sophisticated one, to process that raw material into something useful, but these tools must be aligned with the user. The person who wields them matters. Vision and intent are intrinsic to a successful IoT/AI strategy, and sophisticated IoT hardware and AI cannot compensate for the blind implementation of either. Ultimately, the responsibility falls to humans to identify the problem, define the desired outcome, and select or craft the right tool for the task. The emerging use cases and potential upsides of combining AI and IoT will cause business leaders to rethink their products, services, operations, business models and the value they can offer customers. After all, when everything is connected the possibilities become endless, which necessitates a strong and clear vision to avoid enthusiastically stumbling in the wrong direction.
<urn:uuid:dc486324-b6e2-46cc-88e9-e4b559333f6c>
CC-MAIN-2022-40
https://aimagazine.com/ai-strategy/convergence-technological-superpowers
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00683.warc.gz
en
0.944476
1,611
2.96875
3
All Windows folders must have two entries: the directory "." (denoting the current directory) and ".." (denoting the parent directory). Because names consisting only of dots would be confused with these entries during path parsing, Windows normally refuses to create them: as seen in the command above, you cannot create a file or folder named "...". All of this can be bypassed using the ::$INDEX_ALLOCATION trick. Using the folder name twice also creates nested folders: for example, you can pass the command mkdir "....\....\" to create a directory and another one inside it. This will enable you to enter the folders, store files, and execute programs from the same location. It is not possible to enter the folder using its name alone; after creating files in the folder, you'll be forced to use the cd "....\....\" syntax. Please note that if you use "cd ." in the folder, it will take you one directory up because of the confusion in paths. You may not be able to open the same directory from the Graphical User Interface (GUI). In some cases, if you stay in the same directory and maintain the same path, double-clicking the folder has no effect. In other cases, you may notice that you are in the folder but the path shown in Explorer changes; when opening the folder several times, you may see many dirs repeated in the path of the graphical interface. No matter how many folders you enter, the GUI may not show all the files inside, and you also cannot open the folder by passing "C:\Sample\Test\...\...\" in the input field. NOTE: Deleting the folder will crash Explorer because it will not stop counting the files being deleted; the best advice is to avoid doing this on your working system. Using the GUI to search for files may also not work; for example, searching for Sample123.txt will keep searching forever without anything to show. Searching for the same file via the command prompt gives a positive result, as shown below.
However, most administrators prefer to use PowerShell, where the search becomes an endless loop: the command Get-ChildItem -Path C:\Test -Filter Sample123.txt -Recurse -ErrorAction SilentlyContinue -Force will iterate forever. Some programs may not work correctly either. For example, if you place some malware in such a directory and run a test with an antivirus solution, nothing will happen, because some scanners are unable to interpret these names and paths: when scanning C:\Test\ for viruses, the malware inside C:\Test\...\ will be skipped. Some Python programs that use the function os.walk(), however, do manage to handle these folders correctly. Please note that creating a directory junction pointing to its own parent folder will not lead to an endless loop in either cmd or PowerShell.
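For comparison, the recursive search that the Get-ChildItem command performs can be written with Python's os.walk(). The sketch below demonstrates it on a throwaway directory tree, since the dotted-folder trick itself only applies on Windows NTFS:

```python
import os
import tempfile

def find_file(root, name):
    """Recursively search for a file name below root,
    roughly like Get-ChildItem -Filter <name> -Recurse."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if name in filenames:
            hits.append(os.path.join(dirpath, name))
    return hits

# Demo on a temporary tree (real use would be e.g. find_file(r"C:\Test", "Sample123.txt"))
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
open(os.path.join(root, "a", "b", "Sample123.txt"), "w").close()
print(find_file(root, "Sample123.txt"))  # one hit, deep in the tree
```

Unlike the GUI search, os.walk() visits each directory entry exactly once and terminates, which is why it copes with the unusual folder names.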
<urn:uuid:70c1f9d0-8ba9-49b4-9860-193b24e197d9>
CC-MAIN-2022-40
https://blog.foldersecurityviewer.com/windows-how-to-create-files-that-cannot-be-found-using-the-dots/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00683.warc.gz
en
0.896964
698
3.640625
4
Broadband is the name given to any fast, permanent Internet connection. It can be delivered by cable, satellite, mobile, fiber optics and ADSL, a technology for high-speed Internet access over telephone lines. There are many Internet service providers in India, such as BSNL, Airtel and Sify, with BSNL the leading one among them. When the Internet revolution began, users accessed the net via dial-up and a "modem". A modem is a piece of hardware that converts digital signals from your computer into analog signals that can travel down a telephone line, and vice versa. This could be painfully slow, and it tied up your phone line. A broadband connection makes the Internet more enjoyable as it is much faster. This is because a broadband connection can download more chunks of data ("bits") simultaneously.
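To see what "much faster" means in practice, here is a small back-of-the-envelope calculation. The file size and link speeds are illustrative, and real-world throughput is always somewhat below the nominal line rate:

```python
def download_seconds(file_mb, link_kbps):
    """Time to move a file over a link: bits to send / bits per second."""
    bits = file_mb * 8 * 1000 * 1000      # megabytes -> bits (decimal units)
    return bits / (link_kbps * 1000)      # kilobits/s -> bits/s

dialup = download_seconds(5, 56)         # 5 MB over a 56 kbps dial-up modem
broadband = download_seconds(5, 10_000)  # the same file over 10 Mbps broadband
print(f"dial-up: {dialup:.0f} s, broadband: {broadband:.0f} s")
```

A 5 MB file that takes roughly twelve minutes over dial-up arrives in a few seconds over a modest broadband link, which is the whole appeal.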
<urn:uuid:28bdda72-6047-4de0-809d-5144452725b4>
CC-MAIN-2022-40
https://www.cyberkendra.com/2013/01/why-is-bandwidth-different-from.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00683.warc.gz
en
0.916989
188
2.84375
3
Do You Know About Optical TAP (Traffic Access Point) Cassettes?

The need for real-time network traffic monitoring in today's intelligent data center has become compelling. Data center network administrators need to gain better visibility of their networks, optimize the performance of mission-critical applications and, more importantly, secure their networks. Optical traffic access point (TAP) cassettes are hardware tools that allow you to monitor your network: they make a 100% copy of your network's data, allowing your monitoring tools to see every bit, byte and packet. In fact, the optical TAP is one of the most efficient ways to monitor traffic and network link quality in data center and telecom carrier networks. Have you ever used a fiber optic TAP in your network? Let's take a closer look at passive network TAPs in this article. An optical TAP is an access point installed in a network that provides real-time monitoring of ports. Typically, the data is used to monitor for security threats and performance issues and to optimize the network. An optical TAP is a passive device that integrates TAP functionality into the cable patching system; it requires no power of its own and does not actively interact with other components of the network. Instead of two switches or routers connecting directly to each other, the optical TAP sits between the two endpoint devices, connected directly to each of them. Traffic is then copied, and once the traffic is tapped, the copy can be used for any sort of monitoring, security, or analytical use. Thus, the fiber optic TAP is a key component of any visibility system. Two types of optical TAPs are available: active network TAPs and passive network TAPs. The active network TAP uses electricity for operation, while the passive network TAP does not. The active network TAP is mainly used for applications that require manipulation of the signal sent to the monitoring port, which is needed only for very specialized applications.
Passive network TAPs are much more common in enterprise data centers and are used for applications that require simple monitoring. In more detail, a passive network TAP provides a simple and powerful way to monitor optical networks, and because it requires no power and has no electrical components, it cannot become a point of failure when deployed in a production network. Passive network TAPs are highly reliable and require no maintenance. In all, a passive network TAP provides access to the data flowing across a network without creating either a place to corrupt data or a prospective point of failure. Optical fiber is designed to send light from a transceiver through a thin glass cable to a receiver on the other end. Instead of connecting directly to each other, each of the two endpoint nodes (switches, routers, databases, etc.) is connected to the network ports on the optical TAP cassette. An optical TAP usually integrates both network ports and monitoring ports in one module and includes an optical splitter, which "splits" off a percentage of the input power and sends it to a monitoring device. As shown in the figure below, we can connect the optical TAP to Switch X and Switch Y via the network ports, and to the monitoring device via the monitoring ports. Through the splitter, part of the TX data of Switch X is transmitted to the RX of Switch Y and another part is transmitted to the monitor; similarly, part of the TX data of Switch Y is transmitted to the RX of Switch X and another part to the monitor. The monitored traffic is thus separated into two transmit-only (TX-only) signals: one copy from endpoint A (Switch X), and one copy from endpoint B (Switch Y). From the picture above, we can see that the signal is split into two parts by the splitter. So what is the proportional share of light on each path (to the network and to the monitor), the so-called optical TAP split ratio?
The split ratio is written as a combination of two percentages. The first number is the network percentage; the second is the monitor percentage. They always add up to 100 percent. Generally, the TAP split ratio is available as 50/50 or 70/30. A 50/50 split ratio indicates that 50% of the light budget coming into the TAP from the network is passed along to the end device, and 50% is diverted to the monitoring device. In a 70/30 split ratio, 70% of the light budget is passed along to the end device and only 30% is passed along to the network monitoring device.

Optical TAP Split Ratio

If the path to your monitoring device is short and direct, you might need to keep more light on your primary link to keep both signals readable. At the edge of readability, you will experience network performance degradation due to retries and errors even if the link does not fail completely. If the network TAP does not split off enough light, the monitor link will fail to deliver enough light for the monitoring appliance to register an accurate signal. Low light levels on the monitor link can lead to false conclusions of data errors on the network link, or there may not be enough light for the appliance to register any signal at all. Before you connect the fiber optic cable to an optical TAP, make sure that the network TAP's characteristics are compatible with the cables. At present, network TAPs are mainly available with two port types: LC and MTP. Taking the MTP network TAP as an example, follow the steps below to connect an optical TAP cassette to your network:

Fiber Optic TAP

To connect the TAP cassette to the network (in-line links):
1. Connect one MTP network port to switch A using an MTP cable.
2. Connect the other MTP network port to switch B using an MTP cable.

To connect the network TAP to the monitoring device:
1. Connect one network TAP monitor port to the monitoring device using an MTP cable, for switch A monitoring.
2. Connect the other network TAP monitor port to the monitoring device using an MTP cable, for switch B monitoring.

Data center networks are becoming more and more complex, making it harder to troubleshoot and balance traffic within LANs and SANs. Optical TAPs allow network and storage engineers to gather valuable data analytics, giving you a much fuller understanding of your data flow patterns and allowing you to plan your technology integrations accordingly. FS.COM provides a series of 10G, 40G and 100G network TAPs, available in single mode or multimode with a 50/50 or 70/30 split ratio. For more information, please contact us via email@example.com or call 24/7 Customer Service: 1 (718) 577 1006.
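As a footnote, the split-ratio percentages discussed above translate directly into an insertion loss on each leg of the TAP. A quick sanity check of the arithmetic (idealised: real TAPs add some excess loss and connector loss on top of the pure split loss):

```python
import math

def insertion_loss_db(fraction):
    """Idealised split loss on a TAP leg receiving `fraction` of the input light.
    Real TAPs add excess loss and connector loss on top of this figure."""
    return -10 * math.log10(fraction)

for network, monitor in [(0.5, 0.5), (0.7, 0.3)]:
    print(f"{int(network * 100)}/{int(monitor * 100)} split: "
          f"network leg {insertion_loss_db(network):.2f} dB, "
          f"monitor leg {insertion_loss_db(monitor):.2f} dB")
```

So a 70/30 TAP costs the production link only about 1.5 dB but leaves roughly 5.2 dB of loss on the monitor leg, which is exactly the readability trade-off described above.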
<urn:uuid:140fb5ca-7f7e-48b8-8ec7-59c0417f435f>
CC-MAIN-2022-40
https://community.fs.com/blog/do-you-know-about-optical-tap-test-access-point-cassettes.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00083.warc.gz
en
0.914207
1,405
2.734375
3
The Internet of Things (IoT) is revolutionizing our world as connected sensors provide the data that help us improve food production, optimize supply chains, combat climate change, and more. Once data is gathered from local sensors, it has to be transmitted back to the Internet so that it can be analyzed and key decisions made. A challenge arises, however, when these sensors are deployed in maritime applications or in rural and remote areas that lack the cell or WiFi coverage needed to transmit that data. Over 90% of the Earth's surface area is not covered by a terrestrial connectivity solution. With 41 billion IoT devices expected to come online by 2027, there is an enormous need for an accessible IoT connectivity solution that does not have geographic boundaries. Founded in 2016, Swarm Technologies (now part of SpaceX) is at the forefront of providing affordable, 100% global IoT connectivity through its network of satellites in low Earth orbit (LEO). Satellite connectivity has historically been prohibitively expensive for all but the largest companies. Spending hundreds of dollars a month per device is just not feasible for a farmer who has hundreds or thousands of acres to cover, for example. Swarm's ultra-small satellites (just 11 x 11 x 2.8 cm, or about the size of a grilled cheese sandwich) enable the company to provide satellite connectivity at up to 20x lower cost than legacy providers. This opens up a nearly infinite number of new use cases that require reliable connectivity in even the most remote corners of the globe but are also cost-conscious. FreeWave's versatile suite of IoT solutions is exactly the type of technology that can benefit from Swarm's low-cost, global connectivity. From agriculture, to oil & gas, to utilities, to unmanned vehicles, FreeWave's products need to be able to reliably send small amounts of data from remote, often unconnected areas. Satellite connectivity, however, had always been too expensive for FreeWave to consider.
The result was an inability to deploy as many remote devices as they would like, limiting their growth potential. When remote deployments did occur, some data ended up being lost altogether. Swarm's low-cost satellite network is changing that. The Fusion Satellite is FreeWave's first Swarm-enabled IoT product. It integrates Edge Compute capabilities and software to connect industry-standard equipment in remote and environmentally harsh regions. With FreeWave's expanded Fusion offering, coupled with the FreeWave Edge data management software, Fusion Satellite can get stranded data where and when it's needed from even the most remote locations. "Reliable, fully global connectivity is essential to many businesses being able to operate in rural or remote locations, and eliminates the constraints of cellular coverage," says Sara Spangelo, CEO and co-founder of Swarm (now Sr. Director at SpaceX). "We're excited to see the new markets and applications that FreeWave can support now that their devices can transmit from every point on Earth!" The FreeWave satellite solution, leveraging the Swarm component, delivers the lowest-cost service ROI to our global customers. Thanks to companies like Swarm, truly global connectivity is available and accessible like never before. It's supercharging the growth of IoT companies and opening the door to new sensing, tracking, and monitoring applications around the world.

About Swarm Technologies

Swarm provides the world's lowest-cost two-way satellite communications network. Founded in 2016, Swarm is committed to making data and communications accessible to everyone, everywhere on Earth.
Swarm’s uniquely small satellites enable the company to provide network services and user hardware at the industry’s lowest cost and deliver maximum value to customers across a range of industries including maritime shipping, agriculture, energy, and ground transportation, providing the highest value for low-bandwidth use cases such as asset tracking and sensor monitoring. To learn more, visit www.swarm.space. See how the FreeWave Fusion Satellite solution connects you to the data even in remote locations.
<urn:uuid:3b63d3c7-3d29-48f2-ab1c-40a2df77c672>
CC-MAIN-2022-40
https://www.freewave.com/swarm-guest-blog/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00083.warc.gz
en
0.930068
829
2.78125
3
IBM's Quantum team just proved that even today's noisy quantum computers are better at storing computations than classical computers. The company is touting the find as a new flavor of quantum advantage: available scratch space. "Through our research, we're exploring a very simple question: how does the computational power differ when a computer has access to classical scratch space versus quantum scratch space?" the team explained. The advantage, they said, is that unlike classical bits, which are constrained to the "on" or "off" position when they interact with a gate, qubits aren't limited to two positions; they have what the team calls a "larger space of values". A qubit carries not just a classical bit value but also complex amplitudes that determine the probabilities of the possible measurement outcomes. "Here, for the first time that we are aware of, we report a simultaneous proof and experimental verification of a new kind of quantum advantage," the IBM team said in their announcement of the research. "Specifically, we show that qubits, even today's noisy qubits, offer more value than bits as a medium of storage during computations." In their paper published in Nature Physics, in addition to showing their math, they outline how they proved their scratch-space theory in the lab by asking both systems to "find a majority of three bits," something a single classical bit can't do. "We armed the limited classical computer with access to random Boolean gates to further increase its computational capabilities," they wrote. "But even with access to this randomness, the classical computer can only succeed 87.5 percent of the time whereas a perfect, noiseless quantum computer could succeed 100 percent of the time." The quantum computer got a little help too: the team added fractional CNOT gates to boost efficiency and proposed circuit hardware tweaks to make it easier and cheaper to connect additional qubits to the computational qubit than SWAP operations allow.
"Now, instead of SWAP-ing qubits to interact them with the computational qubit, we can reset the physical qubits that are adjacent to the computational qubit and re-initialize them to a new qubit state," they said. "With high fidelity mid-circuit reset, we show an improvement of 3 percent over the SWAP circuits." The next step for the IBM team will be to focus on the hardware solutions. "This result has an important implication: it shows that quantum computers, even noisy quantum computers, are powerful computational tools," the IBM Quantum team said.
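The "majority of three bits" task mentioned above gives a concrete feel for why scratch space matters. As a rough illustration of the storage argument (a brute-force sketch, not IBM's actual protocol or proof), one can check by exhaustive search that no deterministic machine with two internal states, i.e. one classical bit of scratch, classifies all eight three-bit inputs correctly, while three states suffice:

```python
from itertools import product

def majority(bits):
    return int(sum(bits) >= 2)

def best_success(n_states):
    """Best number of the 8 three-bit inputs any deterministic machine
    with n_states of scratch memory classifies correctly."""
    states, best = range(n_states), 0
    keys = [(s, b) for s in states for b in (0, 1)]
    for table in product(states, repeat=len(keys)):       # every transition table
        delta = dict(zip(keys, table))
        for start in states:                              # every start state
            for out in product((0, 1), repeat=n_states):  # every readout map
                correct = 0
                for bits in product((0, 1), repeat=3):
                    s = start
                    for b in bits:
                        s = delta[(s, b)]                 # consume one input bit
                    correct += out[s] == majority(bits)
                best = max(best, correct)
    return best

print(best_success(2))  # < 8: one classical bit of scratch is not enough
print(best_success(3))  # 8: a three-valued counter solves it exactly
```

Intuitively, after two input bits the machine must distinguish "no ones", "one one" and "two ones", which is three cases, so two states cannot carry enough information; randomness only lifts the classical success rate to the 87.5 percent quoted in the article.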
<urn:uuid:3df55666-bf35-4d8f-971d-b189876de2a1>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/ibm-touts-a-new-kind-of-quantum-advantage/amp/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00083.warc.gz
en
0.931567
564
3.078125
3
Computer programming was once incredibly time-intensive, requiring knowledge of languages like Python or Java to build even the most basic applications. From debugging to mathematical skills, the effort and commitment necessary to become proficient at computer programming have always been a barrier for most people, leading to the creation of low-code software-as-a-service (SaaS) solutions to simplify the process. Early no-code and low-code SaaS solutions included Microsoft FrontPage and Adobe Dreamweaver, designed so non-programmers could build websites. These initial solutions allowed users to work without writing HTML, via drag-and-drop features. Nevertheless, websites born in this period were static, lacking advanced functions. People ultimately saw the value of low-code SaaS applications, transforming them into what they are today.

What are low-code SaaS solutions?

As the fastest-growing sector of the IT world, no-code and low-code SaaS solutions aim to shorten the application development process. (Pega, 2022) The former empowers anyone to create apps without programming knowledge, while the latter minimizes programming effort. (Forbes, 2021) Specifically, low-code SaaS solutions employ visual tools and model-driven processes, reducing the amount of hand coding, enabling democratized access to development and fast-tracking app delivery. (Pega, 2022) These SaaS solutions vary from modest, free software to much more sophisticated, enterprise-grade engines for complex tasks. No-code applications help users build websites, mobile apps, and games without code. These tools are ideal for non-programmers, and even for busy programmers, as they leverage modules and templates, including drag-and-drop graphical features, removing the need for line-by-line coding.
(The Next Web, 2022)

The rise of no-code and low-code SaaS solutions

The popularity of no-code and low-code SaaS solutions and tools is most likely due to the surging demand for software development services coupled with the lack of skilled developers. Indeed, nearly 44% of renowned companies will experience a considerable skills gap in the upcoming years. (McKinsey, 2020) Low-code SaaS solutions are pivotal in compensating for the lack of programmers and software specialists. Moreover, the interest in no-code and low-code development solutions stems from the desire to reduce the effort, resources, and time required to create new applications. (Forbes, 2021) These solutions are useful for medium- and small-sized businesses, including specialty service providers and small entrepreneurs, helping them accomplish their goals quickly with minimal IT resources. Such benefits are also vital today, as companies seek to cut as many expenses as possible during a looming economic downturn.

The current status of no-code and low-code solutions

Spending on no-code and low-code SaaS solutions continues to grow, with Gartner predicting it will reach $171 billion in 2022. (Gartner, 2021) These tools also keep receiving improvements and upgrades via AI, machine learning, and data analytics. Likewise, big data, IoT, and 5G are other areas in which these solutions can be enhanced to build more comprehensive, higher-level applications.

Build robust, customizable enterprise-grade solutions with IntelePeer's Marketplace

No matter the circumstances or economic environment, organizations must meet their IT and development needs rapidly and affordably. IntelePeer's Marketplace provides companies with an intuitive bundle of multi-application communication solutions to support advanced business workflows and integrations economically and at scale.
XSS the most widely-used attack method of 2019

The cyber attack method most widely used to breach large companies in 2019 was cross-site scripting (XSS), according to research. The hacking technique, in which cyber criminals inject malicious scripts into trusted websites, was used in 39% of cyber incidents this year. It was followed by SQL injection and fuzzing, used in 14% and 8% of incidents respectively. Other widely-used methods include information gathering and business logic attacks, although both featured in less than 7% of incidents.

With 75% of large companies targeted over the last 12 months, the report by Precise Security also revealed that the key motivation behind cyber crime has been the opportunity for hackers to learn: almost 60% of hackers conducted cyber attacks in 2019 because they present a challenge. Other prominent reasons for hacking a company's systems include testing the security team's responsiveness and winning the minimum bug bounty offered. 'Recognition' ranked sixth in the list of motivations, cited by just 25% of hackers. Curiously, 40% also said they preferred to target companies that they liked.

Digging into industry-specific insights, additional research published this month revealed the most prominent attack methods faced by sectors within the UK economy. The most prevalent hacking technique in the business, finance and legal sectors, for example, was macro malware embedded in documents, according to statistics compiled by Specops Software. Retail and hospitality firms, meanwhile, suffered mostly from burrowing malware, present in 51% of attacks, as did governmental organisations, registering 37% of incidents. The healthcare industry was susceptible mostly to man-in-the-middle attacks, in which communications between two computer systems are intercepted by a third party.
Distributed denial of service (DDoS) attacks were the most common form of attack faced by the technical services industry, with 58% of incidents using this method.

As for how these attacks are conducted, the Precise Security report showed that 72% of platforms used as a springboard for cybercrime are websites. WordPress, for example, is a prime target due to its massive user base: 90% of hacked CMS sites in 2018 were powered by the blogging platform. Application programming interfaces (APIs) were the second-most targeted platform in 2019, at the heart of 6.8% of incidents, with statistics showing Android smartphones are usually involved in such attacks.
Singapore adopted its Personal Data Protection Act (PDPA) back in 2012, before the EU's General Data Protection Regulation (GDPR) made its appearance on the legal stage. It came into full force on 2 July 2014 and governs the collection, use, disclosure and care of personal data. It also regulates telemarketing practices through the Do Not Call Registry, which allows Singaporeans who sign up for it to opt out of marketing messages on their telephones, mobile phones and fax machines.

While it may be considered progressive for its time and contains much of the same terminology that has since become the staple of data protection regulations across the world, the PDPA falls short of the GDPR's hard-line approach to privacy and personal data protection. It has been criticized for its many exemption clauses and has no special requirements for sensitive categories of data such as those relating to health, race or ethnicity. This particular failing was not without consequences: in June 2018, Singapore suffered its worst data breach to date when the personal data of 1.5 million healthcare patients, including that of its Prime Minister, Lee Hsien Loong, was compromised. The Personal Data Protection Commission (PDPC), tasked with enforcing the PDPA, fined Integrated Health Information Systems (IHIS), the technology agency running the healthcare institutions' IT systems, S$750,000 (approx. $540,000) and SingHealth, the data controller, S$250,000 (approx. $181,000). A probe report found that the data breach was primarily caused by weak cybersecurity practices.

The PDPC has since announced its intention to update the PDPA's requirements, most notably by adding mandatory data breach notifications and data portability to the legislation. It has also issued a number of guides to assist organizations in understanding its approach to regulating Singapore's personal data protection regime.
The most recent of these, released on 22 May 2019, cover data protection management, active enforcement and managing data breaches.

Who does the PDPA apply to?

The PDPA has extraterritorial reach and applies to organizations collecting personal data from individuals in Singapore, whether the companies are located in the country or not. The Act does not apply to the public sector, which is governed by other rules.

What is personal information under the PDPA?

Personal data under the PDPA is defined as data, whether true or not, that can be used to identify an individual, either by itself or together with other information to which the organization has or is likely to have access. Business contact information, when used for business purposes and not in a personal capacity, is not protected by the PDPA. Neither is personal data about an individual that has been in existence for at least 100 years, or personal data about individuals who have been deceased for over 10 years.

As previously mentioned, the PDPA does not include special requirements for sensitive data. However, the PDPC has recently issued new guidelines for the protection of National Registration Identity Card (NRIC) numbers and similar national identification numbers. When these come into force on 1 September 2019, it will be illegal for organizations to collect, use or disclose NRIC numbers, or to make copies of identity cards, except in specifically permitted situations: where there is a legal requirement, where a consent exception under the PDPA applies, or where it is necessary to establish or verify an individual's identity to a high degree of fidelity.

The thorny issue of consent

The PDPA's consent requirements are much more relaxed than those of more recently adopted regulations such as the CCPA and GDPR. It requires express consent from individuals to collect personal data, but includes no fewer than 18 exemptions to the rule, which allow organizations to collect personal data without consent.
While some of these are familiar (for example, where personal data is publicly available, or is being collected for national security purposes or for journalistic reasons), the list also includes more contentious exemptions, such as data collected for evaluative purposes or in the interest of the individual. When it comes to using personal data without consent, there are 10 exemptions, and for disclosure without consent, 19.

The PDPA goes a step further than exemptions and also accepts deemed consent as valid consent. Deemed consent covers data provided voluntarily by an individual to an organization when it is reasonable for the individual to do so; such data can then be passed on to another organization for a particular purpose.

Singaporeans have the option of withdrawing consent, even in the case of deemed consent. However, any legal consequences of the withdrawal have to be borne by the individual, who must be informed of these likely consequences by the organization from whom they request the withdrawal. Companies are also not obligated to inform third parties of consent withdrawals, so it falls to the individual to seek them out and withdraw consent from them as well. Withdrawal of consent cannot be requested if the collection, use or disclosure of the information is required by law, or if it is necessary for legal or business purposes.

The PDPA offers limited rights of access to and correction of information collected by organizations. Individuals can request access to personal data held by an organization, and to information concerning its use or disclosure in the last year, but this right is subject to exceptions. And while individuals can request that organizations correct their personal data, companies can decide, on reasonable grounds, not to do so. The PDPA does not currently include a right to be forgotten or data portability among its requirements.
However, the PDPC recently started a six-week public consultation to seek views on proposals to introduce data portability and data innovation provisions into the PDPA.

Cross-border data transfers

Organizations can transfer personal information from Singapore to other countries only in compliance with the PDPA, or if they have applied for and received an exemption from the PDPC. Those that need to transfer data across borders must ensure that the destination country has a level of data protection comparable to the standards set forth by the PDPA. Data can also be transferred to other countries if organizations have received the individual's consent, if data transfer agreements have been put in place, or if the transfer is necessary in certain prescribed circumstances.

If organizations tamper with personal data or hide information concerning its collection, use or disclosure, they face a fine not exceeding S$50,000 (approx. $36,000). Attempts to hinder a PDPC investigation can lead to a fine of up to S$100,000 (approx. $72,000). Companies are also liable for their employees' actions in the eyes of the PDPA, whether they are aware of those actions or not. The maximum penalty allowed by the PDPA is S$1,000,000 (approx. $725,000) and, as the SingHealth data breach showed, the PDPC is not shy about issuing it.
Frequently Asked Questions

There are nine main personal data obligations under Singapore's PDPA:
- Consent obligation
- Purpose limitation obligation
- Notification obligation
- Access and correction obligation
- Accuracy obligation
- Protection obligation
- Retention limitation obligation
- Transfer limitation obligation
- Openness obligation

In practice, complying with the PDPA involves:
- Notifying individuals of the purposes of collecting or processing personal data and seeking their consent
- Responding when customers ask about their personal data
- Ensuring the collected personal data is accurate and complete
- Protecting and securing the personal data held by the organization
- Disposing of any personal data that is no longer needed
- Ensuring the protection of personal data when transferring it overseas
- Appointing a data protection officer
- Closely managing service providers that handle personal data
- Communicating data protection policies and practices
- Checking the Do Not Call Registry if the company conducts telemarketing

Singapore-based organizations are also subject to the EU's GDPR if they:
- Process the personal data of EU citizens in relation to the offer of goods or services to individuals in the EU; or
- Monitor the behavior of individuals in the EU.

Good data security practices that support compliance include:
- Training employees on data security
- Building a data protection strategy
- Undertaking regular risk assessments
- Implementing data protection tools such as antivirus, firewall, and Data Loss Prevention (DLP) software
- Running regular backups of important and sensitive data
- Encrypting sensitive data

Download our free ebook: a comprehensive guide for all businesses on how to ensure GDPR compliance and how Endpoint Protector DLP can help in the process.
Cybercrimes are on the rise, and the hackers and scammers behind these attacks are chomping at the bit for a shot at your system. New research from Check Point Security shows another 30% rise in cyberattacks over the past few weeks; your computer is more vulnerable now than ever. Whether hackers are tricking you into clicking phishing links or directly attacking you through a malicious exploit, there's no shortage of ways to break into your computer. Tap or click here to see the phishing red flags to watch out for.

Instead of fretting over how you'll be attacked by cybercriminals, follow these six steps to make sure your system is protected.

1. Check your router to see if it's been compromised

One trick used by hackers to break into routers is "DNS hijacking," which involves switching the servers your router normally connects with to malicious ones. This forwards your internet traffic to fake versions of websites that steal your data or spy on your activities.

To see if the DNS settings on your router are normal, tap or click here to use a free DNS scanning tool. WhoIsMyDNS.com identifies the DNS server and IP address that performed the scan and checks them against its database to see if the server has been reported for any suspicious activity. Once the results load, check whether the DNS server correlates with your internet service provider. If it doesn't, your router may have been hijacked.

2. Keep everything up to date

Security threats are constantly evolving, which is why you need to keep your browser updated. Updates help protect you from the latest spreading viruses and attacks. Tap or click here to find out if you need to update your browser.

Even more important, update your operating system regularly. Windows releases frequent (though sometimes buggy) updates, and missing one can have serious consequences for your security. The same goes for Macs.

How to update Windows

Most Windows PCs download and install updates automatically by default.
If you haven't changed your automatic update settings since powering up your computer for the first time, you might not need to change a thing. If you've turned automatic updates off, you can update manually:

- Open Settings, followed by Update & Security.
- Click Check for updates.
- If there is an update available, click Download & install.
- Note: Make sure you've backed up your data before continuing.

How to update your Mac

Apple's macOS receives all its updates through the Mac App Store. Here's how to find and download the most recent version of macOS:

- Open the App Store app.
- Click Updates in the toolbar.
- Click the Update button next to the macOS update to download and install it.
- Your Mac will restart when it is finished updating.

You can also reach the App Store's Update tab by clicking the "Software Update…" button under "About This Mac." Find this by clicking the Apple button in the menu bar at the top of your screen.

3. Test your firewall

Most computers have a firewall active right off the bat, which prevents others from seeing your system online. Even if cybercriminals know where your computer is located, a firewall prevents them from getting inside and doing any damage.

First, make sure your firewall is on. On Windows:

- Open Settings > Update & Security.
- Choose Windows Security from the left-hand menu.
- Choose Firewall & Network Protection to open the firewall menu.
- Your system will tell you whether your firewall is on or not. If it's off, you can toggle it on or reset the settings to default by clicking Restore firewalls to default.

On a Mac:

- Open System Preferences, then click Security and Privacy.
- Click the lock icon to make changes and enter your admin username and password.
- Click Turn on Firewall.

Then tap or click here to test that your firewall is actually working. These port scans will make sure you're keeping bad actors out of your system.

4. Remove extra browser add-ons and host files in Windows

Most browser extensions are safe-to-use tools that enhance your internet experience, but some are malicious. Regularly comb through your list of extensions and remove any you don't recognize or don't use anymore.

In Chrome: Visit the Chrome Web Store menu to see a list of all your currently installed extensions. Remove them by clicking Remove from Chrome. Click the Library tab and delete the extension from there as well.

In Firefox: Click the three-line menu button and click Add-ons, followed by Extensions. Scroll through the list of extensions and click the three-dot icon next to any extension you want to remove. Select Remove to delete it from your browser.

In Safari: Choose Safari > Preferences, then click Extensions. To turn off an extension, deselect its checkbox. To uninstall an extension, select it and click the Uninstall button.

Windows users should also check the hosts file to see if attackers have made any unusual changes. This file can override your DNS and redirect URLs to different locations, such as malicious websites. Press the Windows key + R and paste C:\Windows\System32\drivers\etc\hosts into the Run box. In the pop-up menu that appears, select Notepad to open the file. Scroll through and note any unusual or garbled-looking text. Copy the data contained here into another text document as a backup, and delete the unusual entries. Click File, then Save to make the changes.

5. Check if anyone else is using your Wi-Fi

Network intruders can slow down your speeds and interfere with your data. Your connection is private, so it's worth knowing who else might be logged in and using it. To see all the devices connected to your network, open your router's settings menu, which you can access by typing your router's IP address into the address bar of your web browser.
You can usually find this address on the sticker attached to the bottom of your router, but most routers use the default address of 192.168.1.1. Then log in with your username and password. This is either the default username and password for your router, or a unique login you created when you set up the router for the first time. If you're unsure what your login is, you can call your ISP for assistance.

When you're logged into your router settings, look for an option labeled something like "Attached Devices," "Connected Devices" or "Client List." This will show you all the gadgets using your web connection, so scroll through the list carefully and note anything that you don't recognize. Usually, you can kick unknown devices off from this menu as well.

6. Hide your Wi-Fi network from public view

By default, your router broadcasts its network name (SSID) for you and your guests to find easily. But this also means anyone looking for your network can attempt to join. To make your network truly private, stop it from broadcasting its name; that way, only people who know your router's exact name can attempt to join.

To do this, log into your router's settings and locate the menu for wireless settings. Look for the broadcasting option for your SSID, which is most often enabled by default, and toggle it off. Make sure you write down your SSID before disabling the broadcast; otherwise, you might find yourself locked out of your own network.

It doesn't take long to secure your system from outside threats. With a little work, you can stay on top of emerging dangers and keep your data safe. An ounce of prevention is worth a pound of cure.
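The hosts-file review in step 4 can also be scripted. Below is a minimal sketch (not an official tool; the sample entries are made up): it parses hosts-file text and flags any active entry that is not a standard localhost-style mapping, since attackers sometimes use this file to redirect real domains to malicious addresses.

```python
# Flag hosts-file entries that are not standard local mappings.
# Anything flagged deserves a manual look.
LOCAL_ADDRESSES = {"127.0.0.1", "0.0.0.0", "::1", "255.255.255.255"}

def suspicious_entries(hosts_text: str) -> list[str]:
    flagged = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        ip = line.split()[0]
        if ip not in LOCAL_ADDRESSES:
            flagged.append(line)               # non-local mapping: review it
    return flagged

sample = """
# Example hosts file (hypothetical entries)
127.0.0.1   localhost
::1         localhost
203.0.113.9 mybank.example.com   # suspicious redirect
"""
print(suspicious_entries(sample))
```

On a real machine you would feed it the contents of C:\Windows\System32\drivers\etc\hosts (Windows) or /etc/hosts (macOS/Linux) instead of the sample string.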
What is File Integrity Monitoring (FIM)?

File Integrity Monitoring (FIM) is a control or process that compares the current state of operating system and/or application software files against a known baseline to validate the integrity of the files (i.e. looking for inconsistencies). The integrity verification uses a cryptographic hash function to calculate an initial checksum of a file, which is then compared with a newer checksum calculated from the current state of the same file. In essence, a checksum is a small block of data that is derived from another block of data.

In the example below, a change from "jumped" to "hopped" results in a completely different checksum:

[the cow jumped over the moon] > [cryptographic hash function] > checksum A
[the cow hopped over the moon] > [cryptographic hash function] > checksum B (different from checksum A)

In addition, irrespective of the size of the block of data used to derive the checksum, the generated checksum will always be of the same length:

[cow] > [cryptographic hash function] > checksum of a fixed length
[the cow jumped] > [cryptographic hash function] > checksum of the same fixed length

FIM can be either a manual or an automated process. The Unix cksum command will generate a checksum value for each file given in its arguments (e.g. cksum test.txt). Alternatively, an automated file integrity monitoring process can be set up to run randomly, at a predefined period, or in real time, generating alerts if any inconsistencies are found.

What does File Integrity Monitoring do?

Updates to system configurations, files, and file attributes across your IT infrastructure are a normal day-to-day business activity, required for continuing business function. However, within those daily updates there is the possibility of hidden changes that could impact the integrity of data and/or system configuration files.
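The two checksum properties described above, that a one-word change produces a completely different checksum and that the checksum length is fixed regardless of input size, can be verified in a few lines of Python. SHA-256 is used here as the hash function; the article's own examples do not name one.

```python
import hashlib

def checksum(data: str) -> str:
    """Return the SHA-256 checksum of a block of data as a hex string."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

a = checksum("the cow jumped over the moon")
b = checksum("the cow hopped over the moon")

print(a != b)   # one word changed -> completely different checksum
# SHA-256 always yields 64 hex characters, regardless of input size:
print(len(checksum("cow")) == len(checksum("the cow jumped")) == 64)
```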
Consequently, if accidental, these changes could reduce the security posture of the business or, if deliberate, indicate a security breach in progress. FIM is a simple way of identifying any changes that could impact the security of the business IT infrastructure.

What are the PCI compliance objectives of File Integrity Monitoring?

- Requirement 10.5: Audit data must be secured so it cannot be altered. Here, file integrity monitoring will ensure that any changes to existing audit data generate alerts.
- Requirement 10.5.5: File integrity monitoring will also ensure that existing log data cannot be changed without generating alerts. However, it is important that adding new data does not cause an alert to be generated.
- Requirement 11.5: For this requirement, file integrity monitoring needs to alert you to unauthorised modifications of critical system files, configuration files, or content files, and should perform critical file comparisons at least weekly.

In addition, Requirement 12.10.5 makes it clear that any incident response plan must include alerts from security monitoring systems, including file integrity monitoring systems.

What are the problems with File Integrity Monitoring?

File object selection

This is probably the most difficult question to answer, but the PCI DSS can help. Take a look at Requirements 10.5 and 10.5.5: the audit data repository would be the first place to deploy file integrity monitoring, even if access to audit data is restricted to specific roles or named individuals. Ensuring that any modifications to audit data are identified and alerted upon is the priority. Then there is Requirement 11.5: alerting on changes to critical system files, configuration files, or content files. Critical files do not normally change regularly, so any modification could indicate a system compromise.
Most FIM solutions come pre-configured with critical files for conventional operating systems; other critical files, such as those for business applications, must be evaluated and defined by the business.

Lastly, there is Requirement 10.8. While this is a requirement for service providers, it makes sense to include it within all file integrity monitoring activities, whether you are a service provider or a merchant. For example, you could deploy file integrity monitoring across anti-virus/malware clients on local machines. Obviously, as the anti-virus updates, a lot of alerts would be generated; conversely, the absence of an alert could indicate that a particular anti-virus client had not been updated.

False positives/false negatives

This is probably the next most difficult problem to address and, if left unchecked, can lead to alerting fatigue. There is always a trade-off between too many alerts that serve no benefit and not receiving enough appropriate alerts. In essence, this boils down to file object selection, understanding your PCI DSS obligations, what you want FIM to achieve, and what the FIM solution can achieve. Cheap is not always best; in this situation, it might be beneficial to have a chat with a QSA company.

Leading on from the above, alerting fatigue is always a real danger with too many false positives. Again, this comes down to file object selection; making use of file integrity monitoring products that come pre-configured with critical files for operating systems helps enormously with this problem.

How do you effectively deploy File Integrity Monitoring?

In essence, effective deployment requires the business to have a complete and detailed understanding of its PCI DSS environment, and of the requirements it must meet to remain (or become) PCI DSS compliant. As mentioned, most file integrity monitoring products come pre-configured with critical files.
However, other critical files, such as those for custom applications, must be evaluated and defined by the business; in this situation, it might be beneficial to seek expert advice.

Furthermore, to ensure that the FIM solution provides an effective alerting mechanism, false positives, false negatives and alerting fatigue must be kept to a minimum. To achieve this, most file integrity monitoring solutions allow for the development of a system baseline and/or provide learning functionality which, over time, fine-tunes the response of the solution to meet the requirements of the business. Another way of achieving this is to use a service provider that offers a Security Operations Centre (SOC) and can do all the heavy lifting for you.

If implemented and deployed appropriately, file integrity monitoring is a powerful way of checking the well-being of your PCI DSS environment. Even if you are not required to deploy file integrity monitoring, the tool offers an insightful understanding of the operation and status of your IT and data assets.

Nettitude has been a registered Qualified Security Assessor (QSA) company for over 10 years. We have a reputation with our clients for taking a pragmatic and realistic approach to PCI DSS, and our history of delivering PCI DSS assessments means we have likely faced many of the challenges your organisation must overcome before. Find out more about our PCI DSS services here.
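At its core, the baseline-and-compare process described in this article can be sketched in a few lines of Python. This is a simplified illustration, not any vendor's product: real FIM tools also track file attributes and permissions, run on a schedule or in real time, and feed an alerting pipeline. The file paths and contents below are made-up examples.

```python
import hashlib

def baseline(files: dict[str, bytes]) -> dict[str, str]:
    """Record a checksum for each monitored file (path -> content)."""
    return {path: hashlib.sha256(content).hexdigest()
            for path, content in files.items()}

def detect_changes(base: dict[str, str], files: dict[str, bytes]) -> list[str]:
    """Compare current state against the baseline; return alert strings
    for modified, deleted, or newly appeared files."""
    alerts = []
    current = baseline(files)
    for path, digest in base.items():
        if path not in current:
            alerts.append(f"DELETED: {path}")
        elif current[path] != digest:
            alerts.append(f"MODIFIED: {path}")
    for path in current:
        if path not in base:
            alerts.append(f"NEW: {path}")
    return alerts

# Take an initial baseline, then re-check after a file is tampered with.
base = baseline({"/etc/passwd": b"root:x:0:0",
                 "/etc/ssh/sshd_config": b"Port 22"})
print(detect_changes(base, {"/etc/passwd": b"root:x:0:0:evil",
                            "/etc/ssh/sshd_config": b"Port 22"}))
```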
An anomaly is defined by the Oxford Dictionary as "a thing, situation, etc., that is different from what is normal or expected". To apply this definition to networks, we first need to determine the networking constructs for which identifying what is not normal or expected is of business and operational significance. We then need to determine the normal behavior of these networking constructs, and what is not normal, on an ongoing basis.

Networking Constructs: Infrastructure and Flows

Given the multi-layer, multi-domain, multi-cloud nature of modern networks, with increasing adoption of software-defined networking and unpredictable internet circuits, there is a wide range of networking constructs. We define a networking construct as a physical or virtual network infrastructure element, or a flow, that can either be as granular as possible or represent an aggregation dynamically defined by an operator. Each networking construct has live instances. Consider the two simplest networking constructs an operator may be interested in for one device: (1) the aggregate traffic on all interfaces of that device, and (2) the traffic on each interface of that device. In case (1) there is one instance; in case (2) there are as many instances as there are interfaces.

Network infrastructure constructs include everything from switches, routers, firewalls, load balancers, servers, VMs, containers, hardware components, physical and virtual interfaces, data center interconnects, hybrid cloud interconnects, and SD-WAN tunnels to cloud virtual network components. Some of these constructs may be more important to network operations teams than others. Instances of some constructs are readily discoverable from network protocols, while others require a fairly flexible operator definition or intent; for example, an operator may define a networking construct for the aggregate traffic exiting a data center.
Flow networking constructs include individual flows as well as flexible operator-defined aggregations, such as the set of flows that map to an application, all flows on each VM or DMZ, all flows to a specific port, or all flows with TCP retransmits. The good news is that instances of these networking constructs exhibit a wide range of metrics and events that enable the determination of their normal behavior. This includes environmental, data plane, hardware, control plane, packet, and system metrics, as well as network events, alarms, and logs, learned via a very wide range of standards-based and vendor-proprietary protocols. It is important to note that different instances of a specific type of networking construct, e.g., an interface or a hybrid cloud interconnect, may have different normal behavior. Hence a network anomaly refers to what is not normal or expected on a specific instance of a networking construct.

Machine Learning Based Network Anomaly

Determining the normal behavior of the instances of these networking constructs requires learning from a very wide range of distributions, taking into account time of day and seasonality, and looking at multiple metrics and events together. Operators cannot be expected to define what is not normal: it places a significant burden on them, and in many cases they may not know what normal is. In addition, what is not normal or expected needs to be high fidelity, actionable, and usable by operations teams. Threshold-based or rule-based techniques fall well short of these goals. Machine learning can be used if the challenge of autonomous training at scale is overcome. In fact, several machine learning techniques are not a good fit either, as they are not autonomous and impose a significant operator tax. However, machine learning as a discipline is very well suited to be a key building block in addressing these goals, as it enables learning from patterns at scale.
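As a concrete illustration of learning per-instance normal behavior, here is a deliberately simplified sketch, not Augtera's implementation: the instance names, the z-score threshold, and the hour-of-day bucketing used to capture seasonality are all illustrative assumptions.

```python
from collections import defaultdict
import math

class SeasonalBaseline:
    """Learn a per-instance, per-hour-of-day baseline for one metric
    and flag observations that deviate strongly from it."""

    def __init__(self, z_threshold=3.0, min_samples=30):
        self.z_threshold = z_threshold
        self.min_samples = min_samples
        # (instance, hour) -> [count, mean, M2] for Welford's online variance
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])

    def observe(self, instance, hour, value):
        """Update the baseline and return True if `value` is anomalous."""
        count, mean, m2 = self.stats[(instance, hour)]
        anomalous = False
        if count >= self.min_samples:
            std = math.sqrt(m2 / (count - 1))
            if std > 0 and abs(value - mean) / std > self.z_threshold:
                anomalous = True
        # Welford's online update: learn from the new sample either way
        count += 1
        delta = value - mean
        mean += delta / count
        m2 += delta * (value - mean)
        self.stats[(instance, hour)] = [count, mean, m2]
        return anomalous
```

Feeding each (instance, hour, value) sample both updates the learned baseline and checks the sample against it, so no operator-defined thresholds are needed; a production system would of course learn across many metrics and distributions jointly rather than one metric with a single z-score rule.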
These observations lead us to the following definition: a network anomaly refers to an actionable unexpected or abnormal behavior on a specific instance of a networking construct, where the normal behavior is autonomously learned using machine learning. To learn more about Augtera's innovation and use cases around AI-based detection of operationally relevant network anomalies, please contact us.
In the days of apps, web services, and cloud computing, where information and data are shared among many individual applications, APIs are the building blocks. Without APIs, most company processes, especially those that cross company borders, would not function properly. It is therefore no surprise that cybercriminals have identified APIs as one of the most lucrative targets for retrieving sensitive information. In an API security attack, the objective is mostly to exploit the API for data or other malicious purposes.

There are many ways to attack API security. Some of the most common are SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). Let's look at each of these types of attacks and how you can prevent them.

- SQL Injection

An SQL injection is a type of API security attack that targets databases. The attacker inserts malicious code into an SQL statement in order to gain access to data or to alter it for their own purposes. Preventing SQL injection attacks is relatively simple. The easiest way is to use parameterized queries, where placeholders stand in for dynamic values and the actual values are supplied when the query is executed. This ensures that the dynamic values cannot be interpreted as SQL code.

- Cross-Site Scripting (XSS)

XSS is a similar type of attack, but instead of injecting malicious code into a database, XSS targets a web page or web application in order to steal user data or hijack their session. This is particularly dangerous since the user might be tricked into revealing sensitive information. Ways to prevent XSS include a web application firewall (WAF) that can detect and block XSS attacks, and a content security policy (CSP), which specifies what content from what sources the browser is allowed to load.
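The parameterized-query defense described under SQL injection above can be sketched with Python's standard sqlite3 module; the table, data, and query are invented for illustration, but the placeholder mechanism is the real one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(name):
    # UNSAFE (do not do this): building the query by string concatenation
    # lets the input be interpreted as SQL, e.g. "' OR '1'='1" dumps rows.
    # SAFE: the ? placeholder keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [('alice@example.com',)]
print(find_user("' OR '1'='1"))  # [] - the injection attempt matches nothing
```

Most database drivers and ORMs expose the same placeholder mechanism, so the pattern carries over directly to other stacks.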
- Cross-Site Request Forgery (CSRF)

In a CSRF attack, an attacker forces an end user to execute unwanted actions on a web application. CSRF attacks target state-changing requests, not theft of data. With social engineering, an attacker may trick users into executing actions of the attacker's choosing. If the victim is a normal user, a successful CSRF attack can force the user to perform state-changing requests such as transferring funds or changing their email address. If the victim is an administrative account, CSRF can compromise the entire web application.

A good practice for preventing CSRF attacks is to include a token in all POST requests. The token should be unique to each user and should not be guessable. When a form is submitted, the token is compared to the one stored in the user's session. If they don't match, the request is rejected.

API security is critical

As the examples above show, API security is a critical issue for SMBs and enterprises that expose an API. Attacks on APIs can lead to data breaches, loss of customer trust, and reputation damage. It is therefore important to understand that these attacks can be prevented and mitigated. The right mitigation depends on the specific details of the attack, but some important measures work against any attack and protect your APIs better overall:

- Using encryption and authentication measures. A common way is to use HTTPS with SSL/TLS, which protects all communication between the client and the server. You can also use a digital certificate to prove that the server is who it says it is.
- Monitoring your API for suspicious activity. Use a web application firewall (WAF) to monitor traffic and identify suspicious requests; be aware, though, that WAFs require ongoing maintenance and regular updates. You can also use a log monitoring service to collect and analyze your API logs for anomalous activity.
- Responding quickly to any incidents. Cybersecurity is not only about detecting and identifying weaknesses and risks; every organization also needs a plan for what to do in case of an attack. This is true for API attacks as well. Make sure you have a well-defined process in place for reacting to an incident.
- Good API documentation. To make sure that your API is well documented, you can use auto-generated documentation tools, write clear and concise comments in your source code, and use consistent names for your API elements. Good documentation also ensures that developers understand how to use the API properly, which significantly reduces the risk of an attack caused by misconfiguration.
- Implementing rate limiting to prevent excessive or abusive requests. A good rate-limiting strategy depends on your API's specific needs and usage patterns, but common strategies include limiting the number of requests that can be made per unit of time or per unit of data (e.g., per MB).

Protection can be achieved

As you can see, although APIs are increasingly attacked by cybercriminals, it is fairly straightforward to prevent these attacks, or at least make them as difficult as possible for the attacker. If you follow some basic principles in designing your API, both in general development and, most importantly, in basic cybersecurity hygiene, you will go a long way toward protecting your APIs. An online tool like the widget mentioned below will give you even better security. We recommend establishing solutions for business logic security testing in your development to ensure smooth integration into your development processes. Besides the added layer of security, this also makes it easy for your developers to design their APIs securely right from the start, which, after all, is the best protection you can get.
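The per-unit-of-time rate-limiting strategy mentioned above can be sketched as a token bucket. This is a minimal illustration: the capacity and refill rate would be tuned to your API, and a real deployment would keep one bucket per client or API key.

```python
import time

class TokenBucket:
    """Allow up to `capacity` requests in a burst, refilled at
    `rate` tokens per second; excess requests are rejected."""

    def __init__(self, capacity, rate, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.clock = clock
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The injected clock keeps the sketch testable; `time.monotonic` is the sensible default because, unlike wall-clock time, it never jumps backwards.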
BLST Security has made it possible to upload your JSON log file through an online widget and get results for free. After a free signup, you can even view the Params table and gain deeper insight into your API. BLST Security also offers an API security testing service to scan your API for vulnerabilities that could be exploited by malicious actors. Even better: you can try the widget directly, right here.
If you type 'flat Earth' into Google, you'd be joining a group of people who have helped triple searches for the term over the last couple of years. In fact, a recent YouGov poll found that only around two-thirds of Americans aged between 18 and 24 believe that the Earth is round.

Although the idea that the Earth is flat has been scientifically discredited, there seems to be a growing belief in the conspiracy theory. And it's getting more traction than some of the other conspiracies out there, like chemtrails (which proposes that a plane's long-lasting condensation trail is actually made up of chemical or biological agents). Interest in most of these other far-fetched theories remains stable, but the flat-Earth movement is growing, particularly in America.

And it has some high-profile supporters. From basketball players to musicians, rappers to TV hosts, a number of celebrities are jumping on the flat Earth bandwagon. So what's causing a renewed interest in something that's been scientifically disproven for the past two thousand years or more? What does it say about social media? And how did we actually establish that the world is round in the first place?

Rounding out the world

Once upon a time, it made sense for people to believe that the Earth was flat, says University of Melbourne cartographer Chandra Jayasuriya. Ships would sail off toward the horizon and often never return, and those left behind didn't have access to information outside their communities. "Their view was egocentric and geocentric. They lived in a village that was the centre of their existence," she says. "The further away from the village they travelled, the more hostile the environment became." Greek philosophers established that the Earth was round as far back as the third century BC, but it wasn't until the 15th century that this became commonly accepted.
The first scientific estimates of the Earth’s circumference were made by the Greek mathematician and geographer Eratosthenes in 240 BC. He noted that on the 21st of June that year, in a town called Syene (near modern day Aswan), the reflection of the sun could be seen in a deep well, meaning that it was directly overhead. But in Alexandria, around 800 kilometres away and almost directly north of Syene, at noon on the same day, the angle of the sun was about seven degrees – or one-50th of a circle. If the Earth was actually flat, the angle would be identical in both places. “From this, he concluded that the circumference of the Earth must be 50 times the distance between Syene and Alexandria,” Ms Jayasuriya adds. “This gave him a figure that was very close to the actual circumference as we know it now.” In 150 AD, Ptolemy’s treatise Geographia laid out a revolutionary system of assigning co-ordinates, expressed in degrees of latitude and longitude, to locations around the world. The mathematician and astronomer assigned these coordinates to more than 8000 places across the known world. Even though many of the measurements weren’t accurate, Ptolemy’s concept of ‘global mapping co-ordinates’ – used to this day – was based on the theory that the Earth was and is, indeed, round. “Although Ptolemy’s original map didn’t survive, the text was rediscovered around 1300 AD and cartographers were able to recreate the map”, says Ms Jayasuriya. 
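Eratosthenes' reasoning is short enough to reproduce directly; the 7.2-degree angle and 800 km distance used here are the commonly quoted approximations of his measurements.

```python
angle_deg = 7.2     # sun's angle from vertical at Alexandria at noon (Syene: 0)
distance_km = 800   # approximate Syene-Alexandria distance

fraction_of_circle = angle_deg / 360             # 7.2/360 = 1/50
circumference_km = distance_km / fraction_of_circle

print(circumference_km)  # 40000.0, close to the modern value of about 40,075 km
```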
As well as observations of the sun and its shadows, Ms Jayasuriya says many scientists throughout history continued to gather observations and evidence that the Earth is spherical, including:

- That we see the top of a ship's mast coming into port before the entire ship
- That all other planets and celestial objects are spheres
- That during a lunar eclipse, the Earth's shadow on the moon is curved

Distrusting the experts

So why, despite overwhelming scientific evidence that the Earth is an "oblate spheroid" – a sphere that's squashed at its poles and swollen at the equator – is the flat-Earth movement gaining traction in the 21st century? Well, in part, according to School of Culture and Communication lecturer Dr Jennifer Beckett, it's due to a general shift towards populism and a distrust of the views of experts and the mainstream media.

"It's really about the power of knowledge, and that increasing distrust in what we once considered to be the gatekeepers of knowledge, like academics, scientific agencies, or the government," Dr Beckett says. In this kind of environment, "it becomes really easy for once-fringe views to gain traction. You get a bunch of people around you who are constantly reaffirming your belief."

Dr Beckett also notes that the burgeoning movement speaks to how so-called social media "influencers" can now hold more sway than an expert in the field. "That's often because they tend to be better storytellers," Dr Beckett says. "And there's an element of authenticity there; people naively think, 'Oh, they're a real person, so it must be true'."

The flat Earth ecosystem

Dr Beckett notes that the flat Earth community uses various social media platforms in distinct, overlapping ways in order to create a kind of ecosystem around their beliefs.
"YouTube becomes a content hub, Facebook becomes an administrative one-stop shop for that hub, and Twitter continually pushes out the messaging," she says, likening YouTube to a sort of alternative documentary channel for flat earthers. "It's a really interesting beast … they can have their daily or weekly TV show in the same way that we go to David Attenborough."

YouTube is a more powerful social media tool than Facebook or Twitter because it's a "high context" platform, Dr Beckett says, where users can stream themselves with an immediacy and intimacy that's lacking from text or image-based platforms. "It's kind of like feeling like you have direct access to David Attenborough after watching one of his documentaries. Being able to have a conversation with him, then have him respond in the next episode to your concerns or your question." And unlike TV, on YouTube you can go searching for videos by people who agree with your view of the world. Or, in this case, the Earth.

Dr Beckett says that as we increasingly rely on social media for entertainment, we are becoming "affect addicts", looking for the next hit of anger, happiness or other intense emotions. And it's very easy for misinformation to circulate in this environment. Many flat earthers endorse the idea that the UN logo is actually a flat Earth map, for example. But Ms Jayasuriya explains that its appearance is the result of 'projecting' a 3D sphere onto a 2D plane. Because there is "no perfect way to project a 3D sphere onto a 2D surface", cartographers produce maps using different 'projections' for different uses. The UN logo is a particular projection centred on the North Pole.

Getting the facts for critical thinking

So, the question remains: why does this theory persist in 2018 in the face of science, and even photographic evidence? Well, it also comes back to thinking critically about the information that's out there. Particularly online.
"Look, flat earthers are actually employing Cartesian doubt; this is the philosophical idea that the world outside the self is subject to uncertainty," Dr Beckett says, referring to a method of sceptical thinking popularised by René Descartes, the French philosopher, mathematician, and scientist. "But I'd say the best way to do your research on whether a story is correct is to actually go to the mainstream media, to go to those scientific agencies and see what they're saying.

"Academics are academics not because they're trying to pull the wool over people's eyes, but because we spend a lot of time training and thinking deeply about these issues," says Dr Beckett. "You know, a lot of time, work and effort has gone into perpetuating the notion that the Earth is a globe… perhaps, that's a sign that it is."

• Anders Furze is film critic in residence at The Citizen and Program & Communications Coordinator at the Centre for Advancing Journalism at the University of Melbourne. This article was first published on Pursuit. Read the original article.
Researchers Develop 'Atomic Soccer' to Reposition Atoms (PhotonicsOnline)

Scientists at MIT, the University of Vienna, and several other institutions have developed a method that can reposition atoms with a highly focused electron beam and control their exact location and bonding orientation. The finding could ultimately lead to new ways of making quantum computing devices or sensors, and usher in a new age of "atomic engineering". The team includes MIT professor of nuclear science and engineering Ju Li, graduate student Cong Su, Professor Toma Susi of the University of Vienna, and 13 others at MIT, the University of Vienna, Oak Ridge National Laboratory, and institutions in China, Ecuador, and Denmark.

The very narrowly focused electron beam, about as wide as an atom, knocks an atom out of its position, and by selecting the exact angle of the beam, the researchers can determine where it is most likely to end up. "We want to use the beam to knock out atoms and essentially to play atomic soccer," dribbling the atoms across the graphene field to their intended "goal" position, he says. "Like soccer, it's not deterministic, but you can control the probabilities," he says. "Like soccer, you're always trying to move toward the goal."
Research Shows Seniors Using Video Chat are Uplifted

People who have family across the country or across the world, or who have lost loved ones, are at higher risk of social isolation and depression. Social isolation and depression are common in older adults: in 2015, research showed that 5% of adults 50 years or older lived with major depression.

Communication Technology Combats Depression

A new study by researchers at OHSU in Portland, Oregon indicates that communication technology could help address depression in older adults. In particular, they looked at four online communication technologies: video chat, email, social networks, and instant messaging. Over a two-year period, the researchers followed people 60 years and older who used each of these communication channels. They found that video chatting with friends and family held the most promise in reducing the risk of depression among seniors. Those who used email, instant messaging or social media platforms like Facebook had rates of depressive symptoms similar to older adults who did not use any communication technologies at all.

Researchers are not surprised by these results. After all, video chatting means engaging in face-to-face interactions rather than passively scrolling through feeds. Video chat platforms like Skype or FaceTime allow people to connect directly with family and friends all around the world.
10 Common Types Of Malware And How To Combat The Threat

The days of simply relying on your computer's built-in antivirus software are over. Cyberattacks have evolved in sophistication and are now a bitter reality with a ubiquitous presence, owing to the proliferation of devices. One of the most common types of cyberattack is malware. In 2019, Kaspersky's web antivirus platform identified more than 24 million 'unique malicious objects'. This number will only continue to increase with the accelerated pace of digital transformation in recent months, especially after Covid-19.

What is Malware?

Malware is shorthand for malicious software. According to Wikipedia, malware is any software intentionally designed to cause damage to a computer, server, client, or computer network. It can infect computers and devices in several ways and comes in a number of forms. Since its birth, malware has found several avenues of attack, including email attachments, malicious advertisements on popular sites (known as malvertising), infected apps or USB drives, phishing emails and text messages, and fake software installations.

Why do Cybercriminals Use Malware?

There are various reasons cybercriminals use malware, including:
• To trick a user into providing personal data
• To steal a user's bank, credit card, or other financial data
• To gain control of multiple computers to launch Denial-of-Service (DoS) attacks
• To infect computers and use them to mine bitcoin or other cryptocurrencies

The ultimate motive of a malware attack is usually financial gain.

10 Most Common Types of Malware

1. Trojans aka Trojan Horses: A Trojan, just as the name suggests (hint: the Trojan war), disguises itself as legitimate software with the purpose of tricking you into executing malicious software.
A user may find a pop-up that reads 'the system is infected' and instructs them to run a program to clean it. The user takes the bait, without knowing that the program is a Trojan.

2. Viruses: Viruses are designed to damage the target device by corrupting data or completely shutting down the system. They require human action to infect devices and are often spread through email attachments and internet downloads.

3. Rootkits: Rootkits enable unauthorized users to gain remote access to your computer without being detected. Because this attack type has control over your computer, your endpoint protection is often blocked from doing its job. Rootkits are commonly employed for ad fraud: they can open invisible browsers and click on ads to generate income for the attacker.

4. Ransomware: The name says it all. Hackers launch an attack that encrypts your important files and data, blocking your access to them. The hackers then demand a ransom in return. Worst of all, even if you pay the ransom, you may not get the data back.

5. Adware: As the name suggests, adware is a type of malware designed to automatically deliver advertisements to users to generate revenue for its creator. Adware doesn't tend to steal data like most other forms of malware, but it can be extremely frustrating, as the user is forced to see ads they would prefer not to.

6. Spyware: Spyware is malware used to spy on your computer activity. Malicious actors use spyware to keep tabs on people they know or in surgical attacks against celebrities, government officials, and business people.

7. Keyloggers: A keylogger is software or a hardware-based program that monitors the keyboard activity of the user (hence the name). Cybercriminals use keyloggers to steal personally identifiable information, financial data, passwords, and even media files in order to profit from them.

8. Botnets: Botnets are networks of infected devices that work together under the control of a hacker.
Botnets can be used to carry out phishing attacks, send out spam, or launch Distributed Denial of Service (DDoS) attacks.

9. Worms: Unlike viruses, worms are self-replicating and spread without end-user action. They spread by themselves and destroy systems, devices, networks, and the connected infrastructure.

10. Fileless Malware: Fileless malware is a type of malicious software that uses legitimate programs to infect a computer. It does not rely on files and leaves no footprint, making it challenging to detect and remove.

How to Combat the Malware Threat?

Unfortunately, finding and removing a malware program can be a fool's errand unless you are well trained in malware removal and forensics. As an organization, you cannot rely on any one step or solution. For example, activating a firewall may prevent cybercriminals from entering your network, but it cannot prevent an employee from unintentionally clicking a malicious link in an email. You therefore need to adopt a multi-layer approach to combat the threat of malware. These layers may include activating a firewall, using anti-malware and antivirus software, periodic end-user training, email filtering, patch and update management, and network monitoring, to name a few.
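One of the layers above, anti-malware scanning, can be illustrated in its simplest static form: hashing files and comparing the digests against a blocklist of known-bad hashes. This is only a sketch under stated assumptions: the blocklist entry below is a made-up placeholder, and real anti-malware products combine signatures with heuristics and behavioral analysis.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-bad files (placeholder value)
KNOWN_BAD = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path, chunk_size=1 << 16):
    """Hash a file in fixed-size chunks so large files never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(path):
    """Return a verdict for one file based on the static hash blocklist."""
    return "MALICIOUS" if sha256_of(path) in KNOWN_BAD else "clean"
```

Static hashing is easily evaded (changing a single byte changes the digest), which is precisely why this must be just one layer among the several listed above.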
The objectives of a preparedness program are to safeguard life, conserve property, maintain the continuity of operations, prevent environmental contamination, and protect reputations and relationships. Emergency management, business continuity, IT disaster recovery, and crisis management are common terms for programs that accomplish these objectives. Prevention and mitigation programs, including occupational health and safety, fire prevention, physical/operational security, cyber/information security, environmental protection, enterprise risk management, and crisis communications, also have roles in achieving these objectives.
This post lists some of the salient terms and components related to F5 LTM. These components are an indispensable part of the system and are used frequently while working on F5 LTM.

Nodes: A node is represented by the IP address of the physical server on which an application is hosted. It can be a physical or logical (for example, VMware) server in the internal network. It can be described as a configuration object represented by the IP address of the server.

Pool Members: A pool member is a service running on a node, represented by the IP address of the node and the service (port) number. For example, if you have server 192.168.1.1 and the application is listening on port 80, then the pool member is 192.168.1.1:80. A node can host multiple pool members, such as 192.168.1.1:80 and 192.168.1.1:443. In short, a pool member is a node and service port to which BIG-IP LTM can load balance traffic.

Pool: A pool is a logical grouping of pool members that represents an application. Each pool can have a different load balancing method. A pool groups pool members together to receive and process network traffic in a fashion determined by a specific load balancing algorithm.

Monitor: A monitor is a configuration object that checks the availability or performance of network resources such as pool members and nodes. Monitors check the status of a pool member or node on an ongoing basis; if a pool member or node being monitored does not respond within the set interval, BIG-IP LTM marks it offline but continues to monitor it. BIG-IP LTM directs traffic to the remaining pool members while monitoring the offline pool member or node. When the pool member or node responds again, BIG-IP LTM marks it available and starts directing traffic to it.

Virtual Server: A virtual server is an IP address and service (port) combination that listens for client requests. Because BIG-IP LTM is a default-deny device, the virtual server is the most common way to allow client requests to pass through.
Each virtual server uniquely processes client requests that match its IP address and port, and then directs the traffic, usually to an application pool. The virtual server translates the destination IP address and port to those of the selected pool member. A virtual server allows BIG-IP systems to send, receive, process, and relay network traffic.

Related: F5 LTM Interview Questions
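The monitor-and-pool behavior described above can be sketched in plain Python to make the concept concrete. This illustrates the idea only, not BIG-IP's implementation: the member addresses are invented, the monitor is a bare TCP-connect check, and the load balancing method is fixed to round robin.

```python
import itertools
import socket

class Pool:
    """Round-robin across pool members, skipping members that a
    TCP-connect monitor has marked offline."""

    def __init__(self, members):
        self.members = list(members)          # [(ip, port), ...]
        self.available = set(self.members)    # members currently marked up
        self._rr = itertools.cycle(self.members)

    def monitor(self, timeout=1.0):
        """Mark each member up or down based on a TCP connect check."""
        for member in self.members:
            try:
                with socket.create_connection(member, timeout=timeout):
                    self.available.add(member)
            except OSError:
                self.available.discard(member)

    def select(self):
        """Return the next available member, round-robin; None if all are down."""
        for _ in range(len(self.members)):
            member = next(self._rr)
            if member in self.available:
                return member
        return None
```

A BIG-IP monitor additionally supports per-protocol health checks (HTTP, HTTPS, and so on) with configurable intervals and timeouts, and pools support many load balancing methods beyond round robin.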
The COVID-19 vaccine is just one example of the rapid and global effort to stop the pandemic. Drugs, too, are being developed. A new study by CiRA researchers shows that the combination of two drugs halts the infection of SARS-CoV-2, the virus responsible for COVID-19, in iPS cells.

The influence of the COVID-19 pandemic can be seen in almost all industries. Supply chains are suffering worldwide, and companies everywhere are adjusting to having their employees work from home. Science has been affected in many ways too. CiRA as a whole has diverted massive resources to the problem, because iPS cells offer an attractive model to study the infection for several reasons. "We can differentiate iPS cells into any cell type we want to study. We can acquire iPS cells from mild and severe COVID-19 patients," said CiRA Junior Associate Professor Kazuo Takayama about some of the benefits of these cells.

Among the genes relevant to infection, one of the most important is TMPRSS2, which codes for transmembrane serine protease 2. Another is CTSB, which codes for cathepsin B, another type of protease. These proteases cleave the spike protein of SARS-CoV-2, allowing the virus to enter the cell. In the study, these two genes were edited so that the cell did not produce the proteases, which significantly attenuated the infection of iPS cells by SARS-CoV-2. Using this information, the researchers tested two drugs: CA-074, which inhibits cathepsin B, and Camostat, which inhibits transmembrane serine protease 2. Their combination reduced the viral load to less than 0.01% of that without drug treatment.

This synergy could reflect the different locations of the proteases in the cell: cathepsin B is found in endosomes, while TMPRSS2 is found in the cell membrane. Endosomes are membrane-bound compartments that transport molecules from one location to another within a cell, while membrane proteins remain on the cell membrane, where they can exchange material between the inside and outside of the cell.
However, Takayama cautions that iPS cells do not exist in patients and that more study is needed to confirm the effect of the drug combination in patient care. "We need to see if the same effects are found in differentiated iPS cells," he said.

Targeting Host Proteases to Block Viral Entry

Proteolytic cleavage of the S protein at the S1/S2 and S2′ sites by the serine protease TMPRSS2 and/or the endosomal cysteine proteases CatB/L drives viral entry through the fusion peptide, which inserts into the host cell membrane. This insertion leads to the formation of an antiparallel six-helix bundle, enabling the fusion process and, therefore, the uncoating and release of the viral RNA into the cytoplasm. The S1/S2 and S2′ priming events by host proteases are necessary for SARS-CoV-2 to infect the host, so interfering with virus entry may prove an advantageous antiviral strategy: it would block infection or virus propagation at an early stage, which matters all the more given the virus's high transmissibility.

Targeting host factors has the advantages of reducing the likelihood that drug resistance develops and of providing broad-spectrum activity; on the other hand, interacting with a host protein carries a higher risk of severe side effects than the classical antiviral approach. The involvement of TMPRSS2 and CatB/L in viral infection is still under study, and targeting these host proteases for CoV treatment is an emerging strategy. In addition, the available data suggest that simultaneous inhibition of both proteases is required for a robust block of viral entry [58,59]. So far, there are no reports of medicinal chemistry programmes focusing on TMPRSS2 or cathepsin B/L within CoV drug discovery, so the available inhibitors often show limitations in terms of potency, selectivity, and drug-like properties.
Nevertheless, some compounds have been shown to exert a promising antiviral effect against SARS-CoV-2 and/or other related CoVs and are described below.

TMPRSS2 as Host Target and Its Inhibitors

Recently, TMPRSS2 has been shown to mediate SARS-CoV-2 S protein priming, as it does for SARS-CoV and other CoVs [21,58,60,61,62]. TMPRSS2, also named epitheliasin, is a 492-aa serine protease of the type II transmembrane serine protease (TTSP) family, expressed on the cell surface, consistent with the family's role in regulating cell-cell and cell-matrix interactions. The human TTSP family identified so far includes 17 members sharing the same structural features: an N-terminal intracellular domain carrying the phosphorylation sites, followed by the transmembrane domain and the stem region, located in the initial extracellular part, which contains a binding site for low-density lipoprotein (LDL) and calcium in an LDL receptor class A motif and a single scavenger receptor Cys-rich (SRCR) domain. The C-terminal extracellular endoprotease domain contains the catalytic triad composed of the residues His-Asp-Ser, in which the Ser hydroxyl group performs the nucleophilic attack at the priming site (Figure 10).

TMPRSS2 is predominantly expressed in the prostate, but it has also been found in the lungs, colon, liver, kidneys, and pancreas. Its expression in the upper airways, bronchi, and lungs, where its physiological function remains unclear, suggests an important role in the pneumotropism of several highly pathogenic viruses, such as SARS-CoV-2, SARS-CoV, MERS-CoV, and HCoV-NL63. Indeed, CoVs engage ACE2 to enter host cells and, although ACE2 is a ubiquitous enzyme, they show particular tropism for the lungs. In addition, while ACE2 is expressed in both type I and type II pneumocytes, it has been verified that SARS-CoV readily infects type I pneumocytes at an early stage [60,64]. In vivo experiments showed that TMPRSS2 is responsible for the viral spread and immunopathology of CoV infection.
In TMPRSS2-knockout mice, SARS-CoV and MERS-CoV showed significantly reduced viral replication in the lungs, especially in the bronchioles, together with reduced inflammatory infiltration. Moreover, TMPRSS2 is involved not only in CoV S protein activation: it has also been shown to activate the surface glycoproteins of influenza A virus, metapneumovirus, and porcine epidemic diarrhoea virus at different stages of their life cycles. Targeting TMPRSS2 could therefore be a broad-spectrum antiviral strategy; however, no drugs able to specifically inhibit this target have been identified, little is known about its substrate specificity, and no 3D structures of the protein are available. It was reported, though, that the fluorogenic trypsin substrates Cbz-Gly-Gly-Arg-AMC and Boc-Leu-Gly-Arg-AMC are also substrates for TMPRSS2, indicating that P1 can be an Arg, which is consistent with the recognition elements of some drugs able to inhibit human epithelial serine proteases; however, enzyme kinetics analyses were not performed, and Km or kcat values are not known. On the other hand, drugs able to inhibit a wide panel of human serine proteases, including TMPRSS2, are currently approved to treat prostate cancers and several inflammatory pathologies. Previous evidence showed that the clinically proven serine protease inhibitor camostat mesylate was able, at high concentration (up to 100 µM), to partially block SARS-CoV infection (65% inhibition) in cells expressing TMPRSS2 without causing toxicity; the antiviral activity was enhanced by adding a CatB/L inhibitor, namely E64d, indicating that the remaining 35% was attributable to the endosomal cathepsins.
Camostat is a pseudo-irreversible inhibitor of different serine proteases, including TMPRSS2, characterized by an aromatic guanidine as the P1 mimetic recognition element (Figure 11), less polar than Arg; it is used in Japan for prostate cancer and for other indications, such as pancreatitis and liver fibrosis. Consistently, camostat produced a partial block (50–60%) of SARS-CoV-2 entry in TMPRSS2+ cell lines, including Calu-3 lung cells, while no effect was observed in TMPRSS2- cells, and full inhibition was again obtained by adding E64d. Interestingly, animal model studies found that treatment with camostat mesylate not only produced a 10-fold reduction in SARS-CoV titers in Calu-3 airway epithelial cells, but also a 60% increase in survival rate in mice. Camostat can inhibit in vivo infection by SARS-CoV and other pneumotropic viruses known to utilize TMPRSS2; the drug could therefore be a suitable candidate for repurposing, as a component of a drug combination, to prevent SARS-CoV-2 infection of the lungs. Indeed, camostat has recently been entered into an interventional study to evaluate the efficacy and safety of inhibiting SARS-CoV-2 infection in humans, with randomized treatment of 580 participants with camostat mesylate (Phase I) and in parallel with a placebo oral tablet (Phase IIa) (ClinicalTrials.gov Identifier: NCT04321096). To obtain the highest grade of evidence, double-blinded, randomized, placebo-controlled trials are being carried out on 334 patients with moderate COVID-19 infection; this trial is in Phase IV but not yet recruiting (ClinicalTrials.gov Identifier: NCT04338906). Nafamostat is a serine protease inhibitor structurally related to camostat, in use as an anticoagulant for disseminated intravascular coagulation (DIC), that has proven particularly potent in blocking CoV infection in vitro, likely by inhibiting TMPRSS2-mediated entry (Figure 11).
A dual split protein (DSP) reporter assay was developed to quickly monitor membrane fusion mediated by the viral S protein and to screen a library of approved drugs, leading to the identification of nafamostat as a potent inhibitor of the fusion activity of the MERS S protein. Tested in a MERS infection assay, the compound blocked viral replication 100-fold at a concentration as low as 1 nM, more efficiently than camostat. More recently, the same research group applied the DSP assay to the SARS-CoV-2 S protein, where nafamostat again showed fusion-inhibitory activity; more interestingly, the drug inhibits SARS-CoV-2 replication in pulmonary Calu-3 cells with excellent potency, with an EC50 (CPE) of about 10 nM on pre-treatment. The activity decreases by more than 300-fold if the inhibitor is added during infection, suggesting that it acts on viral entry. Moreover, nafamostat reaches concentrations of 30–240 nM on intravenous administration by continuous infusion in DIC patients, and a PK study in rats revealed that the maximum concentration of intact nafamostat in the lung after infusion is about 60-fold higher than the maximum blood concentration; such accumulation may partially suppress SARS-CoV-2 infection. A randomized clinical trial has recently been launched in adult COVID-19 patients to investigate the ability of nafamostat to slow down the lung disease (ClinicalTrials.gov Identifier: NCT04352400). Indeed, the efficacy of nafamostat as a mucolytic and anticoagulant agent, together with its potent inhibitory activity against TMPRSS2, are useful features for improving the clinical condition of hospitalized COVID-19 patients. Through an HTS of FDA-approved drugs and other commercial libraries (around 70,000 compounds) against TMPRSS2, carried out to find new potential anti-metastatic agents for prostate cancer, bromhexine hydrochloride and four other hits (Figure 12) were identified as inhibitors of the enzyme at concentrations below 5 µM.
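EC50 values like the ~10 nM figure for nafamostat come from fitting a log-logistic (Hill) dose-response model to inhibition data. A minimal sketch of that model follows; the EC50 is taken from the text, but the unit Hill slope and the concentrations probed are illustrative assumptions, not fitted values from the study:

```python
def hill_inhibition(conc, ec50, n=1.0):
    """Fractional inhibition under a simple Hill (log-logistic) model.

    conc and ec50 must share units (here nM); n is the Hill slope.
    """
    return conc**n / (ec50**n + conc**n)

ec50 = 10.0  # nM, roughly the reported EC50 for nafamostat in Calu-3 cells
for c in (1.0, 10.0, 100.0, 1000.0):
    print(f"{c:7.1f} nM -> {hill_inhibition(c, ec50):.0%} inhibition")
```

By construction the model predicts exactly 50% inhibition at the EC50; the >300-fold potency loss when the drug is added during rather than before infection would show up in such a fit as a correspondingly right-shifted EC50.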
In particular, bromhexine hydrochloride exhibited the most potent inhibition, with IC50 = 0.75 µM, and proved specific for TMPRSS2, being significantly less active (50–80-fold) against hepsin and matriptase and inactive up to 100 µM against trypsin and thrombin. Moreover, this repurposed drug was evaluated in cells and in rodents without showing significant toxicity. Bromhexine is structurally unrelated to the guanidine derivatives camostat and nafamostat, and no data are available on its kinetics of inhibition or putative binding site. However, bromhexine is an orally bioavailable drug used as a mucolytic cough suppressant, with no substantial adverse effects. Indeed, an interventional clinical trial for COVID-19 has recently been approved in China to evaluate the efficacy and safety of bromhexine hydrochloride in patients with suspected or confirmed novel coronavirus pneumonia (ClinicalTrials.gov Identifier: NCT04273763). The treatment is randomized and open-label, based on the administration of bromhexine hydrochloride tablets in combination with standard treatment for COVID-19 (Arbidol hydrochloride granules/recombinant human interferon α2b spray). A larger clinical study on 140 participants, at early Phase I, involves treatment with bromhexine alone or in combination with hydroxychloroquine sulphate, to evaluate the effect of bromhexine in preventing the development of COVID-19 (ClinicalTrials.gov Identifier: NCT04340349). In summary, the clinical trials described above aim to counteract SARS-CoV-2 infectivity and, in this regard, bromhexine is being investigated as a promising inhibitor of TMPRSS2. A peptidomimetic approach represents a possible alternative route towards the development of TMPRSS2 inhibitors.
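The missing Km values noted earlier matter in practice: an assay IC50 such as bromhexine's 0.75 µM can only be converted into an intrinsic Ki (for a competitive inhibitor) via the Cheng-Prusoff relation, which requires the substrate concentration and Km. The sketch below uses the reported IC50 but entirely hypothetical assay conditions, since neither [S] nor Km is known for TMPRSS2:

```python
def cheng_prusoff_ki(ic50, s, km):
    """Ki of a competitive inhibitor via Cheng-Prusoff: Ki = IC50 / (1 + [S]/Km)."""
    return ic50 / (1.0 + s / km)

ic50 = 0.75          # µM, reported for bromhexine against TMPRSS2
s, km = 50.0, 25.0   # µM, hypothetical substrate concentration and Km
print(f"Ki ≈ {cheng_prusoff_ki(ic50, s, km):.2f} µM")  # Ki ≈ 0.25 µM
```

The point of the exercise: when [S] is well above Km, the true Ki can be severalfold lower than the measured IC50, so comparing IC50 values across assays with different conditions can misrank inhibitors.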
A series of 4-amidinobenzylamide derivatives, known as inhibitors of various trypsin-like serine proteases, was screened against TMPRSS2 [75,76,77] in order to systematically investigate its substrate specificity, since the catalytic domains of all trypsin-like serine proteases share structural features and folding pattern. The screening revealed a preference for basic P3 residues in the D-configuration, such as D-arginine, for proline or glycine residues at the P2 position, and particularly for a 4-amidinophenylalanine amide as the P1 residue. Upon SAR investigation, compound 92 (Figure 13), bearing a P1 m-amidinophenylalanine piperidine amide with a basic ethylamine chain extending towards S1′ and a bulky biphenyl sulfonamide N-cap, turned out to be the most potent inhibitor, with a Ki of 0.9 nM for TMPRSS2. In Calu-3 airway epithelial cells infected with human pandemic influenza viruses, 92 caused a dose-dependent reduction in viral titers (10–100-fold at 10 µM and 100–1000-fold at 50 µM, after 24 h) without affecting cell viability. The significant discrepancy between the nM Ki against isolated TMPRSS2 and the µM concentrations required for activity in the cell-based context is likely due to the high polarity of the compound, which has two protonatable groups at physiological pH. Recently, compound 92 and its less polar analogue MI-1900 were shown to reduce the virus titer 25-fold in SARS-CoV-2-infected Calu-3 cells at a concentration of 10 µM, without showing toxicity up to 50 µM. It has also been shown recently that a guanine-rich tract in the promoter region of the human TMPRSS2 gene can form G-quadruplex secondary structures in the presence of potassium cations and thereby modulate gene transcription.
Because the guanine-rich sequence has proven relevant for TMPRSS2 promoter activity, the use of compounds capable of stabilizing G-quadruplex structures, and thus reducing or blocking transcription of the TMPRSS2 gene, has been proposed as a potential host-targeting strategy. Seven benzoselenoxanthene analogues were designed and synthesized for this purpose, and compounds Se1, Se3, Se5, and Se7 (Figure 14) were shown to increase the stability of the TMPRSS2 G-quadruplex in vitro, with a corresponding effective decrease in TMPRSS2 gene expression in Calu-3 cells. At a later stage, Shen et al. evaluated the inhibition of viral propagation in Calu-3 cells infected with influenza A virus: the benzoselenoxanthene analogues led to a near-complete reduction of virus titer at a concentration of 8 µM, with antiviral activity comparable to that of the anti-influenza drug oseltamivir, although clearly inferior to that of the inhibitor camostat. Moreover, no significant cytotoxic effects were observed at 10 µM. In summary, down-regulation of TMPRSS2 expression through G-quadruplex stabilization emerges as a promising strategy for the inhibition of viral infection and represents a pioneering starting point for novel drugs against SARS-CoV-2.

reference link: https://www.mdpi.com/1422-0067/21/16/5707/htm

More information: Rina Hashimoto et al, Dual inhibition of TMPRSS2 and Cathepsin B prevents SARS-CoV-2 infection in iPS cells, Molecular Therapy – Nucleic Acids (2021). DOI: 10.1016/j.omtn.2021.10.016
Every now and then, something comes along that promises to reboot the debate on a hot-button issue. A new algorithm developed at the Chinese University of Hong Kong looks set to do just that, pitting privacy advocates against technologists in a fresh fight over facial recognition technology. Dubbed GaussianFace, the algorithm — developed by CUHK's professor Sean Tang Xiaoou and student Chaochao Lu, both of the Department of Information Engineering — takes face recognition to the next level. When presented with the task of identifying matching faces in a set of over 13,000 web-sourced images, it not only matches but actually exceeds the ability of humans to correctly find matches.

The challenge, known as the Labeled Faces in the Wild benchmark, is a difficult one. The images include both genders and a wide variety of ages, races, and ethnicities. Clothing and hairstyles vary, and so too do lighting and pose, making it tough to be certain whether any given image pair is a match. Humans, according to the paper, fail to correctly identify around 2.47% of the pairs they're presented with, either calling a match when the subjects differ or not managing to match two photos of the same individual. The GaussianFace algorithm, on the other hand, managed an extremely impressive 98.52% accuracy — that is, it missed only 1.48% of image pairs, almost 1% better than humans.
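The accuracy figures above come from pair verification: each trial presents two images, and the system (or human) must decide whether they show the same person. The scoring itself is simple, as this toy sketch shows — the labels and predictions here are made up, not benchmark data:

```python
def verification_accuracy(predictions, labels):
    """Fraction of image pairs correctly called same-person / different-person."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Toy example: 8 pairs, with 1 = "same person" and 0 = "different people".
labels = [1, 1, 0, 0, 1, 0, 1, 0]   # ground truth
preds  = [1, 1, 0, 1, 1, 0, 1, 0]   # model miscalls one non-match as a match
print(f"accuracy = {verification_accuracy(preds, labels):.2%}")  # accuracy = 87.50%
```

On this metric, GaussianFace's 98.52% versus the human 97.53% means roughly one fewer error per hundred pairs — a small-sounding margin that is nonetheless notable as the first reported machine result above the human baseline on this benchmark.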