In the first part of this two-part series, I defined what Open Database Connectivity (ODBC) is and demonstrated how to configure an ODBC data source. I started with the task of adding a new data source and walked you through the configuration of ODBC packages. In this article, I’ll show you how to finish the configuration of an ODBC data source. I’ll also take a look at what you need to configure on your AS/400 in order to use ODBC. As you learned last issue, ODBC is a Microsoft-developed API that uses a set of drivers and a driver manager to allow applications to access data using SQL as a standard language. It is commonly used by most of the visual languages, e.g., Visual Basic and Delphi, as well as Microsoft Access and MS Query, to retrieve data from sources on local or remote systems. Because ODBC uses SQL as its search engine, it is heterogeneous and can retrieve data from such varied sources as PCs, the AS/400 and other midrange systems, and mainframes. That’s enough review: Let’s jump right into how to configure the performance of your ODBC data source.
The options on the Performance tab (shown in Figure 1) help you to enhance the performance of ODBC requests on the AS/400. You can get to this tab by double-clicking on the 32bit ODBC icon in Control Panel, then highlighting the AS/400 data source you created in Part 1 of this series, and finally clicking on the Configure button.
There are four options on this panel: Enable lazy close support; Enable pre-fetch during EXECUTE; Record blocking; and OS/400 library view.
These are some rather esoteric sounding terms, but once they’re defined for you, you’ll realize that this is just Microsoft’s way of making the mundane sound extraordinary. However, if you are not familiar with SQL, some of these terms may not be familiar to you. If that’s the case, I strongly recommend that you get a basic SQL manual.
Lazy close support refers to how an AS/400 file that was opened by ODBC closes when the SQLFreeStmt command is used with the SQLClose option. If this box is checked and SQLFreeStmt is used with SQLClose, the open AS/400 file will not be closed until the next ODBC SQL request is sent. If the box is not checked and both SQL statements are used, the AS/400 file will be closed immediately.
Why would you want to use this option? You may have an application that needs to retrieve the last data set from the last position of the file pointer and you want to process that data on the remote application before the file is closed. Leaving the file open can improve performance because you don’t have to reopen the file and reposition the pointer to retrieve the previous data set.
The Enable pre-fetch during EXECUTE option lets your SQL Select statement open a file and perform a Fetch operation when the Select statement is executed, allowing your application to fetch the first block of data before the application requests it. This can reduce the amount of communication between your application and the server, thereby enhancing the performance of the application. If your SQL statements use the SQL ExtendedFetch statement, you should leave this box unchecked because performance will be degraded when these features are used together.
Record blocking is a familiar term to most programmers. You can specify record blocking as the number and size of records to retrieve for each data request. From this panel, you specify when and whether record blocking is allowed, as well as the record block size. You can indicate that you want records blocked unless the For Update Of clause is specified on an SQL Select statement. You can also tell the system to block records if the Select statement specifies For Fetch Only. The last option is to indicate that records are never blocked. You can also specify the number of rows (records) to return during a given request, choosing from a predefined set of values between 8 and 512. The defaults for this option are Block always unless For Update Of, with a record block size of 32. This means that records will always be retrieved in blocks of 32 KB unless the For Update Of clause is used in the SQL Select statement. In general, when you experience performance problems with your ODBC requests, you should try changing the block size here before attempting any other changes. You can sometimes improve performance dramatically just by changing this parameter.
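Record-blocking options can also typically be supplied programmatically when connecting without a preconfigured DSN. The sketch below builds an ODBC connection string in Python; the keyword names (BLOCKFETCH, BLOCKSIZE) and the DSN name are assumptions for illustration, so check your driver's documentation for the keywords it actually accepts.

```python
def build_connection_string(dsn, block_fetch=True, block_size_kb=32):
    """Build an ODBC connection string with record-blocking options.

    BLOCKFETCH and BLOCKSIZE are illustrative keyword names; consult
    your ODBC driver's documentation for the keywords it supports.
    """
    parts = [
        f"DSN={dsn}",
        f"BLOCKFETCH={1 if block_fetch else 0}",  # block records on For Fetch Only
        f"BLOCKSIZE={block_size_kb}",             # block size in KB (e.g., 8-512)
    ]
    return ";".join(parts)

print(build_connection_string("MYAS400"))
# DSN=MYAS400;BLOCKFETCH=1;BLOCKSIZE=32
```

Raising or lowering the block size in a string like this is the programmatic equivalent of the panel setting described above.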
The OS/400 library view option specifies the set of libraries that will be searched to create a list of table owners (file owners). This option is only valid when your SQL request contains the Tables function. This is a fairly complex SQL function that probably won’t be used by most ODBC applications. This parameter is only valid when certain parameters, such as szTableQualifier or cbTableQualifier, are specified with the Tables function. If these particular parameters aren’t used, the Default library list is used, which has the same effect as if this option were not specified at all. If you would like more information on what parameters are valid with the Tables function, do the following: from the Data Source Setup panel, with the Performance tab selected, click on Help and then on OS/400 library view to see a list of valid parameters. In general, you should leave this value set to Default library list.
Clicking on the Language tab reveals the panel shown in Figure 2. From this panel, you can specify values for the following options: Sort type, Sort weight, and Language ID. Sort type lets you sort by Language ID (which works in conjunction with the Language ID option), by Hex (all data is converted to hex prior to sorting), by job profile, or by a sort sequence table. If you tell the system to go by a sort sequence table, you will enter the table and library name in the Sort Library/Table Name box.
Sort weight refers to how uppercase and lowercase letters are treated. This option is only available to you if you use a sort type of Language ID. If you select the Shared-Weight radio button, all lowercase letters will be converted to uppercase prior to sorting. If you select the Unique-Weight button, the lowercase-to-uppercase conversion will not take place. Incidentally, the collating sequence for these sorts will be based on the collating sequence of the individual language selected with the Language ID parameter.
Language ID is available to you when the Sort Type of Language ID is selected. You can select the language that is appropriate for your application.
The Other tab (shown in Figure 3) lets you set values for the Connection type, Object description type, and Scrollable cursor options. Each of these options refers to specific SQL operational functions. Connection type controls whether or not you can use such SQL options as Read/Write, where all SQL options are allowed; Read/Call, where only Select and Call statements are allowed; and Read-Only, where only the Select statement is allowed. The default value for this parameter is Read/Write. Object Description Type specifies the types of values that are returned by the ODBC catalog APIs in the REMARKS column of the returned SQL data set.
If you have used SQL, you are familiar with the term scrollable cursor. Scrollable cursors are used to allow multiple, indexed reads of a table (file). The Scrollable cursor option in the Other tab lets you specify whether a cursor should always be scrollable or always scrollable unless the row size is 1. (A row size value of 1 means that only one record is returned from a SQL Fetch operation.) This type of operation is normally specified in the Select Using operation within SQL, but it can also be overridden here.
The tab shown in Figure 4 lets you specify whether or not to perform data translation on data that is stored in a file with a CCSID of 65535. (Data will be translated after it is retrieved from the AS/400 but before it is returned to your application.) This feature lets you translate EBCDIC data to ASCII format. If you do tell the program to perform the EBCDIC-to-ASCII conversion, you can also specify which translation Dynamic Link Library (DLL) should be used. Finally, this option allows you to pass a parameter to the translation DLL, if it requires one. You should refer to the DLL vendor’s documentation to determine if a parameter is required.
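To see what this EBCDIC-to-ASCII translation involves, the sketch below uses Python's standard-library cp037 codec, one common EBCDIC code page, to convert a few bytes. The DLL your driver uses may implement a different code page, so treat this purely as an illustration of the concept.

```python
# CCSID 65535 marks data as "do not convert"; the translation option tells the
# driver to convert it anyway. Code page 037 is one common EBCDIC encoding,
# available in Python's standard library as "cp037".
ebcdic_bytes = bytes([0xC8, 0xC5, 0xD3, 0xD3, 0xD6])

ascii_text = ebcdic_bytes.decode("cp037")   # EBCDIC -> text
print(ascii_text)                           # HELLO

round_trip = ascii_text.encode("cp037")     # text -> EBCDIC
print(round_trip == ebcdic_bytes)           # True
```

The same bytes are meaningless if read as ASCII, which is why untranslated CCSID 65535 data looks like garbage to a PC application.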
The Format tab (Figure 5) allows you to set up AS/400 database server data format options. From this tab, you can control the following options: Naming convention, Decimal separator, Time, and Date. The Naming convention parameter specifies whether your SQL statements must use SQL naming standards or SYS naming standards. SQL naming standards dictate that table names must have a period (.) between the collection (library) and the table name. The SYS naming convention means that a slash (/) is used to separate the collection and table name. The Decimal separator parameter specifies what is used as a decimal point. The default value is a period (.).
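The difference between the two naming conventions is easy to see in a small helper; the library and table names below are made up for illustration.

```python
def qualify(library, table, convention="SQL"):
    # SQL naming separates collection (library) and table with a period;
    # SYS naming uses a slash instead.
    separator = "." if convention == "SQL" else "/"
    return f"{library}{separator}{table}"

print(qualify("PAYROLL", "EMPMAST"))         # PAYROLL.EMPMAST  (SQL naming)
print(qualify("PAYROLL", "EMPMAST", "SYS"))  # PAYROLL/EMPMAST  (SYS naming)
```

A statement such as SELECT * FROM PAYROLL.EMPMAST therefore assumes the SQL convention, while SELECT * FROM PAYROLL/EMPMAST assumes SYS.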
The Date and Time options control which separators are used on returned dates and times. The default for the Time parameter is *HMS (hours, minutes, and seconds) using a colon (:) as the separator. The default for the Date parameter is *ISO (International Standards Organization), which is yyyy-mm-dd with a hyphen (-) used as the separator.
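These two defaults correspond directly to common strftime patterns, as this small Python illustration shows (the timestamp is arbitrary):

```python
from datetime import datetime

ts = datetime(1998, 7, 4, 14, 30, 59)

iso_date = ts.strftime("%Y-%m-%d")  # *ISO date: yyyy-mm-dd, hyphen separator
hms_time = ts.strftime("%H:%M:%S")  # *HMS time: hours:minutes:seconds, colon separator

print(iso_date)  # 1998-07-04
print(hms_time)  # 14:30:59
```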
The ODBC Data Source Administrator Window
So far, everything I’ve done to set up a data source has been at the user level. That is, only the user currently logged into Windows 95/NT will be able to access the data source configuration I’ve previously defined. If you want this data source to be available to all users of this PC, regardless of who is logged on, you should select the System DSN tab from the ODBC Data Source Administrator window (Figure 6). Data sources configured using this tab will be available to all users of a PC.
Use the File DSN tab (Figure 7) to connect a data source to a data provider, such as the AS/400. Unlike the user or system types of Data Source Names (DSN), file data sources may be shared by all users, as long as they have access to the appropriate drivers such as the Client Access/400 32-Bit ODBC driver. Click on the Add button to add a new file data source and you’ll be presented with the Create New Data Source window shown in Figure 8. This is the same window you see when you click on the Add button in both the User DSN and System DSN panels. Clicking on the Advanced button gives you access to the dialog box shown in Figure 9. From there, you can modify the ODBC driver settings, changing such things as parameters passed or driver name. You should only make these kinds of changes if you have the documentation handy for the particular ODBC driver you are changing and you are sure of what you are doing. Your modifications could easily make your data source unusable. Checking the Verify this connection (Recommended) box will allow you to test the connection before proceeding. This is a very good idea if you change any of the driver settings. You can continue on with the rest of the file data source configuration just as you did with the user data source configuration.
The Set Directory button on the File DSN tab (see Figure 7) allows you to make the current directory shown on this panel the default directory for listing data sources. That is, the directory shown on this panel (in the Look in drop-down box) will become the default directory from which all data sources will be listed. If you have data sources in another directory, they will not be shown in the list. In general, you’ll never need to change this setting unless you have software that places ODBC drivers in something other than the default directory, which is C:\Program Files\Common Files\ODBC\Data Sources.
ODBC Drivers and About Tabs
The ODBC Drivers and the About tabs on the Data Source Administrator window are informational tabs; you cannot change anything from them. The ODBC Drivers tab lists the defined ODBC drivers available to this PC, including any you may have created, while the About tab lists information about the core ODBC components. Both panels are handy for reference in time of trouble, but not much use otherwise.
The Tracing tab shown in Figure 10 allows you to start or configure an ODBC Trace session. Click on the appropriate When to trace radio button to define when you want the session to run. All the time means just what it says; all ODBC calls will be logged. One-time only means that only the current ODBC session will be logged. When the user logs off, tracing will stop. Click on the Start Tracing Now button to begin the session. Clicking on the Stop Tracing Now button will end the current Trace session. You can enter the name of the Trace Log file in the Log file Path box or leave the name as the default. The Custom Trace DLL box allows you to specify the name of the trace DLL. Unless your ODBC driver software requires a specific trace DLL, you can leave this entry at its default value.
A discussion of ODBC tracing wouldn’t be complete without mentioning a couple of caveats. First, the ODBC trace log files can become incredibly huge unless you periodically clear them by deleting them. Second, when you trace ODBC calls, you log all data passed between the PC and the remote database server. This means that user IDs and passwords are logged in clear, unencrypted text. Needless to say, this is a security violation that compromises security on both the PC and the remote AS/400. Anyone who has access to this PC can start an ODBC trace and retrieve this sensitive information. It is a good idea to remove the trace DLL from the PC in order to close this security hole. If you later need to start an ODBC trace on that PC, you can always restore it from a server or diskette.
On the AS/400
That’s the basics of each option available to you when you create or modify an ODBC data source. You can make a huge impact on the performance of the ODBC-enabled application based on the options you select from these panels. However, you’re not done yet. There are a few more things to take care of on your AS/400. From an AS/400 command line, type the command Add Remote Database Directory Entry (ADDRDBDIRE) and press Enter. You’ll see the Add RDB Directory command panel (Figure 11). You’ll need to type in the name of your AS/400 in the Relational database field and *LOCAL in the Remote location field. You’ll also need to define whether this is an *SNA or *IP connection by making the appropriate entry in the Type field. You create this remote database directory entry so that the ODBC driver connecting to this AS/400 will know what syntax to use for the table qualification. Some ODBC SQL calls require a syntax of SYSTEM.COLLECTION.TABLE, where a period (.) is used as a separator. The syntax of SYSTEM.COLLECTION.TABLE is equivalent to AS/400 System Name, Library, and File name. The AS/400 requires a slash (/) as a separator. Defining the Remote Database Directory Entry lets the ODBC driver know what syntax to use. (For another view on how to improve ODBC performance from the AS/400, see “Kick Start ODBC with Prestarted Jobs,” Client Access/400 Expert, November/December 1996.)
ODBC calls made through Client Access/400 communicate with a program named QZDAINIT that runs on your AS/400. This program resides in library QIWS. Normally, QZDAINIT is started when you start subsystem QSERVER. Generally, the system startup program will start QSERVER. However, some AS/400s have been set up to use QCMN or QBASE as the default communication subsystem. If this is the case, you may need to ensure that those subsystems are started before your Client Access-based ODBC connections will work.
That’s an in-depth look at configuring ODBC. If you find that a particular setting doesn’t work with your ODBC driver or the SQL syntax required by that driver, experiment with the configuration until you make it work. Although SQL is generally the same across all applications and platforms, there are differences. You may find that certain recommendations made in this series of articles may not function exactly the way you intended with your application. The online help provided with the Client Access driver (32-bit) is fairly extensive and quite helpful. Refer to it if you experience any problems.
Figure 1: The options on the Performance tab will help you enhance the performance of ODBC on your AS/400.
Figure 2: The Language tab lets you specify such options as Sort Type, whether to sort by uppercase or lowercase, and what Language ID to sort by, if applicable.
Figure 3: The Other tab lets you specify such options as Connection type, Object description type, and Scrollable cursor.
Figure 4: You specify whether or not CCSID 65535 translation should be performed on the Translation tab.
Figure 5: The Format tab lets you specify the format of returned data.
Figure 6: The ODBC Data Source Administrator window is the starting place for defining ODBC data sources.
Figure 7: The File DSN tab lets you connect a data source to a data provider, such as the AS/400.
Figure 8: The Create New Data Source window is used to add new data sources using existing ODBC drivers.
Figure 9: The Advanced File DSN Creation Settings dialog box lets you modify ODBC driver settings.
Figure 10: The Tracing tab lets you configure or start an ODBC Trace session.
Figure 11: The Add RDB Directory Entry panel on the AS/400 lets the ODBC connection know what SQL syntax to use. | <urn:uuid:4dd1d8d1-1df5-46af-b2bd-3dab9bd2a086> | CC-MAIN-2022-40 | https://www.mcpressonline.com/programming/apis/configuring-32bit-client-access400-odbc-part-2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00342.warc.gz | en | 0.860911 | 3,824 | 2.53125 | 3 |
By: D. Singla
Basics of Voice over IP (VoIP)
Voice over IP (VoIP) is a technology that allows voice, video, and audio to be sent as data packets across an IP network, whether private or public. VoIP is rapidly attracting attention and interest because of the advantages it can provide. Cost reductions, rich media capabilities, phone and network portability, flexibility, and integration with other applications are all benefits of VoIP for both consumers and communication providers, as shown in Figure 1.
Voice over IP as a Platform
When we use a classic circuit-switched telephone to call a colleague’s office from ours, the call originates from the equipment on your desk, travels over one of a limited number of pathways on dedicated telephone networks, and emerges at a predetermined location: the phone on the colleague’s desk. VoIP calls, by contrast, are fundamentally pieces of data on the global Internet. They are not bound to geographic locations or specific devices; because VoIP uses universal standards, it can communicate with any device that supports the Internet Protocol. A call can travel to an e-mail inbox on a PC or to a remote office anywhere in the world just as readily as it can to the phone on that colleague’s desk.
- The first step in building a VoIP platform is to install the front-office devices: the phones, converters, or software that employees see at their desks.
- The VoIP software and hardware design determine what capabilities are offered, as well as how VoIP devices communicate with corporate IT systems.
- Just as static corporate Web sites gave way to the dynamic, interactive, genuinely business-enhancing applications of the Internet a decade ago, VoIP will serve as a foundation for richer communications that blend voice with other data, the so-called “unified communications.”
- Consider VoIP’s potential as a strategic tool in terms of three kinds of capability: customization, virtualization, and intelligence.
Virtualization in Voice over IP
With a few mouse clicks, it is possible to create service for an unlimited number of phones anywhere in the world. This combination of portability and scalability takes fixed, expensive elements of traditional communications and makes them changeable and inexpensive. It enables companies to set up low-cost redundancy to control risk, and it gives them flexible communications that can react quickly to shifting demands.
Customization in Voice over IP
Major innovations in traditional telephone network technology (shown in Figure 2), such as caller ID and voicemail, took years to develop and implement. New calling features or voice applications are simple to create and deploy using VoIP. Although off-the-shelf VoIP software and devices offer a range of functions, companies are busy developing unique applications that can strengthen branding, improve customer service, and enhance internal communications.
As these examples show, companies are already taking advantage of VoIP’s customization and virtualization features, but these firms are among the few that have moved beyond simple cost-cutting deployments. The full potential of VoIP will be realized when companies develop increasingly sophisticated systems that connect communications with business processes and boost the productivity of knowledge workers.
Some facts and myths about Voice over IP
- While the great majority of organizations still use traditional telephones, around 10% of international telephone traffic now flows over the Internet using Voice over Internet Protocol, or VoIP.
- VoIP is more than simply another technology for making traditional phone calls more affordable.
- Its strength comes from the fact that it converts speech into digital data packets that can be stored, merged, modified, duplicated, combined with other data, and distributed to almost any device that can connect to the Internet.
- Think of it as the voice equivalent of the World Wide Web (WWW).
- The term IP, or Internet Protocol, simply refers to the technical rules that govern how digital data is encoded.
- Because of these common standards, VoIP can interact with other Internet-based data and systems in real time.
- However, consider this: because VoIP converts voice into Internet-friendly data packets, it can and will displace the rigid, bundled telephone services that most companies still use.
- Also, because it will allow companies to design their own customized phone applications, it will shift control of phone services away from the carriers that have traditionally defined (and controlled) them and toward the companies that use them.
- VoIP will serve as the binding mechanism for such applications, enabling increasingly tailored, intelligent, and strategic voice communications.
VoIP is coming. The key distinction will not be between those who deploy it and those who don’t, or even between early adopters and laggards. The real battle will be between those who see VoIP as simply another way to do the same old things and those who use it to completely rethink their business.
- Article at: https://hbr.org/2005/09/using-voip-to-compete
- Vinokurov, D., & MacIntosh, R. W. (2005). Detection and mitigation of unwanted bulk calls (spam) in VoIP networks. US Patent No. US2005/0259667 A1, November 2005.
- VoIP Magazine Editorial Staff (2005). ISS finds flaws in Cisco VoIP. Retrieved December 20, 2007.
- Hardy, W. C. (2003). VoIP Service Quality: Measuring and Evaluating Packet-Switched Voice. McGraw-Hill Networking.
- Hanifan, Y., & Bandung, Y. (2013). Designing VoIP security system for organizational network. International Conference on ICT for Smart Society, pp. 1-5. doi: 10.1109/ICTSS.2013.6588074.
Cite this article as:
D. Singla (2021), Voice over IP (VoIP), Insights2Techinfo, pp.1
DaaS, short for Directory-as-a-Service, is one of the newer terms in cloud computing and is likely to become more prevalent in the future. Here we will try to understand what DaaS (Directory-as-a-Service) is and how it actually works. It is a cloud-based solution that serves as a company’s user store. Its function is to move Lightweight Directory Access Protocol (LDAP) or Microsoft Active Directory to the cloud and manage it effectively as a service.

The best part of DaaS is that it combines the best features of cloud-based Active Directory and LDAP. It is a modernized directory designed to work alongside critical IT trends such as the proliferation of device types and cloud services.
What is DaaS (Directory-as-a-Service) in Information Technology?
In IT terms, DaaS provides secure connection and effective management of a workforce’s access to IT resources via a single, unified, specialized cloud-based user directory. The user directory resides securely in the cloud and serves as the single access point through which all employees reach their devices and applications. It is worth mentioning that a cloud-based directory service is also the central connection point for complementary solutions such as single sign-on. Some organizations can move seamlessly to DaaS, but many need a migration path that leads an existing LDAP or Active Directory deployment to the cloud.
How Does DaaS (Directory-as-a-Service) Function?
After understanding what Directory-as-a-Service is, it is essential to understand how it actually functions. DaaS is at the core of the IT services that authorize, authenticate, and manage users along with their devices and applications. Each function is described briefly below.
DaaS is a one-stop authorization solution, which ensures that the right users have access to the right IT resources in your company. It also makes it feasible to execute a command when users need to add or remove devices from the system.
For authentication, DaaS can act as an extension of, or the directory of record for, your existing directory. Requests to authenticate users are sent via the LDAP protocol. It can easily be deployed on Windows, Linux, and Mac devices for policy and task management, security auditing, and survivability.
One of the vital functions of a DaaS solution is its ability to manage Windows, Linux, and Mac devices at scale. Directory-as-a-Service can simplify task execution across devices, including globally updating policy settings, applying patches, modifying registry settings, and changing system configurations. It ensures consistency across your environment and lets you group similar objects and apply the same configurations and policies across them.
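To make the authentication and authorization roles concrete, here is a deliberately simplified, in-memory toy model of a directory service in Python. Real DaaS products speak protocols such as LDAP and store credentials far more robustly; every name and detail below is illustrative only.

```python
import hashlib

class ToyDirectory:
    """A toy cloud-directory model: one user store that both
    authenticates users and authorizes access to IT resources."""

    def __init__(self):
        self._passwords = {}  # username -> password hash (toy only, no salting)
        self._grants = {}     # username -> set of granted resource names

    def add_user(self, username, password):
        self._passwords[username] = hashlib.sha256(password.encode()).hexdigest()
        self._grants[username] = set()

    def grant(self, username, resource):
        self._grants[username].add(resource)

    def authenticate(self, username, password):
        # Authentication: is this really the user?
        digest = hashlib.sha256(password.encode()).hexdigest()
        return self._passwords.get(username) == digest

    def authorize(self, username, resource):
        # Authorization: may this user reach this resource?
        return resource in self._grants.get(username, set())

directory = ToyDirectory()
directory.add_user("ana", "s3cret")
directory.grant("ana", "crm-app")

print(directory.authenticate("ana", "s3cret"))  # True
print(directory.authorize("ana", "crm-app"))    # True
print(directory.authorize("ana", "hr-app"))     # False
```

The point of the sketch is the single unified store: one directory answers both "who are you?" and "what may you use?" for every device and application.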
Some DaaS cloud providers are listed below:
- Microsoft Azure
- AWS Directory Service.
- Oracle Identity Management.
If you are struggling to work out how your user directory can move into the modern cloud era while helping you control and manage new device and user types, it is important to find the right DaaS service. A service from one reliable provider can help you to a much greater extent.
It’s hard to find something you don’t even know is there. This is especially true of advanced malware. Because even though traditional antivirus software is widely deployed by most enterprises and its sophistication has improved over its 30-year existence, it can only find malware if it knows exactly what to look for (by, for example, its digital signature). Malware, though, is sneaky, patient, and routinely modified to evade detection. With advanced malware such as zero-days, rootkits and remote access Trojans (RATs), the task of ferreting it out becomes even trickier. Today’s hackers commonly use methods such as encryption to obfuscate code, and packing to prevent detection by signature-based antivirus software, and these require special techniques to identify. In order to execute, the code must be unpacked or decrypted in order to run, making the malware detectable in memory but not directly on the disk or other storage media. Further, some malware will create registry keys to render itself persistent after reboots even though volatile memory is erased. Signature-based methods are not equipped to detect these advanced threats. However, malware of this type still has to run in memory to wreak its havoc, regardless of how it is delivered and it can only be detected by behavior-based approaches.
While advanced malware has been crafted for many purposes including stealing passwords and personally identifiable information (PII), and even for pranks, there are countless methods of coding each of the features in the malware. For example, there are tens of thousands of ways to code a key logger in Windows alone. Typical antivirus software has to track not only each of these malicious applications, but it must also keep up with the hundreds of variants creating literally millions of different pieces of malware just for key logging. However, because the number of operations carried out by the various key loggers to execute malicious activities is finite, behavioral detection only needs to track a small number of operations. Therefore, if you can identify malware by its behaviors, you stand a much better chance of detecting each and every malicious key logger. With behavior-based malware detection, millions of signatures are no longer needed to find a given piece of malware. In fact no signature is needed at all to detect brand new malware, which could not be caught at all with signature-based systems.
As we’ve all seen in the last two years, there’s been a deluge of data breaches and cyberattacks, with huge corporations including Home Depot, Target, Sony and Anthem falling victim and exposing tens of millions of individuals’ records to hackers. The Identity Theft Resource Center reported 783 data breaches in 2014, an increase of 28 percent from 2013. And these are just the major breaches that are reported in the media or required notification to government agencies. Today’s valuable digital assets are being targeted regardless of the size of the company or where they are located. Even smaller companies that are connected in today’s complex business ecosystems to larger companies are no longer safe.
Against this backdrop, traditional antivirus software still has value, but that value can be limited. Antivirus software looks for malware by matching its code exactly against a database of known malware signatures. An extension of the signature is the Indicator of Compromise (IOC), a collection of parameters about the malware. A signature can be something as simple as an MD5 hash value; IOCs may also include parameters such as filename, path, IP addresses, or author/source. While useful, these indicators will only get you part of the way toward a secure solution. Although malware writers don’t write brand-new software each day, they do often change existing software to avoid detection. Not only is it very easy to change the filename of a piece of malware, it is also simple to change a single byte of code, effectively creating a new piece of malware that its previous signature no longer detects. That one byte is all it takes to trip up antivirus software. With this many variants, it is difficult for traditional AV- and IOC-based detection systems to keep up. Today’s advanced malware requires something stronger.
There are alternative methods of finding malware, such as whitelisting and network-based solutions, but even those can be bypassed by today’s increasingly sophisticated and persistent cybercriminals. Whitelisting gives the green light for known processes and applications, such as Microsoft Word or Internet Explorer, to run. However, whitelisting cannot detect memory modifications, malware injected into safe applications, or malicious code running in the registry. So those plugins and Word macros that appear to be safe according to your whitelist may in fact be malicious. Furthermore, perimeter-based detection is also evaded by hackers using packed and encrypted code as well as code fragments.
The only real solution is to search for malware not by matching it precisely against a signature or a whitelist, but rather by the behavior that it carries out when the malware is run, or executed, in memory. How is this done? Take the example of a bank robber: He may be pacing back and forth in front of a bank, in warm weather, wearing a ski mask, and appearing agitated. It could be just any individual, but his behaviors — pacing, agitation, ski mask — are what identify him as a threat. In the same way, advanced malware exhibits behaviors as it executes — opening multiple communication channels, contacting a server by IP address rather than domain name, etc. — and these behaviors can be identified at a very granular level in memory when the malicious code executes. The idea is that one doesn’t need to know every line of code to identify malware; one needs to know how it’s going to behave, and from there it can be identified and successfully eradicated.
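To make the bank-robber analogy concrete, here is a deliberately simplified sketch, in Python, of what behavior-based scoring looks like. The behavior names, weights, and threshold are invented for illustration only; real detection engines watch far richer signals at the memory and system-call level.

```python
# Toy sketch of behavior-based detection. The behaviors, weights, and
# threshold are illustrative assumptions, not a real product's rule set.

SUSPICIOUS_BEHAVIORS = {
    "opens_many_comm_channels": 3,   # many outbound connections at once
    "contacts_raw_ip": 2,            # server reached by IP, not domain name
    "unpacks_code_in_memory": 4,     # writable memory later made executable
    "creates_run_registry_key": 3,   # persistence across reboots
    "hooks_keyboard_input": 4,       # classic key-logger operation
}

def threat_score(observed_behaviors):
    """Sum the weights of every suspicious behavior seen at runtime."""
    return sum(SUSPICIOUS_BEHAVIORS.get(b, 0) for b in observed_behaviors)

def is_malicious(observed_behaviors, threshold=6):
    return threat_score(observed_behaviors) >= threshold

# A benign editor and a key logger produce very different runtime traces:
editor = ["reads_document", "writes_document"]
keylogger = ["hooks_keyboard_input", "contacts_raw_ip",
             "creates_run_registry_key"]
print(is_malicious(editor))     # False
print(is_malicious(keylogger))  # True
```

The point of the sketch: no signature database appears anywhere. A brand-new key logger variant, recompiled and renamed, still has to hook the keyboard and phone home, so it still trips the same small set of behavioral rules.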
At the end of the day, it’s easy to miss malware at rest (on disk) or in motion (on the network), because hackers’ advanced tools and techniques allow it to avoid signature-based scanning on the endpoints and network-based detection methods. It can lie at rest literally for years. The only guaranteed, fool-proof way to ferret out malware is to let it run in memory and simultaneously apply behavior-based malware detection against it. Malware wants to hide to avoid detection, but to carry out its bad acts, it must run. It’s only when malware runs, and you’re looking for its behavior, that you can find it and take the requisite steps to eradicate it.
By Raj Dodhiawala, senior vice president and general manager at ManTech Cyber Solutions International.
About Raj Dodhiawala
Raj Dodhiawala is senior vice president and general manager at ManTech Cyber Solutions International, a provider of cyber security solutions specializing in comprehensive, integrated security support, including computer and network design, implementation, and operations. Specializing in enterprise software appliance marketing, product management, technology and services, Raj offers a quarter of a century of enterprise software experience. As the Vice President of Innovation and Engineering at Morse Best Innovation, Raj led the company’s effort in technical marketing services and developed DemoMate, a software-plus-services solution for sales readiness. At Microsoft, he was the Chief Architect of the .Net Center of Excellence as well as the Program Co-Lead for the Net Effect program. Prior to Microsoft, Raj served as Director of Engineering at Blue Coat, leading the Server Accelerator product effort, which remains a core component of the company’s revenue mix. Additional roles include Director of Engineering at NUKO, Technical Scientist for Boeing Computer Services and software consultant at Tata Consultancy Services in Mumbai, India. | <urn:uuid:46039034-0f39-47c3-bbb1-cbf82b5d43a4> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/articles/malware-wants-to-hide-but-it-has-to-run-to-wreak-havoc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00542.warc.gz | en | 0.943202 | 1,592 | 2.703125 | 3 |
Vulnerability management refers to the process of discovering, confirming, classifying, prioritizing, assigning, remediating, and tracking vulnerabilities. Do not confuse vulnerability management with vulnerability scanning, the latter being part of the vulnerability management process, with emphasis on the discovery phase. It is also important to understand that risk management deals with all associated risks, whereas vulnerability management targets technology.
Vulnerabilities can be perceived as weaknesses in people, process, and technology. Vulnerability management in the context of SOC focuses on known technical weaknesses introduced in software and firmware. It is worth highlighting that the existence of a technical vulnerability could be the result of weaknesses in people and process such as the lack of a proper software quality assurance process.
Organizations with a mature security program integrate the closely linked vulnerability management and risk management practices. Sometimes this can be accomplished using tools that can automate this integration. Figure 2-9 shows the initial steps you would typically undertake to identify the scope and prepare your vulnerability management program. We will look deeper into preparing the SOC in Chapter 10.
Figure 2-9 Preparing a Vulnerability Management Program
The most critical element of vulnerability management is speed: protecting the vulnerable asset before the weakness is exploited. This is accomplished by continuously applying a series of steps to identify, assess, and remediate the risk associated with the vulnerability. A good reference model that can be followed as a guideline for handling risk is the SANS Vulnerability Management Model shown in Figure 2-10. The details of each step are covered in Chapter 7.
Figure 2-10 SANS Vulnerability Management Model
One of the most common methods to identify when a system is vulnerable is by monitoring for vulnerability announcements in products found within your organization. Let’s look more into how this information is released.
Vulnerabilities in open and closed source code are announced on a daily basis. Identifiers are associated with vulnerability announcements so that they can be globally referenced, ensuring interoperability. One commonly used standard to reference vulnerabilities is the Common Vulnerabilities and Exposures (CVE), which is a dictionary of publicly known information security vulnerabilities and exposures. CVE’s common identifiers make it easier to share data across separate network security databases and tools. If a report from one of your security tools incorporates CVE identifiers, the administrator can quickly and accurately access fix information in one or more separate CVE-compatible databases and remediate the problem. Each CVE identifier contains the following:
- CVE identifier (CVE-ID) number in the form of CVE prefix + Year + Arbitrary Digits
- Brief description of the security vulnerability or exposure
- Other related material
The list of products that use CVE for referencing vulnerabilities is maintained by MITRE.8
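Because the CVE-ID format described above (CVE prefix + year + arbitrary digits) is so regular, it is easy to validate and parse programmatically. The following is a minimal Python sketch; note that since the CVE syntax change of 2014, the sequence portion may contain four or more digits:

```python
import re

# CVE-ID syntax: "CVE" prefix + 4-digit year + sequence number of
# 4 or more digits (4+ allowed since 2014), e.g. CVE-2014-4111.
CVE_ID = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve_id(text):
    """Return (year, sequence) for a well-formed CVE identifier, else None."""
    m = CVE_ID.match(text.strip())
    if not m:
        return None
    return int(m.group(1)), m.group(2)

print(parse_cve_id("CVE-2014-4111"))   # (2014, '4111')
print(parse_cve_id("not-a-cve"))       # None
```

A check like this is useful when ingesting vulnerability feeds, since it lets a tool reject malformed identifiers before querying CVE-compatible databases.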
The CVE identifier does not provide vulnerability context such as exploitability complexity and potential impact on confidentiality, integrity, and availability. These are provided by the Common Vulnerability Scoring System (CVSS), maintained by NIST. According to NIST, CVSS defines a vulnerability as a bug, flaw, weakness, or exposure of an application, system, device, or service that could lead to a failure of confidentiality, integrity, or availability.
The CVSS enables users to understand a standardized set of characteristics about vulnerabilities. These characteristics are conveyed in the form of a vector composed of three separate metric groups: base, environmental, and temporal. The base metric group is composed of six metrics: Access Vector (AV), Access Complexity (AC), Authentication (Au), Confidentiality (C), Integrity (I), and Availability (A). The base score, ranging from 0 to 10, derives from an equation specified within the CVSS. AV, AC, and Au are often referred to as exploit metrics, and C, I, and A are referred to as impact metrics. Figure 2-11 shows the base metrics used in CVSS (source: NIST CVSS Implementation Guidance). The vector template syntax for the base score is AV:[L,A,N]/AC:[H,M,L]/Au:[M,S,N]/C:[N,P,C]/I:[N,P,C]/A:[N,P,C].
Figure 2-11 CVSS Base Metrics (Source: NIST CVSS Implementation Guidance)
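The vector template syntax shown above can be parsed mechanically. The following Python sketch (an illustration, not part of any official CVSS tooling) splits a CVSS v2 base vector into its six metrics and validates the letter values allowed by the template:

```python
# Parse a CVSS v2 base vector such as "AV:N/AC:M/Au:N/C:C/I:C/A:C"
# into a {metric: value} dict, validating against the letters allowed
# by the base-vector template syntax.

ALLOWED = {
    "AV": "LAN",   # Access Vector: Local, Adjacent network, Network
    "AC": "HML",   # Access Complexity: High, Medium, Low
    "Au": "MSN",   # Authentication: Multiple, Single, None
    "C":  "NPC",   # Confidentiality impact: None, Partial, Complete
    "I":  "NPC",   # Integrity impact: None, Partial, Complete
    "A":  "NPC",   # Availability impact: None, Partial, Complete
}

def parse_cvss2_vector(vector):
    metrics = dict(part.split(":") for part in vector.split("/"))
    for name, value in metrics.items():
        if name not in ALLOWED or value not in ALLOWED[name]:
            raise ValueError(f"bad metric {name}:{value}")
    return metrics

print(parse_cvss2_vector("AV:N/AC:M/Au:N/C:C/I:C/A:C"))
# {'AV': 'N', 'AC': 'M', 'Au': 'N', 'C': 'C', 'I': 'C', 'A': 'C'}
```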
CVSS is a quantitative model that ensures a repeatable accurate measurement while enabling users to see the underlying vulnerability characteristics that were used to generate the scores. Thus, CVSS is well suited as a standard measurement system for industries, organizations, and governments that need accurate and consistent vulnerability impact scores. Example 2-11 shows the information included in the vulnerability announcement labeled CVE-2014-4111 including the CVSS score of (AV:N/AC:M/Au:N/C:C/I:C/A:C).
Example 2-11 Vulnerability Announcement CVE-2014-4111
Original release date: 09/09/2014
Last revised: 09/10/2014
Source: US-CERT/NIST

Overview
Microsoft Internet Explorer 6 through 11 allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption) via a crafted web site, aka "Internet Explorer Memory Corruption Vulnerability."

Impact
CVSS Severity (version 2.0):
  CVSS v2 Base Score: 9.3 (HIGH) (AV:N/AC:M/Au:N/C:C/I:C/A:C)
  Impact Subscore: 10.0
  Exploitability Subscore: 8.6
CVSS Version 2 Metrics:
  Access Vector: Network exploitable; Victim must voluntarily interact with attack mechanism
  Access Complexity: Medium
  Authentication: Not required to exploit
  Impact Type: Allows unauthorized disclosure of information; Allows unauthorized modification; Allows disruption of service

External Source: MS
Name: MS14-052
Type: Advisory; Patch Information
Hyperlink: http://technet.microsoft.com/security/bulletin/MS14-052
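The subscores in Example 2-11 can be reproduced with the CVSS v2 base equation. The sketch below hard-codes the standard CVSS v2 metric weights; it is an illustration, not official NIST tooling:

```python
# CVSS v2 base-score equation with the standard v2 metric weights.
# For the Example 2-11 vector AV:N/AC:M/Au:N/C:C/I:C/A:C it reproduces
# the published base score 9.3, impact 10.0, and exploitability 8.6.

WEIGHTS = {
    "AV": {"L": 0.395, "A": 0.646, "N": 1.0},
    "AC": {"H": 0.35, "M": 0.61, "L": 0.71},
    "Au": {"M": 0.45, "S": 0.56, "N": 0.704},
    "C":  {"N": 0.0, "P": 0.275, "C": 0.660},
    "I":  {"N": 0.0, "P": 0.275, "C": 0.660},
    "A":  {"N": 0.0, "P": 0.275, "C": 0.660},
}

def cvss2_base(vector):
    """Return (base score, impact subscore, exploitability subscore)."""
    metrics = dict(part.split(":") for part in vector.split("/"))
    w = {k: WEIGHTS[k][v] for k, v in metrics.items()}
    impact = 10.41 * (1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"]))
    exploitability = 20 * w["AV"] * w["AC"] * w["Au"]
    f_impact = 0 if impact == 0 else 1.176
    base = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact
    return round(base, 1), round(impact, 1), round(exploitability, 1)

print(cvss2_base("AV:N/AC:M/Au:N/C:C/I:C/A:C"))  # (9.3, 10.0, 8.6)
```

Running it on the Example 2-11 vector yields (9.3, 10.0, 8.6), matching the announcement, which is a handy sanity check when building your own vulnerability-management tooling.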
Rising sea levels are set to damage fiber optic cables, submerge network points of presence (PoPs) and surround data centers, researchers have warned.
In a study analyzing the effects of climate change on Internet infrastructure in the United States, University of Wisconsin-Madison and University of Oregon researchers found that a significant amount of digital infrastructure will be impacted over the coming years, and cautioned that mitigation planning should begin immediately.
“This is a wake-up call”
The peer-reviewed study Lights Out: Climate Change Risk to Internet Infrastructure, authored by Ramakrishnan Durairajan, Carol Barford and Paul Barford, combined data from the Internet Atlas - a global map of the Internet’s physical components - and projections of sea level changes from the National Oceanic and Atmospheric Administration (NOAA).
“Our analysis is conservative since it does not consider the threat of severe storms that would cause temporary sea level incursions beyond the predicted average,” Durairajan et al. note.
At particular risk are fiber optic cables buried underground, which - unlike submarine cables - are not designed for prolonged periods of submersion.
According to the study, in 15 years some 1,186 miles (1,908km) of long-haul fiber and 2,429 miles (3,909km) of metro fiber will be underwater, while 1,101 termination points will be surrounded by the sea. “Given the fact that most fiber conduit is underground, we expect the effects of sea level rise could be felt well before the 15 year horizon,” the paper states.
Additionally, “in 2030, about 771 PoPs, 235 data centers, 53 landing stations, 42 IXPs will be affected by a one-foot rise in sea level.”
The US networks most at risk belong to AT&T, CenturyLink, and Inteliquent, with a particularly strong impact expected across New York, Miami, and Seattle metropolitan areas. “Given the large number of nodes and miles of fiber conduit that are at risk, the key takeaway is that developing mitigation strategies should begin soon.”
Researchers added: “Future deployments of Internet infrastructure (including colocation and data centers, conduits, cell towers, etc.) will need to consider the impact of climate change.”
In a separate statement, Paul Barford said: “Most of the damage that’s going to be done in the next 100 years will be done sooner than later. That surprised us. The expectation was that we’d have 50 years to plan for it. We don’t have 50 years.
“This is a wake-up call. We need to be thinking about how to address this issue.”
Whether any immediate action will be taken remains unclear, with the US federal government disputing climate change science and rolling back the associated regulations. “The first instinct will be to harden the infrastructure,” Barford said.
“But keeping the sea at bay is hard. We can probably buy a little time, but in the long run it’s just not going to be effective.” | <urn:uuid:cc9e5e0f-4281-4dea-b220-c2e54182f564> | CC-MAIN-2022-40 | https://www.datacenterdynamics.com/en/news/data-centers-fiber-optic-cables-at-risk-from-rising-sea-levels/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00542.warc.gz | en | 0.908836 | 657 | 2.84375 | 3 |
What is a Community Needs Assessment?
A community needs assessment is a systematic process for identifying salient socio-political community issues, such as poverty, crime, health, education or unemployment. These are broad examples of topics that a needs assessment may identify as issues in a community with gaps between what is and what should be. Community needs assessments are effective when used to inform initiatives and programs meant to improve a community's societal well-being. Needs assessments may reveal concrete needs such as an improved system of public transportation, or a need that is more abstract and conceptual, such as the need for a community to be more informed about issues of environmental health and sustainable living.
Why is a Community Needs Assessment Important for CSR?
As previously discussed, effective corporate social responsibility (CSR) programs and initiatives generate both a societal benefit and a business benefit, allowing for sustainable and credible corporate philanthropy. The concept of context-focused giving describes how corporate philanthropy and business strategy can be combined by identifying the contextual conditions most important to a company's strategy, growth and operations, and developing a giving program that improves the nature of this context, creating social and economic benefits.
Corporate social responsibility should be geared toward addressing real needs in a community. Without referencing or conducting an analysis that identifies and measures community needs, giving programs are based on good ideas rather than solid evidence. Community needs assessments identify community needs as well as community assets and resources. Utilizing your community's strengths to address its weaknesses makes perfect sense, and a needs assessment will clearly identify both. Community needs assessments help organizations and individuals to better understand their communities and the potential micro-communities that exist within them. The biggest benefit to utilizing or conducting a community needs assessment is that it helps identify priorities for community improvement and to inform evidence-based, and often more effective, strategies for generating social and/or environmental community benefits.
How to Find or Conduct a Community Needs Assessment
Your company need not conduct its own community needs assessment to identify areas of improvement to target for your CSR programs. There are many community need assessments that have already been conducted and are available for review by the public. Most often, government or non-profit agencies have been responsible for conducting community needs assessments. A simple internet search can more than likely lead you to one or more community needs assessments that have been conducted for the area in which your company operates.
If you are not able to find a needs assessment that has already been conducted for your community, or if for any reason you would rather conduct your own community needs assessment, there are many tools and resources available for your CSR team to create your company's own. One great resource for learning more about community needs assessments is The Community Tool Box, a service of the Work Group for Community Health and Development at the University of Kansas. One of their services is "Learn a Skill," a 46-chapter resource that provides information about a wide-variety of skills and practices for building and improving community. Chapter 3 provides an in-depth look at community needs assessments with many tools and strategies for conducting one. Another great resource is James W. Altschuld's "The Needs Assessment KIT," a set of 5 books that explain needs assessments, and takes readers through the phases of conducting a needs assessment and subsequent plan of community action. | <urn:uuid:f87fef8f-ca5e-410f-85c5-2450ea3b4865> | CC-MAIN-2022-40 | https://www.givainc.com/blog/index.cfm/2015/4/16/community-needs-assessments-and-csr | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00542.warc.gz | en | 0.964561 | 670 | 2.921875 | 3 |
The Isle of Iona
The Isle of Iona is a small island in the Inner Hebrides off the west coast of Scotland. The Vikings were here, it was a significant site in the development and spread of early Celtic Christianity, and 48 Scottish kings including Macbeth are buried here. Iona has a lot of history for such a small place. (877 hectares, or 21671.142 square chains if you insist on U.S. measurement units)
Iona is known in Gaelic as Ì Chaluim Chille, meaning Iona of Saint Columba, or as Eilean Idhe, and further back yet it had a Norse name, Eyin Helga.
Iona is just off the south-western tip of Mull, which itself is just across the Firth of Lorn from the northwest coast of Scotland. My plan was to look around Oban, and then take a ferry from Oban to the northeast coast of Mull, just beyond the lighthouse shown on the above chart. That connects to a bus across Mull to its southwest tip. Then a short ferry crosses the narrow sound to Iona.
In addition to Vikings and early Celtic Christianity, Iona is also the burial place of 48 Scottish kings, including Macbeth (1005-1057). Yes, he was real, Shakespeare didn't make him up. However, Shakespeare's story is largely made up.
Oban is on the coast, about ninety minutes by bus southwest of Fort William and connected to Glasgow by frequent bus and three or so trains to and from Glasgow Queen Street station every day.
Oban, or An t-Òban in Gaelic, meaning The Little Bay, is called "The Gateway to the Isles" as it's the major ferry port for connection to the Inner and Outer Hebrides.
A prominent sight is the Stuart McCaig Tower looming over the center of town. It was built in 1897-1902 at the direction of a philanthropic banker, with the dual purposes of providing winter work for the local stonemasons and building an epic monument to him and his family. His design (being his own architect, a key aspect of most any architectural folly) was for an elaborate structure based on the Colosseum in Rome, with the addition of a tall central tower containing statues of himself and his family, plus a museum and art gallery within the main circular walls. Just the outer walls were finished before he died.
Lest you think I'm making this up, or being harsh in calling it a "folly", his will of 1900 said:
The purpose of the trust is that my heritable estate be not sold, but let to tenants, and the clear revenue or income be used for the purpose of erecting monuments and statues for myself, brothers and sisters on the tower or circular buildings called the Stuart McCaig Tower, situated on the Battery Hill, above Oban, the making of these statues to be given to Scotch sculptors from time to time as the necessary funds may accumulate for that purpose; also that artistic towers be built on the hillock at the end of Airds Park, in the parish of Muckairn; and on other prominent points on the Muckairn estate, and on other prominent places on the various estates; such in particular on the Meolroor of Balagown, lying north-east of Kilachonish Farmhouse. My wish and desire is to encourage young and rising artists, and for that purpose prizes be given for the best plans of the proposed statues, towers, &c., before building them.
And then, in an addition in 1902:
Further, in order to avoid the possibility of vagueness of any kind, I have to describe and explain that I particularly want the trustees to erect on the top of the wall of the tower I have built in Oban, statues in large figures of all my five brothers and of myself, namely, Duncan, John, Dugald, Donald, Peter, and of my father, Malcolm, and of my mother, Margret, and of my sisters, Jean, Catherine, Margret, and Ann; and that these statues be modelled after photographs. And where these may not be available, that the statues may have a family likeness to my own photograph or to any other member of my foresaid family; and that these statues will not cost less than one thousand pounds sterling, and that money to come out of the accumulated clear revenue.
More interesting, at least to me, was the Oban Distillery, at the base of Battery Hill and just off the northwest corner of the harbor. You pay to take the tour, not much, significantly less than the cost of going to a pub and ordering the equivalent of the end-of-tour tastings.
Better yet, join the Friends of the Classic Malts program or similar for free admission to tours.
I found the tour very interesting; the initial stages are similar to processes used in home brewing.
Note that Scotch is spelled whisky while the Irish and American products are spelled with an added "e", whiskey.
Also note that Scotch is the beverage, while Scottish is an adjective for things from Scotland and Scots are people from Scotland.
Things have changed since the old days. Almost all of the distilleries buy their barley already malted, and the smokiness comes from added phenols, not directly from peat smoke. Oban single-malt in its final form only contains about 5 ppm peat smoke phenols. Talisker, from the Isle of Skye, is about 25 ppm. Islay whiskies tend to be above 25 ppm.
Oban uses a 10 hour mash cycle, with 58°C and 68°C sparges. There's a third sparge, even hotter, but it's just to make the sparge liquid to be used for the next cycle. The number and temperatures of the sparge cycles will vary from one distillery to the next.
Oban ferments the resulting wort for four days. That sounded fast to me, but they said that some distilleries finish their fermentation in as little as 2.5 days.
The result, similar to partially finished ale, is distilled twice to 64% a.b.v. The result is completely clear. It is placed in barrels for aging, by law all Scotch whisky must be aged for at least three years in oak casks.
There are laws in the United States requiring bourbon and Tennessee whiskey to be aged in new oak casks. Why? The laws were passed because of lobbying by the barrel making industry, to create more work. It's one of the forms of weak socialism in the U.S., like the farm subsidies in which farmers are paid to not grow crops.
Anyway, the barrels are used once in the U.S. and then broken down and shipped to Scotland. Some sherry casks are also shipped in from Spain. Different distilleries use different types of barrels at different generations after their original use. That is, an American barrel might be shipped to distillery #1 and used two times, then shipped to distillery #2 and used three more times.
A short report in the journal Science [vol 329, 27 Aug 2010, pg 999] reports that industrial microbiologist Martin Tangney, of Edinburgh Napier University, has patented a process to turn whisky distilling byproducts into an automobile fuel based on butanol, C4H9OH. Draff is the residue of the fermentation stage. There are hundreds of distilleries in Scotland, and they produce 187 million kilograms of draff each year. Clostridium acetobutylicum bacteria convert the draff to butanol. Additional water is needed, available in the form of pot ale, what's left behind in the distillation stage.
You'll need to stay somewhere — I stayed at Oban Backpackers, just a few blocks north of the harbor.
The Oban War and Peace Museum does a very nice job of explaining Oban's military and geopolitical connections during and since World War II.
During World War II Oban was a major port for the Royal Navy and for the North Atlantic convoys between the UK and North America. There was also a major RAF flying boat base on Kerrera, the small island directly across from Oban harbor.
During the Cold War Oban was significant as the landing point for TAT-1, the first Trans-Atlantic Telephone cable. TAT-1 came ashore at nearby Gallanach Bay, and it also carried the "Hot Line" between Washington and Moscow. | <urn:uuid:2e8d68df-6df4-4ff1-bb08-4b58c2f8d888> | CC-MAIN-2022-40 | https://cromwell-intl.com/travel/uk/scotland-isle-of-iona/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00542.warc.gz | en | 0.964269 | 1,781 | 2.640625 | 3 |
How often do you clean out your phone’s clipboard? If you don’t know what that is, you’re not alone. In a nutshell, your clipboard is where your phone temporarily stores text, images and other content you copy and paste elsewhere. You can’t exactly see what’s being stored there at any given time unless you manually tap paste in another part of your phone.
Clipboards are an essential part of any computer system, but they can also be vulnerable to security issues. Earlier this year, researchers discovered a common iPhone weakness: apps could snoop on private clipboard contents. Tap or click here to see what you can do to avoid getting spied on.
But the problem has significantly escalated thanks to new discoveries from these same researchers. After digging deeper, they found that TikTok, as well as 53 other iOS apps, enjoy unrestricted access to your clipboard, and intentionally scan and retrieve that data. Here’s what we know, and what you can do to retain your privacy.
TikTok is keeping its eye on you
Researchers from security firm Mysk have expanded upon their initial report showing the dangers of unrestricted clipboard access on iOS. But rather than just point out how vulnerable this area of your iPhone is, they’ve now uncovered that more than 50 apps — including the obscenely popular TikTok — intentionally access your clipboard for data.
As you can see in the video above, the permissions granted by the operating system put a potential wealth of information in the hands of these app developers. If you make the mistake of copying or pasting sensitive information like a password, it can easily get scooped up by these apps running in the background.
But it’s not just limited to one device, either. If you sync your notes or other files between devices using the same iCloud account, your devices share a so-called “universal clipboard.” This means anything saved to a clipboard across all your iCloud-connected devices is fair game for these snoops.
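To see why unrestricted pasteboard access is so dangerous, consider this toy simulation. It is plain Python, not real iOS code (an iOS app would read the system pasteboard through UIKit’s UIPasteboard API), but it captures the core problem: one shared buffer that any app on the device can read without a prompt.

```python
# Toy illustration only -- not real iOS code. A shared Clipboard object
# stands in for the system pasteboard that every app on the device (and,
# via a shared iCloud account, every synced device) can read.

class Clipboard:
    def __init__(self):
        self.content = ""

    def copy(self, text):    # what happens when you tap "Copy"
        self.content = text

    def paste(self):         # what any app may call, even in the background
        return self.content

shared = Clipboard()         # one pasteboard shared by all apps

# You copy a password in one app...
shared.copy("hunter2-my-bank-password")

# ...and a completely unrelated app, polling in the background,
# reads the same pasteboard without any permission check:
snooped = shared.paste()
print(snooped)  # hunter2-my-bank-password
```

Nothing in this model distinguishes the app you pasted into from the app that merely snooped, which is exactly the gap the iOS 14 paste notification was added to expose.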
What’s more, early adopters of the iOS 14 beta discovered how aggressively some of these apps scan for content in real-time. In a Twitter thread composed by one of the authors of Emojipedia, iOS 14’s paste notification feature alerted him that TikTok was scanning and copying his clipboard approximately every 1-3 keystrokes. That’s a lot of effort put into snooping!
Okay so TikTok is grabbing the contents of my clipboard every 1-3 keystrokes. iOS 14 is snitching on it with the new paste notification pic.twitter.com/OSXP43t5SZ
— Jeremy Burge (@jeremyburge) June 24, 2020
As bad as this is, there are fortunately some steps you can take to secure your clipboard and prevent yourself from being spied on.
What apps are spying on me? What can I do?
The researchers from Mysk have graciously provided a complete list of the apps they detected snooping on users’ clipboards. The list below includes each app’s name along with its BundleID in your phone’s code.
If you use any of the following apps, you may want to clear your clipboards out before opening them — or perhaps avoid using the apps altogether.
- ABC News — com.abcnews.ABCNews
- Al Jazeera English — ajenglishiphone
- CBC News — ca.cbc.CBCNews
- CBS News — com.H443NM7F8H.CBSNews
- CNBC — com.nbcuni.cnbc.cnbcrtipad
- Fox News — com.foxnews.foxnews
- News Break — com.particlenews.newsbreak
- New York Times — com.nytimes.NYTimes
- NPR — org.npr.nprnews
- ntv Nachrichten — de.n-tv.n-tvmobil
- Reuters — com.thomsonreuters.Reuters
- Russia Today — com.rt.RTNewsEnglish
- Stern Nachrichten — de.grunerundjahr.sternneu
- The Economist — com.economist.lamarr
- The Huffington Post — com.huffingtonpost.HuffingtonPost
- The Wall Street Journal — com.dowjones.WSJ.ipad
- Vice News — com.vice.news.VICE-News
- 8 Ball Pool™ — com.miniclip.8ballpoolmult
- AMAZE!!! — com.amaze.game
- Bejeweled — com.ea.ios.bejeweledskies
- Block Puzzle — Game.BlockPuzzle
- Classic Bejeweled — com.popcap.ios.Bej3
- Classic Bejeweled HD — com.popcap.ios.Bej3HD
- FlipTheGun — com.playgendary.flipgun
- Fruit Ninja — com.halfbrick.FruitNinjaLite
- Golfmasters — com.playgendary.sportmasterstwo
- Letter Soup — com.candywriter.apollo7
- Love Nikki — com.elex.nikki
- My Emma — com.crazylabs.myemma
- Plants vs. Zombies™ Heroes — com.ea.ios.pvzheroes
- Pooking – Billiards City — com.pool.club.billiards.city
- PUBG Mobile — com.tencent.ig
- Tomb of the Mask — com.happymagenta.fromcore
- Tomb of the Mask: Color — com.happymagenta.totm2
- Total Party Kill — com.adventureislands.totalpartykill
- Watermarbling — com.hydro.dipping
- TikTok — com.zhiliaoapp.musically
- ToTalk — totalk.gofeiyu.com
- Tok — com.SimpleDate.Tok
- Truecaller — com.truesoftware.TrueCallerOther
- Viber — com.viber
- Weibo — com.sina.weibo
- Zoosk — com.zoosk.Zoosk
- 10% Happier: Meditation — com.changecollective.tenpercenthappier
- 5-0 Radio Police Scanner — com.smartestapple.50radiofree
- Accuweather — com.yourcompany.TestWithCustomTabs
- AliExpress Shopping App — com.alibaba.iAliexpress
- Bed Bath & Beyond — com.digby.bedbathbeyond
- Dazn — com.dazn.theApp
- Hotels.com — com.hotels.HotelsNearMe
- Hotel Tonight — com.hoteltonight.prod
- Overstock — com.overstock.app
- Pigment – Adult Coloring Book — com.pixite.pigment
- Recolor Coloring Book to Color — com.sumoing.ReColor
- Sky Ticket — de.sky.skyonline
- The Weather Network — com.theweathernetwork.weathereyeiphone
How do you clear your clipboard out? It’s simple. Just replace what you’re copying to it on a regular basis. In other words, keep a bit of text in your Notes app and copy that once you’ve finished pasting something else.
Try to make it a simple word or sentence that doesn’t say anything important. That way, the only data you’re hanging on to is a piece of harmless text.
This is yet another reason why it’s important to review what apps you download to your device, as well as the permissions they’re allowed. Don’t forget, however: These data-hungry apps can’t snoop on your clipboard if they’re not installed on your phone in the first place. Tap or click here to see even more iOS apps you need to delete right now.
The use of fiber optic cables for communications dates back to the 1950s, when some of the first demonstrations of the world’s earliest transmission systems occurred. Since then, continuous innovation in optical fiber has significantly improved its performance characteristics, to the point that fiber optic cables now serve as the backbone of global communications systems.
Fiber optic testing and training are important aspects of ensuring optical communications networks operate successfully as designed and intended. When thinking about the big picture, testing and training happen at many different levels and in a variety of instances before, during, and after the network has been installed.
For many years, communications network equipment manufacturers performed most of the lab testing efforts when designing and certifying their gear for use in the field by their customers. Now that fiber optic systems are the norm around the world for driving communications, many of the equipment users themselves are testing potential devices before deployment to determine the best solution for their own unique needs. Financial service providers, utilities, local governments, and smaller regional internet providers have recognized that having even a small and efficient lab setup where they can simulate their network and train technicians can go a long way in optimizing their operations.
Using the latest advances in genetics, two biofuel companies have developed a way to turn plant matter into fuel that can meet the nation’s energy needs.
In a process similar to brewing beer, the firms use genetically engineered microbes to turn a broth of plant material, water and sugar into a biofuel that can be used to run today’s automobiles.
The designer biofuel — which the companies predict will be entering the mainstream market in three to five years — could be an important road on the map to energy independence.
With designer biofuels, “we’re not necessarily reliant on obtaining liquid transportation fuels from oil, which generally comes from the Middle East,” Neil Renninger, cofounder and senior vice president of development for one of the companies, Amyris Biotech, told TechNewsWorld.
While designer biofuels are exotic, biofuels, in general, are not. Gasoline mixtures containing 10 percent ethanol — which is a biofuel — can be pumped at gas stations today, but the new breed of biofuel being pioneered by Amyris and LS9 has some distinct advantages over ethanol.
For one, it takes less energy to produce it.
Like Oil and Water
Ethanol mixes with water, explained LS9’s Senior Director for Corporate Development Gregory Pal.
“When you get done with your fermentation for ethanol, you have to do a distillation process, which is effectively putting a bunch of energy into the system to separate the water from the ethanol,” he told TechNewsWorld.
Since the “renewable petroleum” made by the designer biofuel makers acts like vegetable oil, it doesn’t mix with the water in the fermentation tank so it can be skimmed off the top of the broth with very little energy expenditure, Pal explained.
More Energy Per Gallon
Energy expenditure has been a thorny problem for ethanol produced from corn, where it has been estimated that it takes 20 percent more energy to produce a gallon of ethanol than can be obtained from that gallon.
Ethanol made from sugar cane is much more efficient, noted Renninger, with a gallon of sugar ethanol producing 80 percent more energy than it took to produce it.
According to the designer biofuel makers, a gallon of their fuel packs more energy than the same measure of ethanol.
A gallon of gasoline can produce 50 percent more energy than a gallon of ethanol, Pal explained. So if a car can go 30 miles on a gallon of petrol, he noted, it would go 20 miles on a gallon of ethanol.
Since the renewable petroleum LS9 is making is “functionally equivalent to gasoline,” Pal maintained, it would provide the same mileage as a gallon of gasoline.
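The mileage figures quoted above are internally consistent, which a quick check confirms (the 50 percent energy difference and the 30-mile baseline are the article's numbers; the rest follows from them):

```python
# A gallon of gasoline delivers 50% more energy than a gallon of ethanol.
gasoline_to_ethanol_ratio = 1.5

gasoline_mpg = 30  # the article's example car
ethanol_mpg = gasoline_mpg / gasoline_to_ethanol_ratio

print(ethanol_mpg)  # 20.0 -- matches the 20 miles quoted above
```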
No Mods Needed
There are other advantages to designer biofuels because of their close similarity to petroleum products.
They can be distributed using existing pipelines, enter existing gas stations without modifying those stations and run in existing auto engines, Pal said.
“In the case of ethanol, if you go beyond the 10 percent blend, it has to be distributed via trucks or rail cars because it can’t go through the pipelines,” he asserted. “You need to do some retrofitting at the station. Then you have to have a flex-fuel vehicle.”
Although both companies are producing designer fuels, their business plans vary slightly.
LS9 will make a crude oil equivalent that can be refined into gasoline, as well as a diesel product.
Amyris will make products that can go directly from fermentation tank to gas and diesel vehicles, as well as jet planes.
Both companies have plans to open production facilities next year and enter commercial production in three to five years.
When they enter the mainstream market, will these designer fuels be priced like designer clothes?
“Our goal is to be competitive at (US)$45 a barrel of oil,” Renninger said. “A lot will depend on where the price of oil is.”
According to a report released Aug. 1 by the U.S. Energy Information Administration, crude oil was selling at $77.03 a barrel.
Voice over WiFi (VoWiFi) is the term operators use for the provision of telephony services over WiFi access. From the user's perspective, the experience is similar to VoLTE or to traditional voice calls. From the network perspective, however, it is quite different, since the WiFi network is not an integrated part of the mobile network. Voice over Internet Protocol (VoIP) is a category of hardware and software that enables people to use the Internet as the transmission medium for telephone calls, sending voice data in packets over IP rather than via the traditional circuit-switched transmissions of the PSTN. Call quality can be poorer, and in general is similar to that of over-the-top (OTT) applications.
Fighting Ransomware: Using Ivanti’s Platform to Build a Resilient Zero Trust Security Defense
Ransomware is a strain of malware that blocks users (or a company) from accessing their personal data or apps on infected iOS, iPadOS, and Android mobile devices, macOS laptops, Windows personal computers and servers, and Linux servers. Then the exploit demands cryptocurrency as payment to unblock the locked or encrypted data and apps. This form of cyber extortion has been increasing in frequency and ferocity over the past several years. Seemingly, a week does not pass without hearing about the latest ransomware exploit attacking government agencies, healthcare providers (including COVID-19 researchers), schools and universities, critical infrastructure, and consumer product supply chains.
The most common delivery mechanisms are email and text messages that contain a phishing link to a malicious website. By tapping on the link, the user is redirected to an infected website where they unknowingly download drive-by malware onto their device. The malware can contain an exploit kit that automatically executes malicious programmatic code that performs a privilege escalation to the system root device level, where it will grab credentials and attempt to discover unprotected network nodes to infect via lateral movement.
Another common delivery mechanism is the email attachment, which can also carry a malware exploit kit that attaches itself to vulnerable apps, computer systems, or networks to elevate privileges in search of critical data to block.
There are four main types of ransomware. The first is locker ransomware; its earliest mobile form, detected on Android in late 2013, was called LockDroid. It secretly changed the PIN or password of the user’s lock screen, preventing access to the home screen and to the user's data and apps.
The second type is encryptor ransomware, which encrypts apps and files, making them inaccessible without a decryption key. The first exploit of this type, found in 2014, was called SimpLocker. It encrypted the personal data contained within the internal Secure Digital (SD) storage of an Android device. Afterward, an official-looking message listing criminal violations, supposedly based on files found on the device, was displayed to the victim, followed by a demand for payment that would resolve the fake violations and deliver the decryption key to unlock the blocked data and apps.
Extortion payments are often made with Monero cryptocurrency because it is digital and often untraceable, ensuring anonymity for the cybercriminals. Bitcoin is still sometimes used, but lately, companies like CipherBlade have been able to track down ransomware gangs using Bitcoin and return the money back to victims. Rarely, mobile payment methods like Apple Pay, Google Pay or Samsung Pay are also used, but cryptocurrency is still the preferred payment for ransomware.
Just within the past several years, cybercriminal gangs have added several more types of ransomware exploits. One is doxware, in which attackers threaten to reveal and publish personal (or confidential company) information on the public internet unless the ransom is paid. The other is Ransomware-as-a-Service (RaaS): cybercriminals package already developed and highly successful ransomware tools into a RaaS subscription model, selling them to less skilled cybercriminals who extort cryptocurrency from their victims and then share the ransom money.
Android Exploits: Anatomy of the SimpLocker Attack
Installation: The victim unknowingly lands on a malware-compromised or Angler-hosted web server and wants to play a video or run an app. The video or app requires a new codec or an Adobe Flash Player update. The victim downloads the malicious update software and installs it, which requires device administrator permissions to be activated. The mobile device is now infected, and the ransomware payload installs itself onto the device.
Communications: The malware scans the contents of the SD card. Then it establishes a secure communications channel with the command and control (C2) server using the anonymous Tor or I2P proxy networks within the darknet. These networks often evade security researchers, law enforcement, and government agencies making it extremely difficult to shut them down.
Encrypt Data: The symmetric key used to encrypt the personal data on the attached SD card is kept hidden within the infected mobile device’s file system so the encryption can persist across reboots.
Extortion: An official looking message from the FBI, Department of Homeland Security, or other government agency is displayed informing the victim that they are in violation of federal laws based on data found on the device after a scan of their personal files.
Demand Payment: A demand-for-payment screen with instructions on the method of payment is then displayed. The fine was normally $300 to $500 and commonly paid in cryptocurrency.
If the ransom payment is made, the symmetric key is provided and used to decrypt the personal data. If the victim is fortunate, they can retrieve all their personal files intact, although there have been reports that some if not all the data are corrupted and no longer usable after they are decrypted.
Android devices are especially susceptible to ransomware for several reasons. First is global adoption: Android holds 72% of the worldwide market share, with 3 billion devices around the world. Next are the 1,300+ original equipment manufacturers (OEMs) and the fragmentation of the Android operating system: with devices running versions from 2.2 to 11.0, a very large number of them never receive critical security updates, leaving them vulnerable to malware.
The last factor is that Android users routinely root their devices and install apps that are unverified by Google. An estimated three million apps are now available for download from the Google Play Store alone, with potentially a million more downloadable from unknown, and in many cases malicious, sources. Any one of these apps can host malware that leads to a ransomware exploit.
Here are the remediation tasks to help fight ransomware on Android devices.
These settings are configured within the Android device:
1. By default, within the Google Settings and Security configuration, the Google Play Protect settings Scan apps with Play Protect and Improve harmful app detection are enabled. These settings are the equivalent to a resident antimalware agent on the device and should remain enabled.
2. Within the Apps & notifications and Special app access configuration is the Install unknown apps setting. Leave storage, email, and browser apps as Not allowed, which is the default setting.
These settings are configured within Ivanti UEM for Mobile or MobileIron Core:
3. For Android Enterprise devices, the above settings can be configured using the Lockdown & Kiosk configuration. Select Enable Verify Apps and Disallow unknown sources on Device or Disallow Modify Accounts.
4. Create a System Update configuration to automatically update to the latest available Android OS version for the device. Ivanti Mobile Threat Defense (MTD) can also enforce that the latest OS version is running on the Android device and if not, alert the user and UEM administrator that the device is running a vulnerable OS version and apply compliance actions like block or quarantine until the device is updated.
5. Enable Ivanti MTD on-device (using MTD Local Actions) and cloud-based to provide multiple layers of protection for phishing (Anti-phishing Protection) and device, network and app level threats (using the Threat Response Matrix within the MTD management console).
6. Create a SafetyNet Attestation configuration that checks for device integrity and health every 24 hours via Google APIs.
7. Create an Advanced Android Passcode and Lock Screen configuration to turn on multi-factor authentication (MFA) for the lock screen and work profile challenge using a biometric fingerprint, face unlock, or iris (eye) scan instead of a passcode or PIN.
8. Enable Device Encryption. This may sound counter-intuitive but encrypting your personal and work data on the device can prevent the cybercriminals from threatening to publish your work or company information online.
9. Backup data automatically onto a cloud storage provider like Google Drive, OneDrive, Box or Dropbox. Make secondary and tertiary copies of backups using two or more of these personal storage providers since some offer free storage. Also, backup personal data onto a local hard drive that is encrypted, password-protected and disconnected from the device and network.
10. Enable Android Enterprise or Samsung KNOX on the device to containerize, encrypt, and isolate the work profile data from your personal data in BYOD or COPE deployments. Android Enterprise in the various deployment modes and Samsung KNOX can be provisioned by Ivanti UEM for Mobile or MobileIron Core.
11. For BYOD deployments, create a blacklist of disallowed apps on the device. For company-owned devices, create a whitelist of allowed apps that can be installed on the device. Both settings can be configured within MobileIron Core’s App Control feature and applied to the security policy. For Android Enterprise devices, Restricted Apps and Allowed Apps can be applied to the Lockdown & Kiosk configuration or Create an App Control configuration to whitelist or blacklist apps within the personal profile side of the device. This can also be configured within Ivanti UEM for Mobile’s Allowed App settings and Policies & Compliance.
12. Configure a VPN client on the device like MobileIron Tunnel, Ivanti Secure Connect or Zero Trust Access to protect sensitive data-in-motion between the mobile device and MobileIron Sentry or Connect Secure or ZTA gateways.
13. Enable Ivanti Zero Sign-On (ZSO) for conditional access rules like trusted user, trusted device, and trusted app authentication to critical work resources on-premises, at the data center, or up in the cloud. Also, enable MFA using the stronger inherence (biometrics) and possession (device-as-identity or security key) authentication factors. Passwords and PINs can be phished, guessed or brute forced.
14. As a last resort, there are anti-malware vendors that provide software to detect and remove ransomware from an infected device. The user can also boot the device into Safe Mode, deactivate the Device Administrator for the malware, and then uninstall it.
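Step 9 above (automatic backups with secondary and tertiary copies) can be sketched as a small script. The folder names below are hypothetical placeholders for locally synced cloud-provider folders, not any Ivanti or storage-provider API:

```python
import shutil
from pathlib import Path

def mirror_to_backups(source: Path, destinations: list) -> None:
    """Copy every file under source into each destination folder."""
    for dest in destinations:
        dest.mkdir(parents=True, exist_ok=True)
        for item in source.rglob("*"):
            if item.is_file():
                target = dest / item.relative_to(source)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(item, target)  # copy2 preserves timestamps

# Hypothetical usage: the providers' desktop clients sync these folders.
# mirror_to_backups(Path.home() / "Documents",
#                   [Path.home() / "GoogleDrive" / "backup",
#                    Path.home() / "OneDrive" / "backup"])
```

Scheduling it (cron, Task Scheduler) and adding an encrypted offline drive as a third destination covers the "disconnected" copy the step recommends.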
In the next blog in this series, we will discuss ransomware attacks and remediation on iOS and iPadOS mobile devices, and macOS laptops and desktops.
I’ve been talking and giving references about Return Oriented Programming as one of the main techniques for building exploits in the current “net era” (here). Today I only want to share some general principles that people often tend to confuse (writing me tons of FB messages 😛 ). In the next few days I’ll probably post something more on gadget exploiting.
Some general principles to keep in mind while thinking/using/writing about ROP.
- ROP exploiting techniques do not bypass ASLR protection; on the contrary, they are really useful for bypassing DEP (the NX bit).
- ROP gadgets are little pieces of code spread through executable memory, found by misaligning the instruction stream of the code segment.
- Since DEP (NX bit) does not allow you to execute code from the stack, hijacking EIP to point at your gadgets means executing the gadgets’ code instead.
- There are many ways to use gadgets. Theoretically you could use gadgets alone to build malware directly in memory (ROP is Turing complete), but this takes a lot of time. Usually attackers use gadgets either to disable DEP (NX bit), for example by calling VirtualProtect(), or to move the payload (typically a shellcode) into executable memory (such as the heap).
- Every gadget must be driven from the stack, so each one needs to end with a RETN.
- To move from gadget to gadget you use ESP as if it were EIP. This works through RETN: remember that RETN = POP EIP, which also moves ESP one word up (toward higher memory addresses).
- By chaining two or more gadgets you build a gadget chain that dynamically (that is, directly in memory) writes the right parameters onto the stack.
- You always need to overflow something (let’s say a stack buffer), you always need to point to the start of your gadget chain, and you always need to put your favorite shellcode in memory.
- You might think of it as two different chains of gadgets: (a) a preparatory chain that sets the right values for the “execution functions” (for example VirtualProtect()), and (b) gadgets that effectively do what you want to do (again, the VirtualProtect() call in this example).
- Speaking of addresses: you first need to overwrite EIP by overflowing, and you need to find a way to point at your ESP. You might do that by addressing ESP directly (no ASLR) or by using techniques to reach a JMP ESP (with ASLR), such as SEH or brute force (using NOPs).
- After the second chain of gadgets has been reached, the return address of that chain (the VirtualProtect() call) must be the starting address of your shellcode.
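To make the chain layout concrete, here is a minimal Python sketch of the "execution" stage described above: a packed fake stack that returns into VirtualProtect() and then falls through into the shellcode. Every address is a made-up placeholder for illustration only; in a real exploit they come from the target binary and your staging step.

```python
import struct

def p32(value):
    """Pack one 32-bit little-endian word, as it sits on the stack."""
    return struct.pack("<I", value)

# Hypothetical placeholder addresses -- NOT taken from any real binary.
VIRTUALPROTECT = 0x11110001  # resolved address of VirtualProtect()
SHELLCODE      = 0x22220100  # where the payload was staged in memory
OLD_PROTECT    = 0x22220F00  # writable scratch word for lpflOldProtect
PAGE_EXECUTE_READWRITE = 0x40

chain = b"".join([
    p32(VIRTUALPROTECT),          # overwrites the saved EIP: RETN lands here
    p32(SHELLCODE),               # VirtualProtect's return address -> shellcode
    p32(SHELLCODE),               # arg 1, lpAddress: page to make executable
    p32(0x1000),                  # arg 2, dwSize
    p32(PAGE_EXECUTE_READWRITE),  # arg 3, flNewProtect
    p32(OLD_PROTECT),             # arg 4, lpflOldProtect (must be writable)
])
print(len(chain))  # 24 bytes of fake stack
```

The preparatory chain from point (a) would sit in front of this, fixing up any values that cannot be embedded directly (NULL bytes, ASLR-dependent addresses, and so on).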
I hope to have given those of you interested in ROP some quick answers to better understand how it works. In practice you will find many more issues, like “padding gadgets” (a gadget with more than one POP), the little-endian convention, NULL-byte issues, chaining issues, address issues and so forth. As soon as I find more time I will post a simple example of ROP, hoping to further clarify the gadget exploiting process.
another quick ‘n dirty post before leaving for a sequence of security conferences around the globe… One of the most interesting things to understand about ROP is how to make conditional jumps. We have already seen how to make comparisons (here), which is very useful for making conditional changes to the stack (ESP). Now let’s look at the main strategy behind conditional jumps:
- Move the flags into general-purpose registers. Flags are important because they are the result of a comparison (here)
- “Do something” if the flag is set, or do nothing if it is not (or vice versa)
- Modify ESP in the “do something” function
Since we know how to make comparisons, let’s see how we can use the 1 or 0 (the flag) saved in memory. Let’s assume we used the CF (carry flag) from a sub or a neg instruction. We need something that puts the flags into general-purpose registers: lahf is our instruction! ;) It stores the five arithmetic flags (SF, ZF, AF, PF, CF) into %ah. Unfortunately, a lahf gadget is not very common (I’ll post something on gadget probabilities later on this blog). Another great way to move flags to registers is pushf, which is much more common: it pushes a word containing %eflags onto the stack. Yet another good way is to use an instruction that takes CF as input, for example the rotate-with-carry instructions (rcl, rcr) or add-with-carry, adc. A pretty common gadget is the following one:
adc %cl, %cl // ret
The above gadget computes the sum of its two operands and the carry flag, which is useful in multiword addition algorithms. If we take the two operands to be zero, the result is 1 or 0 depending on whether the carry flag is set… exactly what we need. (NOTE: it should be pretty clear that we can evaluate complicated boolean expressions by collecting the CF values of multiple tests and combining them with logical operations.)
Ok, now comes the most complicated part: the ESP modification. Right now we have a word containing 1 or 0 depending on CF; we want to transform it into ESP_delta or 0, where ESP_delta is the offset to add to ESP if the condition is true. One way to obtain ESP_delta is via two’s complement: negl of 1 is the all-1s pattern and negl of 0 is the all-0s pattern. Taking the bitwise and of the result and ESP_delta then gives a word containing ESP_delta or 0 (for more details please read the paper titled “The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls“). Now we need to modify ESP. The following gadget is what we need (or what we need to build):
addl (%eax), %esp
addb %al, (%eax)
addb %cl, 0(%eax)
addb %al, (%eax)
ret
%eax points to ESP_delta. The following instructions destroy it, but it has already been used, so no problem at all. The picture below sums up what we have said so far.
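The flag-to-offset arithmetic described above is easy to check in a few lines of Python. This is a sketch of the logic only (modeling 32-bit two's-complement words), not the gadget code itself:

```python
WORD_MASK = 0xFFFFFFFF  # model 32-bit two's-complement arithmetic

def conditional_delta(cf, esp_delta):
    """Turn a carry flag (0 or 1) into 0 or esp_delta, branch-free."""
    mask = (-cf) & WORD_MASK   # negl: 1 -> all-1s pattern, 0 -> all-0s pattern
    return esp_delta & mask    # bitwise and with ESP_delta

print(hex(conditional_delta(1, 0x48)))  # 0x48 -> take the "jump"
print(hex(conditional_delta(0, 0x48)))  # 0x0  -> fall through
```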
Here we go! We’ve just been jumping conditionally 😉
With Cybersecurity Awareness Month in full swing, data leaks and privacy conversations continue to dominate the headlines. In fact, our research has found that organisations have lost an average of nearly $1 million due to cyber-attacks. And what’s more, 44 per cent of IT leaders report that their organisations had suffered a cyberattack in the last 12 months, with an average of 29 attacks per organisation. It’s not a pretty picture.
As hackers become more and more sophisticated and well-funded – and organisations feel the effects of this, are there any lessons we can learn from history to ensure it doesn’t repeat itself over and over again when it comes to cybersecurity? Even though we are talking modern technology, the answer is still “Yes”.
When Spartacus and his army of rebel slaves were defeated by the Romans, they only had to do one thing to earn their pardon. Give up their leader’s identity. In a bold refusal, first one man and then the whole army proclaimed “I am Spartacus!”. Unfortunately, not being able to single out the leader of the rebellion, the Romans executed the slaves for their troubles. But the ancient story might have had a very different ending if Spartacus and his friends had lived in a world with modern data protection regulation.
The right to be forgotten is a crucial part in the movement for greater ‘digital rights’ and personal control over one’s personal data. Much like a digital version of the Roman Empire, today’s businesses collect a huge amount of data on everything they touch. But new data protection rules like the General Data Protection Regulation (GDPR) require significant changes in how customer data is stored to help organisations remain compliant to customer requests. While the law only applies to EU citizens’ data, any company that operates in the EU must comply, regardless of where the data is stored. This yields a truly global impact on data governance for almost every major company in the world.
Tipping the scales
Meanwhile, back in the US, representatives from AT&T, Amazon, Google, Twitter, Apple and Charter Communications recently went before Congress to share what they would want out of a potential similar privacy law. These companies were vocal about not wanting a carbon copy of GDPR for the US and would instead prefer the ability to drive privacy rules on their own terms. Unsurprisingly, consumers want to control their privacy on their own terms, so it will be interesting to see how this unfolds.
While the US debates the potential for privacy laws in the near future, one thing is clear: companies holding personal data will need to act sooner rather than later as consumers get educated and lawmakers take notice. The reality is that personal data is running rampant, whether people are willingly giving it away or not. This is evidenced by a feature on Facebook, which allows advertisers to upload data collected offline to target consumers, and I’m sure many more examples will follow.
With regards to GDPR in Europe, even though the immediate pragmatic requirement is clear — the capability to delete accounts and any associated personal data – this task isn’t as simple as it first appears. The problem lies with organisations’ reluctance to sacrifice data, particularly as it helps to improve their own business models and profitability. For example, it’s much easier to target current and potential customers with marketing when organisations have data on their customers. The more data organisations have on these customers, the more effective they can make their marketing efforts. So now businesses are operating in a world where IT professionals must perform a balancing act of maintaining security and compliance while offering convenience.
As far as data stored in files is concerned, the scales are tipped so far towards business agility and convenience that IT often has a hard time reigning in control without triggering a user revolt. But achieving security that is on par with convenience doesn’t need to alienate users. As long as businesses pick the right tools, keep users involved in the process at every step of the way, and maintain a mindset that identity is everything, the outcome will outweigh the input – reducing friction between business users and IT teams in the process.
If Spartacus was alive today
So, what if Spartacus actually lived in today’s regulated environment? While his request to be forgotten might not have been approved by the Romans (they would presumably have the right to pursue him for breaking the law) there is a question of whether his data should have been kept in the first place without his consent. With GDPR in full effect, consumers now have the right to request access to all of the data held on them – potentially causing challenges and inducing hefty fines for companies that aren’t quite sure how much personal data they have collected, where it is stored, and how long they have had it on file.
Although many online services have built in deletion and removal options, lingering personal data is a different matter. If this personal information is located in an application or structured database, then the process is relatively straightforward—eliminate the associated account and its data is also removed. If the sensitive data is found in files—detached from applications governed by the business—then they behave like abandoned satellites orbiting the earth, forever floating in the void of network-based file shares and cloud-based storage. If the right to be forgotten is to be realised, then a key task is locating that personal data and enabling its deletion no matter where it resides, thus ensuring the privacy of the end user.
As our online identities continue expand and proliferate online, we must work to safeguard what we consider fundamental rights. The right to be forgotten—to choose to withdraw from online services without leaving our personal data behind—is a key stone in our privacy foundation. Organisations that truly value their customers’ privacy will also value the right to be forgotten and will take measures to locate and protect their sensitive data, effectively yelling “I’m Spartacus!” on behalf of the user.
Mike Kiser, Strategist and Evangelist, Office of the CTO, SailPoint
Image Credit: IT Pro Portal
Remember Microsoft’s underwater datacenter concept from a couple of years ago? Where the company stuffed servers and other IT equipment into a big metal tube and dropped it off the coast of California?
Microsoft has done it again, this time off the coast of Scotland as part of the second phase of Project Natick, an effort to dot the world’s coastal areas (where approximately half of the world’s population lives within 120 miles of the coast) with sustainable datacenters that provide quick access to data and brisk application response times by virtue of their physical closeness to their users.
The datacenter, dubbed Northern Isles, was built in France and trucked over to Scotland before being submerged. It is just 40 feet long–about the size of a shipping container found on cargo ships–and contains 864 servers across 12 racks. The servers contain performance-enhancing field-programmable gate arrays (FPGAs) and offer 27.6 petabytes of storage in aggregate.
Northern Isles consumes 240 kW of electricity, provided by local renewable sources, including solar, wind, and offshore tide. Astonishingly, it took only 90 days for the datacenter to get up and running after it was shipped from the factory.
Dropping a datacenter into the cold ocean depths has one major advantage: free cooling. Microsoft’s John Roach writes:
The world’s oceans at depth are consistently cold, offering ready and free access to cooling, which is one of the biggest costs for land-based datacenters. Underwater datacenters could also serve as anchor tenants for marine renewable energy such as offshore wind farms or banks of tidal turbines, allowing the two industries to evolve in lockstep.
Microsoft will monitor Northern Isles for the next 12 months. As part of an applied research project, customers won’t be able to place their workloads on it, but considering Project Natick kicked off a second phase, and if it passes Microsoft’s feasibility studies, there’s a good chance more of the Azure cloud will one day pump IT services from a coastal area near you.
Image credit: Microsoft, Scott Eklund/Red Box Pictures
MySQL is an open-source, relational database management system based on Structured Query Language (SQL). MySQL is used by dev teams in a wide variety of use cases, most commonly in data warehousing, e-commerce, and logging applications. However, its most popular deployment is in web databases, since it’s highly flexible, reliable, and scalable. It can be used to store everything from single records of information to full inventories of products.
SQL Server, on the other hand, is a commercial relational database management system first developed by Microsoft. Like MySQL, SQL Server supports many platforms including Linux, Microsoft Windows, and Windows server systems. Most commonly, SQL Server is used by developers for .Net applications and windows projects. As a commercial product, SQL Server has a wider community of support.
In this guide, I’ll go over some metrics to help your performance tuning efforts. I’ll also discuss some of the best SQL Server and MySQL tools on the market, including the solution I most highly recommend—SolarWinds® Database Performance Analyzer—due to its robust monitoring and analysis features unlike any other tool on the market.
Monitoring SQL and MySQL Performance Best Practices
The most important aspect of monitoring your SQL performance is to make thoughtful decisions about what metrics and alerts you need to monitor. Although this will partially depend on your organization and industry, it’s generally a good idea to monitor the following processes:
- Required process running
- Common failure points
- Resource utilization
- Query executions (failures and completions)
In addition to monitoring these processes, you also want to monitor specific metrics related to your MySQL server’s performance and health. The following are a good place to start:
- Uptime: The number of seconds the server has been running. An unexpectedly low value can indicate a recent crash or restart.
- Threads connected: The number of clients currently connected to the server. If no one is connected or too many are, this might be a sign of trouble.
- Max used connections: The maximum number of connections that have been in use simultaneously since the server started.
- Aborted connects: The number of failed connection attempts. Too many could be a sign of suspicious activity.
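The metrics above can be turned into simple alerts. Here is a minimal sketch, assuming Python and the tab-separated `Variable_name<TAB>Value` output of the mysql CLI’s `SHOW GLOBAL STATUS`; the thresholds are illustrative assumptions, not MySQL defaults:

```python
# Sketch: parse SHOW GLOBAL STATUS output and flag the health metrics above.
# Thresholds are illustrative; tune them to your own environment.

def parse_status(raw: str) -> dict:
    """Turn 'Variable_name\tValue' lines into a {name: int} dict."""
    stats = {}
    for line in raw.strip().splitlines():
        name, _, value = line.partition("\t")
        if value.isdigit():
            stats[name] = int(value)
    return stats

def health_warnings(stats: dict, max_clients: int = 150) -> list:
    """Return human-readable warnings for the metrics discussed above."""
    warnings = []
    if stats.get("Uptime", 0) < 300:
        warnings.append("server restarted recently")
    if stats.get("Threads_connected", 0) >= max_clients:
        warnings.append("connection pool near its limit")
    if stats.get("Aborted_connects", 0) > 100:
        warnings.append("many failed connection attempts")
    return warnings

raw = "Uptime\t120\nThreads_connected\t150\nAborted_connects\t7\n"
print(health_warnings(parse_status(raw)))
# ['server restarted recently', 'connection pool near its limit']
```

In practice you would feed this the real status output (for example via `mysql -e "SHOW GLOBAL STATUS"`) on a schedule and page someone when the warning list is non-empty.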
Dev teams should also be sure they monitor SQL query metrics to ensure the database is fulfilling its basic tasks. Some examples are:
- Questions: The number of statements sent by clients.
- Queries: The number of executed statements including stored procedures.
- Read and write requests: The number of read and write operations the server handles; a sudden change in the read/write mix can signal a shift in workload.
Other important metrics to collect to carry out SQL Server or MySQL performance tuning include typical SQL errors. These are the frequent failure points you should always keep an eye on:
- Errors: Always check to make sure there aren’t any errors on the mysql.log file.
- Log files size: Inspect log streams to see if all files are being rotated properly. If not, it could bottleneck your server.
- Deleted log files: Make sure the file descriptor is closed after any log files are deleted.
- Backup space: Always be sure you have enough disk space for MySQL backups.
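The last two checks above (log file size and backup space) are easy to script. This is a sketch, assuming Python; the log path and the 100 MB rotation threshold are illustrative assumptions, not MySQL defaults:

```python
# Sketch: pre-backup checks for the failure points listed above --
# enough free disk space for the dump, and log files that have grown
# past a rotation threshold.
import os
import shutil

def enough_space_for_backup(target_dir, estimated_bytes):
    """True if the filesystem holding target_dir can fit the backup."""
    return shutil.disk_usage(target_dir).free > estimated_bytes

def oversized_logs(paths, max_bytes=100 * 1024 * 1024):
    """Return log files that have grown past the rotation threshold."""
    return [p for p in paths
            if os.path.exists(p) and os.path.getsize(p) > max_bytes]

# Hypothetical path -- substitute the log locations from your own my.cnf.
print(oversized_logs(["/var/log/mysql/error.log"]))
```

Running both checks from cron before each scheduled backup catches the "disk full at 3 a.m." failure mode before it happens.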
Best MySQL and SQL Server Performance Tuning Tools
Attempting to capture the above metrics is difficult without the right tools. The following programs are some of the best tools for real-time monitoring SQL databases, whether you use MySQL, SQL Server, or another relational database.
SolarWinds Database Performance Analyzer (DPA) offers a full-stack database performance monitoring and analysis tool. It’s an excellent solution for database administrators, IT teams, and application developers alike. It supports real-time monitoring and analyzes SQL database instances to mitigate bottlenecks, improve services, and save costs. You can easily compare MySQL vs. SQL Server performance metrics if you use both types of databases.
This cross-platform solution for database performance monitoring works in both cloud and on-premises databases, making it an ideal choice for an array of different organizations. It has tons of helpful features, like machine learning-powered anomaly detection and in-depth wait-time analysis. These features empower IT admins to improve their mean time to resolution and address database performance issues with faster speed.
One of its best features is it offers IT teams both real-time and historical data of their MySQL performance by tracking response time and server statistics in the data warehouse repository (which can be configured right to your MySQL database). These insights empower DBAs to address critical problems with a better understanding of their server infrastructure.
What’s more, SolarWinds DPA is incredibly easy to use. You can set custom alerts, create custom metrics, and even schedule graphical performance reports and have them delivered via email to the relevant IT team. To see if DPA is the best solution for your MySQL monitoring needs, you can download the fully-functional tool risk-free for 14 days.
- It supports a wide variety of databases, both cloud and on-premises.
- It’s easy to use yet flexible, allowing custom alerts, metrics, and more.
- It uses AI-powered anomaly detection, improving the mean time to resolution for many issues.
Idera Diagnostic Manager offers IT teams performance monitoring for SQL databases in both physical and virtual environments. It tracks performance statistics and key metrics and can be configured to send alerts to help DBAs better manage their VMs and databases. It also enables IT teams to perform proactive monitoring of queries along with transactional SQL monitoring, while providing you with recommendations for your SQL DBMS.
With Idera, you can gain insight into not just availability and health, but security vulnerabilities and configuration settings. Use simple visual charts for at-a-glance visibility—easily analyze metrics like disk space or get an overview of servers with current warnings and alerts. Overall, this is a useful and flexible tool for monitoring SQL databases.
- Idera allows proactive query monitoring and provides recommendations.
- It provides insights into security vulnerabilities, configuration issues, database availability, and health.
- Simple charts allow quick visualization of the main metrics and server warnings.
SolarWinds Database Performance Monitor (DPM) is a SaaS monitoring solution, which means you don’t have to buy or maintain a traditional application, which dramatically reduces the friction of starting out with it.
The tool can monitor databases locally, in the cloud, or in hybrid environments, and it focuses on open-source and NoSQL databases, which includes not only MySQL but also PostgreSQL and Redis. DPM offers 24/7 real-time monitoring to track a plethora of metrics, then display them using a user-friendly yet powerful dashboard. Some of the supported metrics are:
- Deploy frequency
- Reduced failed deploys
DPM takes security seriously, being fully compliant with both SOC2 and GDPR.
- Cloud-based solution, which means less friction to get started and use.
- 24/7 real-time monitoring over a huge number of metrics.
- Flexible yet easy-to-use dashboard.
Another good option for DBAs is SQL Power Tools. Billed as a “zero impact” solution, this agentless database monitoring tool provides IT teams access to over 120 different metrics of their server infrastructure, from wait times and blocking to disk space usage and index fragmentation. You can view 30-day trends or set up alerts for instant awareness. It’s a reliable, lightweight tool with little overhead, but larger organizations may struggle to scale it to fit their needs.
- “Zero impact solution” with low friction.
- Large number of metrics to monitor the whole server infrastructure.
- Lightweight and reliable tool suited for simpler scenarios.
The Percona Monitoring and Management Tool is a free, open-source solution admins can use to monitor and manage their MySQL databases. Percona can be fully adopted into your existing IT system, meaning admins can be sure the solution is run in a safe and reliable setting. What’s more, Percona can map queries against metrics, which enables SysAdmins to make better decisions for optimizing their MySQL performance.
As an open-source option, support can be limited, and IT teams will lose out on some of the more advanced functions they’d expect to see in an enterprise-grade solution. But all in all, it’s a reliable option for many organizations.
- Maps queries against metrics, allowing admins to take queries into account when optimizing MySQL performance.
- Reliable and lightweight solution best suited for simpler scenarios.
This enterprise-grade MySQL tool has tons of comprehensive features that make it an excellent solution for many large businesses. For one, it offers IT teams real-time insights into MySQL database performance and health metrics, so they can identify and troubleshoot issues with efficiency. What’s more, AppDynamics enables IT teams to set metric baselines for what they deem to be healthy performance standards for their MySQL environments. The tool will then collect and display newly generated metrics against said baselines, so admins can monitor their systems with a better understanding of what is healthy behavior.
The only issue with AppDynamics is the solution comes in a lite plan and a pro plan. While the lite plan is free to use, it offers limited features and its data retention is almost nonexistent. The pro plan is better, but expensive.
- It offers real-time insights into MySQL performance and health metrics.
- It allows users to define baselines for healthy performance, and then base new metrics on these baselines.
Database Monitoring Is a Must. Get It Right, Do It Constantly, and Profit
Databases are central to virtually any technological endeavor. Yet some companies, despite devoting a lot of resources to their databases, don’t seem to care about monitoring them. That’s baffling, since it’s akin to not wanting to know the return on an investment, and a particularly costly one at that.
When it comes to relational databases, Microsoft SQL Server and MySQL are two of the most popular choices. While the former is an enterprise offering developed by a gigantic corporation, the latter is a free and open-source tool, despite also being available under commercial licenses. Because both are so widely known, there is a huge offering of database monitoring solutions targeting MySQL and SQL Server.
Although all the tools we’ve covered have their merits, my top choice is SolarWinds Database Performance Analyzer. As I’ve mentioned earlier, DPA features unparalleled monitoring and analysis features.
An honorable mention goes to SolarWinds’ other tool, Database Performance Monitor. DPM is a cloud-based solution, with no download, maintenance, or provisioning required. Upgrades are done automatically. And since DPM is a SaaS solution targeting specifically open-source databases, it’s a solid choice for MySQL monitoring.
By now you should be ready to make an educated decision on the right tool for your MySQL or SQL Server monitoring needs.
Hard drive crashes happen all the time. More than just being frustrating, they can bring your business to a standstill and cost you valuable time and money trying to retrieve lost files and business data. Hopefully, your most important data and files are safely backed up somewhere. If not, recovery can be difficult—but not necessarily impossible.
Keep reading to discover what can cause a hard drive to crash and tips on how to retrieve lost data from a corrupted or damaged hard drive.
What Causes A Hard Drive To Crash?
There are a number of reasons why a hard drive might crash. Some of the most common reasons include:
- Human Error. From accidentally deleting files to dropping the hard drive itself, human error accounts for most hard drive crashes.
- Power Surges. A power surge occurs when the flow of electricity is interrupted and then restarted again. This can result in data loss when the read/write heads fail to function properly.
- Overheating. Rising temperatures inside your computer can be caused by poor ventilation, malfunctioning fans, or dust build-up. Overheating can lead to severe damage of a hard disk drive’s components, such as micro-cracks in the disc platter that can seriously compromise data.
- Corrupted Files. Sudden shutdowns and forced restarts can corrupt files and make the hard disc inaccessible.
- Water Damage. Water damage caused by spilling a liquid into the computer is never a good thing. Computers are not built to resist it. Water causes surges in the electrical current, which can severely damage your device.
- Virus Attack. A virus attack can erase hard disc data and alter the operation of the hard disc itself.
- Mechanical Failure. Moving parts degrade over time. Things like the spindle motor might stop working properly, making the disc unreadable.
There are a lot of things that can make a hard drive crash, but it isn’t cause for total despair. In many instances, the data lost because of a crashed hard drive can be recovered.
What To Do When Your Hard Drive Crashes
Once you’ve determined the hard drive has crashed (and depending on the cause of the crash) there are different methods of retrieving data from it. If you’re technically inclined, and data recovery is not essential, you can try these methods on your own.
If it is absolutely essential to your business that the lost files and data be recovered, you should enlist the services of a data recovery professional immediately. Failed attempts to recover the data yourself may make it harder for a professional to recover the data, or worse, render the files completely unrecoverable.
If Your Hard Drive Is In The Process Of Crashing
If the drive hasn’t failed completely yet, you can try to get important data from it quickly. You can try to boot to a Windows installer disc or live Linux system and attempt to transfer just the important files off your drive. You may be able to recover some files even if your system can’t boot its operating system and run it from the drive without crashing.
You can also try pulling the hard drive and connecting it to another computer. If the drive has only partially failed, you might be able to copy a few important files off it (if it’s not totally beyond repair).
With either of these solutions, be aware that having the drive powered on may cause it to become more damaged.
If Your Hard Drive Is Completely Corrupted
- Connect To Another Machine
The first thing you’ll have to do is disconnect the hard drive from the current computer and connect it to another machine as a secondary drive. The best way to do this is with a USB to IDE/SATA adapter. Alternatively, you could connect the drive to another desktop computer internally as a secondary drive, though this means pulling apart another computer to install the crashed drive.
- Try Copying Manually
After you connect the drive to another computer, see if you can browse the contents of the drive. If you can, try to manually copy data off the drive that you would like to recover. This might not work if you are trying to recover data from a dead hard drive, but there is a chance that only the operating system is corrupt and the user data is retrievable.
- Install Data Recovery Software
If you can’t manually recover the lost data it’s time to download data recovery software to see if it can do the job. Data recovery software is designed to scour the drive and locate any recoverable data, piecing it back together and providing it in a usable format.
The best data recovery applications provide a preview of recovered files, filtered and searchable results, easy file restoration, and additional tools. There are many good free options to choose from. Check out Lifewire’s rating of 20 Free Data Recovery Software Tools for Windows or the Top 15 Best Data Recovery Software for Mac OS X.
IMPORTANT: do not install the recovery software onto the drive that you are trying to recover data from. Doing so could actually overwrite files that are still hidden there and that you can still restore.
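The manual-copy approach from step 2 can be scripted so that one unreadable file doesn’t abort the whole salvage run. A sketch, assuming Python; the `salvage` function name is my own:

```python
# Sketch: copy every readable file off a failing drive, skipping (and
# recording) anything the drive can no longer read instead of aborting.
import os
import shutil

def salvage(src_root, dest_root):
    """Copy readable files under src_root to dest_root; return failures."""
    failed = []
    for dirpath, _dirs, files in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target = os.path.join(dest_root, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            source = os.path.join(dirpath, name)
            try:
                # copy2 preserves timestamps, which helps sort files later
                shutil.copy2(source, os.path.join(target, name))
            except OSError:
                # an unreadable sector raises an I/O error; note it and move on
                failed.append(source)
    return failed
```

You would run this with the crashed drive mounted read-only as the source and a healthy drive as the destination, then review the returned list to see which files need professional recovery.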
When In Doubt, Enlist The Pros
Hard drive crashes are nearly inevitable. There are numerous factors that contribute to the damage and mechanical devices eventually break down over time.
The key is figuring out how to retrieve valuable data before it leads to more serious (and expensive) problems. There are ways to try and retrieve it yourself, but if the lost data is truly critical, you might be best to pay for the help of a data recovery professional.
Whichever route you decide to take, there is always a team of Nerds ready to help you out.
Before we bring you up to speed on password best practices, it’s important to understand the tactics you’re defending against. So let’s take a look at the most common password security attacks.
Phishing Attacks
These attacks have been a primary method of breaching password security since the 1990s. The primary goal of a phishing attack is to get your personal information. Hackers may also use this attack to install malware and/or a backdoor on your device, often for the purpose of ransoming your personal data or your organization’s data. What makes phishing attacks particularly tricky is that they try to mask themselves as legitimate emails from legitimate sources.
For example, you might see an email that appears to be from Microsoft, and the only way to tell that something is off is by:
- Noticing a small instance of incorrect spelling and grammar.
- Closely inspecting the source of the email to verify its legitimacy.
Brute Force Attacks
A brute force attack is when a cybercriminal tries to breach your password security by attempting to log in over and over again with different guesses. This is done using a program to auto-generate likely passwords, then repeatedly try each password in their list in rapid succession – sometimes thousands of times per minute. There are three different types of brute force attacks:
- Sequential Attack — This is when the attacker goes through various character/number combinations one by one. The attacker might literally enter a sequence of numbers like “1111111” and then “1211111,” and so on. The longer your password, the more time-consuming and difficult this method becomes.
- Dictionary Attack — When the intruder tries to break through your password security using a “dictionary list” of common words and/or phrases relevant to your organization. They may also use password caches, i.e., a database of already captured passwords from previously breached systems.
- Rainbow Tables Attack — Think of a rainbow table as a large dictionary full of pre-calculated hashes and the passwords they were calculated from. It’s similar to dictionary attacks, except much faster because, while dictionary attacks are optimized for commonly used words and phrases, rainbow table attacks are optimized for commonly used passwords.
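Rainbow tables are defeated by salting: if every account hashes its password with a unique random salt, a table of pre-calculated hashes is useless, because the same password produces a different stored hash for every user. A minimal sketch in Python using the standard library’s PBKDF2 (the iteration count here is illustrative, not a recommendation):

```python
# Sketch: per-user salted password hashing, which makes precomputed
# rainbow tables useless against the stored hashes.
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; a fresh random salt per user."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, expected):
    """Recompute the hash with the stored salt and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == expected

s1, d1 = hash_password("correct horse battery staple")
s2, d2 = hash_password("correct horse battery staple")
print(d1 != d2)   # True: same password, different stored hashes
print(verify("correct horse battery staple", s1, d1))   # True
```

Because the attacker would need a separate rainbow table per salt value, precomputation stops paying off and each account must be brute-forced individually.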
Traffic Interception Attacks
Traffic interception is when the attacker uses a tool like Aircrack-ng or Airsnort to intercept their victim’s wireless data. Once they gather enough data packets, they’re able to break your network encryption and decipher your traffic – including plain text passwords.
Social Engineering Attacks
Remember early in the article when we mentioned that cyber-criminals might target your employees directly? The official name for this is social engineering. To put it plainly, this is where the intruder tries to schmooze information from you or your employees. These attempts can be made in emails, over the phone, and even in person. Mitigate the risk of social engineering attacks by making sure that the person requesting password information is legitimate. If someone says they need the password to something and they claim to be IT, it’s probably an attack because if someone from IT really needed your password, they could just reset it themselves.
Man in the Middle (MITM) Attacks
A man in the middle, or MITM attack, occurs when the attacker puts themselves in between the communication of a client and their server. Let’s say you have an employee who’s using a work laptop and attempts to connect to your servers through Wifi. When the laptop sends a request to connect, it might turn out that a legitimate-looking WiFi network was actually a spoofed one created using a WiFi pineapple to intercept the signal and respond to the laptop as though it were the trusted WiFi access point. Thus, any data meant to be communicated to the server will actually go first to the attacker, including any plain text password information. The “man” doesn’t necessarily have to be in the “middle” with these attacks. The man could in fact be a malware proxy that was installed on your computer, and the signal could be intercepted at any point in the communication, not just the mid-point.
Keylogger Attacks
This method of attack is straightforward. It’s when keylogging software saves a log of all the physical keystrokes that you type into your keyboard. This is then sent back to the attacker and examined for passwords and other information.
Best Practices for Strong Password Security
Now that you’re familiar with the most common forms of password attack, let’s review the current password best practices together so that we can all better protect ourselves and our organizations.
General Password Security Best Practices
- Blacklist Common Passwords — A simple way to mitigate the risk of someone guessing your passwords is to blacklist commonly used password choices. This way, employees have no choice but to create non-standard passwords for their accounts that are less likely to be broken through brute force.
- Account Lockout — Another way to prevent brute force attacks from succeeding is to lock accounts after a certain amount of password attempts are made. Try to aim for 5-10 attempts before activating the account lockout.
- Check Password Strength — The National Institute for Standards and Technology (NIST) suggests vetting potential passwords with tools that will test their strength. Many organizations offer tools for this.
- Recommended Password Length — We recommend making your passwords at least 12 or 16 characters in length. 12 characters give you over three sextillion possible character combinations, and 16 gives you even more. For even greater protection, consider going anywhere up to 64 characters.
- Use Single Sign-On (SSO) or Password Manager Applications — SSOs connect you to your business’s various systems and applications so that you only need to remember one password. They’re easy to set up and also streamline the onboarding and offboarding of employees. Popular SSO applications include LastPass, Keeper Business, and OneLogin, just to name a few.
- Check for Plain-text — Plain-text passwords make it easy for traffic interception attacks to succeed in stealing your private information. To prevent this, do a periodic check for plain-text passwords in your employee files.
- Implement multi-factor authentication (MFA) — MFAs only grant you access to an application after you showcase two or more pieces of evidence that you are the correct user. This is an effective way to keep hackers from getting into your accounts. When Google sends you a specific code to submit before letting you into your account, you’re witnessing a solid MFA in action.
- Use Alphanumeric Passwords — A simple way to generate complex passwords is to compose them out of alphabetic (uppercase and lowercase) and numeric characters, in addition to special symbols. Alphanumeric passwords built into long phrases are especially secure.
- Password Hints — Some login systems allow you to enter a password hint, like your mother’s maiden name or the model of your first car. We suggest avoiding password hints in general since many personal details can be scraped off your social media profiles. If you do use them, make sure the hint information isn’t easily accessible.
- Keep Passwords Private — Last but not least, make sure that your employees know not to share their passwords with anyone, including IT staff.
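Several of the checks above (blacklisting common passwords, enforcing length, requiring mixed character classes) can be combined into a single vetting function. A sketch, assuming Python; the blacklist here is a tiny illustrative sample, where a real deployment would use a list of millions of breached passwords:

```python
# Sketch: vet a candidate password against the blacklist, length, and
# character-class rules from the best-practices list above.
import string

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}  # tiny sample

def password_problems(password, min_length=12):
    """Return a list of rule violations; an empty list means acceptable."""
    problems = []
    if password.lower() in COMMON_PASSWORDS:
        problems.append("blacklisted")
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    if sum(any(c in cls for c in password) for cls in classes) < 3:
        problems.append("needs at least 3 character classes")
    return problems

print(password_problems("letmein"))
# ['blacklisted', 'shorter than 12 characters', 'needs at least 3 character classes']
print(password_problems("Horse$Battery$Staple9"))
# []
```

Wiring a function like this into account creation and password resets enforces the policy at the only moment it can be enforced: before the weak password is accepted.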
Protecting Against Phishing Attacks
One way to prevent these attacks is to conduct phishing tests, a service that often comes with auditing and compliance services from your managed service provider (MSP). We’ve done these tests for many companies to gauge their vulnerability to phishing and other cybersecurity threats; our findings tend to be very eye-opening regarding the number of employees that get duped into clicking unknown links or sharing login details.
Protecting Against Brute Force Attacks
Remember to follow the recommended password length of at least 12-16 characters whenever possible and keep in mind that your passwords must not be dictionary words or commonly used phrases, which are easy to guess. For added protection, you can also limit logins to your business’s specified IP address or range, which is also known as geolocation restriction.
Keep in mind, however, that remote workers or employees who need on-the-go access may be limited by your geolocation restrictions. We mentioned limiting the number of login attempts earlier, but you should also restrict the amount of time allowed between attempts. For example, if someone tried to hack into your account 5 times, you would lock them out of the login system, and they wouldn’t be able to try again for another hour. This drastically increases the time a brute force attack needs to succeed – sometimes the difference between days and years.
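The attempt-limit plus cooldown policy described above can be sketched as follows (Python; the in-memory store and injectable clock are for illustration only, since a real deployment would persist this state server-side):

```python
# Sketch: lock an account while it has 5 or more login failures in the
# past hour, matching the 5-attempts / 1-hour policy described above.
import time

class LoginThrottle:
    def __init__(self, max_attempts=5, lockout_seconds=3600, clock=time.time):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self.clock = clock                # injectable for testing
        self._failures = {}               # account -> [failure timestamps]

    def record_failure(self, account):
        self._failures.setdefault(account, []).append(self.clock())

    def is_locked(self, account):
        """Locked while max_attempts failures fall inside the window."""
        cutoff = self.clock() - self.lockout_seconds
        recent = [t for t in self._failures.get(account, []) if t > cutoff]
        self._failures[account] = recent  # drop expired failures
        return len(recent) >= self.max_attempts

throttle = LoginThrottle()
for _ in range(5):
    throttle.record_failure("alice")
print(throttle.is_locked("alice"))   # True: locked for the next hour
```

The login handler would check `is_locked` before even verifying the password, so a brute force program burns an hour for every five guesses.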
Protecting Against Traffic Interception Attacks
The easiest way to defend against this form of attack is to make sure that all your data is encrypted using current encryption standards. Ensure that you’re using the latest versions of transport layer security (TLS) and secure socket layer (SSL) for your emails and other logins.
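On the client side, Python’s standard `ssl` module can make that TLS floor explicit. A minimal sketch; `create_default_context` already enables certificate and hostname verification, and the `minimum_version` attribute requires Python 3.7 or later:

```python
# Sketch: build a client-side TLS context that refuses protocol versions
# older than TLS 1.2, so intercepted traffic can't be downgraded to
# broken legacy protocols.
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# The default context also verifies server certificates and hostnames,
# which is what blocks a man-in-the-middle presenting a forged cert.
print(context.minimum_version.name)   # TLSv1_2
print(context.check_hostname)         # True
```

Any socket wrapped with this context (for example via `context.wrap_socket(...)` or by passing it to an HTTPS client) inherits both the version floor and the certificate checks.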
Protecting Against Social Engineering Attacks
Protecting against social engineering attacks can be difficult, especially since they can occur in person without you or your employees even realizing it. Outside of doing your best to verify the credentials of someone in an email, on the phone, or in person, one of the best things you can do is to educate your staff on the subject. Kevin Mitnick, a former hacker turned cyber-security consultant, has many useful resources that you could share with your employees, including his book The Art of Deception. Your organization should also implement an internal IT reset policy to verify the identity of IT administrators requesting a password reset. This ensures that you’re actually resetting verified user accounts and not giving them ongoing access. And remember: never reveal your passwords or log-in credentials to anyone outside your organization. You should still be cautious of internal sharing as well.
Protecting Against Man in the Middle (MITM) Attacks
An easy way to counter MITM attacks is to make sure you’re using up-to-date SSL and TLS software. Having strong encryptions on your access points will also mitigate the risk of this attack. When you or your employees are working remotely, use a virtual private network (VPN). This creates a secure environment for private data from which you can access your local area network.
Protecting Against Keylogger Attacks
Most anti-virus software mitigates the risk of keylogger attacks nowadays, but you can also use specially designed anti-keylogger software like SpyShelter.
How to Protect Your Company from Phishing Attacks
Because over 80% of reported security incidents involve phishing, let’s dive deeper into the best practices for protecting against phishing attacks. Phishing is particularly troublesome because, unlike other cyber-attacks that try to break your password security, phishers just trick you into giving your details away.
Stay Vigilant
Phishers can be sophisticated in their attacks. They may use real company logos and business emails to make their messages look safe and legitimate, but in these situations, the devil’s in the details. Check that the sender’s address is spelled correctly, since phishing messages often contain subtle misspellings. Don’t click on any links or attachments in suspicious emails. If you really want to check whether a domain it mentions is legitimate, open up a separate browser and manually type it into the search bar.
Malicious Pop-ups
Pop-ups are notorious for housing viruses and scams. While some pop-ups are obviously dangerous, others may appear more legitimate. They may display a message about your computer being infected with malware and offer you a link or phone number for help. They may even mimic trusted sources. To counter these threats, make sure you read the pop-up message closely. If you can’t find any misspellings, bad grammar, or unusual imagery and still doubt its legitimacy, simply run an antivirus scan.
Phishing Tests
As mentioned earlier in the article, conducting a phishing test is one of the best ways to protect against this form of cyber-attack. A phishing test is when your IT team or your managed service provider (MSP) creates fake phishing emails and web pages, which are then distributed to your employees. The test reveals how many of your employees were successfully scammed, and you can then educate those employees to avoid the mistake in the future.
Stay Up-to-date on Password Best Practices
It's unfortunate that as cyber-security has become more sophisticated, so too have cyber-attacks. For this reason, it's important to pay attention to changes in this area. One day there may be a new password security solution that trumps all potential threats, but until that day comes, keep your eyes peeled and make sure your employees are implementing current password best practices. To summarize:
- What Are Phishing Attacks? — Phishing attacks appear as emails or web pages that ask you for sensitive information. Their goal is to acquire personal information or install malware that could allow them to hold your data for ransom.
- Protecting Against Phishing attacks — Pay attention to suspicious emails and web pages, watch out for malicious pop-ups, and conduct phishing tests within your organization.
- What Are Brute Force Attacks? — This form of attack is when the attacker attempts to break through your password security by making multiple guesses at your password.
- Protecting Against Brute Force Attacks — Utilize geolocation restrictions and limit the number of login attempts before system lockout activates. Also, restrict the amount of time allowed between attempts.
- What Are Traffic Interception Attacks? — This is when the attacker intercepts your data wirelessly to gather data packets. With enough data packets, they may be able to break through your data encryption.
- Protecting Against Traffic Interception Attacks — Make sure that your organization is using the latest transport layer security (TLS) and secure socket layer (SSL) software.
- What Are Social Engineering Attacks? — This form of attack involves the intruder trying to trick you or your employees into handing over sensitive information in emails, over the phone, or in person.
- Protecting Against Social Engineering Attacks — Educate your staff about the tactics of social engineering attacks, implement internal IT reset policies, and never reveal your passwords to people outside your organization. Be cautious of internal password sharing.
- What Are Man in the Middle (MITM) Attacks? — MITM attacks occur when a hacker inserts themselves into the communication between you and your server, receiving any data you would have sent.
- Protecting Against Man in the Middle (MITM) Attacks — Counter MITM attacks with up-to-date SSL and TLS software, and if employees are working remotely, make sure they’re using a VPN.
- What are Keylogger Attacks? — Keylogger attacks involve an attacker using keylogger software to log the string of keystrokes typed into your keyboard.
- Protecting Against Keylogger Attacks — Having up-to-date anti-virus software should protect your company from keylogger attacks, but you can also use anti-logger software.
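The login-attempt limit in the brute-force bullet above can be made concrete. A minimal sketch (the threshold and lockout window are illustrative values, not recommendations; a real system would also persist this state and reset counters on a successful login):

```python
import time

class LoginGuard:
    """Tracks failed logins and locks an account after too many recent attempts."""

    def __init__(self, max_attempts=5, lockout_seconds=300):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self.failures = {}       # username -> timestamps of recent failures
        self.locked_until = {}   # username -> time the lockout expires

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(user, 0) > now

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        # Keep only failures inside the rolling window, then add this one.
        recent = [t for t in self.failures.get(user, [])
                  if now - t < self.lockout_seconds]
        recent.append(now)
        self.failures[user] = recent
        if len(recent) >= self.max_attempts:
            self.locked_until[user] = now + self.lockout_seconds
```

The same pattern extends naturally to the other advice in the list, such as restricting the time allowed between attempts.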
By Vinay Pidathala, Director of Security Research, Menlo Security
Cybersecurity is never straightforward.
While defense techniques, technologies, policies and methodologies continue to evolve at pace, such defenses often trail in the wake of novel cyber attacks that seek out and exploit vulnerabilities in new ways, catching security teams off guard.
Indeed, recent times have provided many headaches for security professionals; Cybersecurity Ventures reveals that cyber attacks in 2021 will amount to a collective cost of approximately $6 trillion – and the situation isn’t forecast to improve any time soon. Where attacks are expected to intensify by an additional 15% a year for the next four years, total cyber attack-centric damages could amount to as much as $10.5 trillion by 2025.
One of the main concerns today is the exponentially growing number of techniques that cybercriminals are adding to their arsenal. Whether that's malware, ransomware, DDoS attacks or phishing, their toolkit continues to expand, with each new technique more malicious than the last.
HTML Smuggling explained
HTML Smuggling is a prime example of this in action.
While the broad concept itself is nothing new, the threat is making something of a resurgence having recently been used by Nobelium – the hackers behind the renowned SolarWinds attack that was uncovered in December 2020.
In simple terms, HTML Smuggling provides hackers with a means of bypassing perimeter security through the generation of malicious code behind a firewall. This is executed in the browser on the target endpoint.
Because the malicious payload is constructed in the browser, none of the objects that network perimeter security systems typically detect ever need to be transferred. As a result, through HTML Smuggling, many commonly used, traditional security solutions, such as sandboxes and legacy proxies, can be sidestepped.
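Because the payload never crosses the network as a recognizable object, one defender-side response is to scan the HTML itself for the JavaScript primitives that make in-browser file assembly possible. A deliberately naive sketch; the pattern list is an illustrative assumption, nowhere near a complete signature set, and real detection engines are far more sophisticated:

```python
import re

# JavaScript APIs commonly involved in client-side file assembly; illustrative only.
SUSPICIOUS_PATTERNS = [
    r"msSaveOrOpenBlob",        # legacy IE/Edge blob-download API
    r"URL\.createObjectURL",    # turns an in-memory Blob into a downloadable URL
    r"new\s+Blob\s*\(",         # raw byte buffer constructed in the browser
    r"atob\s*\(",               # base64-decoding an embedded payload
]

def score_html(html: str) -> int:
    """Count how many suspicious client-side-assembly patterns appear."""
    return sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, html))

def looks_like_smuggling(html: str, threshold: int = 2) -> bool:
    """Flag pages that combine several of the patterns; threshold is arbitrary."""
    return score_html(html) >= threshold
```

Heuristics like this produce false positives on legitimate pages that build files client-side, which is part of why isolation (discussed below) is the more robust answer.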
ISOMorph – a new variation
This is what happened in the case of Nobelium’s HTML Smuggling attack that we are calling ISOMorph.
Here, the attackers targeted Discord, the popular voice, video and text communication platform that is home to more than 150 million active users.
With ISOMorph, HTML Smuggling allows the first attack element to be dropped onto a victim’s computer. This is then constructed on the endpoint, removing the opportunity for detection. After installation, the hackers are then able to execute the payload that infects the computer with remote access trojans (RATs), before setting about logging passwords and exfiltrating data.
While the resurgence of HTML Smuggling through ISOMorph is new, it shouldn’t necessarily come as any great surprise. Indeed, from the cyber attackers’ perspective, it is a logical avenue to pursue.
Thanks to the pandemic, remote and hybrid working has become the new norm. Where such working models are now commonly used, the increased use of cloud services and expansion of organizations’ digital footprints has exposed a series of new security related challenges.
Today, the browser plays a more vital role in day-to-day operations than ever before – yet, unfortunately, it remains one of the weakest links in the cybersecurity chain, making HTML Smuggling an all the more attractive proposition to threat actors.
From access to execution
So, what should we be looking out for in the case of an HTML Smuggling attack?
In the case of ISOMorph, Menlo Security’s analysis has shown that attackers are using both email attachments and web drive-by downloads to achieve initial infection.
With ISOMorph, the payload in question was an ISO file, a disk image that contains all the components required to install the software. The benefit of an ISO file is that the endpoint needs no third-party software to mount and install it. In this instance, ISOMorph was also able to achieve persistence by creating a Windows directory on the endpoint.
Equally, it is one example of a file type that is exempt from inspection across both web and email gateway devices.
In analyzing the ISO files used in the campaigns we were monitoring, we found that they often contain VBScript capable of executing and then fetching additional PowerShell scripts that download further files to the endpoint.
The malicious code is also executed by proxy by tapping into trusted elements on the endpoint. We saw MSBuild.exe used, for example – a process that is typically whitelisted, allowing the injected code to further avoid detection. Here, ISOMorph used reflection techniques to load a DLL file in memory before injecting the remote access trojan into MSBuild.exe, ensuring antivirus software could then be bypassed.
Prevention and solutions
The resurgence of HTML Smuggling should be cause for concern.
While vaccination efforts continue to ramp up and economies and societies continue to open up once more, the impact of COVID-19 will be felt long after 2021. In the case of work, the many benefits that have been realized from remote and hybrid working models will ensure that such ways of working won’t disappear anytime soon. As a result, the browser will continue to offer hackers new avenues to attack their target endpoints.
For this reason, HTML Smuggling is expected to stay. In the case of ISOMorph, it is proving to be an effective method from which attackers are able to infiltrate victims’ devices and deploy payloads while bypassing traditional network security tools.
So, how can it be combatted? The answer is in the form of isolation technologies.
Developed with the simple purpose of comprehensively protecting users as they use web services – be it email applications, browsers, or otherwise – isolation creates a virtual barricade between the endpoint and external threats from the internet.
While content, such as emails and web traffic, can still be viewed in a seamless manner, it is never downloaded to the endpoint, eliminating the opportunity for malicious code to infiltrate a device and begin exploiting vulnerabilities.
To achieve a robust endpoint protection strategy, isolation must be placed front and center.
About the Author
Vinay Pidathala is Director, Security Research at Menlo Security based in Mountain View, California. Previously, Vinay was at Aruba Networks and also held positions at FireEye and Qualys.
Vinay can be reached online at: @menlosecurity and at our company website: https://www.menlosecurity.com/
Layer 1 infrastructure is made up of cables, antennas, pins, hubs, repeaters, network adapters, voltages, host bus adapters and other physical and hardware necessities, and it provides a means of transport for raw data. As a result, it’s often the first thing networking professionals check when troubleshooting issues — data cannot proceed through the other OSI layers if a cable is disconnected or a plug is pulled.
The first layer of the Open Systems Interconnection (OSI) model makes all transmission of digital data between networks possible. But despite the physical layer’s fundamental role, the importance of quality layer 1 infrastructure is easy to underestimate. Let’s take a closer look at why it’s so important to have the right layer 1 technologies.
Layer 1 infrastructure plays a key role in connectivity. Today, businesses around the world rely on almost constant access to digital information. Without layer 1 devices capable of functioning consistently, information is stopped in its tracks, interrupting operations and frustrating both internal and external users. To preserve profitability and keep uptime as close to 100% as possible, you need strong physical infrastructure.
Layer 1 security also impacts connectivity. Because cables carry vast amounts of data, they make attractive targets when an attacker’s goal is to intercept company information or disrupt service by causing extended downtime. As a result, it’s important to take physical cybersecurity threats as seriously as traditional ones.
With the right layer 1 devices and security measures, you can ensure reliable connectivity and keep stakeholders happy.
Layer 1 infrastructure is also important because it defines how much data your organization can handle and how quickly it gets where it’s going. The type of cabling you use, for example, will impact both speed and bandwidth. And how you configure layer 1 devices will determine how easily you can locate individual connections, move them and manage them.
Every business has unique requirements and goals that the right layer 1 infrastructure can help fulfill. Some data centers may need 100 Gigabit Ethernet to boost speed and minimize space and power usage, while other companies may stick with more modest layer 1 devices and invest resources elsewhere. Physical layer standards vary from company to company, which is why it’s vital to find solutions tailored to your unique needs.
Consider the quality of your layer 1 infrastructure — the technologies and devices your company uses today will impact the organization's ability to grow in the future. As you replace network components with the latest equipment, physical infrastructure must also improve to keep up. Choosing the right infrastructure means choosing devices that allow you to scale quickly and easily.
Ultimately, cutting-edge physical infrastructure helps ensure your company’s long-term success and competitive advantage.
Reach out to DataSpan Today
Layer 1 infrastructure impacts operational efficiency and profitability at all levels of your data center or organization. At DataSpan, we believe you deserve IT physical infrastructure that meets your unique needs, whether that's reliable and scalable structured cabling or something else.
With more than 40 years of experience, we can provide a variety of services to support small and large enterprises, including layer 1 data center assessment, infrastructure design and installation. To learn more, find a local representative or reach out to us today.
We’ve recently seen strong trends with some of America’s biggest global tech corporations making their data greener. This has subsequently created further demand for environmental energy professionals to manage internal resources towards achieving business and green goals.
To facilitate these goals, the world's data centers have steadily been converging on the Green Grid's PUE (Power Usage Effectiveness) metric to establish how energy efficient they are. One of the primary aims is to allow information and ideas to be shared globally by using a more standardized set of compatible data that can more easily be compared.
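PUE itself is simple arithmetic: total facility energy divided by the energy delivered to IT equipment, so a quick sketch (the readings below are hypothetical, for illustration only):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: 1.0 is the ideal; higher values mean more
    overhead (cooling, lighting, power distribution) per unit of useful IT work."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings.
print(pue(1_800_000, 1_200_000))  # 1.5: half a kWh of overhead per IT kWh
```

The same ratio structure underlies the newer reuse-focused metrics mentioned below, which swap in recovered or consumed quantities.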
Are we certain tech companies are taking this seriously? The answer seems to be yes. Activist organization Greenpeace ran a "soft" campaign against Facebook to make the global social networking company less reliant on coal to power its data centers, after revelations that the company was estimated to be more than half dependent on coal and other fossil fuels.
In response, Facebook agreed to invest in greener technologies to power the digital infrastructure supporting the world's love of 24-hour social connectivity. It has since announced it will be building a new dedicated European data center in Sweden, to be powered exclusively by hydro-electricity.
Whilst examples utilizing wind and hydro power such as the above are not feasible in many locations, companies still have a responsibility to ensure their data centers are at least environmentally aware, using technology solutions wisely and putting efficient cooling systems in place to deal with the vast quantities of surplus server heat they output.
Other cooling technologies include evaporative air cooling, which is far more energy efficient than traditional heat-exchange cooling systems. However, it's not simply about investing in new tech, but also about implementing smart solutions that recycle energy for other beneficial purposes, such as using surplus server heat to warm habitable spaces. This is where newer metrics such as ERE, CUE and WUE are starting to emerge, collectively emphasizing reuse and recycling.
As accurate data and metrics become more available, and as data center costs continue to soar while tech giants scale to meet increased world demand, we expect to see, for the foreseeable future, higher demand for and investment in the people responsible for evaluating the effectiveness of this forward thinking.
With a sustainable business agenda in place, heavily dependent tech companies can look forward to reducing their overall operational costs and to being seen to actively meet the world's expectations, using their influence to drive both profit and the green agenda forward harmoniously.
Telehouse has over 20 years of experience in building and designing data centers. They also have the wherewithal to know when things better start to go green. That’s why they have their “Build Your Own Data Center Anywhere” campaign (UPDATE: link now gone) that can really cut the environmental costs of a traditional data center. Says Telehouse, “the Data Center Anywhere solution is guaranteed to be fully rated, self contained, air and water tight, energy efficient, reusable and green.”
This post was written by David Beastall on behalf of Acre Resources who provide executive recruitment and job placement for the world's health, safety and environmental energy professionals.
According to John Markoff, we know what the Conficker worm does and how it does it, but little else. Who made it? What purpose does or will the network of 5 million zombie computers it has “enslaved” serve? Will it be used for criminal purposes or as a weapon during military conflicts? So many questions that still remain unanswered, and the threat is still present.
When computer security experts teamed up to fight its onslaught, they managed to decode it and develop anti-virus software. But new, more complex versions keep popping up, and sometimes even the experts' code has been used by the authors of Conficker to improve it.
Patrick Peterson, a researcher at Cisco Systems, has been able to get a hint of what the authors may have in mind. He found out that they began distributing software that makes it look like your computer is infected and then prompts you to buy an anti-virus product (fake, obviously). So, it seems that money may again be the root of all evil.
This was in April. Since then, no new findings came to light. The group fighting against Conficker continues to do so by trying to develop ways of killing it, and we all keep waiting for something to happen.
Machine learning is transforming business. But even as the technology advances, companies still struggle to take advantage of it, largely because they don’t understand how to strategically implement machine learning in service of business goals. Hype hasn’t helped, sowing confusion over what exactly machine learning is, how well it works and what it can do for your company.
Here, we provide a clear-eyed look at what machine learning is and how it can be used today.
What is machine learning?
Machine learning is a subset of artificial intelligence that enables systems to learn and predict outcomes without explicit programming. It is often used interchangeably with the term AI because it is the AI technique that has made the greatest impact in the real world to date, and it’s what you’re most likely to use in your business. Chatbots, product recommendations, spam filters, self-driving cars and a huge range of other systems leverage machine learning, as do “intelligent agents” like Siri and Cortana.
Instead of writing algorithms and rules that make decisions directly, or trying to program a computer to “be intelligent” using sets of rules, exceptions and filters, machine learning teaches computer systems to make decisions by learning from large data sets. Rule-based systems quickly become fragile when they have to account for the complexity of the real world; machine learning can create models that represent and generalize patterns in the data you use to train it, and it can use those models to interpret and analyze new information.
Machine learning is suitable for classification, which includes the ability to recognize text and objects in images and video, as well as finding associations in data or segmenting data into clusters (e.g., finding groups of customers). Machine learning is also adept at prediction, such as calculating the likelihood of events or forecasting outcomes. Machine learning can also be used to generate missing data; for example, the latest version of CorelDRAW uses machine learning to interpolate the smooth stroke you’re trying to draw from multiple rough strokes you make with the pen tool.
At the heart of machine learning are algorithms. Some, such as regressions, k-means clustering and support vector machines, have been in use for decades. Support vector machines, for example, use mathematical methods for representing how a dividing line can be drawn between things that belong in separate categories. The key to effective use of machine learning is matching the right algorithm to your problem.
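The "dividing line" idea is easiest to see in code with the perceptron, a much simpler linear separator than a support vector machine but one that illustrates the same principle of learning a boundary from labelled points. The toy data and learning rate below are illustrative:

```python
def train_perceptron(points, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches each label (+1/-1)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            # A misclassified point nudges the line toward classifying it correctly.
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else -1
```

A support vector machine goes further by choosing, among all separating lines, the one with the widest margin between the two classes.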
A neural network is a machine learning algorithm built on a network of interconnected nodes that work well for tasks like recognizing patterns.
Neural networks aren’t a new algorithm, but the availability of large data sets and more powerful processing (especially GPUs, which can handle large streams of data in parallel) have only recently made them useful in practice. Despite the name, neural networks are based only loosely on biological neurons. Each node in a neural network has connections to other nodes that are triggered by inputs. When triggered, each node adds a weight to its input to mark the probability that it does or doesn’t match that node’s function. The nodes are organized in fixed layers that the data flows through, unlike the brain, which creates, removes and reorganizes synapse connections regularly.
Deep learning is a subset of machine learning based on deep neural networks. Deep neural networks are neural networks that have many layers for performing learning in multiple steps. Convolutional deep neural networks often perform image recognition by processing a hierarchy of features where each layer looks for more complicated objects. For example, the first layer of a deep network that recognizes dog breeds might be trained to find the shape of the dog in an image, the second layer might look at textures like fur and teeth, with other layers recognizing ears, eyes, tails and other characteristics, and the final level distinguishing different breeds. Recurrent deep neural networks are used for speech recognition and natural language processing, where sequence and context are important.
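At its smallest scale, each node computes a weighted sum of its inputs and squashes the result through an activation function. A hand-sized forward pass through one hidden layer makes the mechanics concrete (the weights here are arbitrary illustrative numbers, not trained values):

```python
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each output node weights every input."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    # Two inputs -> two hidden nodes -> one output node.
    hidden = layer(x, weights=[[0.5, -0.6], [0.1, 0.8]], biases=[0.0, -0.2])
    output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.1])
    return output[0]   # a probability-like score between 0 and 1
```

Training is the part this sketch omits: backpropagation adjusts all those weights so the outputs match labelled examples.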
There are many open source deep learning toolkits available that you can use to build your own systems. Theano, Torch and Caffe are popular choices, and Google’s TensorFlow and Microsoft Cognitive Toolkit let you use multiple servers to build more powerful systems with more layers in your network.
Microsoft’s Distributed Machine Learning Toolkit packages up several of these deep learning toolkits with other machine learning libraries, and both AWS and Azure offer VMs with deep learning toolkits pre-installed.
Machine learning in practice
Machine learning results are a percentage certainty that the data you're looking at matches what your machine learning model is trained to find. So, a deep network trained to identify emotions from photographs and videos of people's faces might score an image as "97.6% happiness, 0.1% sadness, 5.2% surprise, 0.5% neutral, 0.2% anger, 0.3% contempt, 0.01% disgust, 12% fear." Using that information means working with probabilities and uncertainty, not exact results.
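Consuming such output means treating scores as evidence to act on rather than as facts. A small sketch of the decision a calling application has to make (the labels and threshold are hypothetical):

```python
def top_label(scores: dict, threshold: float = 0.6):
    """Return the best-scoring label, or None if nothing is confident enough.

    Returning None lets the caller fall back to a human or a safer default
    instead of acting on a low-confidence guess.
    """
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return label if score >= threshold else None

confident = {"happiness": 0.976, "sadness": 0.001, "surprise": 0.052}
uncertain = {"happiness": 0.41, "sadness": 0.38, "surprise": 0.21}
```

Where you set the threshold is a business decision: how costly is a wrong answer compared with no answer at all?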
Probabilistic machine learning uses the concept of probability to enable you to perform machine learning without writing algorithms at all. Instead of the set values of variables in standard programming, some variables in probabilistic programming have values that fall in a known range and others have unknown values. Treat the data you want to understand as if it was the output of this code and you can work backwards to fill in what those unknown values would have to be to produce that result. With less coding, you can do more prototyping and experimenting; probabilistic machine learning is also easier to debug.
This is the technique the Clutter feature in Outlook uses to filter messages that are less likely to be interesting to you based on what messages you’ve read, replied to and deleted in the past. It was built with Infer.NET, a .NET framework you can use to build your own probabilistic systems.
Cognitive computing is the term IBM uses for its Watson offerings, because back in 2011 when an earlier version won Jeopardy, the term AI wasn’t fashionable; over the decades it’s been worked on, AI has gone through alternating periods of hype and dismissal.
Watson isn’t a single tool. It’s a mix of models and APIs that you can also get from other vendors such as Salesforce, Twilio, Google and Microsoft. These give you so-called “cognitive” services, such as image recognition, including facial recognition, speech (and speaker) recognition, natural language understanding, sentiment analysis and other recognition APIs that look like human cognitive abilities. Whether it’s Watson or Microsoft’s Cognitive Services, the cognitive term is really just a marketing brand wrapped around a collection of (very useful) technologies. You could use these APIs to create a chatbot from an existing FAQ page that can answer text queries and also recognise photos of products to give the right support information, or use photos of shelf labels to check stock levels.
Many “cognitive” APIs use deep learning, but you don’t need to know how they’re built because many work as REST APIs that you call from your own app. Some let you create custom models from your own data. Salesforce Einstein has a custom image recognition service and Microsoft’s Cognitive APIs let you create custom models for text, speech, images and video.
That’s made easier by transfer learning, which is less a technique and more a useful side effect of deep networks. A deep neural network that has been trained to do one thing, like translating between English and Mandarin, turns out to learn a second task, like translating between English and French, more efficiently. That may be because the very long numbers that represent, say, the mathematical relationships between words like big and large are to some degree common between languages, but we don’t really know.
Transfer learning isn’t well understood but it may enable you to get good results from a smaller training set. The Microsoft Custom Vision Service uses transfer learning to train an image recognizer in just a few minutes using 30 to 50 images per category, rather than the thousands usually needed for accurate results.
Build your own machine learning system
If you don’t want pre-built APIs, and you have the data to work with, there’s an enormous range of tools for building machine learning systems, from R and Python scripts, to predictive analytics using Spark and Hadoop, to specific AI tools and frameworks.
Rather than set up your own infrastructure, you can use machine learning services in the cloud to build data models. With cloud services you do not need to install a range of tools. Moreover, these services build in more of the expertise needed to get successful results.
Amazon Machine Learning offers several machine learning models you can use with data stored in S3, Redshift or RDS, but you can't export the models, and the training set size is rather limited. Microsoft's Azure ML Studio has a wider range of algorithms, including deep learning, plus R and Python packages, and a graphical user interface for working with them. It also offers the option to use Azure Batch to periodically load extremely large training sets, and you can use your trained models as APIs to call from your own programs and services. There are also machine learning features such as image recognition built into cloud data services like Azure Data Lake, so that you can do your machine learning where your data is.
Many machine learning techniques use supervised learning, in which a function is derived from labelled training data. Developers choose and label a set of training data, set aside a proportion of that data for testing, and score the results from the machine learning system to help it improve. The training process can be complex, and results are often probabilities, with a system being, for example, 30 percent confident that it has recognized a dog in an image, 80 percent confident it’s found a cat, and maybe even 2 percent certain it’s found a bicycle. The feedback developers give the system is likely a score between one and zero indicating how close the answer is to correct.
It’s important not to train the system too precisely to the training data; that’s called overfitting and it means the system won’t be able to generalize to cope with new inputs. If the data changes significantly over time, developers will need to retrain the system due to what some researchers refer to as “ML rot.”
Machine learning algorithms — and when to use them
If you already know what the labels for all the items in your data set are, assigning labels to new examples is a classification problem. If you’re trying to predict a result like the selling price of a house based on its size, that’s a regression problem because house price is a continuous rather than discrete category. (Predicting whether a house will sell for more or less than the asking price is a classification problem because those are two distinct categories.)
If you don’t know all the labels, you can’t use them for training; instead, you score the results and leave your system to devise rules that make sense of the answers it gets right or wrong, in what’s known as unsupervised learning. The most common unsupervised learning algorithm is clustering, which derives the structure of your data by looking at relationships between variables in the data. Amazon’s product recommendation system that tells you what people who bought an item also bought uses unsupervised learning.
With reinforcement learning, the system learns as it goes by seeing what happens. You set up a clear set of rewards so the system can judge how successful its actions are. Reinforcement learning is well suited to game play because there are obvious rewards. Google's DeepMind AlphaGo used reinforcement learning to learn Go, Microsoft's Project Malmo system allows researchers to use Minecraft as a reinforcement learning environment, and a bot built with OpenAI's reinforcement learning algorithm recently beat several top-ranked players at Valve's Dota 2 game.
The complexity of creating accurate, useful rewards has limited the use of reinforcement learning, but Microsoft has been using a specific form of reinforcement learning called contextual bandits (based on the concept of a multi-armed slot machine) to significantly improve click-through rates on MSN. That system is now available as the Microsoft Custom Decision Service API. Microsoft is also using a reinforcement learning system in a pilot where customer service chatbots monitor how useful their automated responses are and offer to hand you off to a real person if the information isn’t what you need; the human agent also scores the bot to help it improve.
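As a rough illustration of the bandit idea, here is its simplest context-free relative: an epsilon-greedy choice between two page variants. The click-through rates and all other numbers are invented, and a real contextual bandit would also condition its choice on user context, which this sketch does not.

```python
import random

random.seed(1)
true_rates = [0.05, 0.95]    # hidden click-through rate of each variant
counts = [0, 0]
values = [0.0, 0.0]          # running reward estimate per variant

for _ in range(2000):
    if random.random() < 0.1:                   # explore 10% of the time
        arm = random.randrange(2)
    else:                                       # otherwise exploit the best guess
        arm = max((0, 1), key=lambda a: values[a])
    reward = 1 if random.random() < true_rates[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = max((0, 1), key=lambda a: values[a])
print(best)  # the better variant wins out over 2000 trials
```

The reward signal (a click or no click) is exactly the kind of cheap, automatic feedback that made this approach practical for ranking content on MSN.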
Combining machine learning algorithms for best results
Often, it takes more than one machine learning method to get the best result; ensemble learning systems use multiple machine learning techniques in combination. For example, the DeepMind system that beat expert human players at Go uses not only reinforcement learning but also supervised deep learning to learn from thousands of recorded Go matches between human players. That combination is sometimes known as semi-supervised learning.
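Ensembling in its simplest form is just majority voting over several weak models. The three toy "spam" rules below are invented for illustration; each is individually unreliable, but their combined vote is more robust.

```python
# Three deliberately weak spam classifiers ("decision stumps").
def stump_a(text): return "spam" if "win" in text else "ham"
def stump_b(text): return "spam" if "$" in text else "ham"
def stump_c(text): return "spam" if len(text) > 40 else "ham"

def ensemble(text):
    # Majority vote across the weak classifiers.
    votes = [clf(text) for clf in (stump_a, stump_b, stump_c)]
    return max(set(votes), key=votes.count)

print(ensemble("win $1000 now by clicking this totally legit link"))  # spam
print(ensemble("lunch at noon?"))                                     # ham
```

Production ensembles combine far stronger models (and often different algorithm families, as in the AlphaGo example), but the principle — let several imperfect learners outvote each other's mistakes — is the same.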
Similarly, the machine learning system that Microsoft Kinect uses to recognize human movements combines a discriminative system with a generative one. To build the discriminative system, Microsoft rented a Hollywood motion-capture suite, extracted the position of the skeleton and labelled the individual body parts to classify which of the various known postures a person was in. The generative system then used a model of the characteristics of each posture to synthesize thousands more images, giving the system a large enough data set to learn from.
Predictive analytics often combines different machine learning and statistical techniques; one model might score how likely a group of customers is to churn, with another model predicting which channel you should use to contact each person with an offer that might keep them as a customer.
Navigating the downsides of machine learning
Because machine learning systems aren’t explicitly programmed to solve problems, it’s difficult to know how a system arrived at its results. This is known as a “black box” problem, and it can have consequences, especially in regulated industries.
As machine learning becomes more widely used, you’ll need to explain why your machine learning-powered systems do what they do. Some markets — housing, financial decisions and healthcare — already have regulations requiring you to give explanations for decisions. You may also want algorithmic transparency so that you can audit machine learning performance. Details of the training data and the algorithms in use aren’t enough: there are many layers of non-linear processing going on inside a deep network, making it very difficult to understand why it reaches a particular decision. A common technique is to use another machine learning system to describe the behavior of the first.
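That "model that describes another model" idea can be sketched as a global surrogate: probe an opaque scorer over many inputs, then fit the simplest human-readable rule that reproduces its answers. Everything here is invented for illustration — the scoring function stands in for a black box, and the one-threshold rule family is deliberately crude.

```python
def black_box(income, debt):
    # Stand-in for an opaque model whose internals we cannot inspect.
    return "approve" if 0.7 * income - 1.3 * debt > 20 else "deny"

# Probe the black box across a grid of inputs...
samples = [(i, d) for i in range(0, 101, 5) for d in range(0, 101, 5)]
labels = [black_box(i, d) for i, d in samples]

# ...then fit a deliberately simple, readable surrogate rule:
# "approve if income - 2*debt exceeds some threshold t".
def agreement(t):
    preds = ["approve" if i - 2 * d > t else "deny" for i, d in samples]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

best_t = max(range(-100, 101), key=agreement)
print(best_t, round(agreement(best_t), 3))  # the rule closely mimics the box
```

The surrogate is not the real model, but because a person can read it ("approve when income minus twice the debt is above t"), it gives auditors something concrete to examine — along with a measured agreement score that says how faithful the explanation is.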
You also need to be aware of the dangers of algorithmic bias, such as when a machine learning system reinforces the bias in a data set that associates men with sports and women with domestic tasks because all its examples of sporting activities have pictures of men and all the people pictured in kitchens are women. Or when a system that correlates non-medical information makes decisions that disadvantage people with a particular medical condition.
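A first-pass bias audit can be as simple as counting how labels co-occur with a sensitive attribute in the training set. The tiny dataset below is fabricated to mirror the kitchen/sports example above.

```python
from collections import Counter

# (activity label, person depicted) pairs, deliberately skewed like the
# example above: sports photos show men, kitchen photos mostly show women.
examples = [("kitchen", "woman"), ("kitchen", "woman"), ("kitchen", "woman"),
            ("sports", "man"), ("sports", "man"),
            ("sports", "man"), ("sports", "man"), ("kitchen", "man")]

pair_counts = Counter(examples)
print(pair_counts[("sports", "woman")])  # 0 -- no counter-examples at all,
# so a model trained on this data can only learn the stereotyped pairing
```

A zero (or near-zero) cell in such a cross-tabulation is a warning sign: whatever the algorithm, it has never seen evidence against the association it is about to learn.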
Machine learning can only be as good as the data it trains on to build its model and the data it processes, so it’s important to scrutinize the data you’re using. Machine learning also doesn’t understand the data or the concepts behind it the way a person might. For example, researchers can create pictures that look like random static but get recognized as specific objects.
There are plenty of recognition and classification problems that machine learning can solve more quickly and efficiently than humans, but for the foreseeable future machine learning is best thought of as a set of tools to support people at work rather than replace them. | <urn:uuid:a80d2f1a-38a6-4e72-9667-78e81f8d92ec> | CC-MAIN-2022-40 | https://www.cio.com/article/230648/a-practical-guide-to-machine-learning-in-business.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00142.warc.gz | en | 0.943336 | 3,311 | 3.0625 | 3 |
Ideally, next-generation AI technologies should understand all our requests and commands, extracting them from a huge background of irrelevant information, in order to rapidly provide relevant answers and solutions to our everyday needs. Making these “smart” AI technologies pervasive—in our smartphones, our homes, and our cars—will require energy-efficient AI hardware, which we at IBM Research plan to build around novel and highly capable analog memory devices.
In a recent paper published in the Journal of Applied Physics, our IBM Research AI team established a detailed set of guidelines that emerging nano-scale analog memory devices will need to satisfy in order to enable such energy-efficient AI hardware accelerators.
We had previously shown, in a Nature paper published in June 2018, that training a neural network using highly parallel computation within dense arrays of memory devices such as phase-change memory is faster and consumes less power than using a graphics processing unit (GPU).
Graphical representation of a crossbar array, where different memory devices serve in different roles
The advantage of our approach comes from implementing each neural network weight with multiple devices, each serving in a different role. Some devices are mainly tasked with memorizing long-term information. Other devices are updated very rapidly, changing as training images (such as pictures of trees, cats, ships, etc.) are shown, and then occasionally transferring their learning to the long-term information devices. Although we introduced this concept in our Nature paper using existing devices (phase change memory and conventional capacitors), we felt there should be an opportunity for new memory devices to perform even better, if we could just identify the requirements for these devices.
In our follow-up paper, just published in Journal of Applied Physics, we were able to quantify the device properties that these “long-term information” and “fast-update” devices would need to exhibit. Because our scheme divides tasks across the two categories of devices, these device requirements are much less stringent—and thus much more achievable—than before. Our work provides a clear path for material scientists to develop novel devices for energy-efficient AI hardware accelerators based on analog memory.
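The division of labour between the two device categories can be caricatured numerically. This is an illustrative sketch of the general idea only — a fast value updated on every example, with occasional transfer to a slow, long-term value — and not the actual algorithm, device behavior, or parameters from the papers.

```python
# Hedged sketch of the "multiple devices per weight" idea (illustrative only).
slow, fast = 0.0, 0.0      # long-term device conductance, fast device value
TRANSFER_EVERY = 4          # hypothetical consolidation interval

updates = [0.1, -0.05, 0.2, 0.05, 0.1, 0.1, -0.1, 0.05]  # made-up gradients
for step, du in enumerate(updates, start=1):
    fast += du                        # rapid, noisy per-example update
    if step % TRANSFER_EVERY == 0:    # occasional consolidation:
        slow += fast                  #   move accumulated learning to the
        fast = 0.0                    #   long-term device, reset the fast one

effective_weight = slow + fast        # the weight the network actually uses
print(round(effective_weight, 6))     # 0.45
```

The point of the split is that the fast device can be imprecise and volatile (it only has to hold a few updates at a time), while the slow device only has to absorb occasional, coarser transfers — which is why the requirements on each device relax.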
The team (L-R): Sidney Tsai, Geoffrey Burr, Bob Shelby, Pritish Narayanan, Stefano Ambrogio
Perspective on training fully connected networks with resistive memories: Device requirements for multiple conductances of varying significance. Giorgio Cristiano, Massimo Giordano, Stefano Ambrogio, Louis P. Romero, Christina Cheng, Pritish Narayanan, Hsinyu Tsai, Robert M. Shelby, and Geoffrey W. Burr. Journal of Applied Physics 124, 151901 (2018). doi:10.1063/1.5042462 | <urn:uuid:e3012c59-0576-467f-9818-d4ae52943b86> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/research/2018/10/better-memory-devices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00142.warc.gz | en | 0.903789 | 568 | 2.65625 | 3 |
IBM today announced it has built a critical component for a high-speed computer memory that is about ten times smaller than those currently available, potentially enabling a major system performance boost for critical business applications.
Called a static random access memory (SRAM), this form of memory is needed in greater and greater quantities on computer processor chips to enable the higher system performance required for demanding applications like banking and digital media. Yet, the space available for SRAM on these chips is limited by cost and manufacturing limitations, presenting a significant technical challenge. IBM has demonstrated that the SRAM memory can be made significantly smaller and still operate properly, thereby allowing more to be included on each chip.
Traditionally, SRAM is made more dense by shrinking its basic building block, often referred to as a cell. The new IBM SRAM cell is less than half the size of the smallest experimental cell reported to date, and ten times smaller than those available today. To put this in perspective, about 50,000 of the IBM cells could fit on the circular end of a single human hair. This breakthrough demonstrates the possibility of continued system performance improvement for three additional technology generations beyond what is currently manufactured. The technology is being unveiled in December at the 2004 International Electron Devices Meeting (IEDM) in San Francisco.
“Our continued commitment to technology leadership is driven by the needs of our customers,” said Dr. T.C. Chen, vice president of Science and Technology, IBM Research. “Our ability to create critical electronic components at these small scales ultimately means our systems will be able to tackle harder problems. We develop the technology and our server systems are the vehicles that put this technology to work in powerful ways.”
IBM researchers optimized the SRAM cell design and circuit layout to improve stability and developed several novel fabrication processes in order to make the new SRAM cell possible. The key element was IBM’s utilization of mixed electron-beam and optical lithography to print the aggressive pattern dimensions and densities. SRAM cell size is a key technology metric in the semiconductor industry, and this work demonstrates IBM’s continued leadership in cutting-edge process technology.
The SRAM cell size achieved by IBM could enable on-chip memories with ten times higher capacity than the current state-of-the-art technologies. This innovative technology could pave the way for new applications, such as faster search processing, and enable the growth of on demand computational capabilities for IBM customers. | <urn:uuid:d56064b1-b8f1-4f7b-b9fd-60cfb97c8c5a> | CC-MAIN-2022-40 | https://www.e-channelnews.com/ibm-unveils-worlds-smallest-sram-memory-cell/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00142.warc.gz | en | 0.942828 | 503 | 3.421875 | 3 |
No matter what kind of system you’re using, a strong antivirus program is essential for keeping dangerous malware at bay. Without one, your computer can fall to all sorts of nasty attacks — including hacks that can steal your money or personal data.
Antivirus and antimalware programs work by scanning your computer for hostile files you may have been exposed to or downloaded accidentally. And these days, with so many scammers altering files to carry a malware payload, frequent scanning is more essential than ever. Tap or click here to see what hackers are using to transmit viruses.
But what happens when malware attacks your antivirus program itself? This sounds like a nightmare scenario, but it’s closer to reality than you think. A critical flaw has been discovered in some of the most popular antimalware programs on the web, and if it’s exploited, hackers can turn your defenses against you and delete your system files!
Who watches the watchmen?
Security researchers at Rack911 have confirmed a critical flaw found in 28 of the most popular antimalware programs on the market. If exploited, the antimalware program itself can be infected and hijacked, which can let hackers scan your computer and delete ordinary system files as if they were malware.
Taken to its logical conclusion, this plan of attack can absolutely devastate an infected computer. Researchers note that hackers using the exploit could delete virus definitions and render the antimalware program ineffective. Alternatively, they could delete essential operating system files — which could prevent a computer from booting.
The attack cannot be launched remotely; it has to be initiated via a malware payload that hackers trick their victim into downloading. Once installed, the malware exploits the gap between the moment the antivirus flags a file and the moment it deletes it, swapping in an important system file under the flagged file’s name. The antimalware then deletes the system file along with the malware.
The result: a “bricked” computer. Worst of all, because this flaw targets antimalware programs, the issue isn’t limited by the operating system. Computers running Windows 10, Linux and macOS are all at risk for this hack!
What can I do to protect myself? Do I need new antivirus software?
Because the issue is so serious, many of the biggest players in the cybersecurity world have already patched their software to remove the exploit. You can see the complete list of affected software below, as well as whether or not the patch is available.
If the program is patched, update your software as soon as possible. You can usually find the option to search for updates under the settings or preferences menu of your antivirus software.
- Avast: Avast Free Antivirus
- AVG: AVG AntiVirus for Mac. Patched.
- Avira: Avira Free Antivirus for Windows. Patched.
- Bitdefender: Bitdefender Total Security for Mac. Patched; Bitdefender GravityZone for Windows, Linux and Enterprise. Patched.
- Comodo: Comodo Endpoint Security For Windows, Linux and Enterprise. Patched.
- ESET: ESET Cyber Security for Mac. Patched; ESET File Server Security for Linux and Enterprise Patched.
- F-Secure: F-Secure Computer Protection for Windows and Enterprise. Patched; F-Secure Linux Security for Linux and Enterprise. Patched.
- FireEye: FireEye Endpoint Security for Windows and Enterprise
- Kaspersky: Kaspersky Internet Security for Mac. Patched; Kaspersky Endpoint Security for Windows, Linux and Enterprise Patched.
- Malwarebytes: Malwarebytes for Windows. Patch incoming.
- McAfee: McAfee Total Protection for Mac; McAfee Endpoint Security for Windows and Enterprise; McAfee Endpoint Security for Linux and Enterprise. Patched.
- Microsoft: Microsoft Defender for Mac and Enterprise. Patched.
- Norton: Norton Security for Mac. Patched.
- Panda: Panda Dome for Windows
- Sophos: Sophos Home for Mac. Patched; Sophos Intercept X for Windows and Enterprise. Patched; Sophos Antivirus for Linux and Enterprise. Patched.
- Webroot: Webroot SecureAnywhere for Windows and Mac. Patched.
As you can see, the only major holdouts at the moment are Avast, FireEye, Malwarebytes, and Panda. Malwarebytes notes that it has a patch on the way, so expect the update to be pushed in the near future.
Interestingly enough, Microsoft’s own Windows Defender products for Windows 10 are not affected by this issue. Microsoft told Tom’s Guide that none of its antimalware products are “currently vulnerable to the methods discussed in this research.”
If you’re concerned about the threat of malware and own a PC, stick to Windows Defender for now. It’s already extremely robust, and thanks to Microsoft’s statement, we now know it won’t accidentally change virus definitions or delete critical system files.
The only thing Windows Defender has to worry about, it seems, are updates from Microsoft itself. Tap or click here to see how one bad update destroyed Windows Defender.
If you’re not using Windows, or rely on a different antimalware program, make sure to update your software to the latest edition if a patch is available. Otherwise, you’ll need to switch and download new antimalware software that will scan your system without the risk.
Fortunately, there are plenty of excellent options to choose from. Tap or click here to see the best free system scanners online.
Until the flaw is completely eliminated, avoid downloading any files you’re not 100% sure about. Avoid opening emails from unknown senders, and try to shy away from downloading movies or TV shows illegally. This is the biggest threat vector for malware at this time. Tap or click here to see why.
If you play it safe, you might not even need to run a system scan more than occasionally. That’s the beauty of the web: It’s only as dangerous as you allow it to be. | <urn:uuid:1f52d966-9b91-485a-8d88-90bd6ea8a320> | CC-MAIN-2022-40 | https://www.komando.com/security-privacy/antivirus-products-can-brick-your-computer/737763/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00142.warc.gz | en | 0.891225 | 1,269 | 2.875 | 3 |
By using new technologies in our everyday lives, we produce enormous amounts of data. 59 zettabytes of data were produced in 2020, compared with 2 zettabytes a decade ago, and it is estimated that in 2021 this number will increase by 25%. Just to provide you with a better picture: 1 zettabyte equals a trillion gigabytes. Fancy terms such as “big data” do not necessarily mean better information. In line with the famous saying “less is more,” at our SOC (security operations center) we strive to monitor and filter enormous amounts of data in a smart way, so that we can spot anything suspicious that could indicate danger. Our “filters” are of course specific software tools, but the most important one is still human knowledge. Analysts of different levels (so-called tier analysts) perform specific analytic filtering roles in order to “catch” suspicious events. But how do they manage that? Just keep on reading.
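Those figures are easy to sanity-check, assuming the decimal definition of a zettabyte:

```python
GB_PER_ZETTABYTE = 10**12                      # 1 ZB = a trillion gigabytes
produced_2020_zb = 59
projected_2021_zb = produced_2020_zb * 1.25    # the projected 25% growth

print(projected_2021_zb)                       # 73.75 zettabytes
print(produced_2020_zb * GB_PER_ZETTABYTE)     # 59,000,000,000,000 GB in 2020
```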
Remember the well-known and popular LEGO bricks which we all probably played with when we were kids? Entire generations have grown up with them. Some still have them at home, and some have new LEGO bricks that can be used for building a dream image. Mine, for instance, is the Star Destroyer. The forgotten LEGO bricks in the basement are like a huge pile of data which you do not know what to do with: you don’t know what it is telling you or how to put it together, because you don’t have the instructions anymore or you’re missing some parts. Let’s say you received a LEGO set for the best Star Wars spaceship. It has 4,784 pieces, which came in different packets so that you could build it easily. All you need is time and a lot of LEGO bricks (“data”) to build a Star Destroyer (“structured information”). When you finish, you can see the result. It is a 7.5 x 5 cm Star Destroyer, or in this case, a piece of useful information.
You have put so much of your time, resources, coffees, and happy and sad moments into building the LEGO set (managing data) so that you could admire the result. You had more than four thousand pieces, you did everything right and used all of them; now everyone can see it, and even touch it, which means there is no room for mistakes. But in the end you made one: someone touched it and it suddenly broke into pieces. You don’t have the pieces separated in packets anymore, you don’t know where some of them belong, and you just have too many LEGO bricks (“data”). After this incident, you don’t know what to do with it all because it looks the same. Now you only have one option: you must ask for help.
In the “LEGO world,” that means you go to the LEGO website, write down the set ID, and it turns out they can help you with a different set of instructions. The beauty of LEGO bricks is that you’re not restricted to building just one thing with one package. There are plenty of options. The instructions can give you some guidelines, and then you can figure out what else can be created and which bricks should be used. As a parallel, in the information technology or data world, we can ask experts for help. The approach requires gathering all the data in order to apply a solution to a problem. Data quality is key: there is no sense in making bad decisions just to make them fast. The experts of security operations centers (SOCs) constantly receive a lot of different data, sometimes too much, and that is why there are many data analysts on different tiers. The Tier 1 analyst is the level 1 analyst who receives and checks the data, decides which information will help resolve the incident, and puts all the pieces of information together to see the whole picture; then the puzzle is solved.
The tools that help the analysts perform their job, such as an antivirus program, a SIEM (security information and event management) system, and an EDR (endpoint detection and response) solution, just send the data. The Tier 1 analyst is the main hero who knows how to read the collected data and interpret the information in the correct way to answer the question we are most interested in: “Is it an incident or not?” When the Tier 1 analyst completes the job and confirms there is a problem or an incident, he or she sends the information with all the data to Tier 2 and Tier 3 analysts, who then check the findings and start gathering new data and new information from the customer.

It’s just like when you break and lose some LEGO bricks of your final product: you check what you can do with the rest of them and decide to build something new. But while you are looking at it, you notice that you could do something even better, so you start gathering different LEGO shapes and upgrading your product.

As I have already mentioned, a SOC aims to collect only quality data. Some data is indicative, some is not related or not indicative yet. SOC analysts are experts who know which data belongs together and what kind of information it can give us. If you received too much data, or that data was mixed, you also wouldn’t know what you were looking at or what you could do with it without the right equipment. That is why we use different applications, rules, queries, and analytic tools that show where the data came from and what it is telling us. The point is to collect enough data, and only the right data, so that even if it is mixed or some of it is lost, we can still use it, read it, and get the message. But to collect the right data and build the information, we produce billions of other data points that must be saved somewhere in the world on our smartphones, computers, servers, and clouds.
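The "rules and queries" that analysts rely on ultimately boil down to filters over event data. As a toy illustration of what such rule-based filtering looks like, here is a sketch with invented event fields, values, and thresholds:

```python
# Invented event records, loosely shaped like log entries a SOC might see.
events = [
    {"src": "10.0.0.5", "action": "login", "failed_attempts": 1},
    {"src": "10.0.0.9", "action": "login", "failed_attempts": 14},
    {"src": "10.0.0.7", "action": "file_copy", "bytes_out": 9_000_000_000},
]

def suspicious(event):
    # Two toy detection rules: brute-force logins and bulk data transfer.
    return (event.get("failed_attempts", 0) > 10
            or event.get("bytes_out", 0) > 1_000_000_000)

alerts = [e for e in events if suspicious(e)]
print(len(alerts))  # 2 of the 3 events survive the filter
```

Real SIEM rules are far richer (correlation across sources, time windows, threat intelligence), but the principle is the same: reduce a flood of raw data to a short list of events worth a Tier 1 analyst's attention.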
Therefore, it is not a coincidence that the quantity of data grows so fast, and it will keep growing. To get a better picture of the SOC analyst roles, you can check their tasks in the table below.
Tier 1 Analyst
- Monitors incoming alerts and checks the collected data
- Decides which information will help resolve the incident and whether it is a real incident
- Escalates confirmed incidents, with all the supporting data, to Tier 2 and Tier 3

Tier 2/3 Analyst
- Verifies the findings escalated by Tier 1
- Gathers additional data and information from the customer
- Investigates further and responds to confirmed incidents

SOC Manager
- Maintains and manages the entire team (recruitment and training)
- Reviews incident reports and manages the escalation process
- Develops and executes the crisis communication plan to all of the stakeholders
- Deals with compliance reports and supports the audit process
- Evaluates the SOC performance metrics and communicates with business leaders
Let me ask you this: how much data do you think you produce in one day? Thanks to the invention of mobile technology such as smartphones and tablets, along with innovations in mobile and Wi-Fi networks, the creation and consumption of data is constantly growing. Humans produce 2.5 quintillion bytes (a billion multiplied by a billion) of data every day, and the question that arises is how to handle all this data. The ability to extract important data and derive useful information from it is therefore key, and there is a shortage of such expert knowledge in the world.
Researchers are using virtual reality to help paraplegic people regain partial sensation and muscle control.
The chances of recovery for paraplegic patients were once considered nearly nil. But in 2014, 29-year-old Juliano Pinto, who faced complete paralysis below the chest, literally kicked off the opening match at the FIFA World Cup. Researchers had created a brain-machine interface that allowed Pinto to control a robotic exoskeleton for the symbolic kickoff at São Paulo’s Corinthians Arena.
Fast forward two years, the Walk Again Project, the same nonprofit international research consortium that designed Pinto’s exoskeleton, is now using virtual reality to help paraplegic people regain partial sensation and muscle control in their lower limbs. According to a study published Aug. 11 in Scientific Reports, all eight patients who participated in the study have already gained some motor control.
“When we look at the brains of these patients when they got to us, we couldn’t detect any signal when we asked them to imagine walking again. There was no modulation of brain activity,” Dr. Miguel Nicolelis, the lead researcher from Duke University in North Carolina, said in a Scientific Reports call Aug. 9. “It’s almost like the brain had erased the concept of moving by walking.”
To regain movement, patients were first placed in a virtual reality environment, where they learned to use brain activity to control an avatar version of themselves and make it walk around a soccer field. Researchers used the Oculus Rift, which is available for purchase off the shelf.
They also designed a long-sleeved shirt that provided haptic feedback to the patients’ forearms, simulating the sensation of touching the ground. The arms were treated as phantom limbs, substituting for the legs and fooling the brain into feeling as if the patient were walking.
After the brain reacquired the notion of walking, each patient graduated to a custom-designed exoskeleton. The exoskeleton uses a cap with nodes on the wearer’s head, which picks up signals and relays them to a computer in the exoskeleton’s backpack. When the patient thinks about walking, the computer activates the exoskeleton.
By walking in the exoskeleton an hour a day, patients were eventually able to rekindle their remaining nerves, which began sending signals back to the brain and reactivated some voluntary movement and sensitivity. Each patient had a different recovery period, but all were able to feel sensation again in the pelvic region and lower limbs, and also learned to control some of their muscles and their bladder and bowel function for the first time in many years. DukeToday outlines the improvement:
One participant, “Patient 1,” was a 32-year-old woman paralyzed for 13 years at the time of the trial who experienced perhaps the most dramatic changes. Early in training, she was unable to stand using braces, but over the course of the study, she walked using a walker, braces and a therapist’s help. At 13 months, she was able to move her legs voluntarily while her body weight was supported in a harness.
Although stem cells and electronic implants are also used to treat paraplegics, Nicolelis claims the Walk Again Project’s brain-machine interface is the least invasive and most effective at restoring biological hardware so far. In the future, he suggests, patients might combine treatments, undergoing stem cell surgeries and then using exoskeleton training to learn to walk on their own again.
THE SKILLS GAP

According to the Forbes article “One Million Cybersecurity Job Openings In 2016,” the cybersecurity market is expected to grow from $75 billion in 2015 to $170 billion by 2020. It’s no secret there is a prevalent skills gap in this industry. The CSIS (Center for Strategic and International Studies) published a study in 2016 which revealed that 82% of security professionals report a shortage of cybersecurity skills in their organizations.
Sources including Cybersecurity Ventures and Security Magazine have predicted close to 1.5 million unfilled cybersecurity positions globally by 2020. This tremendous amount of growth means the skills gap may eventually close as organizations catch up with the market expansion. However, the current situation and challenge demands CISOs to strategically work within their own organizations, as well as with the industry to find talent.
CISOs and security leadership struggle to fill open job postings due to a lack of skilled applicants. When polled, 34.5% of security managers cited lack of security expertise as a key reason why they could not fill open positions. According to a 451 Research study, cybersecurity professionals are often unsure which skills or qualifications are most important when recruiting employees. The CSIS study showed that 77% of security professionals believe education programs are not fully preparing or urging students to enter the cybersecurity industry. These statistics imply that the challenge of unfilled positions is, at its root, one of education and knowledge about the industry.
HOW TO MAKE STRIDES AS AN INDUSTRY

As an industry, it is key to take a step back and understand the root cause of the challenge we are facing. The cybersecurity workforce lacks a diverse array of professionals. According to the 2017 study “The Global Information Security Workforce,” conducted by the Executive Women’s Forum, only 11% of cybersecurity professionals are women. The industry must encourage more women to enter careers related to STEM (science, technology, engineering and math). Integrating more women into the industry will not only help fill open positions, but also add a stronger variety of skillsets and problem-solving approaches.
Many CISOs we interview for this magazine believe their greatest accomplishments come when they influence and make an impact on young people interested in entering the field. Some offer internships to high school and college students at their organizations. Others host students for educational cybersecurity days or teach at local colleges and universities. Many of the CISOs who teach see it as a way to give back, but also as a way to recruit future team members.
PROMOTING AND GROWING

To retain a strong, quality workforce, many CISOs invest heavily in a core group of team members. These professionals aspire to grow within the organization and seek to take on more leadership roles. Promoting cybersecurity job openings among other departments within an organization may be key to attracting additional talent.
Many CISOs agree they can teach cybersecurity skills to anyone, but the soft, business skills are what they look for in candidates. When this is the case, looking outside of professionals with cybersecurity backgrounds may prove beneficial. Employers may consider hiring lawyers, accountants, or HR professionals who can bring other core business functions to a technology position.

In conclusion, the skills gap is a fundamental and persistent challenge that is continuously growing. Industries and organizations must work together to: promote cybersecurity careers by way of internship and training opportunities for students, encourage women to work in the STEM field, develop talent from within and look outside of traditional tech fields to procure talent.
SOURCES:
- Forbes, “One Million Cybersecurity Job Openings in 2016”
- CSIS, “Hacking the Skills Shortage,” 2016 study
- Security Magazine, “How Cybersecurity Education Aims to Fill the Talent Gap”
- VentureBeat, “Digital organizations face a huge cybersecurity skills gap”
- Executive Women’s Forum, 2017 Cybersecurity Workforce Study
For more information on challenges that CISOs face and how to address them to develop a strategic security program, please reference our Comprehensive Guide.
If you spent your days developing malware to target the banking and personal information of private and public enterprises, where would you want to live? Somewhere tropical? Or maybe somewhere colder where you would not feel inclined to go outside? Well, this is something that the Canadian government has also been worried about amidst an influx of hackers.
The Public Safety Department, a branch of the Canadian government, recently noted that Canada was not only being targeted by cyber crime, but becoming a common source of cyber criminal activity. According to The Province, the Public Safety Department is concerned that hackers are moving from traditional locations like Eastern Europe, East Asia and Africa to developed countries. The information was presented at the Cross-Cultural Roundtable on Security by Public Safety’s manager of research and academic relations, Brett Kubicek. The intention is not only to cultivate dialogue about security issues between the government and community members, but also to highlight the Canadian government’s role in protecting against cyberattacks.
While the Canadian government is concerned about hackers relocating within their borders, The Next Web recently published an article detailing the top countries from which hacking originates. Based on information from the NCC Group, an information assurance firm, the top three countries were the United States, Russia and China. The United States accounted for 20.8 percent of all hacking attempts. With Russia representing 19.1 percent and China representing 16.3 percent, this research supports the Canadian government’s concerns that cyber criminals are moving their operations to developed countries.
What impact will cyber criminals based in Canada and the U.S. have on cybersecurity? Should the government play a more active role in mitigating hacking and other digital threats? | <urn:uuid:43163c70-704b-423f-9ae4-0494651a6c53> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/growing-number-of-hackers-living-in-canada-and-united-states | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00342.warc.gz | en | 0.964725 | 342 | 2.703125 | 3 |
Safeguarding Infrastructure, Deception Technology is a Critical Piece of the Puzzle
Written by: Carolyn Crandall – Chief Deception Officer
Over 6 billion people regularly rely on access to some form of energy, and by 2030 the goal is universal access to the modern services that provide electricity, plumbing, heat, telecommunications, and the internet. Additionally, hundreds of millions will drive their cars, use public transit, or fly aboard airliners as part of their livelihood. People expect ongoing operation of the nation’s infrastructure, as it plays a foundational role in most of our everyday lives. Unfortunately, it is also an attractive target for cyberattacks. As transportation hubs, power grids, and communications networks become increasingly digitized, the likelihood of attack rises exponentially. In some cases attackers strike simply to see if they can; in others, with the intent of causing disruption or harm to human safety.
Unsurprisingly, this is a topic that has attracted considerable public attention, resulting in the US government creating the Cybersecurity and Infrastructure Security Agency (CISA) just last year. And while recognition of the problem is a step in the right direction, the rapid digitization that industries like energy and transportation have undergone further complicates the task of securing their networks as new attack surfaces emerge for attackers to exploit. As smart grids, traffic management systems, and more become widely deployed, they add to the attack surface that security professionals must address. With such systems commonly having limited built-in security, attackers are finding more ways to penetrate or circumvent perimeter defenses. In-network security solutions that give visibility and early detection are becoming an increasingly essential part of the infrastructure security control stack. Given the inherent inability to run anti-virus, collect typical logs to identify anomalies, or to stop using admin/admin for login, organizations have turned to deception technology as a means to efficiently detect and derail attacks on energy facilities and critical infrastructure.
A Wide Range of Potential Threats
A nation’s infrastructure faces many types of threats, ranging from common credit card theft to the disruption of power grids or air traffic management systems, making the potential consequences of an infrastructure-based cyberattack severe. Accessing CCTV systems intended for a wide variety of surveillance programs may not seem as critical on the surface, but it can have material consequences for a person’s or child’s privacy or physical safety. Many OT devices can also be compromised and then leveraged in unison for a broader denial of service attack. Whatever the motivation, the opportunity for harm can escalate quickly and have dire consequences.
These potential attackers include not just small-scale hacktivists or cyber criminals, but in some cases, terrorists and hostile nation-states. While Russian election hackers have made quite a few headlines over the past few years, they aren’t the only ones with the motive or means to target infrastructure—civil or otherwise. The 2015 attack that disrupted Ukraine’s power grid was the first known attack of its kind, but other attacks have done varying degrees of damage throughout the world—including in the US. Just this year a “cyber event” affected grid networks in California, Utah, and Wyoming, and while there were no recorded blackouts, it was a sobering reminder that American infrastructure is not immune to attack, and traditional approaches to security are not necessarily sufficient or effective in today’s interconnected world.
The Battleground Has Shifted to Inside the Network
Security professionals agree that having a strong perimeter defense is essential—but it is equally important to have a plan for early detection of adversaries who manage to bypass them. Assuming that attackers have already compromised the network and adopting controls to detect and respond to them has become a necessary security strategy, especially when it pertains to sizeable infrastructure systems.
Once an attacker has gained a foothold in the network, they typically have the freedom to quietly run reconnaissance, harvest credentials, and gather a “blueprint” of the network to escalate their attack. Deception technology is designed to detect all forms of attempted lateral movement, essentially locking down the endpoints to reveal any attacker movement immediately. This is done by setting attractive decoys, credentials, drive shares, services, and other forms of lures on the endpoint and throughout the network to deceive the attacker into engaging. The smallest engagement with any deception asset immediately results in a high-fidelity alert backed by rich attacker information. As a result, organizations employing deception technology have reported a 90%+ reduction in dwell time, the time an attacker can remain undetected within the network. These same security professionals also state a high rate of confidence in detecting threats compared to the substantially lower confidence of non-users of deception technology.
As the potential for harm caused by attacks on critical infrastructure continues to increase, the ability to gather detailed adversary intelligence becomes even more significant. Deception technology has proven particularly adept at collecting and correlating threat and adversary intelligence, which is extremely valuable in generating substantiated alerts and customized intelligence, helping defenders reduce their response time to verified threats. Security professionals commonly recognize deception for the fidelity of its “signal to noise” ratio, based on the accuracy of each alert. Native integrations are also available so that blocking, isolation, and threat hunting can be automated and response times improved. By further automating and accelerating the detection and remediation processes, deception technology adds greater value to existing security controls and reduces the risk of a successful attack on industrial control and business infrastructure.
Deception Technology Represents a Path Forward
The US government has taken the unusual step of “de-digitizing” some aspects of the country’s core infrastructure, replacing connected systems with analog ones to isolate them from potential attack. While this approach has its merits, it is a step backwards and does not align with where global connected economies and infrastructure are going. Rather than trying to isolate these systems, the government needs to focus on in-network protections capable of detecting intruders and alerting defenders early, before those intruders can achieve their goals. One of the more progressive measures proposed relates to NIST. The standards organization released a draft publication on June 19 that lays out 31 new recommendations for contractors to harden their defenses and protect unclassified (but still sensitive) government data that resides on their networks from advanced persistent threats (APT) or government-sponsored attackers. Such data can range from Social Security numbers and other personally identifiable information to critical defense program details. The recommendations include processes like implementing dual-authorization access controls for critical or sensitive operations, employing network segmentation where appropriate, deploying deception technologies, establishing or employing threat-hunting teams, and running a security operations center to monitor system and network activity continuously.
Deception technology is clearly a missing piece of the security stack “puzzle.” Adding it to both upstream and downstream security controls reduces risk from design and operational gaps in security and improves a security team’s understanding of how an attacker got in, how they are attacking, and potentially what they are after. All without disruption to operations or the need for agents or monitoring.
Note: A previous version of this blog post recommended relying on the Same Origin policy as a security barrier. Since publication, new Same Origin policy bypasses have been presented by Luca Carettoni (https://www.blackhat.com/us-17/briefings.html#electronegativity-a-study-of-electron-security). We have therefore removed the recommendation that this policy be used defensively.
Electron is an increasingly mature and popular framework for cross-platform application development with a low bar to entry for experienced web developers due to the fact it uses the Chromium browser source for its GUI platform and Node.js for its system layer.
As it becomes increasingly popular it is important to look at some of the security pitfalls that can occur using it. At present, Electron provides a number of robust security features, but many of them are not enabled by default.
This blog post attempts to list some of the most significant pitfalls to avoid while developing Electron apps. However, as a relatively new framework there may be a number of other issues which become apparent over time and Electron itself is under significant ongoing development.
The short version
- Use data-binding frameworks like AngularJS, Vue.js or React consistently to render any data in order to reduce the risk of XSS. Never print raw data in the DOM.
- Set nodeIntegration to false whenever you open a window.
- If your Electron app integrates content from a web application, it should only ever access it over HTTPS.
- Intercept dragged link dropping and unexpected location changing events to prevent application flow expectations from being circumvented.
- Validate information from query URLs, even for local APIs.
- Avoid including Node.js modules directly in your window – or risk bypassing your own sandbox.
- Never execute or evaluate the contents of IPC messages.
- Be aware that source for native dependencies that require compilation as part of your build will be disclosed to everyone who downloads your app (as will NPM package metadata, including version numbers).
And the slightly longer version
Don’t render untrusted GUI elements
In most Electron applications, GUI components will be the simplest and most prominent targets for attackers. As with web application development, it is critical to maintain a clear distinction between application data being presented and templating/application code used to present it, in order to prevent XSS.
The main difference when developing with Electron is that XSS may give an attacker full control of the underlying system or, at the very least, ability to interact with local components. This can even be available if your application can be redirected to display an attacker controlled web page in any way, potentially turning links and open redirects into remote code execution vulnerabilities.
The first line of defence here is to avoid ever composing DOM elements from dynamic content directly. Instead, always render content safely using a templating or data-binding library.
Given the risks associated with rendering untrusted content, however, it is still wise to design applications for security in depth, restricting the execution available to a successful attacker.
Disable node integration
Node integration is the feature which embeds core Node.js functionality in window execution contexts. It is on by default, providing developers (and potentially attackers) with core system functionality. While it may be very convenient to access all of this programming power directly, it effectively bypasses the security sandbox.
This feature is enabled or disabled when windows are created, and given the risks, should almost always be disabled (https://github.com/electron/electron/blob/master/docs/api/browser-window.md#class-browserwindow) for windows in production applications.
Instead, if you need selected node-based functionality in a given window, it can be embedded using a preload script, defined in the webPreferences object.
Only use HTTPS for integration with web applications
A consequence of the node integration issue is that not only is it dangerous to render untrusted content in local GUI elements unsafely, but any successful MITM attack on remote content rendered by an Electron app directly allows the attacker to execute code in the BrowserWindow sandbox, and potentially even break out.
Consequently, even if HTTPS is considered unnecessary for confidentiality reasons, it is critical that remote content is only ever accessed over HTTPS, to protect integrity in transit.
Intercept and safely handle unexpected navigation
It’s possible for application flow in many cases to be controlled by dragging and dropping links on the window. This may become a security issue if local users can bypass control flow (for example, password lock screens) by dropping a link to a screen on top of the window. While it’s often possible for a user with local machine access to read the process memory and extract any sensitive information, windows should still be protected by listening for and intercepting dropped link events to prevent undesirable navigation and potential security bypasses by a local user.
Unexpected navigation may alternatively be the result of social engineering attacks with links sent to users.
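One way to sketch this interception (the allowlisted origin below is a placeholder for your own application's origin): keep an explicit allowlist and cancel the `will-navigate` event for any URL outside it.

```javascript
// Allowlist of origins this window is permitted to navigate to.
// 'https://app.example.com' is a made-up example, not a real endpoint.
const ALLOWED_ORIGINS = new Set(['https://app.example.com']);

function isAllowedNavigation(targetUrl) {
  try {
    return ALLOWED_ORIGINS.has(new URL(targetUrl).origin);
  } catch (err) {
    return false; // unparseable URLs are rejected outright
  }
}

// Wiring inside the Electron main process (sketch):
//   win.webContents.on('will-navigate', (event, url) => {
//     if (!isAllowedNavigation(url)) event.preventDefault();
//   });

module.exports = { isAllowedNavigation };
```

Note that the check compares whole origins, so a downgrade from HTTPS to HTTP is rejected along with foreign hosts.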
Just because it’s local, doesn’t mean it’s safe
It is common for Electron apps to store and use URL query strings and APIs to handle context data. However, while this may be convenient to provide an API to underlying controller functionality, these query strings shouldn’t be any more trusted for functionality than API parameters in web development. There are a number of ways an attacker might redirect an Electron window or socially engineer a user to redirect it themselves.
One publicly distributed Electron application was discovered to pass a URL query string containing parameters for configuring a new window, which on opening was sent to a handler which applied those parameters to a window constructor. An attacker who was able to redirect this window (whether through an open redirect, XSS, or malicious links embedded in messages or some other mechanism) might, for example, redirect to an API query URL with reconfigured parameters to open a window with a malicious preload script, disabled security settings or a number of other features of interest to an attacker.
All URL-based APIs should validate queries against expectations before taking any further action, and reject them if validation fails, just as is the norm in web application development.
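As an illustration (the `app://open-window` scheme and its parameters are invented for this example, not part of Electron), a query validator can whitelist the expected keys and value ranges and reject everything else:

```javascript
// Hypothetical local URL API:  app://open-window?width=800&height=600
// Every parameter is checked against an explicit expectation before use;
// unknown keys or out-of-range values cause the whole request to be rejected.
const EXPECTED_PARAMS = {
  width:  v => Number.isInteger(+v) && +v >= 200 && +v <= 1920,
  height: v => Number.isInteger(+v) && +v >= 200 && +v <= 1080
};

function validateWindowQuery(rawUrl) {
  const params = new URL(rawUrl).searchParams;
  const result = {};
  for (const [key, value] of params) {
    const check = EXPECTED_PARAMS[key];
    if (!check || !check(value)) return null; // reject unknown or invalid
    result[key] = +value;
  }
  return result;
}

module.exports = { validateWindowQuery };
```

An attacker-supplied `preload` or `nodeIntegration` parameter is simply not in the expectation table, so the request is rejected wholesale.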
Preload script peril
Beware of using preload scripts to include root Node.js modules directly in windows – some of them provide routes for a sandboxed attacker to regain full Node.js functionality.
For example, the process module contains a reference to Node.js’ require() function, from which a sandboxed attacker can import any other Node functionality and break out of the sandbox. Direct fs module access may allow an attacker to overwrite system files with backdoored versions. A safer approach is to use the preload script to embed custom wrapper functions around system functionality, limiting the impact of XSS.
In general, the content of preload scripts should be scrutinised closely.
IPC messaging security
When using IPC messages to communicate between processes, it’s important not to yield uncontrolled execution ability from one process to another. A significant benefit of Electron’s architecture is the ability to easily segment processes and provide APIs with restricted permissions to potentially vulnerable UI components. Messages are serialized intermediately as JSON objects which prevents transmission of functions or function calls, but it’s important that IPC messages are not trusted, and are handled safely (i.e.: not evaluated or executed, or used directly to control execution without validation), or this may provide an attacker a route to bypass sandbox security.
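A minimal sketch of this validation (the 'save-note' channel and its fields are invented for illustration): the message is treated as untrusted input whose shape is checked explicitly, and nothing from it is ever executed or evaluated.

```javascript
// Validator for a hypothetical 'save-note' IPC channel.
function validateSaveNote(msg) {
  if (typeof msg !== 'object' || msg === null) return null;
  const { title, body } = msg;
  if (typeof title !== 'string' || title.length === 0 || title.length > 200) return null;
  if (typeof body !== 'string' || body.length > 100000) return null;
  return { title, body }; // copy only the expected fields, dropping anything else
}

// Wiring in the Electron main process (sketch):
//   const { ipcMain } = require('electron');
//   ipcMain.on('save-note', (event, msg) => {
//     const note = validateSaveNote(msg);
//     if (note) saveToDisk(note);   // saveToDisk is a placeholder
//   });

module.exports = { validateSaveNote };
```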
The archive format for Electron is readily unpackable using open source tools and the bar to reverse engineering them is often trivial. This isn’t in itself a security issue, but there are a wide variety of publicly available Electron based applications which are based on a mixture of dependencies from both open and private repositories.
At present there don’t appear to be any good ways of preventing this source disclosure besides pre-compiling dependency packages as node modules ahead of compiling/packaging the Electron application for distribution. This itself can pose headaches maintaining cross-compatibility.
Of course, this also means any out-of-date dependencies with publicly-known vulnerabilities are also clearly visible to anyone inspecting your application.
Electron provides many powerful features and a relatively accessible entry point to GUI development, especially for developers from a web development background. However, it’s important to design applications conscious of the risks associated with both the web development and desktop application development technologies in use.
Written by Phoebe Queen
First published on 16/09/16 | <urn:uuid:ab1d94b9-004c-4b68-9ce1-57d2472b699c> | CC-MAIN-2022-40 | https://research.nccgroup.com/2016/09/16/avoiding-pitfalls-developing-with-electron/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00342.warc.gz | en | 0.886909 | 1,785 | 2.515625 | 3 |
Password security practices have failed to keep pace
Whether an environment is solely dependent on passwords or includes them as part of a multi-factor model, passwords remain a critical layer of authentication security.
The continued barrage of reports about data breaches and account hijacking, however, make it painfully clear that organizations struggle with password-based security.
The number of U.S. data breaches tracked in 2016 hit an all-time record high of 1,093, according to the Identity Theft Resource Center. For every breach that is reported, there are many that go unreported because the target never discovered that they were compromised.
When we look at how cybercriminal tactics have evolved, and how compromised credentials have impacted these methods, one answer to the problem of the password becomes clear:
No site should be allowing the use of known, compromised credentials
This policy of restricting compromised credentials covers both blocking authentication with previously exposed username and password combinations and screening new user-selected passwords against all previously exposed passwords.
There are two benefits of this simple policy. First, it closes a glaring gap which otherwise leaves the password layer completely open to credentials exposed in third-party breaches. Second, it ensures passwords are unique enough to not be reversible using cybercriminal cracking dictionaries, protecting both the account and the entire database if it were to be breached.
To better understand the problem and the benefits of this policy recommendation, it helps to look at how attackers are working around the security practices that are currently in use.
Assumptions about password strength
Many organizations rely on outdated assumptions that don’t account for modern cybercriminal tactics.
The first assumption is that password complexity policies make brute-force guessing more difficult.
These sometimes comically frustrating password complexity rules were originally introduced to encourage users to select better passwords with greater entropy. In this context, entropy is a measure of unpredictability or the amount of uncertainty that an attacker has to overcome to figure out a password.
Password complexity rules typically require minimum length and combinations of letters, numbers and symbols to increase the total universe of possible password choices.
Passwords that satisfy complexity rules are often described as strong, difficult to guess passwords. However in reality, a strong password may not be a safe password.
Even with a massive universe of possible passwords, people are predictable when their choices are unrestricted. Users munge familiar words with common substitutions and patterns. Based on this, cybercriminals can be reasonably confident that the password they are targeting will be among those they’ve seen in previous data breaches and common password lists.
NIST acknowledged the limitations of password complexity rules in their most recent guidelines by actively recommending against their use. The research shows that complexity rules make life difficult for users and do little to make passwords harder to guess.
Assumptions about one-way encryption
The second assumption is that when organizations store users’ passwords with a one-way encryption, they’ve created a substantial protection against their misuse.
Organizations will always need to store users’ passwords for comparison with those entered at login. And when the user creates a new password, most systems today will apply a one-way encryption algorithm to convert the password into an irreversible string of characters called a hash.
Most modern authentication systems take this a step further by adding a “salt” to the hash. A salt is a string of characters, unique to each user, added to the password to increase complexity and uniqueness before it is hashed.
Apart from employing salts, the current best practices approach to storing passwords securely involves using an adaptive work hashing algorithm, such as bcrypt, which can be scaled in complexity over time as computing hardware performance continues to advance. To combat the rise of massively parallel cracking systems which employ large numbers of commercial GPUs, password hashing algorithms have been further enhanced to also be “memory hard” and arbitrarily require larger blocks of memory for computation. Newer algorithms in this category include scrypt and Argon2.
The rationale for these hash-based schemes is that the stored values cannot be decrypted directly, and that it is computationally infeasible for cybercriminals to generate all the hashes needed for comparison with the universe of possible hashed passwords they may encounter. The provider thus assumes that users’ passwords would not be discoverable in the event the provider’s database was breached.
The problem again is the presumption of an extremely large universe of possible passwords. The reality is that the population of user-generated passwords is startlingly small. Each time the passwords from a data breach can be studied, we see practically the same list.
As a result, the cybercriminal requires only limited effort to reverse the irreversible one-way encryption.
How cybercriminals make use of strong, encrypted passwords
Even with password rules applied, the relatively small and predictable universe of user-selected passwords allows cybercriminals to generate reliable cracking dictionaries that comprise almost all of the passwords they will encounter.
Cracking dictionaries are typically made up of passwords exposed in previous data breaches. The cybercriminals know that if a password was ever used before, it’s likely to be found again.
With a solid cracking dictionary, the cybercriminal never needs to resort to brute force guessing, and hashed passwords become only an inconvenience.
To reverse hashed passwords, cybercriminals simply run the same hashing algorithm against a cracking dictionary. The output is called a rainbow table, a precomputed table of the clear text value and the associated hash. Cracking dictionaries and rainbow tables are commonly shared among cybercriminals.
Cybercriminals can then look up the clear text password for any hash they encounter. This approach works as much as 90% of the time or more, because that is how frequently people’s passwords are found in cracking dictionaries.
Even when there is a salt used, the clear text password can still be reversed by recalculating all the possible hashes with the salt. While more time consuming, the end result is still the same.
To reverse engineer a salted hash, the cybercriminal needs only to know the algorithm by which the salt was applied. Often the format of the hashes, or knowledge of what off-the-shelf software the site uses, gives this answer immediately.
When the hashing algorithm is not known, the cybercriminal can try various hashing algorithm possibilities on a common password. They can then test their output against the hashes in the breach. If they don’t find a match for the common password it means they haven’t figured out the hashing algorithm yet. Even in this case, reverse-engineering hashes is facilitated by the cybercriminal’s confidence that a common password will always be there.
Once the hashing algorithm is known, the cybercriminal re-runs the hashes for each entry in their cracking dictionary. The hardware typically used includes specialized processors that can generate 13K hashes per second for an OpenBSD bcrypt hash per GPU. An average cybercriminal rig might have 8 GPUs, allowing over 100,000 hashes every second. Speeds to generate less hardened hashes are in the billions per second.
Given the power of typical hardware used by cybercriminals, even with memory hard hashing or adaptive work hashing algorithms, calculating hashes is not a roadblock. It simply encourages the cybercriminal to limit the size of the cracking dictionary they will use. They do this of course by prioritizing the most commonly used passwords.
Many organizations don’t store passwords with salt and use a less rigorous hashing algorithm, and therefore can be reversed even faster.
The consistent theme to these problems is users picking passwords from among the common and compromised passwords found in cracking dictionaries. This gives cybercriminals an easy way to sidestep security hurdles.
While commonly used and compromised passwords represent a significant threat to database security, they are at the foundation of another and larger threat to individual account security due to password reuse.
Password reuse means the key that cybercriminals can use to access a user’s account may be readily available without any security incident at your organization.
The big problem of password reuse
It’s an unfortunate fact that most people reuse passwords across sites.
The fact that users don’t select a unique password for each site means that if a cybercriminal can obtain a user’s password from one site, there is a high likelihood they can easily login to other sites. They have the full, valid credentials and can simply login as the user.
Based on a password reuse study of several hundred thousand leaked passwords from eleven web sites, combined with user surveys, the findings showed that 43–51% of users reuse the same password across multiple sites.
Facebook CSO, Alex Stamos, has described password reuse as one of the biggest online dangers. In 2016, in a talk at TechCity, Stamos said, “The biggest security risk to individuals is the reuse of passwords, if we look at the statistics of the people who have actually been harmed online. Even when you look at the advanced attacks that get a lot of thought in the security industry, these usually start with phishing or reused passwords.”
Password reuse has led to a rapid rise in attackers where compromised credentials are used in bulk. This type of attack is called credential stuffing.
OWASP describes credential stuffing as follows: “Credential stuffing is the automated injection of breached username/password pairs in order to fraudulently gain access to user accounts.”
For the downstream sites that are the target of credential stuffing, it becomes extremely difficult to defend against this type of attack because the credentials being used are valid.
The threat of credential stuffing is made worse by the relative ease and lack of sophistication required to execute an attack.
Compromised credentials can often be obtained for free or cheaply from publicly available Internet sites and the Dark Web. Automated tools are also readily available to assist unsophisticated attackers.
New password security methods
Securing against username and password combinations
There are several methods that organizations can adopt in light of current cybercriminal tactics to better secure the password layer.
The first approach, already in use by large companies like Twitter and Facebook, involves detecting username and password combinations that have been compromised and blocking them.
Facebook has been public about their use of this approach. On the Facebook blog, “Protecting the Graph,” Facebook explained:
“We collect the stolen credentials that have been publicly posted and check them to see if the stolen email and password combination matches the same email and password being used on Facebook.”
For this approach to be effective, a database of known compromised credentials must be collected from the same sources where they would be obtained by cybercriminals. This includes the various marketplaces and hacker sites on the Internet and Dark Web.
While some automation can be applied to collect compromised credentials from a few sources, many sites have restrictions that prevent scraping and require different levels of group participation to gain access. Therefore, the vast majority of credential collection can only reasonably be done using manual research.

Only a limited number of credentials are found in clear text. In most cases, the credentials found are hashed or in salted-hash formats. While cybercriminals would reverse these to clear text for a credential stuffing attack, there is no need for organizations to crack passwords. Instead, they can take the clear-text password (at the point it is given by the user) and hash it into the formats in which the exposed credentials were found.
The ideal implementation of this use case occurs at the login event, since the password should be encrypted at all other times.
This offers the following advantages:
- Comparisons can be made against credentials found in different hash algorithms or salted hashes as described above.
- Assuming the compromised credential database is kept up to date, checking at login provides the most real-time method of protection, reducing the attack window as much as possible.
When compromised credentials are detected, users can be prompted to change their password. However, to really harden password-based security, all new passwords should be screened against common and compromised password lists.
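A minimal sketch of the login-time check described above follows. The breach corpus, hash formats, salt, and passwords are all invented for illustration; a real deployment would query a continuously updated compromised-credential database rather than an in-memory dictionary.

```python
import hashlib

# Hypothetical breach corpus, stored in the formats in which the exposed
# credentials were found (plain MD5, plain SHA-1, and a salted SHA-256).
# All entries and the salt are invented for illustration.
BREACH_CORPUS = {
    ("md5", None): {hashlib.md5(b"qwerty123").hexdigest()},
    ("sha1", None): {hashlib.sha1(b"letmein").hexdigest()},
    ("sha256", "s4lt"): {hashlib.sha256(b"s4lthunter2").hexdigest()},
}

def is_compromised(clear_text_password: str) -> bool:
    """At the login event, hash the clear-text password into each format
    present in the corpus and look for a match; no cracking is needed."""
    pw = clear_text_password.encode("utf-8")
    for (algo, salt), exposed_hashes in BREACH_CORPUS.items():
        data = salt.encode("utf-8") + pw if salt else pw
        if hashlib.new(algo, data).hexdigest() in exposed_hashes:
            return True
    return False

print(is_compromised("hunter2"))                       # True: salted SHA-256 match
print(is_compromised("correct horse battery staple"))  # False
```

Note that the comparison happens only at the moment the user submits the password, which is the one point where the clear text is legitimately available.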
Securing against common and compromised passwords
To address the password weakness issues outlined above, newly created passwords can be screened against lists of common and compromised passwords.
This is the exact approach outlined by NIST in their most recent authentication guidelines.
NIST special publication 800-63B section 5.1 recommends checking new passwords against those used in cybercriminal dictionary attacks:
“When processing requests to establish and change memorized secrets, verifiers SHALL compare the prospective secrets against a list that contains values known to be commonly-used, expected, or compromised.”
This approach turns cybercriminals’ own cracking dictionaries against them.

Passwords not found in cracking dictionaries are substantially harder for cybercriminals to crack, because they are forced to revert to brute-force guessing tactics.

Beyond producing inherently stronger passwords, using a blacklist to exclude all passwords found in cracking dictionaries also serves as a form of insurance in case the database itself is breached.
A database that contains no passwords from cracking dictionaries is substantially less useful to the cybercriminal. This is because:
- The database of passwords cannot be rapidly reversed in bulk using pre-calculated lookups. Cybercriminals would need to revert to brute-force methods for each password.
- The exact format of salted hash is substantially more difficult to identify. When common passwords aren’t available, it becomes much harder for cybercriminals to discern the exact algorithm for salting the hash.
To achieve these benefits, all new passwords would need to be compared against a comprehensive blacklist that includes: multiple cracking dictionaries; all words (such as from a scrape of all Wikipedia articles in all languages, along with Project Gutenberg books) combined with dates, character sequences, numbers, and common substitution characters; and all compromised passwords from data breaches.

Based on Enzoic’s efforts to maintain such a blacklist, a comprehensive list would contain approximately 1.75 billion entries. To remain effective over time, the list would need to be maintained as additional passwords are exposed.
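A toy version of such screening might look like the following. The blacklist and the substitution map are tiny invented stand-ins for the ~1.75B-entry list and normalization rules described above, not Enzoic's actual data or logic.

```python
# Tiny stand-ins for a comprehensive blacklist and for common
# character-substitution rules; both are invented for illustration.
SUBSTITUTIONS = str.maketrans("@01!$53", "aoilsse")  # undo common leetspeak swaps
BLACKLIST = {"password", "iloveyou", "wikipedia"}

def screen_new_password(candidate: str) -> bool:
    """Return True if the new password is acceptable, i.e. neither the
    password nor its de-substituted form appears on the blacklist."""
    lowered = candidate.lower()
    normalized = lowered.translate(SUBSTITUTIONS)
    return lowered not in BLACKLIST and normalized not in BLACKLIST

print(screen_new_password("P@ssw0rd"))     # False: normalizes to "password"
print(screen_new_password("tr0ub4dor&3"))  # True
```

At real scale, the set-membership check would be backed by a bloom filter or a hashed-lookup API rather than an in-memory set.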
How compromised credentials fit in a MFA environment
Multi-factor authentication (MFA) provides better security by making sure there isn’t a single point of failure.
The three authentication factors recognized for MFA are: something you know (e.g. password), something you have (e.g. physical device), and something you are (e.g. biometrics like fingerprints). There are multiple methods of accomplishing each layer.
Each layer has its own balance of convenience and risk of compromise. In practice, no authentication layer is invulnerable.

Not long ago, the use of SMS messages sent to a registered phone number was considered a reliable layer of security, but there has been enough evidence of vulnerability in public telephone networks that NIST is now actively discouraging the use of voice- or SMS-based authentication as a “something you have” layer.

Another common authentication approach involves one-time passwords (OTPs) sent to a registered email address; however, this is only viable if the email account itself has not been compromised.

The bottom line is that improved security results from using more than one layer, and yet when the password (“something you know”) layer is compromised, security once again depends on a single factor.
What’s more, a system that imposes the burden of multiple layers but fails to adequately secure each creates a false sense of security.
Adapting password-based security to current threat practices
Passwords continue to provide an important authentication layer, alone or in combination with other factors. However, password-based security, like all security measures, needs to evolve regularly as threat methods change.

By continually checking user accounts against compromised credentials and screening new passwords to ensure they are not exposed, organizations can dramatically improve the efficacy of password-based security.

Enzoic’s innovative compromised credential and breach notification services were created to protect corporate networks and consumer websites from unauthorized access and fraud. Enzoic helps organizations screen user accounts for known, compromised credentials and block unauthorized authentication. Enzoic Ltd. is a privately held company based in Boulder, Colorado. For more information, visit: www.enzoic.com.
SpaceX and NASA have conducted a flight test to demonstrate the performance of a spacecraft launch escape system in preparation for a crewed mission to the International Space Station.
The test took place at NASA’s Kennedy Space Center in Florida and aimed to show how the company-built Crew Dragon would separate from the Falcon 9 rocket if an inflight emergency occurs, the agency said Monday.
SpaceX and U.S. Air Force personnel at Patrick AF Base will work to recover the spacecraft, which splashed down in the Atlantic Ocean as part of the event, and transport the vehicle to a company facility.
“This critical flight test puts us on the cusp of returning the capability to launch astronauts in American spacecraft on American rockets from American soil,” said Jim Bridenstine, administrator of NASA and a 2019 Wash100 awardee.
“We are thrilled with the progress NASA’s Commercial Crew Program is making and look forward to the next milestone for Crew Dragon.”
The joint team will review test data required before astronauts can use the system for the Demo-2 mission. | <urn:uuid:0a699a64-25a0-46d4-b8d7-c836eb787706> | CC-MAIN-2022-40 | https://www.govconwire.com/2020/01/spacex-nasa-demo-launch-escape-approach-for-crew-dragon-spacecraft-jim-bridenstine-quoted/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00342.warc.gz | en | 0.908889 | 225 | 2.59375 | 3 |
How sweet would it be to plug and play USB devices without the fear of viruses, malware and other security threats?
It’s everyone’s dream to own 100% foolproof USB devices for their file storage and transfer routine: fascinating to think about, but it simply isn’t going to happen given the raft of current USB-related security threats.
Because even if a USB stick has been completely wiped, and contains no files, it could still pose a threat to your organisation.
I am highlighting an exploit recently spotlighted by two security researchers, Adam Caudill and Brandon Wilson. Building on earlier work by SR Labs, they reverse-engineered the USB firmware that powers millions of devices, which could enable hackers to inject malicious code into computers.
What’s interesting and worrying at the same time is that the researchers have released the code on Github, a site accessible to any internet user.
The vulnerability goes by the name “BadUSB”.
It’s been only two months since I wrote about the initial discovery of the so-called “BadUSB” vulnerability.
Previously, it was demonstrated by Karsten Nohl and Jakob Lell at the Black Hat security conference in Las Vegas, showcasing that the firmware of USB devices made by Taiwanese electronics manufacturer Phison could be injected with undetectable, unfixable malware.
Crucially, however, Nohl did not release the code used for the exploit at the time. But Caudill and Wilson have subsequently made the decision to release fuller details about BadUSB at the recent DerbyCon hacking conference in Louisville, Kentucky.
“The belief we have is that all of this should be public. It shouldn’t be held back. So we’re releasing everything we’ve got,” Caudill said to the audience at DerbyCon. “This was largely inspired by the fact that [SR Labs] didn’t release their material. If you’re going to prove that there’s a flaw, you need to release the material so people can defend against it.”
The vulnerability functions by modifying USB device firmware, hiding malicious code in USB sticks and other devices in a way that is undetectable. Even wiping the contents of a device doesn’t work, and Wired called the vulnerability “practically unpatchable.”
Once a USB device is infected, it will attempt to infect anything it connects to, including any USB stick that is plugged into it.
The researchers point out that hackers could use a USB microcontroller to impersonate a keyboard on a computer and run data-stealing commands. In this way an attacker would only need a few seconds access to a computer to instruct it to follow a sequence of commands which could lead to data being stolen, security disabled, or malware installed.
Because of the nature of BadUSB, the attack would go undetectable, even if an anti-virus program is installed on the system the device is attached to, and may not leave any traces.
As the vulnerability can’t be easily patched, many USB devices could need major redesigns, and the current ones might never be secure.
Nohl admits “it’s unfixable for the most part,” and full protection could take years, even decades.
It’s also notable that Edward Snowden’s revelations showed that the NSA has a spying device called ‘Cottonmouth’ that exploits a USB vulnerability to relay information and monitor computers – an indication of the potential danger of this class of flaw.
Releasing the BadUSB code on GitHub means that hackers have access to readily available information to carry out exploits, which significantly increases the risk to consumers. However, this release would also help researchers speed up endeavors to come up with defenses.
USB devices manufactured by Taiwanese company Phison have already been labelled as vulnerable. Security researchers have contacted the company, but the manufacturer denied the attack was possible. That said, it would require a complete redesign by Phison and other USB manufacturers to secure devices against the vulnerability.
The researchers stated they are working on another exploit that would inject malware into files invisibly as they are copied from a USB device to a PC. By hiding a USB-infecting function in the malware, it would be possible to quickly spread malware using any USB stick that is connected to a computer and back to any new USB plugged into the infected PC.
Of course, in that scenario, you would hope that traditional anti-virus software running on the computers would be able to detect malware-infected files residing on the infected PC – if not on the USB device itself.
“There’s a tough balance between proving that it’s possible and making it easy for people to actually do it,” Caudill says. “There’s an ethical dilemma there. We want to make sure we’re on the right side of it.”
Personally, I wish that Caudill and Wilson had found a way of raising awareness about this security vulnerability without giving criminals the blueprints required to exploit it. There’s a real danger that hackers could exploit the flaw more quickly because of the information that they have released.
But with the cat now out of the bag, we should all be putting pressure on USB manufacturers to get their act together, or a great many people will be left potentially exposed. I also recommend caution when dealing with USB devices; where possible, only use devices that are untouched by others.
This article originally appeared on the Optimal Security blog.
Found this article interesting? Follow Graham Cluley on Twitter to read more of the exclusive content we post. | <urn:uuid:0a9cfaad-e68d-4a88-828b-b312d9e1830f> | CC-MAIN-2022-40 | https://grahamcluley.com/unpatchable-badusb-code-now-publicly-available/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00342.warc.gz | en | 0.948414 | 1,176 | 2.65625 | 3 |
Safeguard your digital learning environment.
A digital identity platform enables your students and staff to login to your systems, through one portal, using their one unique username and password. With one point of access instead of dozens, you can protect against cyberthreats much more effectively. The techniques that cybercriminals use to break into EdTech systems are less likely to succeed when you have centralized control over all your applications and the digital identities that access them.
Maximize instructional time.
Often, class and homework time is spent trying to login to the digital tools where learning happens. A digital identity platform also offers single sign-on, which enables users to login to every system they need using just one username and password. If they forget their credentials, they can immediately recover passwords and unlock accounts, on their own, without waiting for IT to come to the rescue.
Minimize the load on IT and EdTech teams.
With a digital identity platform, IT teams can automate repetitive tasks like rostering, account on/off-boarding, provisioning, and deprovisioning. That means that IT can control who is on the edtech systems and make sure they have access to the tools they are supposed to access. By automating these tasks, IT can focus on more innovative projects and add new digital tools to your learning environment without the usual hassles or risks they can bring.
Today, a digital identity platform addresses the common IT issues that get in the way of learning. In the near term, a digital identity can catalyze new, innovative capabilities. These include:
- Responsive and predictive learning: Student digital identities aggregate information from all the disparate learning systems where academic, behavioral, demographic, relational, and lifestyle data is siloed. Using this data, educators can individualize learning to what students need at a given moment. They can also predict setbacks and intervene before they happen.
- Educational big data: With digital identities that contain rich, multifaceted data, educators can compare individuals and cohorts of students to understand the broader patterns and outcomes in their community. In K-12 education, for example, district leadership will be able to see when achievement gaps occur and why, without waiting on standardized test results.
- Data-driven innovation: Educators can measure how changes in policy, staffing, digital learning tools, and teaching techniques affect students. The digital identity platform can accelerate innovations that might otherwise take many years to test. | <urn:uuid:1a64b34b-18e1-410c-a688-a70858d20617> | CC-MAIN-2022-40 | https://info.identityautomation.com/digital-identity-guide-pillar | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00542.warc.gz | en | 0.930797 | 497 | 2.875 | 3 |
New results achieved by combining big data captured by the Subaru Telescope and the power of machine learning have discovered a galaxy with an extremely low oxygen abundance of 1.6% solar abundance, breaking the previous record of the lowest oxygen abundance.
The measured oxygen abundance suggests that most of the stars in this galaxy formed very recently. To understand galaxy evolution, astronomers need to study galaxies in various stages of formation and evolution. Most of the galaxies in the modern Universe are mature galaxies, but standard cosmology predicts that there may still be a few galaxies in the early formation stage in the modern Universe. Because these early-stage galaxies are rare, an international research team searched for them in wide-field imaging data taken with the Subaru Telescope. “To find the very faint, rare galaxies, deep, wide-field data taken with the Subaru Telescope was indispensable,” emphasizes Dr. Takashi Kojima, the leader of the team.
However, it was difficult to find galaxies in the early stage of galaxy formation from the data because the wide-field data includes as many as 40 million objects. So the research team developed a new machine learning method to find such galaxies from the vast amount of data. They had a computer repeatedly learn the galaxy colors expected from theoretical models, and then let the computer select only galaxies in the early stage of galaxy formation.
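The selection step described above can be sketched in miniature: keep only the catalog objects whose broadband colors lie close to colors predicted by theoretical models of early-stage galaxies. The model colors, tolerance, and catalog below are entirely made up for illustration and are unrelated to the team's actual Hyper Suprime-Cam pipeline.

```python
# Illustrative sketch only: model colors, tolerance, and catalog are invented.
MODEL_COLORS = [(-0.3, -0.6), (-0.1, -0.5), (0.0, -0.4)]  # (g-r, r-i) from "models"

def is_candidate(g_r, r_i, tol=0.15):
    """Keep an object if its colors fall within `tol` of any model color."""
    return any((g_r - mg) ** 2 + (r_i - mi) ** 2 <= tol ** 2
               for mg, mi in MODEL_COLORS)

catalog = {"objA": (-0.28, -0.55), "objB": (0.90, 0.40), "objC": (0.05, -0.42)}
candidates = [name for name, (g_r, r_i) in catalog.items()
              if is_candidate(g_r, r_i)]
print(candidates)  # ['objA', 'objC']
```

The real method replaces this simple distance cut with a trained machine learning model, but the principle, comparing observed colors against theoretically expected ones to whittle 40 million objects down to a handful, is the same.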
The research team then performed follow-up observations to determine the elemental abundance ratios of 4 of the 27 candidates selected by the computer. They found that one galaxy (HSC J1631+4426), located 430 million light-years away in the constellation Hercules, has an oxygen abundance only 1.6 percent that of the Sun. This is the lowest value ever reported for a galaxy. The measured oxygen abundance suggests that most of the stars in this galaxy formed very recently; in other words, the galaxy is undergoing an early stage of its evolution.
“What is surprising is that the stellar mass of the HSC J1631+4426 galaxy is very small, 0.8 million solar masses. This stellar mass is only about 1/100,000 of our Milky Way galaxy, and comparable to the mass of a star cluster in our Milky Way,” said Prof. Ouchi of the National Astronomical Observatory of Japan and the University of Tokyo. This small mass also supports the primordial nature of the HSC J1631+4426 galaxy.
The research team sees two interesting indications from this discovery. First, it is evidence that a galaxy at such an early stage of galaxy evolution can exist today. […]
Extracting Tag Attributes¶
Apart from extracting the text displayed in the HTML page, it is sometimes useful to extract data from the attributes of HTML tags, such as the URL of a link. To extract the value of any attribute, the following syntax may be used:
The attributes that can be extracted from the tag are:
Any attribute specified in the definition of the tag.
Any of the attributes of the HTML tag. These are automatically included in the definition of the format tag, even if they are not explicitly declared.
In our example, to extract the URL of a link (that is defined in the href attribute of the HTML tag ‘a’, and thus is implicitly declared in the ANCHOR tag) into the ‘LINK_TARGET’ attribute of the relation, the pattern to be defined is the following:
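The DEXTL pattern syntax itself belongs to the product documentation; as a language-neutral illustration of what extracting the `href` attribute of an anchor tag yields, here is a sketch using Python's standard `html.parser`. This is not ITPilot code, and the sample HTML and class name are invented.

```python
from html.parser import HTMLParser

class LinkTargetExtractor(HTMLParser):
    """Collect the href attribute of every <a> tag -- the same information
    the LINK_TARGET attribute captures in the example above."""
    def __init__(self):
        super().__init__()
        self.link_targets = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.link_targets.append(value)

parser = LinkTargetExtractor()
parser.feed('<p><a href="https://example.com/a">A</a> <a href="/b">B</a></p>')
print(parser.link_targets)  # ['https://example.com/a', '/b']
```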
For more information about extraction of tag attributes, go to the next section. | <urn:uuid:872347b8-9f2b-4d0d-8f1d-65fb0a299a7f> | CC-MAIN-2022-40 | https://community.denodo.com/docs/html/browse/8.0/en/itpilot/dextl/advanced_syntax/extracting_tag_attributes/extracting_tag_attributes | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00542.warc.gz | en | 0.765919 | 223 | 2.953125 | 3 |
Pressure protection valves are used to isolate auxiliary air systems from the brake system. This is done to preserve air for braking in the event that the auxiliary system develops a major leak.
What does a trailer protection valve do?
Application: Tractor protection valves are commonly mounted at the rear of the cab. The purpose of the Tractor Protection valve is to protect the tractor air brake system in the event of a trailer breakaway or severe air leak. They are also used to shut off air to the trailer before disconnecting the lines.
How does a pressure protection valve work?
The inlet (supply side) of a protection valve is normally closed and does not open until pressure at the supply side, usually in a reservoir, reaches an opening threshold. … When the pressure on the supply side falls, the valve closes to preserve some amount of air pressure on the delivery side of the valve.
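The opening and closing behavior described here amounts to a simple hysteresis loop, which the following sketch models. The threshold pressures are illustrative only and are not taken from any real valve's specification.

```python
# Toy model of a pressure protection valve: closed until supply pressure
# reaches an opening threshold, closing again once pressure falls below a
# lower threshold to preserve air for braking. Thresholds are illustrative.
class PressureProtectionValve:
    def __init__(self, open_psi=65, close_psi=45):
        self.open_psi, self.close_psi = open_psi, close_psi
        self.is_open = False

    def update(self, supply_psi):
        if not self.is_open and supply_psi >= self.open_psi:
            self.is_open = True    # threshold reached: air flows to auxiliaries
        elif self.is_open and supply_psi < self.close_psi:
            self.is_open = False   # leak or drop: preserve air for braking
        return self.is_open

valve = PressureProtectionValve()
for psi in (30, 70, 50, 40):
    print(psi, valve.update(psi))
# 30 False -> 70 True -> 50 True (stays open) -> 40 False
```

Note the gap between the two thresholds: at 50 psi the valve stays open because it opened at a higher pressure, which is what distinguishes this hysteresis behavior from a simple on/off threshold.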
At what pressure will the protection valve for your trailer pop out?
You push it in to supply the trailer with air, and pull it out to shut the air off and put on the trailer emergency brakes. The valve will pop out (thus closing the tractor protection valve) when the air pressure drops into the range of 20 to 45 psi.
What should you hook up before backing under the trailer?
After connecting the air lines, but before backing under the trailer, you should supply air to the trailer system, then pull out the air supply knob.
How does the tractor protection valve operate in the event of trailer breakaway?
When you are not hooked to a trailer, the trailer-supply valve is closed and there will be no air to the tractor protection valve. … On a trailer breakaway, air will rush out of the supply line until the trailer-supply valve automatically closes (automatic type). This prevents any more loss of air from the tractor.
How can you test that air flows to all trailers?
Use the tractor parking brake and/or chock the wheels to hold the vehicle. Wait for normal air pressure, then push in the red “trailer air supply” knob. This will supply air to the emergency (supply) lines.
How does a 4 way Protection valve work?
The 4-circuit protection valve isolates the individual circuits of the air-pressure braking system from each other. If one or more brake circuits fail due to pressure loss, the valve preserves pressure in the intact circuits, preventing a total failure of the brakes.
How can you test the tractor protection valve?
Test the tractor protection valve by listening for air exhausting from the trailer service line with the trailer supply valve closed, the trailer service line disconnected and the service brakes applied. If air is exhausting from the trailer service line, the tractor protection valve is defective.
When uncoupling a trailer, after you have shut off the trailer air supply and locked the trailer brakes, you should:
STEP 2: Ease Pressure on Locking Jaws
- Shut off trailer air supply to lock trailer brakes.
- Ease pressure on fifth wheel locking jaws by backing up gently. (This will help you release the fifth wheel locking lever.)
- Put on parking brakes while tractor is pushing against the kingpin.
When hooking a tractor to a trailer, how will you know the trailer is at the correct height?

The trailer is at the right height when it will be raised slightly as the tractor is backed under it.

Which part of the kingpin should the locking jaws close around?

The locking jaws should close around the shank of the kingpin, not the head.
Why do many trailers built before 1975 have no parking brakes?
They don’t have spring brakes.

If the service air line comes apart while driving but the emergency line is okay, what happens immediately?

Nothing happens immediately, because the service line only carries air when the brakes are applied. When you do apply the brakes, the air loss from the broken line will cause the trailer emergency brakes to come on.
What will happen if the air lines are crossed when you hook up to an old trailer?

If the trailer has no spring brakes, you could drive away, but you would not have trailer brakes.
Malware: Are Your Macs At Risk?
Although Macintosh systems have been targeted less frequently than Windows-based systems, they are not immune to malware. Look around the next meeting you are at and notice how many of the systems are Macs; there will be a few of them. Those Macs are at risk of malware just as their Windows cousins are.
In the beginning, Macs were perceived as impenetrable and inherently secure machines that held a reputation besting their rival Windows operating system.
However, a recent technical blog post by the Cylance Threat Guidance Team outlines a rather nasty piece of ransomware targeting Mac OS X. This ransomware sample is part of the FileCoder family and behaves just like Windows-based ransomware variants do. It encrypts all files on the Mac and demands a Bitcoin ransom – and even if you pay, there is no mechanism for the decryption of your files, according to the report, making this a particularly devious piece of ransomware.
The report states:
"Now, the concerning thing about this specific malware: we watched as the malware would query for a specific proxy, which was non-responsive. Due to the way it was implemented, even if you pay up, there is no way for the authors to decrypt this file. This is due to the author never receiving your encryption key, and it not being stored locally. In all cases we investigated in the lab, there was no decryption key we could extract to reverse the encryption."
Marketing vs. Reality
Myths and misconceptions abound when it comes to Mac computers. One of the more interesting pieces of fiction floating around the internet is the idea that Macs are somehow impervious to the kinds of security attacks (viruses, ransomware, Trojans, exploits, etc.) to which Windows-based PCs frequently fall victim. Perhaps not surprisingly, Apple itself has encouraged this view through its “Mac vs. PC” advertising campaigns years ago, suggesting that malware is a PC issue and is not a concern for Macs. If only that were the case!
The reality is that Macs are just as vulnerable. If anything, the belief of Mac users that they’re immune to attack exacerbates their vulnerability. For example, the painful lack of security on Macs increases the likelihood that a successful attack on the community of Mac users will be severe, if and when it happens.
Moreover, because well-written malware does not reveal itself to the user, an infected machine with no security software monitoring it can operate at the malware author’s will, with no awareness of infection on the part of the user. Thus, the use of unsecured Macs in the enterprise perforates your security perimeter, creating gaps that are just large enough to allow hackers in and provide them with an opening into the rest of your network.
No Longer Flying Under the Radar
This view of Macs as being ‘clean machines’ seems to derive from the fact that, until recently at least, there were so few of them around, compared to the size of the Windows army. The OS was considered by many so-called experts to be too small a target for hackers and others interested in stealing data. Larger targets generate a bigger “bang for the buck” for cybercriminals, who, like legitimate businesses, want to maximize the return they get on their effort. That helps explain why criminals didn’t pay much attention to Macs when they made up a tiny fraction of market share and were used primarily by college students, designers, and musicians. But that’s now changing, with the recent explosion of mobile devices offered by Apple, from iPhones to iPads.
CylancePROTECT® Stops OS X Threats in Their Tracks
While users can avoid victimization by not opening email attachments from unknown senders, we know that is not always realistic in enterprise environments. Security controls should not be so restrictive that they compromise business operations.
CylancePROTECT uses multiple protection elements to stop this type of threat before it causes any damage. CylancePROTECT supports versions of OS X from 10.9 (Mavericks) to 10.12 (Sierra), using the same great artificial intelligence technology that protects millions of endpoints today, whether they run Windows, Mac OS X, or Linux.
If you don't have CylancePROTECT®, contact us to learn how our artificial intelligence based solution can predict and prevent unknown and emerging threats before they ever execute. | <urn:uuid:8bc6b7fa-d4dc-4855-b145-77b2b7a1c373> | CC-MAIN-2022-40 | https://blogs.blackberry.com/en/2017/03/cylance-vs-mac-malware | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00542.warc.gz | en | 0.957079 | 932 | 2.59375 | 3 |
What Does Privacy Mean to the Census Bureau?
What is Differential Privacy?
According to the Census Bureau, the agency protects your privacy and your responses “by employing a technique called differential privacy.” What is differential privacy? How does it protect identifying information if the information is shared or released? Differential privacy is a system that allows companies, governments, and organizations to share information contained in a dataset through descriptions, without giving out the exact data.
Often the information is released as a data set that is statistically equivalent to the original overall, but contains small substitutions that help protect each individual. This also prevents companies from combining several data sets and using computing algorithms to backtrack and determine to whom the data belongs.

The basic idea behind differential privacy is that if the substitutions are small enough, query results cannot provide details about any specific individual, yet the information remains relevant. The data can still be studied by healthcare and other agencies to learn more about society, health, and more. The guarantee, under this definition of differential privacy, is “that a person’s privacy cannot be compromised by a statistical release if their data are not in the database.”
As differential privacy was initially developed by cryptographers, the two fields are often connected. Government agencies, such as the Census Bureau, have been known to use differentially private algorithms to publish demographic data. This is done by releasing statistical aggregates, which give insight into behavior in aggregate without giving up any identifying information.
What is a differential algorithm?
While it sounds complicated, an algorithm is considered differentially private if its observed output cannot be connected to the individual data used in the computation. Differentially private algorithms are thought to be resistant to identification and re-identification attacks.
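As a concrete illustration, the standard Laplace mechanism makes a counting query differentially private by adding zero-mean noise with scale 1/ε. This minimal sketch is illustrative only; the Census Bureau's actual 2020 implementation is a considerably more elaborate system, and the counts and ε value below are invented.

```python
import math
import random

def dp_count(true_count, epsilon):
    """Release a count with Laplace noise of scale 1/epsilon.

    A count has sensitivity 1 (adding or removing any one person changes
    it by at most 1), so this calibration gives epsilon-differential privacy.
    """
    u = random.random() - 0.5  # uniform on [-0.5, 0.5); inverse-CDF sampling
    noise = -math.copysign(1.0, u) / epsilon * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(0)
# Two "neighboring" databases (differing by one record) yield statistically
# similar released counts, so the output reveals little about whether any
# particular individual is in the data.
print(dp_count(1000, epsilon=0.5))
print(dp_count(1001, epsilon=0.5))
```

Smaller ε means more noise and stronger privacy; larger ε means more accurate counts, which is exactly the accuracy-versus-privacy trade-off at the heart of the lawsuit discussed below.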
What is the New Privacy Tool?
Sixteen states are currently involved in a lawsuit against the Census Bureau over how it protects the privacy of American citizens’ data. As Americans everywhere are clamoring for more privacy protections, there are obvious concerns related to the ways in which private and pertinent information is safeguarded from public consumption.
The lawsuit began with the state of Alabama accusing the Census Bureau of using a new privacy tool known as “differential privacy.” The concept of differential privacy is to protect the data contained in a database. This means that the Census Bureau uses all available means to protect your privacy from corporations who would manipulate and misuse it.
In 2006, data scientists Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith presented their studies and differential privacy concepts in a scientific article. The article detailed the idea of “noise” as being added to a database to protect the individual data points. Noise, or minute mathematical adjustments, are used to alter the data so slightly that it can be used in studies, business, or health. The purpose of using noise is that generalizing the data or giving out the results of a few queries can expose data to the public.
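The noise idea can be sketched in a few lines of code. The function below is a minimal illustration of the Laplace mechanism from the 2006 work, not the Census Bureau's actual (far more elaborate) disclosure-avoidance system; the dataset, predicate, and epsilon value are all hypothetical.

```python
import math
import random

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count of records matching a predicate.

    A counting query has sensitivity 1 (one person joining or leaving the
    database changes the count by at most 1), so Laplace noise with scale
    1/epsilon is enough to mask any individual's presence.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) noise via the inverse CDF.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Smaller epsilon means more noise and stronger privacy; larger epsilon gives more accurate but less private answers. That trade-off is exactly what the lawsuits described below are arguing about.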
Re-identification is one reason why the Census Bureau would choose to implement differential privacy. Re-identification works when a company or entity is attempting to find out the private information in a particular database. They use intelligent algorithms and additional sources of data to figure it out by matching corresponding data points from multiple databases. Through these means, private details can be found.
What Does Census Privacy Mean to You?
As the Census Bureau contains the private information of hundreds of millions of American citizens, it is imperative that this data is collected and protected in the most efficient manner possible. Subsequently, typical questions on a census form will ask citizens about:
- Whether they are of Hispanic origin.
- Who lives with them.
- Their type of housing.
While Census statistics are compiled under the guise of documenting the ever-changing demographics of the American population, changes in the ways in which this information is collected, stored, and disseminated to the public will undoubtedly raise questions. For example, as Census data is imperative to the American political landscape, it is very important that all people who live within a particular voting block are accurately represented and accounted for at all times. The concept of differential privacy raises questions about the processes which the Census Bureau uses to collect data in the eyes of some American citizens.
Who is Objecting?
The premise of the state of Alabama’s lawsuit against the Census Bureau is that the use of differential privacy leaves room for undocumented individuals to corrupt and manipulate demographic numbers. In turn, the state of Alabama posits that improper documentation undermines their political power within the country, as electoral votes and congressional seats are delegated based upon the population of any given state. While Alabama was the first state to raise concerns about the Census Bureau’s practices, fifteen other states have also filed their own respective lawsuits including:
- New Mexico.
- South Carolina.
- West Virginia.
The fundamental problem presented in the lawsuit by the sixteen states is that because differential privacy creates false information by design, it prevents the states from accessing municipal-level information crucial to performing their essential government functions. Moreover, the distorting impact of differential privacy will likely fall hardest on some of the most vulnerable populations — such as individuals living within rural areas, as well as ethnic minority groups.
As such, many civil rights groups within the African American community have raised concerns about the use of differential privacy. As the U.S. does not have a strong history of racial tolerance, much less acceptance, it is understandable that changes to the Census Bureau’s system would alarm leaders in the Black community. As minorities are already at a numerical disadvantage when it pertains to political power, failure to provide precise data makes it increasingly difficult for these groups to form a majority in a given community or district. These civil rights groups question whether the implementation of differential privacy could potentially dilute or even negate their local political power.
Though California has not yet joined the suit, officials from the state have raised concerns with the current administration. While they have shown some reserve in joining the lawsuit, they are one of the many states in which lawmakers have begun to question the ways in which the Census Bureau collects data. The primary concern is the impact this could have on a state’s ability to ensure legitimate voting results across elections. While there are currently sixteen states involved in the lawsuit, twenty-seven states are faced with deadlines relating to political redistricting.
These states seek alternative methods to mine the required data and even have gone so far as to rewrite laws dealing with redistricting deadlines. While differential privacy may appear to be a minor tool in the arsenal that the Census Bureau employs to track American citizenship, it has without question sparked a nationwide discussion on the importance of personal privacy and the best ways to go about maintaining it.
The rise of innovative technologies that allow disabled members of society to chart their own career paths is paying off not only for employees, but also for their employers.
At an event held at BMO Financial Group headquarters in Toronto on Monday, many of those technologies were on display, such as JAWS (Job Access with Speech), a Windows-based screen reader that converts text to speech, and TTY/TDD machines, telephone devices that allow the deaf, deafened and hard of hearing to communicate more easily with customers and clients.
The Right Honourable David Onley, Ontario’s new Lieutenant Governor, was on hand to give a keynote and to serve as an example of what those with disabilities can accomplish. Onley has lived with polio and post-polio syndrome since the age of three.
According to Stephen McDonnell, senior manager for human resources communications at BMO, technology is opening career doors for the disabled that were previously closed tight.
“Twenty years ago, people who were blind often worked in back-office roles or they were working on endeavours of the [Canadian National Institute for the Blind] in rather protected environments. [JAWS] has allowed people to be in very mainstream roles (with BMO), and even, in some cases, customer-facing roles.”
He called JAWS “a marvelous piece of machinery because it will read either by letter or by word recognition. It has the ability to spell check, too.” Also deployed at BMO are technologies from Bedford, Mass.-based Kurzweil Educational Systems Inc., which builds reading technologies that allow people with learning disabilities to be employed effectively. The Kurzweil 3000 offering is a reading, writing and learning software solution for individuals with learning difficulties, such as dyslexia or attention deficit disorder, as well as those who are learning English, according to the company.
“Kurzweil puts things in the appropriate order,” said McDonnell. “We have people who work in investment roles who are learning-disabled and they are among our most successful investment people. A couple of them are forever appearing on our silver or bronze winner lists for having achieved the most business success.”
Another encouraging factor for McDonnell is that new technology is allowing younger people to attend post-secondary educational institutions without the special requirements previously employed by earlier generations. Having a greater number of highly qualified grads coming out of universities and colleges will also help alleviate the strains on the economy caused by Canada’s looming skills shortage, he added.
“These students are being regarded as a whole new pool of talent that is anxious to work and has the appropriate skill set,” he said. “Technology allows for their inclusion, and in fact, in many organizations people are almost competing to get them into the workplace because studies indicate they are…as good as anyone else. We want to nab that talent and get ready for the future.”
Another positive future trend that these technologies will help facilitate, McDonnell said, is lengthening the amount of time people will be able to work, should they choose to do so.
“We have an aging workforce, so as people get older they are inclined to acquire a disability,” he said. Many baby boomers will be affected by such ailments as macular degeneration, a condition that can result in the loss of central vision, he said. With technologies that can counteract the impact of such afflictions, individuals will have a greater ability to choose whether they want to continue working and for which organizations.
“Years ago, if you had certain disabilities, you couldn’t choose your own career. It was really chosen for you in a very paternalistic kind of way. Now, technology has created equitable access.”
Chatbots are quickly becoming a mainstay of modern business. As interactions migrate to digital platforms, more companies are using chatbot technology to create a seamless pathway for customers to connect via artificial intelligence (AI) and real human interaction.
Whether your business is already using chatbots or simply considering whether they will have a place in your CX strategy, you’ve likely encountered at least a few confusing terms. The world of chatbots is full of specialized concepts, and you’ll need to understand some basic terminology to navigate this new landscape.
This guide will help you gain your footing and understand a few of the most common chatbot terms you’ll encounter.
What is the Difference Between Conversational AI and Chatbots?
Before we get into the terms, it’s important to understand a key concept: Conversational AI and chatbots aren’t the same, but they can be used together.
Conversational AI is a type of technology designed to enhance interactions between humans and computers. It helps computers understand human language and its many nuances in order to respond in a more “human” way.
Chatbots, in their basic form, don’t rely on conversational AI but on a set of rules that help them respond to a limited number of user queries or inputs. In this form, chatbots can help answer basic questions, but they fall short when tasked with more complex interactions. However, nowadays, many chatbots are built on conversational AI, which enables them to have much more dynamic conversations with users than they could in the past.
For businesses to truly benefit from using chatbots, they need to rely on conversational AI. And, because we see conversational AI as the future of chatbot technology, many of the terms we’ll address in this glossary deal with chatbots that incorporate it.
Essential Chatbot Terms to Know
The deeper you wade into the waters of chatbot semantics, the more you’ll encounter unfamiliar terminology. A complete glossary would include dozens, if not hundreds of terms. We’ll focus here on the most critical ones to know.
Chatbot Conversation Flows
What Are Chatbot Conversation Flows?
Conversation flows are predesigned series of responses that a chatbot employs based on common user inputs. For instance, if visitors commonly come to your website to find out the hours at various locations, you’d design a conversation flow to guide them through asking for hours, selecting a location and so on.
This concept relates to older, non-AI-based chatbots, which are usually known as flow-based or rule-based chatbots. Because these bots don’t rely on conversational AI, you need to design conversation flows for them to follow. As you can see, conversation flows can only address a limited range of issues.
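A rule-based flow can be sketched as a simple state machine. Everything below — the states, prompts, and replies — is hypothetical, but it shows why such bots only handle inputs they were explicitly given rules for.

```python
# Hypothetical rule-based conversation flow: each state maps recognized
# user inputs to a (reply, next_state) pair. There is no AI here -- any
# input outside the rules falls through to a canned apology.
FLOW = {
    "start": {
        "hours": ("Which location would you like hours for?", "pick_location"),
        "locations": ("We have stores downtown and uptown.", "start"),
    },
    "pick_location": {
        "downtown": ("Downtown is open 9am-5pm, Mon-Sat.", "start"),
        "uptown": ("Uptown is open 10am-6pm every day.", "start"),
    },
}

def respond(state, user_input):
    rules = FLOW[state]
    key = user_input.strip().lower()
    if key in rules:
        return rules[key]
    # Unrecognized input: stay in the same state and apologize.
    return "Sorry, I didn't understand that.", state
```

Every supported path through the conversation has to be enumerated by hand, which is exactly the limitation conversational AI removes.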
What Is an Autoresponder?
Autoresponders are designed to provide quick replies to an initial contact or a specific keyword from a user. For instance, chatbot autoresponders can instantly respond with a simple, “Hi, how can I help you today?” when a user initiates a chat. Autoresponders are limited in their use but help keep a conversation moving so a customer isn’t just left waiting in a queue.
NLP, NLU and NLG
What Are NLP, NLU and NLG?
These concepts all relate to conversational AI and chatbots with deeper learning capabilities.
NLP, or Natural Language Processing, is a complex technology that allows chatbots to process human language. That means not simply understanding words and basic grammar but the nuances of language and human emotion. It can involve complex tasks like sentiment analysis, which can help bots interpret the tone of a conversation. NLU, or Natural Language Understanding, is one component of NLP that enables bots to read and interpret human language to discern user intent.
On the other end of this is NLG, or Natural Language Generation. This AI technology converts the natural data-based language of bots into human language so the bot can respond conversationally.
What Is Conversational UI/UX?
This applies the concepts of user interface (UI) and user experience (UX) to conversational technology. Rather than interacting with a static website, customers who engage with a chatbot are having a real, live conversation. Accounting for this, chatbot developers must create a chatbot environment and interface that leads to a pleasant experience for a user. Questions here revolve around ease of use and even the personality of the bot itself.
What Is Machine Learning?
Machine learning is one of the key components that distinguish conversational AI-based bots from basic, rule-based bots. This technology allows bots to learn and grow with each conversation they have with a human. Rather than repeating the same pre-programmed responses, bots with machine-learning capabilities continually develop more nuanced and complex conversational abilities.
One key metric for measuring the effectiveness of a bot’s machine learning is the learning rate. According to Forrester, a company can evaluate this by tracking how many of its bots’ interactions get escalated to an agent. If the bots are learning well, they will resolve incidents they previously escalated.†
† Forrester. “Measure The Success Of Your Conversational AI-Powered Chatbots With These Metrics.” July 29, 2020.
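The learning-rate check described above amounts to tracking the share of conversations a bot escalates over time. A minimal sketch (the conversation-log format here is invented for illustration):

```python
def escalation_rate(conversations):
    """Fraction of bot conversations that ended in a human handover."""
    if not conversations:
        return 0.0
    escalated = sum(1 for c in conversations if c["escalated"])
    return escalated / len(conversations)

# Comparing two periods shows whether the bot is learning: a falling
# escalation rate means it now resolves incidents it used to hand off.
```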
What is an Utterance?
An utterance is anything a user says to a chatbot. For example, if a user wants to check the balance of their banking account, they might ask, “What is my account balance?” They might also ask, “Can you show me how much money I have in my checking account?” or, “I’d like to know the current balance of my account.” Each of these sentences is an utterance.
What Is an Entity?†
Entities are key conversation variables that allow a bot to decipher user utterances and drive the conversation toward clarifying the user’s intent. A financial institution, for example, may have entities such as “checking,” “balance,” and “transfer.” Bots are then programmed with specific responses to those entities. For instance, a conversation might go something like this:
User: “What is my account balance?” [Entities: “account” and “balance”]
Bot: “Which account balance are you looking for?”
User: “Checking” [A more specific entity]
Bot: “Your checking account balance is $5,000.”
Because the bot is programmed to respond to specific entities, this conversation flows smoothly and resolves quickly.
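The banking exchange above can be mimicked with a toy parser. Real conversational AI infers intent from trained models rather than keyword lists, so treat the intents, keywords, and entities below as illustrative only.

```python
import re

# Hypothetical intent keywords and entity vocabulary for a banking bot.
INTENTS = {
    "check_balance": ("balance", "how much money"),
    "transfer_funds": ("transfer", "send money"),
}
ACCOUNT_ENTITIES = ("checking", "savings")

def parse(utterance):
    """Return (intent, entities); intent is None when nothing matches."""
    text = utterance.lower()
    intent = next(
        (name for name, keywords in INTENTS.items()
         if any(k in text for k in keywords)),
        None,  # no match: the caller should respond with a fallback
    )
    entities = [e for e in ACCOUNT_ENTITIES if re.search(rf"\b{e}\b", text)]
    return intent, entities
```

When the intent matches but no account entity is found, the bot knows to ask a clarifying question (“Which account?”) — exactly the flow shown in the dialogue above.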
What Is Intent Recognition?
This describes a bot’s ability to go beyond basic language processing to clearly and accurately discern what the user wants to know. Can the bot take each utterance and form a cohesive understanding of what the user wants, even when those utterances may be confusing or phrased in unexpected ways? Doing so requires advanced AI technology that allows the bot to understand language in context, interpret emotions and more.
A bot’s ability — or lack thereof — to accurately decipher user intent is a key determinant in CX outcomes. It’s the difference between customer frustration and satisfaction.
What Is a Fallback?
A fallback is what happens when the bot fails to understand user intent. It usually comes in the form of a preset response, such as “I’m sorry, I didn’t understand your question.” Of course, the fallback is designed to get the conversation back on track toward resolution. If this happens enough, though, it may end in a human handover.
What Is a Human Handover?
Also known as a handoff, the human handover is the point at which the bot transfers the conversation to a human agent. It may automatically be triggered as a result of too many fallbacks, or it could be a result of the user specifically requesting a transfer. The latter scenario is called a “human fallback,” and any well-built bot will make this easy and painless for the user to request.
What Is a Trigger?
Related to the above two ideas is the concept of the “trigger.” In chatbots or similar technology, this is any input or series of responses that lead to a fallback or handoff. For instance, a specific set of human responses and a bot’s attempts to clarify may trigger a handoff. Or the bot may be programmed to respond to a simple request like “Talk to an agent.” Either way, the trigger interrupts the flow of the conversation and causes the bot to redirect it.
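Putting fallbacks, handovers, and triggers together, escalation logic often looks like the sketch below. The phrases and threshold are invented for illustration, not taken from any particular product.

```python
# Hypothetical triggers for a human handover.
HANDOFF_PHRASES = {"talk to an agent", "human please", "representative"}
MAX_FALLBACKS = 2  # consecutive misunderstandings before escalating

def should_hand_over(user_input, fallback_count):
    # Trigger 1: the user explicitly asks for a person ("human fallback").
    if user_input.strip().lower() in HANDOFF_PHRASES:
        return True
    # Trigger 2: the bot has failed to understand too many times in a row.
    return fallback_count >= MAX_FALLBACKS
```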
What Is a Conversational Channel?
A conversational channel is any medium where a bot can interact with users. It could be your website, SMS messaging, Facebook Messenger, a mobile app, or any number of other options. If there’s a chat interface, it could potentially be a conversational channel for a chatbot.
What Is a Chat Widget?
Chat widgets are ready-made chat windows you can add to your website. A widget serves as a host for your chatbot and an easy-to-use interface for your users. You can usually purchase these and then customize them for your needs, and they pop up to initiate conversations with your website visitors.
Don’t Navigate the World of Chatbots Alone
As you can see, there’s a learning curve when it comes to navigating the complex but compelling new realm of chatbot and conversational AI technology. Even if you’re experienced in call center IVRs, chatbots deal with a different type of human-computer interaction, and it’s important to understand those differences so you can effectively use chatbots.
At the end of the day, your customers aren’t bots; they’re people. To successfully put bots to work in your company, you need people on your team who can ensure everything is infused with a human touch. Cyara Botium can help you ensure your chatbots understand your users so you can save your human agents for the interactions that need them most.
Artificial Intelligence (AI) is becoming a cybersecurity team’s best defense against hackers, reports suggest. Moreover, as we continue to progress with technology, the dependency on AI to help protect our personal and business technology is increasing. More and more businesses and cybersecurity companies are turning to artificial intelligence as a means of bolstering their defenses against cyber-attacks, and with ever-increasing positive results.
According to Capgemini’s article, Reinventing Cybersecurity with Artificial Intelligence, artificial intelligence is becoming a necessary factor in business cybersecurity defense. As many as 66% of cybersecurity firms believe that they would be unable to detect cyber-attacks without the assistance of their AI, and as many as 75% of cybersecurity firms are beginning to test artificial intelligence. 60% believe that artificial intelligence has dramatically improved the accuracy and efficiency of cybersecurity technicians and analysts. With over half of all cybersecurity firms and businesses opting for AI in their cyber defense, artificial intelligence is not only becoming more sought after but also more dependable.
With the globe becoming more dependent on technology each day, it should be no surprise that criminals would turn to technology as a means of exploiting and stealing from others. To protect your data, money, and sensitive information from criminals, ensure that your cybersecurity partner is using only the most advanced and up-to-date standards and practices. Hammett Technologies is well-versed in cybersecurity and can guarantee your information’s safety from hackers and criminals.
Learning about Data Science
Just as the atom is the building block of matter, data is the building block of research. This blog will look at how data is used in data science, a field that has led to numerous developments in business analytics and development.
What is data science? Data science is the discipline of digging deep into humongous amounts of data that provides business-related insights for effective decision-making and better productivity.
With applications ranging from medical research to software engineering, data science has made its mark to the point that little in modern analytics seems achievable without it.
With the help of technology, data science has evolved to the extent that it now has the capacity to accommodate big data and analyze vast amounts of data in very short spans of time.
Even though raw data might appear far from analyzable, actionable insights can be successfully extracted from it with the help of artificial intelligence technology and other technological tools.
Data-driven insights have always been successful in leading mankind to better decisions and effective actions. Similarly, data science tools are equipped with machine learning algorithms that enable them to extract patterns from raw data that otherwise seem to be tough to analyze.
(Must check: Data Science vs Machine Learning)
However, with the coming of data science tools, the analysis of raw data has become a much more efficient and cost-effective job. In this blog, we will be discovering the top 15 data science tools to understand how various corporations use these tools to make data science approachable and effective.
Top 15 Data Science Tools
Natural Language Toolkit, or NLTK, has become indispensable in the field of data science. By preprocessing text data for further analysis by ML models, NLTK is a leading data science tool for Python programs.
With an easy-to-use framework comes a series of features that have advanced the field of data science altogether. Be it classification or tagging, this open-source tool has got it all!
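To make the preprocessing idea concrete, here is a pure-Python sketch of the kind of pipeline NLTK automates. With NLTK installed, `word_tokenize` and the `nltk.corpus.stopwords` corpora handle this far more robustly; the stop-word list below is deliberately tiny.

```python
# Deliberately tiny stop-word list for illustration; NLTK ships full
# stop-word corpora for many languages.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and"}

def preprocess(text):
    """Tokenize on whitespace, strip punctuation, lowercase, drop stop words."""
    tokens = [t.strip(".,!?;:\"'").lower() for t in text.split()]
    return [t for t in tokens if t and t not in STOPWORDS]
```

The cleaned token list is what downstream ML models (classifiers, taggers) actually consume.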
Launched in 2011, this data science tool takes a Document Object Model (DOM) approach, which helps users manipulate documents based on raw data.
A fast framework that supports large amounts of data, D3js is the right pick for data science!
A data science tool meant for automatic memory management, Julia is a data science tool powered by its own machine learning algorithms. Meant for statistical programming and data insights extraction, Julia is often seen as a fast and efficient means for venturing into the realm of machine learning.
Unlike other traditional data science tools, Julia’s mechanism is constructed to work well for machine learning. What’s more, Julia can call code written in other languages, such as C and Python.
(Suggested blog: Julia vs Python)
Offering a multiple API platform, Apache Flink comes next in the list of the top 15 data science tools.
A hassle-free user interface for computing unbounded and bounded data sets, this data science tool is renowned for its low latency (it can process huge amounts of data with minimal delay).
Launched in 2011, Apache Flink is surely the right pick if you are looking for a scalable application that can handle big data quite well.
(Recommended blog: Data science project ideas)
A much-renowned data science tool and programming language, Python is all you need to know about when it comes to top data science tools. With multiple open-sourced libraries powered by it, Python is widely used by data scientists for everyday applications.
Owing to its popularity and the dynamic language that it offers, Python offers data scientists the chance to unfold the secrets of data like no other tool.
Not only has it helped data to be deciphered in an advanced way, but it has also led data scientists to succeed in the field of ML too.
(Must read: Machine learning models)
A web-based interface for data science, Jupyter Notebook supports many uses - data cleaning, statistical modeling, data visualization, and machine learning. Capable of handling big-data workloads efficiently, Jupyter is the real deal.
A user-friendly environment and unified data management are some of the striking features of Jupyter that was established in 2015.
Data summarization, analysis, and query are some of the major highlights of this data science tool. Known as Apache Hive, Hive is a ‘data warehouse solution’ created by Facebook.
Be it small chunks of data or big data analysis, Hive best works with data science by simply putting its features to use - data extraction, data cleaning, etc.
Even when beginners are trying to explore data science and similar programming tools, Hive is the right choice for them.
(Must read: Latest programming languages)
A NoSQL database, MongoDB is often considered one of the most promising data science tools in today’s time. Why? Its flexibility to work with versatile data schemas and its high speed.
Even though SQL databases are often preferred over NoSQL databases, MongoDB stands out among data science tools. Data science often requires tools that can scan through rich data full of hidden detail.
To extract insights from such vast data, big data analytics with MongoDB is your destination if you are looking for real-time insights and actionable leads.
SAS or Statistical Analytical System is another data science tool that helps data scientists to retrieve and analyze datasets for particular purposes.
One of the unchallenged tools in data science as of now, SAS continues to be the topmost choice for data scientists for data curation and AI development.
A veteran tool developed in the 1970s, SAS has evolved over time for better data curation and analysis.
Launched in 2005, Google Analytics is one of the most renowned Google data science tools, offering real-time insights and data segmentation.
More than that, its predictive-analysis features have always set it apart from the others.
Having said that, Google Analytics is undoubtedly the most sought-after data science tool that has not only made the work easier but has also added to the excitement.
When it comes to data mining and related activities, WEKA is the right pick. To begin with, WEKA is a data science tool that uses various ML algorithms to accomplish data mining tasks on a day-to-day basis.
It includes tools that are capable of data regression, classification, and pre-processing. With all such features included in one tool, WEKA is suitable for usage by both beginners and experienced professionals.
For every task that needs to be done, WEKA has multiple ML algorithms to get the work done! Perhaps data science can be made easier with this tool!
(Also read: Python interview questions for data science)
One of the most widely used data science tools by data scientists, Tableau is abundant in features that help one to extract actionable insights from raw data apart from creating statistical visuals like graphs or charts.
If you are thinking of the features that make this tool one of the most desirable among all, then here are some - fast speed, hassle-free user interface, huge data capacity, and accurate data summarization.
Open-sourced in 2015, TensorFlow is a library for numerical computation and large-scale ML algorithms. When it comes to data science, TensorFlow is known for its ability to ingest big data and train ML models that dig deep into data sets to serve predictions.
Largely known for its deep learning applications, TensorFlow models approach data in the form of numerical computations that help users to get actionable insights.
All in all, it is an effective tool that ranks among the top 15 data science tools of 2021.
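TensorFlow's own API is too heavy to reproduce here, but the numerical computation it automates can be shown in miniature: fitting a slope by gradient descent on a squared-error loss. TensorFlow performs the same kind of optimization with differentiable graphs over tensors, at vastly larger scale; this hand-rolled version is only a sketch.

```python
def fit_slope(xs, ys, lr=0.01, steps=500):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of mean((w*x - y)^2) with respect to w.
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w
```

On data generated by y = 2x, the loop converges to a slope near 2 — the same principle TensorFlow applies to models with millions of parameters.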
To extract insights from data, one needs to ensure that the data science tool is well-equipped with ML algorithms to run through the vast data. MATLAB, a data science tool, works with predictive ML models that help data scientists conduct pre-processing on raw data.
A renowned programming language tool that works well with data science, MATLAB was launched in 1984. Today, it enables users to access and analyze data with several interesting features.
One of the most sought-after data science tools for data visualization, PowerBI is a collection of data analysis tools that help users derive insights from data that otherwise seems to be quite hefty for manual analysis.
Data collection, storage, cleaning, and analysis are some processes that can be conducted with the help of this tool that is used by many businesses in the industrial sphere.
As PowerBI works towards data analysis, one gets to structure data in a desirable form with the help of a variety of applications.
(Read also: PowerBI and Tableau for data visualization)
In the End
In the end, data science tools can be used to identify actionable insights that otherwise could not be found by manual analysis. As more and more data science tools emerge to help humans filter through raw data, ML algorithms are simultaneously progressing to achieve data analysis in the best way possible. High-speed processing, large capacity, and in-depth analysis are some features that every data scientist looks for.
Perhaps with these data science tools and others emerging in the current scenario, the world can expect a lot more than just data analysis when it comes to data science.
Around the world, police men, women, and departments are changing the way they interact. With the increased use of Twitter and other social media, police personnel can now communicate better with their community and other audiences across the globe.
The second ever police tweet-a-thon helped raise global awareness of the communication that can exist between law enforcement and the community through advocating law enforcement’s use of the hashtag #poltwt on November 1, 2013.
By monitoring the Twitter Firehose, BrightPlanet followed the conversation that occurred during the tweet-a-thon throughout the 24-hour period. We received and analyzed over 31,000 tweets from over 12,000 individual users and departments. Because the vast majority of tweets were in English, the following analysis is given in English. The results focus primarily on where, when, and from whom the tweets were sent.
To help you visualize the results and give students some real world experience, we sent the final collected data set over to a Business Intelligence class at Augustana University in Sioux Falls, S.D. The following infographic is the analysis of that data set. If you have any questions about the dataset or would like access to it, please fill out the contact form located here. | <urn:uuid:cf9ba8d2-93ad-4ebd-9599-a3b2ba165ca6> | CC-MAIN-2022-40 | http://brightplanet.com/2013/11/11/infographic-final-results-of-the-poltwt-tweet-a-thon/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00742.warc.gz | en | 0.939084 | 261 | 2.578125 | 3 |
In July 2021, the European Commission announced new ambitious targets for green energy. By 2030, 40% of all the energy consumed in Europe should be generated renewably. While this is great for the planet, it adds another layer of complexity to energy grid management. The solution? Smart grids. Here, Johan van der Veen, an account manager specialising in energy at remote monitoring and control expert Ovarro, explains how distribution automation makes smart grids possible.
With rapid diversification and rising customer demand, these are challenging times for grid managers. The transition to solar, wind, and wave power means that energy production is as changeable as the weather, quite literally. This is compounded by growing demand for energy thanks to electric vehicles (EVs) and electric heating.
As a result, the energy distribution landscape is changing. Rather than the traditional top-down structure of one producer to many customers, now there is a shifting dynamic of energy being fed back and forth as usage peaks and troughs.
But if supply and demand don't align, it can lead to a higher rate of power-loss or unavailability incidents. Nobody wants a power failure, especially with heavy fines and public opinion on the line.
All this means that energy grids need to work harder and, without huge amounts of investment in extra infrastructure, the easiest way to achieve this is to make them more efficient; smarter, even. Datawatt, now part of Ovarro, has been working on making grids smarter with distribution automation technologies since 1977, well before the term smart grid even existed.
Work smarter not harder
Distribution automation covers the final part of the energy network, between the last station and customers' homes and businesses. While in the past this section of the grid has been unmanaged apart from meters, recent years have seen a trend towards extending monitoring and control activities to this low-voltage side.
Smart grids rely on monitoring equipment to collect data and analyse it. All this information can be used to predict problems, or identify them quickly once they do occur, helping grid operators take preventative or remedial action.
In layman's terms, an electricity grid is like a chain of cables: a failure in one link means the whole chain doesn't work. Smart grids can automatically identify and isolate the fault location, remotely switching gear so customers are supplied from another part of the grid.
Previously, grid engineers would have had to drive between distribution stations until they found the one with the fault. Smart grids with remote monitoring capabilities streamline this task so that maintenance teams can go directly to the source of the problem, reducing outage duration.
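The fault-localization idea described above can be sketched in miniature. The following Python example is purely illustrative and not from any Ovarro product: the segment names, current readings, and threshold are invented. It scans a radial feeder for the first segment whose current has collapsed and reports which switches to open so healthy segments can be re-fed.

```python
# Illustrative sketch of automated fault localization on a radial feeder.
# Segment readings and the threshold are hypothetical example values.

def locate_fault(segment_currents, min_amps=5.0):
    """Return the index of the first segment whose current collapsed,
    or None if every segment looks healthy."""
    for i, amps in enumerate(segment_currents):
        if amps < min_amps:
            return i
    return None

def isolation_plan(segment_currents):
    """Describe which switches to open around the faulted segment."""
    fault = locate_fault(segment_currents)
    if fault is None:
        return "all segments healthy"
    return (f"fault in segment {fault}: open switch {fault} and "
            f"switch {fault + 1}, re-feed downstream segments from backup")

# Example: current collapses from segment 2 onward in a five-segment feeder.
readings = [112.0, 110.5, 0.3, 0.1, 0.0]
print(isolation_plan(readings))
```

A real RTU performs this kind of check continuously against live telemetry rather than a static list, but the decision logic, detect, isolate, re-feed, is the same.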
Solutions for smart grids
Ovarro offers remote monitoring and supervisory control and data acquisition (SCADA) systems designed for distribution grids. The Datawatt Smart Grid (DSG) series of remote telemetry units (RTUs) operates with flexibility and security in mind, adhering to European Network for Cyber Security (ENCS) security standards. The DSG operates on Linux, the modern open-source platform known for reliability and stability.
Ovarro RTUs like the DSG collect and manage data, before making it available to other systems for processing and analysing. As well as RTUs, Ovarro assembles cabinets that include a combination of its own products and third party hardware, offering complete solutions that are quick and easy to install in the field.
For smaller operators, there’s the Datawatt Stream webscada, a central system and web portal that collects data in real-time and makes it immediately available on digital devices. Ideal for controlling different locations and processes, this easy to use system ensures worry-free management and maintenance. Furthermore, the data can also be imported into the customer’s central system, so everything is available in one place.
It’s access to data that makes grids smart. Therefore, RTUs and SCADA systems are the building blocks of modern smart grids, collecting and analysing the large amounts of process data necessary for faster, and better, decision making.
As the energy sector comes to rely more heavily on renewable energy generation, in accordance with European climate targets, more effective distribution management will become essential. Automation offers one solution, with RTUs like the DSG series able to collect, analyse and act on data, helping to prevent power outages and resolve any faults quickly.
For more information on remote monitoring for energy grids, head to Ovarro’s website.
The author is Johan van der Veen, an account manager specialising in energy at remote monitoring and control expert Ovarro. | <urn:uuid:3ac0eced-b5dc-428d-a691-79a5a1ddac1e> | CC-MAIN-2022-40 | https://www.iot-now.com/2021/08/20/112226-making-smart-grids-possible-distribution-automation-for-a-failproof-diversified-energy-grid/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00142.warc.gz | en | 0.923164 | 945 | 2.765625 | 3 |
Safe Mode is a great feature for Windows computers in that it allows a user to resolve issues they might not ordinarily be able to address in Normal Mode. That’s because Safe Mode runs only software that is critical to the proper functioning of the Windows operating system.
But safety isn’t equivalent to security.
Windows Safe Mode prevents a lot of third-party software that isn’t necessary to boot up the PC from running, including anti-virus solutions. Attackers can therefore abuse Safe Mode to launch their exploits, whereas they might be prevented from doing so in Normal Mode.
The attack begins with a malicious hacker gaining local admin privileges on at least one machine on the corporate network. It doesn't matter how they do it, but if they had to choose, they might target a particular individual in an organisation with a malicious email.
From there, hackers would need to look for vulnerable endpoints where they could reuse stolen login credentials to move laterally throughout the network.
That’s where Safe Mode comes in. As explained by Doron Naim, a senior security researcher at CyberArk:
“Safe Mode, by design, does not boot any software or drivers that are not critical to the operation of Windows. As a result, by remotely forcing a reboot in Safe Mode, attackers inside a compromised system are able to operate freely, as most endpoint defenses are not enabled. And because [Virtual Secure Module] VSM is only enabled in Normal Mode, attackers can also capture credential hashes needed to laterally move through the environment – despite Microsoft’s claims that pass-the-hash risks have been mitigated…”
A hacker must do three things to pull off this attack:
- Remotely configure an infected machine to reboot into Safe Mode. This can be done using BCDEdit.
- Configure attack tools to run in Safe Mode. A hacker can include a malicious service that runs only in Safe Mode in their initial payload. Alternatively, they can register a malicious COM object to run every time explorer.exe executes.
- Reboot the machine in Safe Mode. The actor can just wait for the next restart or create a fake “update” window that asks the victim to restart their computer.
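For context, step 1 relies on BCDEdit, Microsoft's documented boot-configuration editor. The commands below are standard, documented commands run from an elevated prompt; they are shown here so defenders can recognise and reverse the change, and are not taken from the CyberArk write-up:

```bat
REM Force the next boot into Minimal Safe Mode (the setting attackers abuse)
bcdedit /set {default} safeboot minimal

REM Defenders: inspect the boot entry and remove a rogue safeboot value
bcdedit /enum {default}
bcdedit /deletevalue {default} safeboot
```

Monitoring for unexpected changes to the safeboot value is one practical way to detect this technique.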
From there, the attacker can achieve any number of outcomes, including lateral movement or even credential theft. As Naim observes:
“If the attacker’s goal is to steal credentials for future use, then the attacker actually wants the user to log on to the system. As the user logs in, the attacker can capture the credentials. In this case, the attacker will likely use the COM object technique to execute code that will change the background, look and feel of Safe Mode – making it appear that the user is still in Normal Mode. As soon as the user enters his or her credentials, a second “update” window can prompt the user to reboot yet again to move the machine back into the actual Normal Mode. Just as mentioned above, this secondary reboot prompt can mimic a legitimate Windows prompt to prevent the user from noticing anything suspicious.”
Malware in the wild has exhibited that type of one-two update scheme to conceal its activity. That includes some variants of Cerber ransomware.
In a test, CyberArk’s researchers found that once they modified the registry keys in Minimal Safe Mode, they were able to run Mimikatz and steal credentials without a security solution removing the program from the machine.
Given the risks associated with that type of attack, Naim recommends that sysadmins restrict administrator privileges, employ security tools that work in Safe Mode, and monitor who's entering Safe Mode and what they're doing once they're there. That's all they can do: Microsoft has refused to fix the issue, saying an attacker must already have compromised a machine to initiate this sequence.
Interesting. In my opinion, a security hole is a security hole, even if it serves as a secondary attack vector.
Found this article interesting? Follow Graham Cluley on Twitter to read more of the exclusive content we post. | <urn:uuid:4419258b-7596-47da-b43e-4c5ccd45c91d> | CC-MAIN-2022-40 | https://grahamcluley.com/attacker-exploit-windows-safe-mode-steal-users-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00142.warc.gz | en | 0.931078 | 850 | 2.796875 | 3 |
Secure your business with CyberHoot Today!!!
A Randomization Function is an algorithm or procedure that implements a randomly chosen function between two specific sets, suitable for use in a randomized algorithm. Randomizing functions are used to turn algorithms that have good expected performance for random inputs, into algorithms that have the same performance for any input.
- a randomized algorithm is one whose behavior depends not only on the input, as with a deterministic algorithm, but also on random choices made as part of its logic. As a result, the algorithm can give different outputs even for the same input.
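To make the definition concrete, here is a small randomized algorithm: a Monte Carlo estimate of π. The function and names are our own illustration, not from the source. For the same input (the sample count), each run makes different random choices and so returns a slightly different output, yet the expected result is well characterized.

```python
import random

def monte_carlo_pi(samples):
    """Estimate pi by sampling random points in the unit square
    and counting how many fall inside the quarter circle."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

# Same input, different random choices each run -> slightly different output.
print(monte_carlo_pi(200_000))  # close to 3.14159, varies run to run
```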
Why are random numbers important?
Being able to generate a truly unpredictable random number is important in a number of ways. Drawing gambling or Powerball numbers out of a set of possible options requires it, as does creating randomness in computer gaming. However, perhaps one of the most important areas in which truly random numbers are needed is cryptography. Modern cryptography is based upon the inability to determine the input that created an encrypted output file. Without randomness, this would not be possible (this author admits there is much more to this than what I'm reporting; consider this a gross over-simplification).
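In Python, for example, this split shows up as two standard-library modules: `random` (a fast pseudo-random generator, fine for games and simulations but reproducible once its seed or state is known) and `secrets` (backed by the operating system's cryptographically secure generator, suitable for tokens and keys). A minimal sketch:

```python
import random
import secrets

# random: reproducible pseudo-randomness -- good for games/simulations,
# unsuitable for security, because re-seeding recreates the whole stream.
rng = random.Random(42)
print(rng.randint(1, 69))          # same value on every run with seed 42

# secrets: OS-backed CSPRNG -- suitable for tokens, keys, and passwords.
token = secrets.token_hex(16)      # 16 random bytes as 32 hex characters
pick = secrets.randbelow(69) + 1   # unpredictable draw in 1..69
print(token, pick)
```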
What does this mean for an SMB or MSP?
While cryptography and truly random numbers are vital to cybersecurity programs and our privacy, there are many other more basic topics businesses must address in building a robust security program.
CyberHoot’s Minimum Essential Cybersecurity Recommendations
The following recommendations will help you and your business stay secure with the various threats you may face on a day-to-day basis. All of the suggestions listed below can be gained by hiring CyberHoot’s vCISO Program development services.
- Govern employees with policies and procedures. You need a password policy, an acceptable use policy, an information handling policy, and a written information security program (WISP) at a minimum.
- Train employees on how to spot and avoid phishing attacks. Adopt a Learning Management system like CyberHoot to teach employees the skills they need to be more confident, productive, and secure.
- Test employees with Phishing attacks to practice. CyberHoot’s Phish testing allows businesses to test employees with believable phishing attacks and put those that fail into remedial phish training.
- Deploy critical cybersecurity technology including two-factor authentication on all critical accounts. Enable email SPAM filtering, validate backups, and deploy DNS protection, antivirus, and anti-malware on all your endpoints.
- In the modern Work-from-Home era, make sure you’re managing personal devices connecting to your network by validating their security (patching, antivirus, DNS protections) or prohibiting their use entirely.
- If you haven’t had a risk assessment by a 3rd party in the last 2 years, you should have one now. Establishing a risk management framework in your organization is critical to addressing your most egregious risks with your finite time and money.
- Buy Cyber-Insurance to protect you in a catastrophic failure situation. Cyber-Insurance is no different than Car, Fire, Flood, or Life insurance. It’s there when you need it most.
Each of these recommendations, except cyber-insurance, is built into CyberHoot’s product and virtual Chief Information Security Officer services. With CyberHoot you can govern, train, assess, and test your employees. Visit CyberHoot.com and sign up for our services today. At the very least continue to learn by enrolling in our monthly Cybersecurity newsletters to stay on top of current cybersecurity updates.
CyberHoot does have some other resources available for your use. Below are links to all of our resources, feel free to check them out whenever you like:
- Cybrary (Cyber Library)
- Press Releases
- Instructional Videos (HowTo) – very helpful for our SuperUsers!
Note: If you’d like to subscribe to our newsletter, visit any link above (besides infographics) and enter your email address on the right-hand side of the page, and click ‘Send Me Newsletters’. | <urn:uuid:05e94cbf-b042-43ca-9844-0499ddfab912> | CC-MAIN-2022-40 | https://cyberhoot.com/cybrary/randomization-function/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00343.warc.gz | en | 0.920143 | 877 | 3.28125 | 3 |
Health Benefits of Basil (Tulsi)
The tulsi or holy basil is an important symbol in the Hindu religious tradition and is worshipped in the morning and evening by Hindus at large. The holy basil is also a herbal remedy for a lot of common ailments. Basil is rich in vitamin K, the vitamin that promotes blood clotting. It is also thought to have anti-oxidant properties. Here are some of the medicinal uses and health benefits of holy basil (tulsi).
Health Benefits of Basil (Tulsi)
The health benefits of holy basil or tulsi include oral care, relief from respiratory disorders, fever, asthma, lung disorders, heart diseases and stress. Holy Basil (scientific name is Ocimum Sanctum) or Tulsi is undoubtedly the best medicinal herb ever known. It has endless miraculous and medicinal values and is being worshipped in India since thousands of years. Even going closer to a Tulsi plant alone can protect you from many infections. A few leaves dropped in drinking water or food-stuff can purify it and can kill germs in it. Even smelling it or keeping it planted in a pot indoors can protect the whole family from infections, cough and cold and other viral infections.
Holy Basil is so good for boosting up the immune system that cannot be explained in words. It protects from nearly all sorts of infections from viruses, bacteria, fungi and protozoa. Recent studies show that it is also helpful in inhibiting growth HIV and carcinogenic cells.
- Healing Power: The tulsi plant has many medicinal properties. The leaves are a nerve tonic and also sharpen memory. They promote the removal of catarrhal matter and phlegm from the bronchial tube. The leaves strengthen the stomach and induce copious perspiration. The seeds of the plant are mucilaginous.
- Fever & Common Cold: The leaves of basil are specific for many fevers. During the rainy season, when malaria and dengue fever are widely prevalent, tender leaves, boiled with tea, act as a preventive against these diseases. In case of acute fevers, a decoction of the leaves boiled with powdered cardamom in half a liter of water, mixed with sugar and milk, brings down the temperature. The juice of tulsi leaves can be used to bring down fever. Extract of tulsi leaves in fresh water should be given every 2 to 3 hours. In between, one can keep giving sips of cold water. In children, it is very effective in bringing down the temperature.
- Coughs: Tulsi is an important constituent of many Ayurvedic cough syrups and expectorants. It helps to mobilize mucus in bronchitis and asthma. Chewing tulsi leaves relieves cold and flu.
- Sore Throat: Water boiled with basil leaves can be taken as a drink in case of sore throat. This water can also be used as a gargle.
- Respiratory Disorder: The herb is useful in the treatment of respiratory system disorders. A decoction of the leaves, with honey and ginger, is an effective remedy for bronchitis, asthma, influenza, cough, and cold. A decoction of the leaves, cloves, and common salt also gives immediate relief in case of influenza. The leaves should be boiled in half a liter of water till only half the water is left, and then taken.
- Kidney Stone: Basil has a strengthening effect on the kidneys. In case of renal stones, the juice of basil leaves and honey, taken regularly for 6 months, will expel them via the urinary tract.
- Heart Disorder: Basil has a beneficial effect in cardiac disease and the weakness resulting from it. It reduces the level of blood cholesterol.
- Children’s Ailments: Common pediatric problems like cough, cold, fever, diarrhea, and vomiting respond favorably to the juice of basil leaves. If pustules of chicken pox delay their appearance, basil leaves taken with saffron will hasten them.
- Stress: Basil leaves are regarded as an ‘adaptogen’ or anti-stress agent. Recent studies have shown that the leaves afford significant protection against stress. Even healthy persons can chew 12 leaves of basil, twice a day, to prevent stress. Basil purifies the blood and helps prevent several common ailments.
- Mouth Infections: The leaves are quite effective for ulcers and infections in the mouth. A few leaves chewed will cure these conditions.
- Insect Bites: The herb is a prophylactic (preventive) and curative for insect stings or bites. A teaspoonful of the juice of the leaves is taken and repeated after a few hours. Fresh juice must also be applied to the affected parts. A paste of fresh roots is also effective in case of bites of insects and leeches.
- Skin Disorders: Applied locally, basil juice is beneficial in the treatment of ringworm and other skin diseases. It has also been tried successfully by some naturopaths in the treatment of leucoderma.
- Teeth Disorder: The herb is useful in teeth disorders. Its leaves, dried in the sun and powdered, can be used for brushing teeth. They can also be mixed with mustard oil to make a paste and used as toothpaste. This is very good for maintaining dental health, counteracting bad breath, and massaging the gums. It is also useful in pyorrhea and other teeth disorders.
- Headaches: Basil makes a good medicine for headache. A decoction of the leaves can be given for this disorder. Pounded leaves mixed with sandalwood paste can also be applied on the forehead for relief from heat and headache, and for providing coolness in general.
- Eye Disorders: Basil juice is an effective remedy for sore eyes and night-blindness, which is generally caused by a deficiency of vitamin A. Two drops of black basil juice are put into the eyes daily at bedtime.
How to Store Fresh Basil
The key to keeping basil fresh and fragrant for days (and even weeks) after purchase or harvest, is to not store it in the refrigerator. Basil leaves quickly turn black and slimy and lose their signature spicy sweet flavor when refrigerated. A better way to store them is in a jar of water on your kitchen counter top. Here’s what you need to do in order to keep stored basil fresh.
- Fill a short jar with 3 or 4 inches of tap water.
- When harvesting basil from your garden, try to harvest longer stems (rather than pinching off a few leaves). Bring the basil indoors and immediately stick the stems into the jar of water, making sure to add more water to the jar if the end of each stem is not submerged.
- If you purchase fresh basil from the grocery store, remove it from its packaging. Then, trim the ends of the basil’s stems and place them into the jar of water (this increases the basil’s ability to take up water).
- Place the jar in a cool place out of direct sunlight. Don’t worry if the basil droops at first; it should perk right up after about 12 hours. Change the water in the jar daily. When stored this way, basil will stay fresh for weeks. In fact, if you leave the stems in water they will eventually root and you can replant them in a pot or out in the garden. | <urn:uuid:2894c091-7b3b-4988-a10a-005e43c68319> | CC-MAIN-2022-40 | https://www.knowledgepublisher.com/article/321/health-benefits-of-basil-tulsi.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00343.warc.gz | en | 0.945431 | 2,214 | 2.890625 | 3 |
By: Sidharth Quamara
Databases and Transaction-Processing Systems (TPSs) are among the top-notch inventions in the field of Computer Science and are utilized by millions of people from all occupations across the globe. We may be software engineers, scientists, administrators in a corporate organization, banking professionals, or members of law enforcement teams; all of us directly or indirectly take advantage of these systems for numerous purposes. Whether we need to book a flight ticket, borrow a book from a university library, or transfer money online from our bank account to someone else's, we need databases and TPSs. Hence, we can take the liberty to mention that in the 21st century, we cannot imagine our lives without databases and TPSs. TPSs provide an execution environment that enables the processing of transactions to support business operations, and storage and access of both the transactions and their results in the databases, as depicted in Fig. 1 below. The basic concept behind transaction-processing databases is that they are specifically designed to optimize transaction-processing performance and may be physically distributed across many computers.
However, despite all of their advantages and usefulness, one of the major drawbacks associated with conventional databases and transaction-processing technologies is that these systems tend to be centralized in their core architecture. This, in turn, may create bottlenecks and a single point of failure, and affect the reliability of the whole system. For example, suppose a person urgently needs to book a next-day flight ticket and, in order to get it, must make an online booking. However, for some reason, the banking servers are down that day. Although the person may have enough money in his account and there may be a sufficient number of available tickets for the concerned destination, he cannot complete the booking because of the unavailability of online banking services. Another major drawback of conventional database systems is that the entities owning them, like banks or other financial institutions, must be completely trustworthy, which cannot always be guaranteed. The third major limitation of conventional database systems is that, being a single point or cluster of service, they make an easy target for security attacks like Distributed Denial of Service (DDoS). In the event that a system encounters a failure, the backup-based failure recovery mechanisms cannot be invoked to prevent data loss.
To overcome all the above-mentioned limitations associated with conventional database systems and TPSs, an unknown person or group under the pseudonym Satoshi Nakamoto put forward the concept of Bitcoin and Blockchain. Although Nakamoto and the Bitcoin technology are credited with bringing the concept of Blockchain into practical reality, the conceptual foundations of the technology were laid in 1979 by cryptographer David Chaum, in his thesis titled “Computer Systems Established, Maintained, and Trusted by Mutually Suspicious Groups”. The features of Blockchain, such as inherent resistance towards data modification, can be combined with those of distributed databases (e.g., enhanced query speed) and TPSs to get the best of both worlds and serve business needs. As a result, auditing and accounting-related activities can be conducted with consistent monitoring while preventing fraud. This promises to bring a wave of transformation to the existing information ecosystem and business operations.
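The "inherent resistance towards data modification" mentioned above comes from hash chaining: each block stores the hash of its predecessor, so altering any historical record invalidates every later link. The following self-contained Python sketch is our own illustration of the idea, not a production design:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (which include the previous block's hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Link a new block to the current tip of the chain."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def chain_is_valid(chain):
    """Re-derive every link; tampering anywhere breaks a later prev_hash."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 10}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 4}])
print(chain_is_valid(chain))            # True

chain[0]["transactions"][0]["amount"] = 1_000_000   # tamper with history
print(chain_is_valid(chain))            # False
```

Real systems add consensus, signatures, and Merkle trees on top, but the tamper-evidence shown here is the core property that databases alone do not provide.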
Despite this technology conceptually emerging at a rapid pace with its anticipated potential to transform the existing systems targeting various applications, the prospective realization of Blockchain in integration with conventional databases and TPSs suffers from the following challenges that demand the conceptual notions to be revisited –
- Scalability: Blockchain imposes the requirement of storing and keeping available past transactions for validating new ones. Consequently, the number of transactions processed per second is limited, and the resulting systems become impractical in terms of the threshold of records and the size of the block, chain, and network. Arguably, this makes scalability a concern for the sustainable adoption of the technology.
- Privacy: Rather than relying on real identities, Blockchain facilitates transaction execution via generated addresses, thereby claiming to ensure privacy for its users. However, research shows that the visibility of the public key across the network peers makes this technology vulnerable in terms of transactional privacy [6, 7], which indeed is crucial for applications involving the exchange of financial or medical data.
- Regulations: The inherent feature of decentralization is one of the reasons for the near-absence of regulations concerning Blockchain-related activities. The sensitivity of the fields of operation (e.g., financial) raises transversal challenges and questions among regulatory bodies that are yet to be addressed.
- Interoperability: The lack of benchmarking and standard mechanisms for integrating Blockchain-based solutions across otherwise isolated database providers poses barriers to cohesive data sharing and interaction. In addition, there is an ongoing debate about how to address the trade-off between transparency of operations and confidentiality of information in interoperable Blockchain-based systems.
- Lewis, P. M., Bernstein, A., & Kifer, M. (2002). Databases and transaction processing: an application-oriented approach. ACM SIGMOD Record, 31(1), 74-75.
- Nakamoto, Satoshi. “Bitcoin: A peer-to-peer electronic cash system.” Decentralized Business Review (2008): 21260.
- D. L. Chaum, “Computer Systems Established, Maintained, and Trusted by Mutually Suspicious Groups,” Electronics Research Laboratory, University of California, 1979.
- Muzammal, M., Qu, Q., & Nasrulin, B. (2019). Renovating blockchain with distributed databases: An open source system. Future generation computer systems, 90, 105-117.
- Yang, W., Garg, S., Raza, A., Herbert, D., & Kang, B. (2018, August). Blockchain: Trends and future. In Pacific Rim Knowledge Acquisition Workshop (pp. 201-210). Springer, Cham.
- Monrat, A. A., Schelén, O., & Andersson, K. (2019). A survey of blockchain from the perspectives of applications, challenges, and opportunities. IEEE Access, 7, 117134-117151.
- Deshpande, A., Stewart, K., Lepetit, L., & Gunashekar, S. (2017). Distributed Ledger Technologies/Blockchain: Challenges, opportunities and the prospects for standards. Overview report The British Standards Institution (BSI), 40, 40.
- Cermeño, J. S. (2016). Blockchain in financial services: Regulatory landscape and future challenges for its commercial application. BBVA Research Paper, 16, 20.
- Wang, Y., & Kogan, A. (2018). Designing confidentiality-preserving Blockchain-based transaction processing systems. International Journal of Accounting Information Systems, 30, 1-18.
Cite this article as
Sidharth Quamara (2021), Creating Impact on Distributed Databases and Transaction Processing Systems with Blockchain: Benefits and Implications, Insights2Techinfo, pp. 1.
FAQ on this topic
Transaction processing systems provide an execution environment that enables the processing of transactions to support business operations, as well as the storage of and access to both the transactions and their results in databases.
• Batch Processing: In this process, transactions are collected and updated in batches. This was the most common method in the past because of the lack of real-time processing capabilities of systems.
• Real-Time Processing: In this type of processing, a large number of users can work on the same data in parallel; most transaction systems developed today use real-time processing.
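The two processing modes above can be contrasted with a short Python sketch; the account ledger and the transactions are invented purely for illustration:

```python
# Toy contrast between batch and real-time transaction processing.
# The ledger and the transaction list are made up for this example.

ledger = {"alice": 100, "bob": 50}
transactions = [("alice", -30), ("bob", +30), ("alice", -10)]

def apply_batch(ledger, batch):
    """Batch processing: collect transactions first, then apply them
    all at once (historically, e.g., in an end-of-day run)."""
    for account, delta in batch:
        ledger[account] += delta
    return ledger

def apply_realtime(ledger, account, delta):
    """Real-time processing: apply each transaction as it arrives."""
    ledger[account] += delta
    return ledger

batch_result = apply_batch(dict(ledger), transactions)

realtime_result = dict(ledger)
for account, delta in transactions:
    apply_realtime(realtime_result, account, delta)

# Both modes end at the same state; they differ in *when* updates happen.
assert batch_result == realtime_result == {"alice": 60, "bob": 80}
```

In a real system, the batch variant would typically wrap its loop in a single database commit, while the real-time variant commits each transaction individually.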
Input systems, processing units, storage devices, and output units are the four major components of transaction processing systems.
The evolution of ransomware, dating back as far as 1989, has snowballed along with the development of technology to become a very profitable business for hackers. It is easier than traditional hacking, more efficient and allows them to cash in quickly and with minimal complications. Attackers can breach a vulnerable system through a back door and encrypt its data, which effectively shuts out users and prevents them from accessing any records or documents. A message demanding money then appears on the screen, a time limit is set and a countdown begins. The element of time implants a sense of urgency in the victim, compelling him/her to pay the ransom as quickly as possible. All this can be done from the comfort of a hacker's lair far away and almost impossible to track.
The reason why attackers are so successful in breaching electronic devices is because of the perceived distance people put between themselves and the possibility of such a breach ever happening to them. The belief that a ransomware attack is far-fetched is the very reason why ransomware is so lucrative. Putting your guard down is the vulnerability that hackers are waiting for. Below is a list of best practices that can help you protect and secure yourself effectively:
Back up important data
A hard lesson that many organizations learn after a ransomware attack is the importance of backing up their data and taking the necessary precautionary steps to secure their systems. Safety measures such as offline backups are often neglected, resulting in breaches that could have been avoided. Money, time, and effort are then wasted on repairs and compensation, and the company's productivity plummets. Keeping offline backups of all the data an organization runs on is an essential practice. That way, in the event of an attack, the company can continue to function with minimal losses.
Update your software periodically
Outdated software creates vulnerabilities in an organization's system leaving it open and exposed to attack. Most software providers distribute newer updated versions of their products periodically. Software and system updates not only introduce better features but they also patch up any holes and back doors, effectively shutting out potential attackers. It protects the company by destroying any opportunities that hackers may have of breaching the system.
Disconnect infected devices immediately
The seconds immediately after a ransomware attack hits determine the fate of the organization. The infected computer or device must be immediately shut down and disconnected from any attached wireless and hardware devices. Disconnecting as quickly as possible cuts off the malware and prevents it from infecting the rest of the company's devices. It also gives the incident response team an opportunity to assess the damage done to the organization's equipment. A precautionary measure that will also minimize damage is the physical separation of one network from another. Doing this will protect other devices from the ransomware and limit the infection to the computers sharing the same network.
Train employees in cyber security awareness
This is important with regard to all forms of cyber breaches. Threat mitigation and management is an essential aspect of employee training. How your team responds to an incident will determine the extent of the damage done to the organization. It is important that employees are trained in incident response so that they can react quickly and in an orderly manner, keeping panic and damage to a minimum.
Upon completion of this chapter, you will be able to answer the following questions:
How do networks affect the way we interact, learn, work, and play?
What ways can host devices be used as clients, servers, or both?
How are network devices used?
What are the differences between LAN and WAN devices?
What are the differences between LAN and WAN topologies?
What is the basic structure of the Internet?
How do LANs and WANs interconnect to the Internet?
What is a converged network?
What are the four basic requirements of a converged network?
How do trends such as BYOD, online collaboration, video, and cloud computing change the way we interact?
How are networking technologies changing the home environment?
What are some basic security threats and solutions for both small and large networks?
Why is it important to understand the switching and routing infrastructure of a network? | <urn:uuid:7ebc77fc-a5be-4f61-8156-e4e64d3b9a31> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=2755711&seqNum=5 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00343.warc.gz | en | 0.927478 | 195 | 3.953125 | 4 |
What does your phone know about you? What about your email or your browser? What can strangers—or scammers—find out about you with a quick search?
This is called your digital footprint, and for the security- or data-conscious consumer, this is old news. What might not be old news are the many ways to be aware of, change, and erase parts of your personal and professional data footprint.
Keep Your Employees Informed
Some parts of your digital footprint are visible to everyone. Think about what appears when you run your name through a search engine. Some information is public and accessible to someone willing to dig. This might involve cross-referencing screen names, email addresses, and photos. Other aspects of your footprint are locked within a service such as a search engine, social media account, or browser. The risk in this part of your footprint lies in how an app or service uses your data and if that data is susceptible to breach.
There is a lot you can take control of on your own with a few clicks, if you know what to look for. The information below isn’t meant as an all-encompassing guide. For questions connected to your specific technological setup, you’ll need to contact your IT support provider.
Phones and Tablets
Our phones have become our constant companions, connecting us to so many of the ways we interact with the world. Most Americans use Apple iOS or Android devices, and there are a lot of ways to tweak digital footprints on these devices, but for those dedicated to security, there are other options.
A good general rule across all operating systems is to disable Bluetooth and Wi-Fi connectivity unless you are actively using them. Bluetooth can be used to query your device’s location and even sneak malware right under your nose. Never connect to unsecured Wi-Fi networks, and definitely don’t access sensitive information over those connections. Avoid using Wi-Fi provided by companies or organizations you aren’t familiar with, even though it can be tempting to check your email over lunch.
Take a close look at the permissions you’re giving to each app on your phone. Apple’s most recent updates are making this easier by directly stating the permissions for each app and allowing for granular control. Does your favorite mobile game need access to your camera or photos? Probably not! And if the app doesn’t function without that access, it is time to find an alternative app.
Android phones also make it clear what permissions you’re granting to a given app when it’s installed. You can also check per-app, and then delete or modify those permissions if necessary.
The latest headline in mobile security issues involves zero-click hacks of iPhones. There’s nothing security-conscious users can do at the moment, aside from noting any bizarre behaviors and continuing to exercise caution regarding sensitive information that is stored on or accessed by a mobile device. But this venue of attack seems to be on the rise. Installing OS updates as they roll out may be an effective deterrent to these attacks.
Divide and conquer. Designate separate emails accounts for separate purposes and don’t cross the streams. Don’t mix work and personal accounts, despite how tempting it may be! These two accounts are often approached with different security considerations and different contact lists. Beyond data gathering by email clients, email itself can increase risk to all of your cyber connections due to the abundance of phishing emails.
Your personal email can often be a heavier load on your digital footprint than your professional account. It is only human to occasionally let the security vigilance expected at work lapse during off hours.
Google has made clear they plan to roll out new privacy measures soon. These options will not only allow users to turn off features like smart reply but also to opt out of allowing their usage data to feed the algorithm used to make these features stronger.
While we wait for these changes to roll out, take a look at the privacy controls that already exist. If your Gmail account is tied to a Chrome browser login, those privacy controls can seriously impact the ads you see, the history that is logged, and what information is tied to your account for Google’s services. It may be wise to log out of your Google account before using services like the search engine or Google Maps.
Any account you log into can allow parties to track your browsing history. Check the settings of your email, social media, and even browser extensions before remaining logged in while browsing the web.
If you’ve been receiving “Your Daily Briefing” from Cortana and feel uncomfortable about your emails being read by AI, rest assured security is still in mind. According to Microsoft, Cortana meets the same rigorous security standards of Outlook itself. Information for these emails is stored only in that specific user’s mailbox. Cortana data is never reviewed by humans unless specifically requested by the person who owns that data. If the service isn’t helpful or continues to make you feel unnerved, it’s easy to unsubscribe from the emails, and even turn off Cortana’s search assistance in other aspects of your Microsoft account.
Regardless of what email service you use for personal or enterprise use, make sure that passwords meet best practices. Check Have I Been Pwned to see if previous (or current) accounts and passwords have been disclosed in any data breaches. Use different passwords for different email accounts, and don’t use those same passwords on other accounts or services.
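Have I Been Pwned's password check is built on a k-anonymity range query: the client sends only the first five hex characters of the password's SHA-1 hash and matches the returned suffixes locally, so the password itself never leaves your machine. A minimal sketch of the client-side computation (the network call is left as a comment):

```python
import hashlib

def hibp_range_query(password: str):
    """Split a password's SHA-1 hash into the 5-character prefix sent
    to the API and the suffix that is matched locally against the
    server's response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # A real client would now GET
    #   https://api.pwnedpasswords.com/range/<prefix>
    # and search the returned list for `suffix`.
    return prefix, suffix

prefix, suffix = hibp_range_query("password")
print(prefix)  # 5BAA6 -- only this much is ever transmitted
```

Matching the suffix locally is what preserves anonymity: the server only ever learns that your hash starts with one of roughly a million possible prefixes, never the hash itself.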
Facebook is an incredible example of the sheer amount of data we hand over in exchange for free services. It is somewhat unique in the massive scope and importance Facebook places on finding new ways to gather and profit from your data.
The most basic setting you should consider is whether your profile is public or “friends only.” Who can post on your wall, tag you, search for you, or add you as a friend? Once you lock down your account, or at least continue with the knowledge of these settings, it is time to set aside an hour or so to really dive into Facebook’s settings and marvel at the apps and sites you’ve (often unknowingly) given access to, the profile of information Facebook has gathered on you based on your activity, and the browser data Facebook collects while you’re logged in.
Explore your Settings & Privacy, and drill down into each aspect, including Ads shown off Facebook and the tracking of your Off Facebook history. Consider designating a Legacy Contact—someone who will gain control of your account if something happens to you.
There are a lot of options to explore, and your decisions about these options will differ from everyone else’s, but do take the time to review them.
What information is required just to sign up? Has the platform had data breaches in the past? If paid, what organization is receiving your money? If free, what data and tracking are you giving away in exchange for using the service? Can you adjust who can see the content you post? In the Terms of Service, does the platform reveal that they claim ownership of everything posted there?
It is a good idea to know how much history and website data your browser holds at any given time. Using Private Browsing, Incognito, or similar private windows can help to control the flow of information, and each browser offers some degree of control over what data and how much of it is saved.
Safari
In Safari, the Privacy & Security section has an option to prevent cross-site tracking, which will block those annoying re-marketing ads from sites you visit but don't buy from. Help yourself identify shady websites by turning on Fraudulent Website Warning.
Chrome
Your Chrome browser is most likely tied to your Google account. One benefit is that all of the tracking, ad settings, and user profile data is in one place. However, Google, Gmail, and Chrome default to a significant number of trackers, build detailed user profiles, and allow for tailored ads. With settings reviewed and extensions restricted, Chrome can be a powerful and safe browser for those watching their digital footprint, but out of the box it probably knows more about you than you'd like.
Edge & Firefox
These browsers come with default settings that block many trackers and ads, making them recommended by many security professionals.
Many of the less popular (in terms of sheer number of users) browsers do offer a stricter, more security-conscious approach to browsing the web. Always take the time to review the privacy and security settings for whichever browser you use, whether on your computer or mobile device, and whether for casual or professional use.
Advertising & Other Tips
Adding an ad-blocking extension is the only way to truly eliminate advertising in your digital life, but know that it can reduce functionality on some sites. Many sites cover costs with advertising and may not be accessible while an ad blocker is in use. Be careful to choose known and trusted developers for these extensions. Malware can come disguised as legitimate plugins and extensions. Even if a program isn't malware, you are still allowing any extension you add to view your data. You may be giving up some privacy in exchange for the service, so weigh the benefits before adding an ad blocker.
Safari, Firefox, and Brave browsers all alert users when websites are using trackers. Some trackers are used to boost web performance. Others are intended for serving ads and could even be seen as invasive depending on how you feel about privacy.
Search for Yourself
While you are taking control of the information that browsers, email clients, and trackers gather about you, it’s important that you don’t forget about the information you share willingly, now or in the past. In a variety of search engines, take a moment to search for your name, any previous names or aliases, and even details like your phone number or address. Seeing the amount of detailed information available publicly online—much of which that you didn’t choose to share—can be frustrating.
If searches result in expired accounts, regain access and modify or delete the account. If a search reveals information that you want deleted—perhaps a youthful blunder or something you wrote that you no longer believe—you can query the hosting site and ask for removal. This can have mixed results, or often none at all, so when you spot something you can’t get rid of, focus on providing real and accurate information where you can. Update your LinkedIn profile, or create a simple website that identifies who you are and what you stand for. Don’t address or bring up other less flattering search results unless asked directly about them.
In the worst-case scenario, something pervasive is muddling your entire digital footprint. In this case, using a reputation or deletion service is understandable, but still may not be able to provide perfect results.
The Bottom Line
Shoshana Zuboff, author of The Age of Surveillance Capitalism identifies people, and our behavior, as the fodder tech feeds on.
“Businesses want to know whether to sell us a mortgage, insurance, what to charge us, do we drive safely? They want to know the maximum they can extract from us in an exchange. They want to know how we will behave in order to know how to best intervene in our behaviour,” she says, in an interview with The Guardian.
Users of technology, social media, and Internet of Things devices need to understand that, while our digital footprint can be adjusted, our data is, according to Zuboff, the primary currency.
Does this mean that you need to throw away your phone, your Fitbit, your computer in order to maintain your privacy? That is going to depend on the way you feel about the exchange of data for service.
The push and pull of privacy vs. convenience and connection is not going away any time soon.
Overall, the process of managing your digital footprint can be time consuming, and even costly, especially if you are starting the process for the first time. For the majority of users, the quicker process of toggling settings and hitting unsubscribe may be enough to satisfy the privacy itch until the next update or news story. But for the truly security conscious, it may be worthwhile to contact your IT support provider for additional tips specific to your situation.
Many DBAs seem to have difficulty understanding exactly what clustering is. Following is a good working definition:
Microsoft Windows Failover Clustering is a high-availability option designed to increase the uptime of SQL Server instances. A cluster includes two or more physical servers, called nodes; identical configuration is recommended. One is identified as the active node, on which a SQL Server instance runs the production workload, and the other is a passive node, on which SQL Server is installed but not running. If the SQL Server instance on the active node fails, the passive node becomes the active node and begins to run the production workload, with some minimal failover downtime. Additionally, you can deploy a Windows Failover Cluster with both nodes active, meaning each node runs a different SQL Server instance and any instance can fail over to the other node.
This definition is straightforward, but it has a lot of unclear implications, which is where many clustering misunderstandings arise. One of the best ways to more fully understand what clustering can and cannot do is to drill down into the details.
What Clustering Can Do
Clustering is designed to improve the availability of the physical server hardware, operating system, and SQL Server instances, but not of the shared storage. Should any of these components fail, the SQL Server instance fails over: the other node in the cluster automatically takes over the failed instance to keep downtime to a minimum.
Additionally, the use of a Windows Failover Cluster can help reduce downtime when you perform maintenance on cluster nodes. For example, if you need to update hardware on a physical server or install a new service pack on the operating system, you can do so one node at a time. To do so, follow these steps:
1. First, you upgrade the passive node that is not running a SQL Server instance.
2. Next, manually failover from the active node to the now upgraded node, which becomes the active node.
3. Then upgrade the currently passive node.
4. After it is upgraded, if you choose, you can fail back to the original node. This cluster feature helps to reduce the overall downtime caused by upgrades.
When performing an upgrade, ensure that you do not manually fail over to a node that has not yet been upgraded, because mismatched binaries would cause instability.
A Windows 2003 Failover Cluster cannot be upgraded to a Windows 2008 Failover Cluster because architecturally the two versions are different. Instead, create a Windows 2008 Failover Cluster and migrate the databases.
What Clustering Cannot Do
The list of what clustering cannot do is much longer than the list of what it can do, and this is where the misunderstandings start for many people. Clustering is just one part of many important and required pieces in a puzzle to ensure high availability. Other aspects of high availability, such as ensuring redundancy in all hardware components, are just as important. Without hardware redundancy, the most sophisticated cluster solution in the world can fail. If all the pieces of that puzzle are not in place, spending a lot of money on clustering may not be a good investment. The section “Getting Prepared for Clustering” discusses this in further detail.
Some DBAs believe that clustering can reduce downtime to zero. This is not the case; clustering can mitigate downtime, but it can’t eliminate it. For example, the failover itself causes an outage lasting from seconds to a few minutes while the SQL Server services are stopped on one node then started on the other node and database recovery is performed.
Nor is clustering designed to intrinsically protect data, because the shared storage is a single point of failure in a cluster. This is a great surprise to many DBAs. Data must be protected using other options, such as backups, log shipping, or disk mirroring. In actuality, the same database drives are shared by all servers in the cluster (albeit mounted by only one node at a time), so corruption would carry over from one node to the others.
Clustering is not a solution for load balancing either. Load balancing is when many servers act as one, spreading your load across several servers simultaneously. Many DBAs, especially those who work for large commercial websites, may think that clustering provides load balancing between the cluster nodes. This is not the case; clustering helps improve only uptime of SQL Server instances. If you need load balancing, then you must look for a different solution. A possibility might be Peer-to-Peer Transactional Replication.
Clustering requires the Enterprise or Datacenter edition of the Windows operating system and the Standard, Enterprise, or BI edition of SQL Server. These can get expensive, and many organizations may not be able to cost-justify the expense. Clustering is usually deployed within the confines of a data center but can be used over geographic distances (geoclusters). To implement a geocluster, work with your storage vendor to synchronize the disk arrays across the geographic distance. SQL Server 2012 also supports another option, multi-site clustering across subnets; the same-subnet restriction was eliminated with that release.
Clustering requires experienced DBAs to be highly trained in hardware and software, and DBAs with clustering experience command higher salaries.
Although SQL Server is cluster-aware, not all client applications that use SQL Server are cluster-aware. For example, even if the failover of a SQL Server instance is relatively seamless, a client application may not have the reconnect logic. Applications without reconnect logic require that users exit and then restart the client application after the SQL Server instance has failed over, then users may lose any data displayed on their current screen.
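Reconnect logic usually amounts to a retry loop with backoff around the driver's connect call. The sketch below is illustrative Python; `connect_to_sql_server` is a hypothetical stand-in for whatever driver call (for example, `pyodbc.connect`) your application actually uses, and here it simply fails twice to simulate a failover in progress:

```python
import itertools
import time

_attempts = itertools.count(1)

def connect_to_sql_server():
    """Hypothetical stand-in for a real driver call such as
    pyodbc.connect(...). Fails twice to simulate a failover in
    progress, then succeeds."""
    if next(_attempts) < 3:
        raise ConnectionRefusedError("instance is failing over")
    return "connection"

def connect_with_retry(attempts=5, delay=0.5, backoff=2.0):
    """Retry while the cluster fails over, waiting a little longer
    between attempts (exponential backoff)."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect_to_sql_server()
        except OSError as exc:  # ConnectionRefusedError is an OSError
            last_error = exc
            time.sleep(delay)
            delay *= backoff
    raise RuntimeError("could not reconnect after failover") from last_error

conn = connect_with_retry(delay=0.01)
print(conn)  # the third attempt succeeds once the "failover" completes
```

A production version would also need to re-run any in-flight transaction after reconnecting, since work rolled back during failover is lost, and, per the caution above, refresh any data the user had on screen.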
Choosing SQL Server 2012 Clustering for the Right Reasons
When it comes right down to it, the reason for clustering SQL Server is to improve the high availability of whole SQL Server instances, which includes all user and system databases, logins, and SQL Server Agent jobs. This justification makes sense only if the following are true:
- You have experienced DBA staff to install, configure, and administer a clustered SQL Server.
- The cost (and pain) resulting from downtime is more than the cost of purchasing the cluster hardware and software and maintaining it over time.
- You have in place the capability to provide redundancy for your storage. Remember that clusters don't protect data.
- For a geographically dispersed cluster across remote data centers, you have a Microsoft certified third-party hardware and software solution.
- You have in place all the necessary peripherals required to support a highly available server environment (for example, backup power and so on).
If all these things are true, your organization is a good candidate for installing a clustered SQL Server, and you should proceed; but if your organization doesn’t meet these criteria, and you are not willing to implement them, you would probably be better with an alternative, high-availability option, such as one of those discussed next. | <urn:uuid:5df0b51e-ae66-4f21-8968-c06da734ab58> | CC-MAIN-2022-40 | https://logicalread.com/what-sql-server-clustering-can-and-cannot-do-w02/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00343.warc.gz | en | 0.916058 | 1,444 | 2.71875 | 3 |
March 26, 2015
Today, the Biological Weapons Convention (BWC)—the first treaty to ban an entire class of weapons—marks the 40th anniversary of its entry into force. Reflections on this milestone will examine the BWC’s successes and travails, such as its ratification by 173 countries, its lack of a verification mechanism, and what the future holds. Although not prominent in these discussions, the BWC relates to cybersecurity in two ways. First, the BWC is often seen as a model for regulating dual-use cyber technologies because the treaty attempts to advance scientific progress while preventing its exploitation for hostile purposes. Second, the biological sciences’ increasing dependence on information technologies makes cybersecurity a growing risk and, thus, a threat to BWC objectives.
The BWC as a Model for Cybersecurity
The BWC addresses a dual-use technology with many applications, including the potential to be weaponized. Similarly, cyber technologies have productive uses that could be imperiled with the development of cyber weapons. Those concerned about cyber weapons often turn to the BWC for guidance because of characteristics biology shares with cyber—the thin line between research and weaponization, the global dissemination of technologies and know-how, the tremendous benefits of peaceful research, and the need to adapt to new threats created by scientific and political change. | <urn:uuid:cda9a115-0016-4263-b146-91b708275b09> | CC-MAIN-2022-40 | https://www.cybersecurity-review.com/the-relationship-between-the-biological-weapons-convention-and-cybersecurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00343.warc.gz | en | 0.950346 | 269 | 2.703125 | 3 |
Would you consider giving up personal data in exchange for a third-party app to show you what your ninety-year-old self will look like? The rise of social media brought lots of joy to consumers' lives, allowing users to get glimpses of their future selves, to find out their horoscopes, and to communicate with others across the globe. All of this personal data, however, requires companies to take great care in its handling – and recent events have proved that social media doesn't have a strong grip on consumer privacy. With national crises like the Edward Snowden revelations and the Cambridge Analytica scandal, people are skeptical when it comes to using social media.
Consumers want to know that their data is protected – and social media apps are finally recognizing this demand and implementing End-to-End Encryption (E2EE) strategies that give their users peace of mind. However, this implementation has the potential downside of making it much more difficult to monitor online criminal activity. Read more below about what social media encryption strategies are and why they're becoming more popular.
What is E2EE and Why is it Important?
E2EE is a form of secure communication where messages are encrypted all the way from the sending user to the receiving user so third parties have no way of accessing the plaintext data. The message is completely encrypted while in transit and while being received, and the receiver is the only one that can decrypt the message with special cryptographic keys.
E2EE is much stronger than other types of standard encryption because no other entity, including the service provider, has the capacity to decrypt the data. E2EE utilizes public key encryption, also known as asymmetric encryption, with the private cryptographic keys stored solely at the endpoints of the communication. Public key encryption is much stronger here than private key, or symmetric, encryption because the private decryption key is never transmitted over the network; only the public key is shared. With symmetric encryption, the same secret key must be distributed to and used at every endpoint, making it much more vulnerable to unwanted third-party access.
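The idea that only the endpoints ever hold the decrypting material can be illustrated with textbook Diffie-Hellman key agreement, a building block of many E2EE protocols. Real systems such as the Signal protocol use elliptic-curve variants with far larger parameters; the numbers below are toy-sized on purpose:

```python
# Toy Diffie-Hellman key agreement with textbook-sized numbers.
# Each party keeps a private key; only public values cross the network.
p, g = 23, 5             # public parameters (real systems use ~2048-bit p)

alice_private = 6        # never transmitted
bob_private = 15         # never transmitted

alice_public = pow(g, alice_private, p)   # sent over the network
bob_public = pow(g, bob_private, p)       # sent over the network

# Each endpoint derives the same shared secret independently.
alice_secret = pow(bob_public, alice_private, p)
bob_secret = pow(alice_public, bob_private, p)

assert alice_secret == bob_secret == 2
```

An eavesdropper who records p, g, and both public values still faces the discrete logarithm problem, while each endpoint derives the shared secret (here, 2) from its own never-transmitted private key.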
E2EE is important because it keeps your data private and protects it from hackers. Even if the communication server gets compromised, your data is kept secure because no one possesses the decryption keys except you. A strong encryption strategy is especially important in today’s technological climate because of the amount of data breaches that occur on an almost daily basis.
Adopting an E2EE strategy makes personal data secure and builds consumer trust – something that has proven very difficult to earn and keep. Let’s dive into why E2EE strategies are so vital and why encrypted social media is all the hype right now:
Why Social Media Encryption is a Growing Demand
When Edward Snowden leaked information about the government collecting Verizon user data, people responded by changing the way they use technology to communicate. Some even switched their entire channels of communication.
According to the Pew Research Center, out of the 87% of adults who had heard of government surveillance programs, 18% changed their email usage, 15% changed their social media usage, and 13% changed the way they use their mobile apps. These numbers represent a significant shift in public attitudes toward social media privacy. Public concern over surveillance and privacy increased further following the Cambridge Analytica scandal. With the data of millions of Facebook users exposed, scores of social media sites felt the ripple effect and became untrustworthy in the eyes of consumers.
Another Pew Research Center survey revealed that the public has astonishingly low confidence in the ability of service providers to protect their data. Only 27% of tech users feel very confident that their cellphone manufacturers and credit card companies securely protect their data, and just 20% feel their email providers are trustworthy. This is an extremely low confidence level for platforms that serve as the foundation for millions of personal data exchanges. Surprised? We aren’t; according to this survey, only 9% of social media users feel the sites they use are trustworthy. With this unsurprising revelation coming to light, social media companies have begun to standardize E2EE strategies throughout their development processes.
Social media data security has become a factor in whether or not users adopt an app. Encrypted social networks are on the rise in response to privacy concerns and the lack of protections.
Social Media Channels with E2EE Strategies
Many social media companies responded to this consumer distrust by implementing E2EE strategies that protect user data against specified threats. Some entrepreneurs detected a business opportunity and created new social platforms altogether. For instance, there’s the launch of ProtonMail, a secure email service with a sophisticated encryption system to deter would-be spies. Or the creation of Wire by a group of former Skype technologists, which relays communications through a network of cloud computers but stores user communications in encrypted form on the users’ devices.
The Top 15 Encrypted Social Media Channels:
To get a sense of how prevalent this opportunity is, check out the list below showing just a few out of many social channels that have adopted an E2EE strategy:
Pros and Cons about Encrypted Social Media Usage
Perhaps the biggest benefit of social media encryption is the peace of mind it gives users; people feel more comfortable using social channels that have a strong encryption strategy. Knowing that your private data isn’t going to be accessed by third parties is a major factor when deciding which social media channels to use.
From a business perspective, adopting an encryption strategy is just as much a revenue generator as it is a security measure; people want to use what makes them feel secure. Social media giant Snapchat is the most recent popular app to adopt an E2EE strategy. Though it’s too early to tell, the likely outcome for businesses that have adopted an E2EE strategy will be more usage, which ultimately leads to more revenue.
There remains one downside to the heightened social media encryption usage – monitoring criminal activity. Although E2EE is great for blocking hackers’ access to consumer data, it also makes it nearly impossible for law enforcement to pinpoint suspicious online activity or monitor criminal engagement. What’s worse is that the rise of encrypted messaging apps gives terrorist groups more ways to connect. With a plethora of encrypted communication channels for criminals to use, law enforcement will have to rethink their strategy and come up with new ways to monitor online criminal activity.
Where Fornetix Can Help
The same lessons learned with E2EE for social media are applicable to any kind of sensitive data, whether it’s a small mom-and-pop business or a global enterprise. Having a rock-solid encryption strategy in place is a powerful failsafe in the event of a data breach. Even if your perimeter is compromised, any stolen data would appear as gibberish to the attacker. VaultCore by Fornetix breaks down the barriers that have historically inhibited organizations from utilizing encryption to its full potential. Smart automation and scheduling of cryptographic lifecycle tasks keeps your critical data locked down.
If you’re interested in how Fornetix can deploy an effective encryption strategy for your organization, we’d love to schedule a complimentary demo with you.
Note: This entry has been edited to reflect the ‘Key Orchestration’ solution name becoming ‘VaultCore’
Significance of Software Testing in Autonomous Cars
Human civilization witnessed history in the 1800s when animal-driven carriages were replaced by carriages that no longer needed animals. Similarly, today we’re on the brink of another revolution which will see cars that require no manual intervention. In simple words, driverless cars. Industry giants such as Tesla, Google, and Uber are testing their autonomous driving technology and plan to roll them out in the near future.
But this isn’t entirely new. We’ve already seen Tesla implement this concept in cars that were made commercially available last year. Elon Musk, CEO of Tesla, was lauded for his ambitious efforts and for achieving this incredible milestone. But the world soon concluded that these cars were not ready for the real world when reports of fatal accidents involving autonomous cars emerged.
Public Perceptions and Real Issues
Surveys reveal that there is a huge trust deficit among the general public regarding autonomous vehicles. According to one of the surveys, 27% of U.S. adults consider autonomous vehicles ‘very unsafe’, 33% consider them ‘somewhat safe’, while only 8% think that they are ‘very safe’.
Experts believe that companies were so ambitious and optimistic that they failed to consider numerous real-life variables, which consequently led to road accidents and public mistrust. It turned out that building a self-driving car was about much more than just throwing in sensors and artificial intelligence. The real challenge was to build robust software that could drive these sensors and AI technologies, collate the data, imitate the actions of a professional driver according to the situation, and make quick decisions in real time.
But even if the car is equipped with all the required technology, what’s the guarantee that it will make the correct decisions? What if it misinterprets all the available information regarding the distance from obstacles and pedestrians? What if the software is faulty? These are the questions companies should ask themselves before releasing autonomous cars to the market. Had they been addressed earlier, accidents could have been avoided. Safety is not the only concern: poorly designed self-driving cars can disrupt the flow of traffic. In the end, it all comes down to testing.
Role of Testing
Testing autonomous cars is the only way of verifying that they meet safety and practical requirements. A software testing company can provide an unfiltered, clear picture of the capabilities of self-driving cars in all aspects. Testing analyzes and provides certainty that all systems involved in the decision-making process work in complete tandem with one another, without any problems. Buggy software can also cause a car to exhibit unexpected behavior, so through preemptive testing, bugs present in the software can be identified before the car gets on the road. Cars must also be tested for super-fast connectivity, since it is imperative that all the cameras, radars, and sensors communicate smoothly. A software testing company can help test the AI systems in autonomous vehicles, ensuring that data fed to the system is properly interpreted and that the resulting predictions are accurate and viable. This is why software testing is of utmost importance for self-driving cars.
What is TLS? Secure Email 101
Transport Layer Security (TLS) is one of two widely used protocols in email security, the other being Secure Sockets Layer (SSL). Both are used to encrypt a communication channel between two computers over the internet.
An email client uses the Transmission Control Protocol (TCP) – which enables two hosts to establish a connection and exchange data – via the transport layer to initiate a handshake with the email server before actual communication begins. The client tells the server the version of SSL or TLS it is running, as well as the cipher suite (a set of algorithms that help secure a network connection using SSL or TLS) it wants to use.
After this initial process, the email server verifies its identity to the client by sending a certificate the email client trusts. Once this trust is established, the client and server exchange a key, allowing messages exchanged between the two to be encrypted.
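The handshake steps above are normally handled for you by a TLS library. As a sketch, Python's standard `ssl` module builds a client-side context that performs the version/cipher negotiation and the certificate-trust check when a socket is wrapped (the host name in the commented usage is a placeholder):

```python
import socket
import ssl

# A default context enables certificate verification and host-name checking,
# mirroring the trust step described above.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

def open_tls_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TCP connection, then run the TLS handshake over it."""
    raw = socket.create_connection((host, port), timeout=10)
    # wrap_socket negotiates the protocol version and cipher suite and
    # verifies the server's certificate before returning.
    return context.wrap_socket(raw, server_hostname=host)

# Example usage (requires network access):
# with open_tls_connection("example.com") as tls:
#     print(tls.version())   # negotiated protocol, e.g. "TLSv1.3"
#     print(tls.cipher())    # negotiated cipher suite
```

Only after `wrap_socket` returns successfully do the client and server share session keys, so everything sent afterwards is encrypted.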
What parts of a message does TLS encrypt?
The protocol encrypts the entire email message, including the headers, body, attachments, and sender and receiver addresses. TLS does not encrypt your IP address, the server’s IP address, the domain you are connecting to, or the server port. This visible metadata reveals where you are coming from, where you are connecting to, and the service you’re connecting with, such as sending email or accessing a website. This article explains what is really protected by TLS and SSL.
What is the purpose of SSL and TLS?
The purpose of the two protocols is to provide privacy, integrity and identification.
- TLS encrypts communication between the sender and recipient. The idea is to ensure that no third party can read or modify the data being exchanged. Without encryption, a middleman could access the contents of emails, such as personally identifiable information, medical billing information and other sensitive data. This information would be available for the middleman to see in plaintext, that is, a human readable format:
This is an email text message. We’re writing this email to let you know that our representative is available 9 to 5 Monday through Friday to assist you with any billing issues.
TLS makes this information unreadable on its journey to the server – an eavesdropper sees only a stream of seemingly random bytes.
- As mentioned earlier, the protocol offers identification between corresponding entities: one or both parties know who they are communicating with. After a secure connection is established, the server sends its TLS certificate to the client, who refers to a Certificate Authority (a trusted third party) to validate the server’s identity.
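The "unreadable on the wire" point above can be made concrete with a toy stream cipher. This is for illustration only; TLS negotiates vetted ciphers such as AES-GCM, not this construction:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Derive an endless byte stream from the key (toy construction)."""
    for counter in count():
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice restores the original."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

message = b"This is an email text message."
key = b"shared-session-key"

ciphertext = xor_cipher(key, message)
assert ciphertext != message                     # gibberish without the key
assert xor_cipher(key, ciphertext) == message    # the endpoint recovers it
```

Anyone intercepting `ciphertext` without `key` sees only noise, which is exactly what the middleman scenario above is meant to prevent.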
How different is TLS from SSL?
The two terms are often used interchangeably, although they are actually distinct. TLS is an updated and more secure version of SSL. TLS v1.1, v1.2 and v1.3 are significantly more secure than SSL and address vulnerabilities in SSL v3.0 and TLS v1.0. Fallback to SSL v3.0 has been disabled by Microsoft, Mozilla and Google in their Internet Explorer, Firefox and Chrome browsers to block the many vulnerabilities present in SSL, such as the POODLE man-in-the-middle attack. If you are configuring an email program, you can choose either TLS or SSL so long as it is supported by your server (because in this context the term “SSL” does not refer to the old SSL v3 protocol, exactly, but to how the TLS protocol will be initiated).
What level of TLS security is needed for HIPAA compliance?
Health and Human Services specifies that SSL and TLS usage should adhere to the details described in the National Institute of Standards and Technology (NIST) 800-52 recommendations. Encryption processes weaker than those this publication recommends are non-compliant. The key points to note from the NIST documents are: (a) you must never use SSL v2 and v3; (b) when interoperability with non-government systems is needed, TLS v1.0+ may be considered OK; and (c) only certain ciphers are acceptable to use. For more information, please refer to this article.
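As one sketch of enforcing such a policy in application code, Python's `ssl` module can refuse anything older than TLS 1.2. The cutoff chosen here is an assumption for the example; confirm exact protocol and cipher requirements against NIST SP 800-52 itself:

```python
import ssl

context = ssl.create_default_context()
# Refuse SSL v2/v3 and TLS 1.0/1.1 outright; only TLS 1.2+ may be negotiated.
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.minimum_version == ssl.TLSVersion.TLSv1_2
```

A connection attempted through this context will fail its handshake against any server that can only speak the deprecated protocol versions.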
What doesn’t TLS secure?
A message sent using TLS is not entirely secure. The risk starts brewing when your messages begin their journey back and forth between your email provider’s servers and your correspondents’ email servers. One risk is that your message could be sent insecurely (in plain text) from your email provider to your recipient. Another is that your recipient may insecurely access your message at their email provider. A third risk stems from potential changes to your message at your provider, in transit, at your recipient’s provider, or anywhere else not protected by TLS or some other encryption technology.
For optimal email security, you need end-to-end email encryption. S/MIME and PGP are the most secure protocols for authentication and privacy of messages over the internet. PGP does assure Pretty Good Privacy. You have a pair of keys – private and public; the former decrypts messages, the latter encrypts them. Encrypted messages are safe as long as you keep your private key safe. Still, PGP (and S/MIME) are Pretty User-Unfriendly, as you have to adopt new technology and trade security keys ahead of time, and everyone has to be configured and trained to use these tools. A reliable escrow system is another option. Although in some ways it is not as secure as S/MIME and PGP can be, it does allow messages to be retracted after transmission. For a better understanding of enhanced email security for HIPAA compliance, check out this article.
Want to discuss how LuxSci’s HIPAA-Compliant Email Solutions can help your organization? Interested in more information about “smart hosting” your email from Microsoft to LuxSci for HIPAA compliance? Contact Us
How to write data in an excel file?
In order to write data to an excel file, we make use of the column name.
Let’s say we have a sample excel file, as shown below, in which you want to write data at a specific cell.
If you want to write data such as username or country at a specific index, we will use its column name.
To achieve this please follow the steps below:
Step 1: Load the excel file using the loadexcel function.
Once you load an excel file, all the data of the file is accessible through the alias name.
Step 2: Define a variable and use the setcellvalue function to set the data at a specific cell in the excel sheet, using the alias name defined in step one.
Here, Adam is the data that we want to write to the excel sheet at the 3rd index. After completing this step and executing the test case, the data will be written to the excel sheet.
Similarly you can set the data of any column based on its column name and index.
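As a rough sketch of the load / alias / write-by-column-and-index pattern described above, modeled in Python with an in-memory table. The helper names here are illustrative stand-ins, not the tool's actual loadexcel and setcellvalue functions:

```python
# In-memory stand-in for a loaded sheet: alias -> {column name -> cell values}.
sheets = {}

def load_excel(alias: str, columns: dict) -> None:
    """Load the sheet's data under an alias, like the loadexcel step."""
    sheets[alias] = {name: list(values) for name, values in columns.items()}

def set_cell_value(alias: str, column: str, index: int, value) -> None:
    """Write a value by column name and 1-based row index, like setcellvalue."""
    col = sheets[alias][column]
    while len(col) < index:      # grow the column if that row doesn't exist yet
        col.append(None)
    col[index - 1] = value

load_excel("users", {"username": ["eva", "omar"], "country": ["DE", "EG"]})
set_cell_value("users", "username", 3, "Adam")   # write "Adam" at the 3rd index

assert sheets["users"]["username"] == ["eva", "omar", "Adam"]
```

The point of the sketch is the addressing scheme: the alias names the loaded file, and the (column name, index) pair names the cell, which is exactly how the steps above locate where to write.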
Note: When you are writing data to an excel file, make sure the excel file is closed.
The role of the Internet of Things (IoT) is extremely important in the agriculture industry. IoT provides farmers with leverage from big data to analyze, regulate, and monitor their crop systems. “Massive IoT” is essentially a large ecosystem of connected devices or sensors, and its use has expanded across smart farming sectors.
According to Insider Intelligence, the number of agricultural sensors is projected to increase to nearly 12 million globally by 2023. Evolving IoT technology has made efficient, precision farming possible for harvesters.
So, what kind of IoT technologies are included in “Massive IoT”? Water management, real-time monitoring, and fertigation are all great examples of applications that are improved by IoT technology. These solutions help address supply and demand issues with not only physical materials, such as animal feed, but also manual labor shortages.
Big data also aids data collection from smart greenhouses, climate conditions, and agricultural drones. IoT collects the data and accurately sends the information to all the monitoring systems across the farm.
Real-time monitoring transmits data from sensors to notify farmers on water, feed, and cleanliness levels. How does this improve farmers’ production system? By using smart systems, harvesters can cut time and money and reinvest those assets elsewhere.
For example, precision agriculture allows farmers to monitor livestock, bins, and crops from their handheld device, essentially eliminating the in-between manual steps. Ultimately, farmers can rely on their monitoring systems to provide them with accurate data anywhere, leading to safer, cost-effective farming practices due to IoT solutions.
BinSentry is a US-based agricultural technology company that provides digitalized solutions for farmers across the country. Their goal is to create efficient production and support sustainable solutions. Supporting sustainable connectivity can be tricky when using an abundance of devices. This is where Massive IoT plays a role. In order to leverage deployment, BinSentry needed an ecosystem that was manageable from anywhere.
KORE engaged with BinSentry to manage their platforms through seamless connectivity and one consolidated, award-winning platform, ConnectivityPro™. KORE provided the agricultural company with resilient technology that prevents data overages, cuts costs, and diagnoses connectivity issues rapidly.
To learn more about how Massive IoT aids smart agriculture, download our case study, “KORE Supports Smart Agriculture for BinSentry”.
KORE keeps you up to date on all things IoT.
Stay up to date on all things IoT by signing up for email notifications.
The file inclusion vulnerability may be present whenever a web application allows users to provide input that is used as code by the target application. Therefore, this vulnerability is most commonly found in web applications that use a scripting runtime.
Keep reading to learn more about the file inclusion vulnerability, its different types, and how to prevent it.
Types of file inclusion
Exploiting a file inclusion vulnerability is possible when an application allows user input to act as a command (also known as dynamic file inclusion). When this happens, an attacker may direct the application to build a path toward a file that contains malicious code and execute the file. Alternatively, it may allow attackers to access files on the server and steal sensitive data contained in them.
Programming languages under which file inclusion vulnerabilities frequently occur are PHP, JavaServer Pages (JSP), and Server Side Includes (SSI).
This vulnerability is part of the more general injection vulnerability in the OWASP Top 10 vulnerability list. An attack that uses this vulnerability can potentially lead to cross-site scripting (XSS), directory traversal, and remote code execution.
A file inclusion exploit arises from using the “include” statement or a similar filesystem functionality, such as the “require” statement. Developers typically utilize this functionality for several reasons.
- When specifying files to be parsed by the interpreter: to open a particular file containing code, its path must be specified so it will be parsed and interpreted.
- When printing to a page: to save time and avoid recoding, developers will sometimes reuse certain portions of code, such as headers. The include statement allows them to specify a file from which contents should be copied and used in the file that contains the statement.
- When including files that users will download: to make files available for download, instead of being opened in the web browser, a specific header is included in the request.
In any of the above cases, if user input is not handled correctly, it can open the door for attackers to include malicious code or gain access to sensitive data.
Local File Inclusion
An LFI vulnerability allows attackers to access or execute files hosted locally on the application server. This is possible in applications that allow the path to a file on the server to be used as user input and do not sanitize such input.
Attackers can then use the file and the “include” functionality to expose its contents or run its code. If the server runs with high privileges, it may expose sensitive data files, such as authentication details.
In some instances, applications may allow users to upload unauthorized files, allowing attackers to upload a file that contains malicious code, such as a web shell. Taken together with the inclusion vulnerability, this opens the door for attackers to execute such code if they know the path to their file.
Remote File Inclusion
The remote file inclusion (RFI) vulnerability is made possible by applications that dynamically reference external files or scripts without proper sanitization. By exploiting the vulnerability, an attacker forces the server to download and execute arbitrary files that are located remotely that can open backdoor shells.
These can lead to data being stolen or damaged, websites being defaced and having malware installed, or a full-server compromise and takeover.
What are the differences between LFI and RFI?
In essence, LFI and RFI exploits utilize the same strategy and rely on the same type of vulnerability.
The main difference between these two types of vulnerabilities is that when exploiting the LFI vector, attackers will target local file inclusion functions that do not perform proper validation of user-provided input parameters. When the RFI vector is controlled, attackers will use referencing functions that allow for remote file paths to be provided.
LFI attack example
There are different ways to demonstrate what an LFI may look like. One of the simplest examples of local file inclusion is a URL parameter that is passed to an include statement without filtering. If user input is not sanitized correctly, an attacker can edit that parameter to point at a sensitive system file, such as a password file. If the server has a file inclusion vulnerability, it will simply proceed to display the contents of the requested file. In the same way, using sequences like “../”, an attacker can traverse directories (also known as path traversal) to reach other files on the system, such as server log files.
Alternatively, if the server allows files to be uploaded but does not correctly check them, a user could upload something like an image that contains code. They would then provide its path as input for the parser, making it run the code.
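The traversal mechanics described above can be sketched in a few lines of Python. The directory and file names are hypothetical; the point is how "../" sequences walk a naively joined path out of the intended directory:

```python
import posixpath

WEB_ROOT = "/var/www/pages"

def naive_resolve(user_supplied: str) -> str:
    """What a vulnerable include effectively does: trust the input."""
    return posixpath.normpath(posixpath.join(WEB_ROOT, user_supplied))

# A benign request stays inside the web root...
assert naive_resolve("about.html") == "/var/www/pages/about.html"

# ...but "../" sequences walk right out of it.
escaped = naive_resolve("../../../etc/passwd")
assert escaped == "/etc/passwd"          # the system password file
assert not escaped.startswith(WEB_ROOT)  # no longer under the web root
```

Any code that then opens or includes the resolved path will happily serve `/etc/passwd`, which is why inputs must be validated before they ever reach a filesystem call.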
RFI attack example
Default server configurations and statements can make PHP scripts vulnerable to RFI exploits. Once an attacker spots code that allows for remote file inclusion based on user input, they can exploit this to include an external file.
For example, suppose a script builds an include path from a request parameter named “testfile” that is open to user input.
If the instructions in the code feature an include statement, such as:
$test = $_REQUEST["testfile"]; include($test . ".php");
The result would be that an attacker can exploit this instance by supplying a remote malicious file as the parameter value.
This would include the “malicious_page” in the vulnerable page “abc,” which would execute it whenever the latter is accessed.
How to prevent LFI and RFI?
You can approach mitigating LFI and preventing RFI exploits in many ways. Proper input validation and sanitization play a part, but it is a misconception that this alone is enough. Ideally, you should implement the following measures to prevent file inclusion attacks.
- Sanitize user-supplied inputs, including GET/POST and URL parameters, cookie values, and HTTP header values. Apply validation on the server side, not on the client side.
- Assign IDs to every file path and save them in a secure database to prevent users from viewing or altering the path.
- Whitelist verified and secured files and file types, check file paths against this list, and ignore everything else. Don’t rely on blacklist validation, as attackers can evade it.
- Use a database for files that can be compromised instead of storing them on the server.
- Restrict execution permissions for upload directories as well as upload file sizes.
- Improve server instructions such as sending download headers automatically instead of executing files in a specified directory.
- Avoid directory traversal by limiting the API to allow file inclusions only from a specific directory.
- Run tests to determine if your code is vulnerable to file inclusion exploits.
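Several of the measures above, whitelisting, mapping IDs to vetted paths, and confining inclusions to a single directory, can be combined in one small validator. A sketch in Python, with illustrative names:

```python
import posixpath

ALLOWED_DIR = "/var/www/includes"
# Map opaque IDs to vetted files instead of accepting raw paths from users.
FILE_IDS = {"header": "header.html", "footer": "footer.html"}

def resolve_include(file_id: str) -> str:
    """Return a safe absolute path, or raise if the request is not whitelisted."""
    try:
        relative = FILE_IDS[file_id]
    except KeyError:
        raise ValueError(f"unknown include id: {file_id!r}")
    full = posixpath.normpath(posixpath.join(ALLOWED_DIR, relative))
    # Defense in depth: even a whitelisted entry must stay inside ALLOWED_DIR.
    if not full.startswith(ALLOWED_DIR + "/"):
        raise ValueError("path escapes the allowed directory")
    return full

assert resolve_include("header") == "/var/www/includes/header.html"
for bad in ("../../etc/passwd", "shell.php", "header/../../secret"):
    try:
        resolve_include(bad)
        raise AssertionError("should have been rejected")
    except ValueError:
        pass
```

Because users can only name an ID, never a path, traversal sequences and remote URLs are rejected before any filesystem or include call runs.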
What are file inclusion vulnerabilities?
Whether local or remote, file inclusion vulnerabilities arise due to “include” or “require” statements that allow for unvalidated user input to be provided. These statements are necessary and valuable, but they create vulnerabilities when they are not secured. To exploit them, attackers must find the location of the vulnerability exposure and provide malicious input to be executed.
What is the impact of a file inclusion vulnerability exploit?
At its most basic, exploiting this vulnerability may lead to sensitive data, such as authentication details or server logs, being exposed and stolen. It may also lead to a website being hijacked with malware or defaced, and ultimately to an entire server compromise and takeover. Additionally, attacks such as remote code execution, cross-site scripting, and others are possible with such vulnerabilities.
Is IoT secure?
IoT involves introducing internet connectivity to an ecosystem of unconnected devices, assets, and processes. By 2025, the number of connected devices is projected to reach 41.6bn—with enhanced computation capabilities and reduced costs for connectivity and data storage—generating close to 79.4ZB of data.
However, this explosive growth in connectivity exposes IoT devices, sensors, and platforms to massive security risks. As enterprises step up efforts to properly integrate their legacy OT systems with their IT systems, several IoT security frameworks have been created to ensure that security breaches, and the resulting data and financial losses, are reduced. Data privacy laws, endpoint security measures, private networks, and secure platforms are being utilized to ensure that the rising number of security breaches does not hamper the exchange of information.
While a one-size-fits-all approach is not going to work, given the increasing complexity of the IoT architectures being adopted by various enterprises, an end-to-end, secure IoT framework—explicitly devised to address the connectivity needs of an enterprise—can help immensely.
Read more about IoT security in this blog.
The Future Of Smartwear Technology
Recent advances in smart devices and ubiquitous computing have fostered a dramatic growth of interest in wearable technology. “Wearable technology is defined as the intersection of the fields ubiquitous computing and functional clothing design”. To put it simply, wearable technology is a category of technology devices that can be worn by a consumer and can be networked. Wearable technologies typically contain a range of different sensors that can gather and store data and can transfer information to other devices. Wearable devices are designed based on three goals. The first and most obvious is that they must be mobile. The second goal is to enhance the real environment. The third goal is to provide context sensitivity to exploit the intimacy between human, computer, and environment. Wearable technology consists of many different forms of body-mounted technology, such as wearable computers, functional clothing, and smart clothing.
According to Barfield and Caudell (2001), a wearable computer is defined as a “fully functional, self-powered, self-contained computer that is worn on the body, provides access to information, and interaction with information, anywhere and at any time”. When we hear wearable computing, Google Glass, a complex, multifunctional device, probably comes to mind. This amazing gadget has various interesting functions, such as showing text messages, finding information easily, taking video and snapshots, broadcasting live video, and translating languages. Moreover, other wearable devices like fitness bands or heart-rate monitors focus on a narrower range of purposes with a limited set of features. These smart devices increase the self-awareness of the wearer to determine their health, fitness, or peak performance. In general, wearable computers are especially useful for applications (e.g. military applications, industrial applications and developer applications) that require more complex computational support than just hard-coded logic. Consistency and the ability to multi-task are the main characteristics of a wearable computer.
Functional clothing consists of all types of clothing or assemblies that are specifically engineered to deliver a pre-defined performance or functionality to the user, over and above its normal functions. Functional clothing provides special functionality to the wearer. In other words, it is worn for special functional needs and can be categorized into several classes which are Protective-functional, Medical-functional, Sports-functional, Vanity-functional, Cross-functional assemblies, and Clothing for special needs.
Smart clothing or intelligent clothing integrates functional clothing design and portable technology. It can provide interactive reactions by sending signals, processing information, and actuating the responses. According to Ariyatum & Holland (2003), the major applications of smart clothing can be categorized into military, medical, communication, entertainment and particularly sports. OMSignal’s Biometric Smartwear, for example, is an amazing smart cloth which has all the sensors needed to track and monitor not only heart rate, breathing and steps during a workout but also health, weight, activity, and stress during the day. In fact, this data can help us get healthier and fitter.
Wearable technology presents many new challenges to designers. Designers of wearable technology should understand not only human interaction with computing devices but also human interaction with clothing for successful design. In general, “product strategists must embrace a human-centric approach to design — the person is the focus of innovation, not the device.” In fact, devices and garments hold very different cultural roles in terms of duration and frequency of use, range of usage situations, product life cycle, price point, care, cleaning, and many other factors. Hence, a team including textile technologists, electronics experts, garment engineers, biologists, computer scientists, and multimedia experts should work effectively together to design a new product.
By Mojgan Afshari
Mojgan Afshari is a senior lecturer in the Department of Educational Management, Planning and Policy at the University of Malaya. She earned a Bachelor of Science in Industrial Applied Chemistry from Tehran, Iran. Then, she completed her Master’s degree in Educational Administration. After living in Malaysia for a few years, she pursued her PhD in Educational Administration with a focus on ICT use in education from the University Putra Malaysia. She currently teaches courses in managing change and creativity and statistics in education at the graduate level. | <urn:uuid:144d63bd-b5df-43ed-8164-f2583287a560> | CC-MAIN-2022-40 | https://cloudtweaks.com/2014/06/future-smartwear-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00543.warc.gz | en | 0.944232 | 884 | 3.1875 | 3 |
A blockchain specialist’s in-depth explanation of blockchain technology and why enterprises should care about it.
Guest article by Dominic Steil, co-founder and CTO of Dapps.ai
Blockchain Technology has emerged as the greatest technological revolution since the internet. Its arrival via the digital currency Bitcoin has sparked a new era of computing that is fundamentally changing the way that systems are built on the web. Blockchain technology is a system that provides an immutable, replicated transaction ledger across a network of peers. For the first time, this technology enables the true transfer of unique data over the internet; not just copying data but, by leveraging public/private key cryptography, truly transferring ownership of digital assets. It is information creation in its purest form.
Blockchain technology is a system that enables distributed consensus protocols; verifiable, tamperproof, immutable state machines that can be used to create decentralized application networks. The first decentralized application network was created for a global digital cash and transactional store of value layer, Bitcoin.
The Bitcoin protocol facilitates the transfer of digital cryptocurrency payments by using an Unspent Transaction Output (UTXO) model that is globally accessible and driven by proof of work and the largest deployment of public/private key cryptography in human history. It is open source, censorship resistant, and is checked and balanced by a cryptoeconomically incentivized system that is operated by users, developers and miners.
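As a rough illustration of the UTXO model described above (deliberately simplified, with none of Bitcoin's real scripts, signatures, or fees, and with made-up owner names and transaction ids), balances fall out of summing the outputs that have not yet been spent:

```python
# A rough, illustrative sketch of the UTXO accounting model. This is NOT
# Bitcoin's real data structure: no scripts, signatures, or fees, and the
# owner names and transaction ids are hypothetical.
utxos = {
    ("tx1", 0): {"owner": "alice", "amount": 50},
    ("tx2", 1): {"owner": "alice", "amount": 25},
    ("tx3", 0): {"owner": "bob", "amount": 10},
}

def balance(owner):
    """A wallet's balance is the sum of outputs not yet spent."""
    return sum(u["amount"] for u in utxos.values() if u["owner"] == owner)

def spend(inputs, outputs):
    """Consume existing UTXOs and create new ones; value must be conserved."""
    funded = sum(utxos[i]["amount"] for i in inputs)
    assert funded >= sum(o["amount"] for o in outputs.values()), "overspend"
    for i in inputs:
        del utxos[i]          # spent inputs are destroyed forever
    utxos.update(outputs)     # outputs become new spendable coins
```

Spending never edits a balance in place; it destroys old outputs and mints new ones, which is what makes the ledger's history auditable.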
Other decentralized application networks have been created to facilitate use cases other than digital value and payments. One of these protocols is the Ethereum Blockchain. Ethereum is a world computer, a generalized computational singleton that enables applications to be built and deployed to update the state of this one global virtual machine. Rent is extracted to update and run your application on this machine in the form of Ether, the platform's gas, which fuels computationally intensive operations. Ethereum has enabled developers and users to create tokens on this global machine, and these tokens can be offered in crowd sales known as initial coin offerings. Private implementations of the Ethereum Blockchain can be used, and there are many forked or modified versions of the protocol as well, with various tradeoffs between decentralization, security, and scalability. Distributed ledgers that have similar consensus properties to the Bitcoin and Ethereum blockchain ledgers have been created with more of a focus on commercial and enterprise infrastructure-level applications. One of these protocols is the Hyperledger Fabric Blockchain protocol. Hyperledger Fabric is a permissioned blockchain ledger focused on validating and ordering business transactions between multiple peers in a blockchain-based business network. These are just a few of the blockchain technology protocols that are going to continue to revolutionize the way that digital assets, identity systems, and transactions are created and verified on the internet.
Why should the enterprise care about a technology that enables a digital transfer of ownership on the web? How can companies leverage their existing networks and turn them into markets using this technology? What if the enterprise adopts blockchain technology for all types of general ledgers, operating against information that is logically centralized but organizationally decentralized amongst partners and customers? We are able to leverage this technology to remove data silos within organizations. This is bringing about a new business model where the enterprise has trusted data that can be shared and operated on. There is a shift happening that is being driven purely by decentralized blockchain networks. This type of shift is rooted in who owns the data, where the data lives, and how the data can be changed. For the first time there is verifiable transparency that is driven by these distributed ledgers. The applications of the technology in verticals where there is opacity cannot be overstated. By providing a holistic view on trusted data, companies will provide a better customer experience, ultimately leading to top-line growth.
Dapps Inc. is the premier Enterprise Blockchain Company for Salesforce Customers to swiftly build next-generation operating models. The San Francisco-based ISV Partner, delivers enterprise-grade blockchain enabled solutions, tools and infrastructure that integrate Salesforce with blockchain technology, artificial intelligence, advanced data and analytics. Companies — from SMBs to Enterprises —leverage any one or all of the managed-packages, Blockchain-as-a-Service (Baas) licensed products, or patent-pending middleware between Force.com and the blockchain to accelerate Dapp development on the Force platform. The DappServices™ team is leading and defining design thinking for global companies interested in developing and deploying their own proprietary next-generation operating lens into the end-to-end customer journey. DappSuite™ and DappSolutions™are available on the AppExchange and consulting services, DappServices™ are available by directly contacting your nearest Dapps office. Dapps Inc. was founded in 2017 by industry leaders in Blockchain technology, CRM and international business with additional offices in New York, Barcelona and India.
DappSolutions™ powered by Hyperledger Fabric allows users to rapidly design and develop solutions between Salesforce platform and the blockchain.
DappSuite™ is the SDK (software development kit) that allows companies to build and manage proprietary blockchain applications on Ethereum. The product is available on the AppExchange.

DappScape™ (Coming Soon) is a proprietary search engine for permissioned blockchain networks.
A file contains a sequence of bytes. Any file on your computer can be uploaded to B2 and stored in cloud storage, as long as it's not too big. Files can range in size from 0 bytes to 5 GiB (5 × 2^30, or 5,368,709,120 bytes).
Once a file is uploaded, you can download it right away, or wait years and then download it. You can download it once, or give the URL to all of your friends and they can all download it.
Uploading the same file name more than once results in multiple versions of the same file. This can be useful for tracking the history of a document. See File Versions for more details.
The API calls related to files are:
b2_delete_file_version - deletes one version of one file
b2_download_file_by_id - downloads a specific version of a file
b2_download_file_by_name - downloads the most recent version of a file
b2_get_file_info - returns information about a file
b2_hide_file - hides a file, without deleting its data
b2_list_file_names - lists the file names in a bucket
b2_list_file_versions - lists all of the file versions in a bucket
b2_upload_file - uploads a new file (or version of a file)
Files have names, which are set when a file is uploaded. Once a file is uploaded, its name cannot be changed. You can then download a file if you know its name.
Names can be pretty much any UTF-8 string up to 1024 bytes long. There are a few picky rules:
- No character codes below 32 are allowed.
- DEL characters (127) are not allowed.
These are all valid file names:
When downloading or exporting files, be aware that the file name requirements above are fairly permissive and may allow names that are not compatible with your disk file system.
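A small sketch (not Backblaze's own validation code) of checking a candidate name against the two rules above might look like:

```python
# A sketch of the file-name rules described above: UTF-8 names of at most
# 1024 bytes, no character codes below 32, and no DEL characters (127).
# Only the rules stated in the documentation are checked here.
def valid_b2_name(name: str) -> bool:
    if not 0 < len(name.encode("utf-8")) <= 1024:
        return False
    return all(ord(c) >= 32 and ord(c) != 127 for c in name)
```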
In addition to a name, each file uploaded has a unique ID that identifies that specific version of that file. A File ID will not be more than 200 characters. If you want to download an older version of a file, you'll need to know its File ID. File IDs look like this:
If you have uploaded a file called cats/kitten.jpg to a bucket named cute_pictures, you'll be able to view the file in a browser with a URL that looks like this:

<downloadUrl>/file/cute_pictures/cats/kitten.jpg

The first part of the URL is the download URL that you get from the b2_authorize_account call. Then comes /file/, then the bucket name, another "/", and then the file name.
Folders (There Are No Folders)
A bucket holds files. There is no hierarchy of folders, just one long, flat list of file names. For example, a bucket might have four files in it, with these four names:
cats/cat.jpg
cats/kitten.jpg
dogs/dog.jpg
dogs/puppy.jpg
There are no folders. The name of the first file is cats/cat.jpg, and the name of the second file is cats/kitten.jpg. There is nothing called just cats.
Even though there are no folders, many of the tools that work with files
in a bucket act like there are folders. The file browser on the Backblaze
web site acts like there are folders, and so does the b2
command-line tool. Under the covers, they both just
scan through the flat list of files and pretend. Here's an example of using
the command-line tool:
$ b2 ls my_bucket
cats/
dogs/
$ b2 ls my_bucket cats
cat.jpg
kitten.jpg
We recommend that you use "/" to separate folder names, just like you would for files on your computer. (Or just like you would use "\" if you use Windows.) That way the tools can figure out the implied folder structure.
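A hypothetical helper (not the b2 tool's real implementation) shows how such tools can pretend folders exist: scan the flat name list and report the unique first path segments under a given prefix, appending "/" for pretend folders.

```python
# Sketch of deriving pretend "folders" from a flat list of file names.
def ls(names, prefix=""):
    seen, entries = set(), []
    for name in names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        # First segment after the prefix; a trailing "/" marks a folder.
        entry = rest.split("/", 1)[0] + ("/" if "/" in rest else "")
        if entry not in seen:
            seen.add(entry)
            entries.append(entry)
    return entries

files = ["cats/cat.jpg", "cats/kitten.jpg", "dogs/dog.jpg", "dogs/puppy.jpg"]
```

Calling ls(files) yields the two pretend folders, while ls(files, "cats/") lists the files inside one of them, mirroring the command-line session above.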
To ensure the integrity of your data, when you upload a file you must provide a SHA1 checksum of the data. This ensures that if any of the data is corrupted in the network on its way to B2, it will be detected before the file is stored. When you download a file, the SHA1 checksum is attached so that you can verify that the data you receive is intact.
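Computing the required SHA1 checksum before an upload can be done with the standard library, reading in chunks so large files need not fit in memory (a sketch, not official client code):

```python
import hashlib

# Compute the SHA1 checksum B2 expects alongside an upload, streaming the
# file in chunks so that even multi-gigabyte files use little memory.
def sha1_of_file(path, chunk_size=1 << 20):
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```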
When you upload a file, you also provide a MIME type for the file, which
will be used when a browser downloads the file so that it knows what
kind of file it is. For example, if you say that your file
cats/kitten.jpg has a MIME type of
image/jpeg, then a browser that downloads
the file will know that it's an image to be displayed.
Each file has information associated with it, in addition to the sequence of bytes that the file contains. Every file has a size (the number of bytes in the file), a MIME type, and a SHA1 checksum. You can also add your own custom information.
You can add key/value pairs as custom file information. Each key is a UTF-8 string up to 50 bytes long, and can contain letters, numbers, and the following list of special characters: "-", "_", ".", "`", "~", "!", "#", "$", "%", "^", "&", "*", "'", "|", "+". Each key is converted to lowercase. Names that begin with "b2-" are reserved. There is an overall 7000-byte limit on the headers needed for file name and file info, unless the file is uploaded with Server-Side Encryption, in which case the limit is 2048 bytes. (See next section.)
For names that don't start with "b2-", there is no limit on the size or content of the values, other than the overall size limit.
Names that start with "b2-" must be in the list of defined "b2-" names and their values must be valid. See the list below for details. B2 rejects any upload request with an unexpected "b2-" file info name. B2 also rejects any upload with a "b2-" file info name whose value doesn't meet the specified format for that name.
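A sketch of enforcing the custom-key rules quoted above (for simplicity this rejects every "b2-" name, whereas B2 actually accepts the small set of defined "b2-" names):

```python
import re

# Sketch of the custom file-info key rules: UTF-8 keys up to 50 bytes,
# letters, numbers, and the listed special characters, converted to
# lowercase. Rejecting all "b2-" names is a simplification of this example.
ALLOWED = re.compile(r"^[A-Za-z0-9\-_.`~!#$%^&*'|+]+$")

def normalize_key(key: str) -> str:
    if len(key.encode("utf-8")) > 50 or not ALLOWED.match(key):
        raise ValueError("invalid file info key")
    key = key.lower()
    if key.startswith("b2-"):
        raise ValueError('names beginning with "b2-" are reserved')
    return key
```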
b2-content-disposition

If this is present, B2 will use it as the value of the 'Content-Disposition' header when the file is downloaded (unless it's overridden by a value given in the download request). The value must match the grammar specified in RFC 6266. Parameter continuations are not supported. 'Extended-value's are supported for charset 'UTF-8' (case-insensitive) when the language is empty. Note that this file info will not be included in downloads as a x-bz-info-b2-content-disposition header. Instead, it (or the value specified in a request) will be in the Content-Disposition header.

b2-content-language

If this is present, B2 will use it as the value of the 'Content-Language' header when the file is downloaded (unless it's overridden by a value given in the download request). The value must match the grammar specified in RFC 2616. Note that this file info will not be included in downloads as a x-bz-info-b2-content-language header. Instead, it (or the value specified in a request) will be in the Content-Language header.

b2-expires

If this is present, B2 will use it as the value of the 'Expires' header when the file is downloaded (unless it's overridden by a value given in the download request). The value must match the grammar specified in RFC 2616. Note that this file info will not be included in downloads as a x-bz-info-b2-expires header. Instead, it (or the value specified in a request) will be in the Expires header.

b2-cache-control

If this is present, B2 will use it as the value of the 'Cache-Control' header when the file is downloaded (unless it's overridden by a value given in the download request), and overriding the value defined at the bucket level. The value must match the grammar specified in RFC 2616. Note that this file info will not be included in downloads as a x-bz-info-cache-control header. Instead, it (or the value specified in a request) will be in the Cache-Control header.

b2-content-encoding

If this is present, B2 will use it as the value of the 'Content-Encoding' header when the file is downloaded (unless it's overridden by a value given in the download request). The value must match the grammar specified in RFC 2616. Note that this file info will not be included in downloads as a x-bz-info-b2-content-encoding header. Instead, it (or the value specified in a request) will be in the Content-Encoding header.
You provide the File Info with the b2_upload_file call for regular files, and b2_start_large_file for large files. It is set when the file is uploaded and cannot be changed. The b2_get_file_info call returns the information about a file. The information is also returned in the HTTP headers when you download a file.
Recommended File Info key/value: If the original source of the file being uploaded has a last-modified-time concept, Backblaze recommends using src_last_modified_millis as the key, and for the value a string holding the base-10 number of milliseconds since midnight, January 1, 1970 UTC. This fits in a 64-bit integer, such as the type "long" in the Java programming language, and is intended to be compatible with Java's time long. For example, it can be passed directly into the Java call Date.setTime(long time).
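A one-liner sketch for deriving the recommended value from a local file's modification time:

```python
import os

# Milliseconds since midnight, January 1, 1970 UTC, as a base-10 string,
# derived from the local file's mtime.
def src_last_modified_millis(path: str) -> str:
    return str(int(os.path.getmtime(path) * 1000))
```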
Recommended File Info key/value: If this is a large file (meaning the caller is using b2_start_large_file) and the caller knows the SHA1 of the entire large file being uploaded, Backblaze recommends using large_file_sha1 as the key, and for the value a 40-byte hex string representing the SHA1.
HTTP Header Size Limit
The file name and file info must fit, along with the other necessary headers, within an 8KB limit imposed by some web servers and proxies. To ensure this, both now and in the future, B2 limits the combined header size for all file info. There are two possible limits depending on the features in use for a file.
- In most cases, B2 limits the combined header size for the file name and all file info to 7,000 bytes. This limit applies to the fully encoded HTTP header line, including the carriage-return and newline. The header line below is counted as 40 bytes.
- Newer features of the B2 API require additional headers. For files encrypted with Server-Side Encryption and/or in Object Lock-enabled buckets, the limit is reduced to 2,048 bytes to ensure sufficient space for additional response headers. This limit is on the file info header names and values only. The header line below is counted as 36 bytes. | <urn:uuid:975aaee0-2408-4c97-9798-95e1f18b56d6> | CC-MAIN-2022-40 | https://www.backblaze.com/b2/docs/files.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00543.warc.gz | en | 0.883443 | 2,611 | 2.640625 | 3 |
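A sketch of counting one file-info header line against these limits, assuming the x-bz-info- prefix mentioned earlier (the exact line formatting, the ": " separator and trailing CRLF, is an assumption of this example):

```python
# Count the encoded bytes of a single file-info header line, including the
# carriage-return and newline, as the limits above describe. The
# "X-Bz-Info-" prefix and ": " separator are assumptions of this sketch.
def header_line_bytes(name: str, value: str) -> int:
    return len(f"X-Bz-Info-{name}: {value}\r\n".encode("utf-8"))
```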
Protecting your accounts even if your password is compromised.
Two-factor authentication (or “2FA”) is a way to let a user identify him or herself to a service provider by requiring a combination of two different authentication methods. These may be something that the user knows (like a password or PIN), something that the user possesses (like a hardware token or mobile phone), or something that is attached to or inseparable from the user (like their fingerprints).
You probably already use 2FA in other parts of your life. When you use an ATM to withdraw cash, you must have both your physical bankcard (something you possess) and your PIN (something that you know). Right now, however, many online services only use one factor to identify their users by default—a password.
How does 2FA work online?
Several online services—including Facebook, Google, and Twitter—offer 2FA as an alternative to password-only authentication. If you enable this feature you’ll be prompted for both a password and a secondary method of authentication. This second method is typically either a one-time code sent by SMS or a one-time code generated by a dedicated mobile app that stores a secret (such as Google Authenticator, Duo Mobile, the Facebook app, or Clef). In either case, the second factor is your mobile phone, something you (normally) possess. Some websites (including Google) also support single-use backup codes, which can be downloaded, printed on paper, and stored in a safe location as an additional backup. Once you’ve opted-in to using 2FA, you’ll need to enter your password and a one-time code from your phone to access your account.
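As an illustration of how the dedicated authenticator apps mentioned above derive their codes from a stored secret, here is a minimal sketch of the TOTP algorithm (RFC 6238) using only Python's standard library; in a real deployment the secret would be the one the service provisions during 2FA setup:

```python
import hashlib
import hmac
import struct

# Minimal TOTP sketch (RFC 6238): HMAC the current 30-second counter with
# the shared secret, then dynamically truncate (RFC 4226) to a short
# decimal code.
def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    counter = int(at // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret b"12345678901234567890" and time 59, this reproduces the specification's 8-digit test value 94287082.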
Why should I enable 2FA?
2FA offers you greater account security by requiring you to authenticate your identity with more than one method. This means that, even if someone were to get hold of your primary password, they could not access your account unless they also had your mobile phone or another secondary means of authentication.
Are there downsides to using 2FA?
Although 2FA offers a more secure means of authentication, there is an increased risk of getting locked out of your account if, for example, you misplace or lose your phone, change your SIM card, or travel to a country without turning on roaming.
Many 2FA services provide a short list of single-use “backup” or “recovery” codes. Each code works exactly once to log in to your account, and is no longer usable thereafter. If you are worried about losing access to your phone or other authentication device, print out and carry these codes with you. They’ll still work as “something you have,” as long as you only make one copy, and keep it close. Remember to keep the codes secure and ensure that no one else sees them or has access to them at any time. If you use or lose your backup codes, you can generate a new list next time you’re able to log in to your account.
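For illustration only (real services generate these server-side), a sketch of minting a short list of random single-use backup codes, and striking each one from the list on use, might look like:

```python
import secrets

# Hypothetical sketch of server-side backup-code handling; real services'
# formats and storage differ.
def make_backup_codes(n: int = 10, digits: int = 8) -> list:
    return [f"{secrets.randbelow(10 ** digits):0{digits}d}" for _ in range(n)]

def redeem(codes: list, attempt: str) -> bool:
    """Each code works exactly once: remove it from the list when used."""
    if attempt in codes:
        codes.remove(attempt)
        return True
    return False
```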
Another problem with 2FA systems that use SMS messages is that SMS messaging isn’t that secure. It’s possible for a sophisticated attacker who has access to the phone network (such as an intelligence agency or an organized crime operation) to intercept and use the codes that are sent by SMS. There have also been cases where a less sophisticated attacker (such as an individual) has managed to forward calls or text messages intended for one number to his or her own, or accessed telephone company services that show text messages sent to a phone number without needing to have the phone.
If you’re worried about this level of attack, turn off SMS authentication, and only use authenticator apps like Google Authenticator or Authy. Unfortunately this option is not available with every 2FA-enabled service.
In addition, using 2FA means you may be handing over more information to a service than you are comfortable with. Suppose you use Twitter, and you signed up using a pseudonym. Even if you carefully avoid giving Twitter your identifying information, and even if you access the service only over Tor or a VPN, if you enable SMS 2FA, Twitter will necessarily have a record of your mobile number. That means that, if compelled by a court, Twitter can link your account to you via your phone number. This may not be a problem for you, especially if you already use your legal name on a given service, but if maintaining your anonymity is important, think twice about using SMS 2FA.
Finally, research has shown that some users will choose weaker passwords after enabling 2FA, feeling that the second factor is keeping them secure. Make sure to still choose a strong password even after enabling 2FA. See our creating strong passwords guide for tips.
How do I enable 2FA?
This differs from platform to platform, as does the terminology used. An extensive list of sites supporting 2FA is available at https://twofactorauth.org/. For the most common services, you can refer to our 12 Days of 2FA post, which shows how to enable 2FA on Amazon, Bank of America, Dropbox, Facebook, Gmail and Google, LinkedIn, Outlook.com and Microsoft, PayPal, Slack, Twitter, and Yahoo Mail.
If you want better protection against stolen passwords, read through this list and turn on 2FA for all of the important web accounts you rely on. | <urn:uuid:f33dd008-7146-42d5-b4a8-d3a8c07cf3af> | CC-MAIN-2022-40 | https://goaskrose.com/guide-2fa/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00543.warc.gz | en | 0.935786 | 1,148 | 3.21875 | 3 |
SysAdmin 6: Info, Help & Man command in Linux
Most tools and utilities in Linux come with their own documentation, which can be displayed using the info command or the man command. The help command in Linux also provides short help text about the related command (for shell builtins).

In this video, we will cover how you can read and use system documentation, including man and info, in Linux.
Linux tutorial for Beginners – Understand and use essential tools
- Create and edit text files
- Create, delete, copy, and move files and directories
- Read and use system documentation including man, info, and files in /usr/share/doc
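Assuming a typical GNU/Linux system with bash, the documentation commands can be tried like this (man and info are shown commented out because they open an interactive pager):

```shell
# Full manual page for ls (opens a pager; press q to quit):
#   man ls
# Hypertext "info" document, often more detailed for GNU tools:
#   info ls
# Quick usage summary printed straight to the terminal:
ls --help | head -n 3
# Short help text for a shell builtin (bash's `help` covers builtins like cd):
help cd 2>/dev/null | head -n 3
```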
MITMA stands for Man in The Middle Attack, the term used to describe one of the oldest but still exceptionally popular forms of attack. In this attack, a hacker intercepts an unsecured wireless connection and places themselves between two computers/devices that are in communication with one another. Once the hacker is on the network, the attacker then impersonates both sides to steal information and sensitive data, hijack emails and SSL sessions, and eavesdrop on conversations.
In this type of attack, a man in the middle can not only collect and snoop on private conversations and communications and steal data, including credentials and passwords, but also modify the traffic sent between the two parties to sabotage information.
‘Man-in-the-Middle attacks are incredibly common primarily because it’s an easy attack vector. According to IBM’s X-Force Threat Intelligence Index, 35% of exploitation activity involves Man-in-the-Middle Attacks. One of the prime reasons that MITM have become such a common attack vector is that Wi-Fi is a vulnerable technology.’- IBM
If the hacker has done their research, often through social engineering to learn about the two devices/targets involved, it can be exceedingly hard to detect such an attack. Once the attack is in place, links that look genuine can be sent to reroute victims to malicious sites instead. Once on such sites, the victim can fall prey to phishing campaigns, which can lead to larger attacks, including ransomware attacks.
“MitM attacks are attacks where the attacker is actually sitting between the victim and a legitimate host the victim is trying to connect to. So, they’re either passively listening in on the connection or they’re actually intercepting the connection, terminating it and setting up a new connection to the destination” – Johannes Ullrich, dean of research at SANS Technology Institute.
MITMA Attacks in the Finance Sector
MITMA can affect both businesses and personal devices alike. So, say that you wanted to make a bank transfer from your phone. If a man in the middle attack takes place, then the attacker would be able to see the transfer being made and, in response, change the destination account number as well as the amount transferred. Not only could numbers be altered, but the bad actor could also harvest the data, including login credentials, and use or sell those, and if anything was being downloaded or updated then a compromised version filled with malware could also be injected into the system.
There have been many MITMAs reported within the financial sector, particularly within banking apps. According to TrendMicro, ‘The security flaw lies in the verification process of certificates used by the applications’ and flaws have been seen in apps ‘including those from Bank of America, Meezan Bank, Smile Bank, and HSBC, and VPN app TunnelBear’.
The Equifax data breach was a notable attack involving a man in the middle, in which communication was intercepted by a malicious third party, who presented fake windows prompting Equifax users to enter their data, including personal login details and credentials.
But how did the attackers get into the network? Well, Equifax used tools purchased via third parties that needed to be renewed on an annual basis. They had, however, failed to renew the certificate that would help search for data exfiltration in their network. In fact, they had forgotten to renew this for the greater part of ten months. This meant that for ten months the traffic that had been encrypted was not being inspected, giving attackers ample time to insert themselves, steal data, commit fraud and obfuscate their activity. It was only once the company realised their mistake with the renewal that the breach became evident.
‘Considering the Equifax attack scenario, it could have been easily avoided if there was full digital footprint monitoring including their third-party and supplier using external web application scanning/patching and third-party risk monitoring. Such solutions can help in pre-empting future breaches by detecting such easily forgotten enablers of compromise. Post a successful MITMA, having a continuous dark-web monitoring capability is extremely important to limit its implications by detecting sensitive information that could have successfully been leaked and made it into any of the dark web forums. MITMA’s are here to stay simply due to their effectiveness and ease of deployment, especially with the recent cloud adoption and continuous digitisation having tens of millions of connections going to the cloud and IoT, accompanied with the lack of having adequate security controls in place for mitigating different forms of MITMA.’ – Islam Rashad, Cyber Security Solutions Presales Consultant, SecurityHQ
5 Recommendations to Reduce MITMAs

In the case of MITM attacks, a focus on prevention is often a better strategy than trying to clean up after an attack. MITMAs are hard to detect and even harder to remediate. Follow these 5 steps to increase your security posture.
- Ensure that encryption protocols are used within business accounts to protect the privacy of all devices, prevent attacks such as ransomware and identify theft, and to know that if devices are lost or stolen then the data is encrypted to reduce infiltration.
- Do not use public or open wi-fi, only use secure networks. If you are on an open wi-fi network, it is very easy for bad actors to enter the same network and view your activities. What adds an additional level of risk is if you are accessing work/business documents and emails via an unsecure network. This makes it easier for MITMA’s to take place and hijack communications.
- Use VPNs to secure connections. Once on a secure Wi-Fi connection, secure your network by using a VPN which will hide your private information, give you the ability to use Geo-blocking services, prevent data theft and bandwidth throttling.
- Use multi-factor authentication for all accounts, both work and private. Multi-factor authentication must combine elements known to the user, so for instance the user will know their username/email and password, with a device of the user, so a text can be sent to the person's mobile device. An additional level of protection can then be included using biometrics such as face recognition, fingerprint, or retina scans. Sometimes, if an account is especially valuable, access can only be granted at a specific place or time. But for everyday accounts, multi-factor authentication combining two or more of the above factors will suffice.
- Ensure that you have the right network security protocols in place via your MSSP, and that you are using the right Endpoint Protection and MDR (Managed Detection and Response) to detect sneaky and often devastating attacks.
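To make the encryption recommendations above concrete, here is a minimal Python sketch (an illustration, not a complete defense) of enforcing TLS certificate verification so that a forged certificate presented by a man in the middle aborts the handshake; the host name example.com is a placeholder:

```python
import ssl

# Enforce certificate verification: an interceptor presenting a forged or
# self-signed certificate will fail the TLS handshake. These are the
# defaults of create_default_context(), made explicit here.
context = ssl.create_default_context()   # loads the system trust store
context.check_hostname = True            # name on the certificate must match
context.verify_mode = ssl.CERT_REQUIRED  # unverifiable certificates are fatal

# Hypothetical usage (network access required):
#   import socket
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           print(tls.version())
```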
Most importantly, if you suspect any rogue activity, report it to your security team, and if you receive suspicious communication from otherwise genuine sources, reach out to them to make sure they have sent what you have received, and vice versa.
SecurityHQ prides itself on its global reputation as an advanced Managed Security Service Provider, delivering superior engineering-led solutions to clients around the world. By combining dedicated security experts, cutting-edge technology and processes, clients receive an enterprise grade experience that ensures that all IT virtual assets, cloud, and traditional infrastructures, are protected.
Eleanor Barlow (Content Manager, SecurityHQ)
AI, virtual reality and games build a power-hungry metaverse
While the metaverse continues to generate headlines and business plans around the world, academics and industry analysts say more should be done to consider the real energy and power requirements demanded by the endeavour.
And while the world waits for companies to catch up and take concerted action at the global scale, individual consumers should consider changing their relationship with technology in order to help build a more sustainable virtual reality for the masses.
Neural networks are going to be required at enormous scale in order to deliver on the promises being made by metaverse developers. These networks have enjoyed a period of significant advances, with progress in hardware and methodologies leading to a new generation that has been trained using huge datasets.
These networks have seen significant advances in terms of accuracy, but improvements like this depend on the availability of huge computational resources that require enormous energy consumption, explain Emma Strubell, Ananya Ganesh and Andrew McCallum, of the College of Information and Computer Sciences, University of Massachusetts Amherst, in their paper Energy and Policy Considerations for Deep Learning in NLP.
As a result, these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware.
One AI model could generate five cars' worth of greenhouse gases
“Training just one AI model could generate 626,000 pounds of carbon dioxide, which is more than five times the amount of greenhouse gases emitted by a car in its lifetime,” says Lizzy Rosenberg, Associate Editor of Distractify in her article published in association with the World Economic Forum. “Cloud gaming, which is necessary for VR, could also raise carbon emissions by 2030. And, it will increase the necessity for high-res images, which only increases the need for more energy.”
As the metaverse will encourage users to buy new VR technology and other hardware, this could also lead to an increase in “e-waste”, which – if not recycled properly – can pollute soil, groundwater and landfills.
In their paper, Strubell, Ganesh and McCallum recommended a concerted effort by industry and academia to promote research into more efficient algorithms and hardware. There is already a precedent for NLP software packages prioritising efficient models, they say, and software developers could provide easy-to-use APIs to reduce computational requirements.
Writing for the World Economic Forum, Rosenberg says the onus is currently on major corporations to find eco-friendly means of building virtual realities, but consumers can hold themselves accountable by making a commitment to recycle e-waste and even shop for second-hand electronics.
“Also try to stream in SD [standard definition] — not HD [high definition] — when using your phone to interact with the metaverse, as HD has a higher environmental impact and releases more carbon emissions,” says Rosenberg. “Large corporations should be held accountable for this type of impact, but playing your part is important, too.”
As with most changes in life, there will be positive and negative impacts on society as artificial intelligence continues to transform the world we live in. How that will balance out is anyone’s guess and up for much debate and for many people to contemplate. As an optimist at heart, I believe the changes will mostly be good but could be challenging for some. Here are some of the challenges that might be faced (and we should be thinking about how to address them now) as well as several of the positive impacts artificial intelligence will have on society.
Challenges to be faced
Artificial intelligence will definitely cause our workforce to evolve. The alarmist headlines emphasise the loss of jobs to machines, but the real challenge is for humans to find their passion with new responsibilities that require their uniquely human abilities. According to PwC, 7 million existing jobs will be replaced by AI in the UK from 2017-2037, but 7.2 million jobs could be created. This uncertainty and the changes to how some will make a living could be challenging.
The transformative impact of artificial intelligence on our society will have far-reaching economic, legal, political and regulatory implications that we need to be discussing and preparing for. Determining who is at fault if an autonomous vehicle hurts a pedestrian or how to manage a global autonomous arms race are just a couple of examples of the challenges to be faced.
Will machines become super-intelligent and will humans eventually lose control? While there is debate around how likely this scenario will be, we do know that there are always unforeseen consequences when new technology is introduced. Those unintended outcomes of artificial intelligence will likely challenge us all.
Another issue is ensuring that AI doesn’t become so proficient at doing the job it was designed to do that it crosses over ethical or legal boundaries. While the original intent and goal of the AI is to benefit humanity, if it chooses to go about achieving the desired goal in a destructive (yet efficient way) it would negatively impact society. The AI algorithms must be built to align with the overarching goals of humans.
Artificial intelligence algorithms are powered by data. As more and more data is collected about every single minute of every person’s day, our privacy gets compromised. If businesses and governments decide to make decisions based on the intelligence they gather about you like China is doing with its social credit system, it could devolve into social oppression.
Positive Impacts of Artificial Intelligence on Society
Artificial intelligence can dramatically improve the efficiencies of our workplaces and can augment the work humans can do. When AI takes over repetitive or dangerous tasks, it frees up the human workforce to do work they are better equipped for—tasks that involve creativity and empathy, among others. If people are doing work that is more engaging for them, it could increase happiness and job satisfaction.
With better monitoring and diagnostic capabilities, artificial intelligence can dramatically influence healthcare. By improving the operations of healthcare facilities and medical organisations, AI can reduce operating costs and save money. One estimate from McKinsey predicts big data could save medicine and pharma up to $100B annually. The true impact will be in the care of patients. The potential for personalised treatment plans and drug protocols, as well as better provider access to information across medical facilities to inform patient care, will be life-changing.
Our society will gain countless hours of productivity with just the introduction of autonomous transportation and AI influencing our traffic congestion issues not to mention the other ways it will improve on-the-job productivity. Freed up from stressful commutes, humans will be able to spend their time in a variety of other ways.
The way we uncover criminal activity and solve crimes will be enhanced with artificial intelligence. Facial recognition technology is becoming just as common as fingerprints. The use of AI in the justice system also presents many opportunities to figure out how to use the technology effectively without infringing on an individual's privacy.
Unless you choose to live remotely and never plan to interact with the modern world, your life will be significantly impacted by artificial intelligence. While there will be many learning experiences and challenges to be faced as the technology rolls out into new applications, the expectation will be that artificial intelligence will generally have a more positive than negative impact on society. | <urn:uuid:5e91a93c-b7e2-4078-95ea-85f22c0741c8> | CC-MAIN-2022-40 | https://bernardmarr.com/what-is-the-impact-of-artificial-intelligence-ai-on-society/?paged1119=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00743.warc.gz | en | 0.946824 | 861 | 2.9375 | 3 |
What is Green Technology?
Green technology is all about using science to find a way to lower carbon footprints, conserve resources and minimize overall waste. It is one of the most important technologies that will allow us to have a sustainable future.
Some of the main goals of green technology are:
- Lower carbon footprint
- Energy efficiency
- Renewable energy sources
- Eco-friendly materials
- Smart power management
- Efficient manufacturing processes
- Recycling old technology
We will dive into each:
Lower carbon footprint
The production of goods and services has been linked to high levels of carbon dioxide emissions since the Industrial Revolution. The combustion process produces CO2, which is a greenhouse gas that contributes to climate change and global warming.
Green technologies aim to reduce these emissions by using alternative manufacturing processes that produce fewer emissions.
Energy efficiency
Cutting down on energy consumption is another way companies can implement green technologies. Energy-efficient building materials such as LED lighting or insulation made from recycled materials can help lower your electricity bill while reducing your carbon footprint at the same time.
Renewable energy sources
Structures and machinery can be designed with renewable energy in mind. They can incorporate solar panels, wind turbines, and other energy-generating equipment, which will enable them to work autonomously. This way, buildings and vehicles can become truly sustainable.
Eco-friendly materials
A lot of materials used for production are not eco-friendly. They might be toxic or non-degradable. Therefore, it is essential to design products that can be produced from natural materials that are safe for the environment.
Smart power management
To avoid wasting energy, devices should automatically turn on and off when they need to conserve power. For example, a smart air conditioning system should only run when the room temperature goes above a certain level or when it detects motion in the room within a specific time interval.
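The smart air-conditioning rule described above (run only when the room is too warm, or when motion was detected within a recent window) boils down to a small decision function. The threshold and ten-minute window below are illustrative values, not from the article:

```python
def should_run_ac(temp_c: float, threshold_c: float,
                  seconds_since_motion: float,
                  motion_window_s: float = 600) -> bool:
    """Run the AC only if the room is too warm or someone moved recently."""
    too_warm = temp_c > threshold_c
    recent_motion = seconds_since_motion <= motion_window_s
    return too_warm or recent_motion
```

In a real controller the same predicate would simply be evaluated on each sensor reading, keeping the unit off the rest of the time.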
Efficient manufacturing processes
Organizations and individual designers should use technology that reduces their carbon footprint during production. For example, 3D printers can minimize waste by creating designs according to precise dimensions. Thus, manufacturers do not have to throw away excess material after making a product.
Check out the recording of our webinar "The Shift to Earth 4.0", where our panel of experts shared their knowledge and perspectives on the state of the environment and how Industry 4.0 will impact the earth.
What are Examples of Green Technology?
Recycling is a green technology process that is necessary for the survival of our planet. Recycling is the process of converting waste into reusable objects to prevent waste of potentially useful materials, reduce the consumption of fresh raw materials, reduce energy usage, reduce air pollution (from incineration) and water pollution (from landfilling) by reducing the need for "conventional" waste disposal and lower greenhouse gas emissions as compared to plastic production.
Recycling is one of the most important actions currently available to reduce these impacts and represents one of the most dynamic areas in the plastics industry today.
Solar energy is radiant light and heat from the sun that is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, solar thermal energy, solar architecture, molten salt power plants, and artificial photosynthesis.
It is an important source of renewable energy. Its technologies are broadly characterized as either passive or active solar, depending on how they capture and distribute solar energy or convert it into solar power. Active solar techniques include photovoltaic systems, concentrated solar power, and solar water heating to harness the energy.
Wind power is energy that is harvested by converting wind into electricity. The wind turns the turbine's blades and spins an internal shaft connected to a generator. There are many different types of wind turbines used to harness this energy according to the area they will be placed in. The electricity can then be stored or sent directly to an electrical grid.
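As a back-of-the-envelope illustration of how rotor size and wind speed translate into electricity, the standard estimate is P = ½ · ρ · A · v³ · Cp, where A is the swept rotor area. The 0.40 power coefficient and sea-level air density below are typical assumed values, not figures from this article:

```python
import math

def wind_power_watts(rotor_diameter_m: float, wind_speed_ms: float,
                     power_coefficient: float = 0.40,
                     air_density: float = 1.225) -> float:
    """Estimate turbine output: half the air density times the swept
    area times wind speed cubed, scaled by the power coefficient."""
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * power_coefficient
```

Because output grows with the cube of wind speed, doubling the wind yields roughly eight times the power, which is why siting matters so much.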
Hydroelectricity is a method of generating electrical power using the gravitational force of falling water. In this process, flowing water rotates turbines, which activate generators that produce electricity.
Hydroelectric power plants generally have one or more dams built across a river; it generates electricity when water from the river flows through the dam's turbines. This process does not pollute the air and produces no greenhouse gases. It is considered one of the most efficient renewable energy sources in cost per unit of power generated.
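A similar rough estimate applies to the dams described above: generated power is approximately P = ρ · g · Q · h · η, where Q is the flow through the turbines and h is the head of water. The 90% efficiency below is an assumed typical value:

```python
def hydro_power_watts(flow_m3_s: float, head_m: float,
                      efficiency: float = 0.90,
                      water_density: float = 1000.0,
                      g: float = 9.81) -> float:
    """Estimate hydroelectric output from flow rate and head height."""
    return water_density * g * flow_m3_s * head_m * efficiency
```

A modest 10 m³/s flow over a 50 m head already works out to roughly 4.4 MW, which hints at why hydro ranks among the cheapest renewable sources per unit of power.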
Geothermal energy
This is the most common form of green technology nowadays. It makes use of the earth's heat to produce power. It works by using the heat and steam from the earth's core to power an electrical generator that produces electricity.
Smart buildings
These are high-tech buildings that adapt to their environment and occupants. They use efficient sensors, actuators, and controllers that allow a building to self-regulate its temperature and lighting for maximum efficiency. These buildings cut down on wasted electricity, water, and gas.
Vertical Gardens or Farms
These are also known as living walls or green walls and are structures with plants growing vertically from them. They can be attached to existing structures or be freestanding structures independently.
Carbon filters are devices that remove carbon dioxide from the air. These filters work well in confined areas like vehicles since they can remove carbon dioxide emissions produced by cars and trucks, thus making it possible for people to breathe fresh air even while inside their vehicles.
The Benefits of Green Technology for Business
We can't afford to ignore the power of green technology in our daily lives. It has become a crucial component of not only our health, but also our future. Businesses are no different than the rest of us; they should be focusing on green technology to ensure their companies are prepared for natural disasters and other business challenges.
It's no secret that lower energy costs can provide significant financial benefits for businesses, but there are many other benefits to consider when implementing green technology at work. Consider these:
Reduced water usage
Water is a valuable commodity, and companies should use it wisely. Companies can reduce their water usage by taking basic measures such as adjusting their HVAC systems to keep humidity levels low.
Green-tech improves the efficiency of your business, saving you money and time in the long run. Recognize the importance of energy-efficient technology, such as light bulbs, appliances, and cooling systems. You can also consider solar panels for added savings and less reliance on fossil fuels.
Improved disaster readiness
Being green doesn't just mean going paperless; it also means keeping all your information backed up safely. This can help prevent disaster if a natural or man-made occurrence strikes your office. Have computerized records and stored data in a place where you can easily access it remotely when needed.
The federal government wants to encourage businesses to be greener because it's good for the environment. There are tax breaks available for those who update their facilities to include more energy-efficient features. These benefits and instructions can be found on the IRS website.
Brand love and more customers
People are more likely to buy from a company if they know it's environmentally friendly. Indeed, YouGov reports that consumers are more likely to buy from companies that promote themselves as environmentally conscious. The benefits of green technology for business include building a positive brand image to impress customers and staff alike.
Businesses have a responsibility to consider their impact on the environment. Green technology is simply one way your business can reduce its carbon footprint and make the world a better place for future generations.
Helps win contracts
The government has a goal to reduce greenhouse gas emissions by 100% by 2050. And some big businesses have pledged similar commitments, which means that green technology will be an important factor in winning contracts with both public and private sector organizations.
The Role of IoT
The Internet of Things (IoT) is a new reality, and it's changing more than just our devices. IoT is also expanding into other industries, including green technology. Here we look at how IoT changes the way we live and what we can do to make these changes more sustainable.
As the world becomes more aware of climate change, the need for businesses to cut carbon emissions has become a priority. This can be done by using smart technologies, such as sensors and internet-connected devices. Several startups have developed products that help businesses better manage their heating, ventilation, and air conditioning (HVAC) systems to reduce their energy consumption.
Consumers are also taking part in making their homes more environmentally friendly. Using smart technologies, homeowners can monitor and control their energy usage. This includes watering lawns, controlling temperature settings, and monitoring lighting usage.
Startups are developing products that make it easier for consumers to monitor their homes, especially when they're away from the premises.
Precision farming techniques are already being used to optimize irrigation levels and improve crop yields in agriculture. New technologies such as drones and aerial imaging are also helping farmers monitor their crops more effectively.
Smart waste management
Companies such as SmartBin rely on IoT to help with smart waste management. By equipping dumpsters with sensors, they can improve collection routes and save millions of dollars in fuel costs each year.
We live in a cynical world, where people lose faith in innovation and progress. However, the future of green technology will see a paradigm shift in how we power our homes, transport our people and provide our industries with their energy needs. | <urn:uuid:15abfeb7-1bca-46ca-b4d6-db0263ba8424> | CC-MAIN-2022-40 | https://iotmktg.com/what-you-should-know-about-green-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00743.warc.gz | en | 0.942227 | 1,959 | 3.53125 | 4 |
VRRP (Virtual Router Redundancy Protocol) is a standards-based First Hop Redundancy Protocol. It eliminates a single point of failure by using multiple redundant devices. Traffic flows through the selected Master device. You can reach the related RFC here.
The configuration of Virtual Router Redundancy Protocol is similar to that of another First Hop Redundancy Protocol, the Cisco-proprietary HSRP. There are only small differences in configuration and operation, which we will discuss below.
The devices that are configured with VRRP have two roles. These roles are:
The active or selected device that the traffic flows through has the Master role.
The remaining devices have the Backup role.
There is one Master, but there can be one or more Backups.
Here, the principle is similar to HSRP. If the Master goes down, then one of the Backups takes the Master role.
In Virtual Router Redundancy Protocol, the preempt feature is enabled by default. What is preempt? Preempt is the process of “taking the Master role back”. If the failed Master comes back, it can take its role back again.
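The role transitions described in this section (highest priority wins, a Backup takes over on failure, and preempt lets a returning Master reclaim the role) can be sketched as a small simulation. This illustrates the election logic only; real VRRP additionally breaks priority ties using the highest IP address:

```python
def next_master(current, routers, preempt=True):
    """Pick the Master. routers maps a router name to (priority, is_up).
    Without preempt, a live incumbent keeps the role even if a
    higher-priority router comes back up."""
    up = {name: prio for name, (prio, alive) in routers.items() if alive}
    if not up:
        return None                      # no device left to take the role
    if current in up and not preempt:
        return current                   # incumbent keeps the Master role
    return max(up, key=lambda name: up[name])
```

For example, if R1 (priority 110) fails, R2 (priority 100) becomes Master; when R1 returns, it reclaims the role precisely because preempt is on by default in VRRP.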
Until a few years ago, deep learning was a tool that could hardly be used in real projects and only with a large budget to afford the cost of GPUs in the cloud. However, with the invention of TPU devices and the field of AI at the Edge, this completely changed and allows developers to build things using deep learning in a way that has never been done before.
Copyright: blog.idatha.com – “How Deep Learning can help save human lives at a container terminal”
In this post, we will explore the use of Coral devices, in conjunction with deep learning models to improve physical safety at one of the largest container terminals in Uruguay.
If any of this sounds interesting to you, sit back, and let’s start the journey!
The Port of Montevideo is located in the capital city of Montevideo, on the banks of the “Río de la Plata” river. Due to its strategic location between the Atlantic Ocean and the “Uruguay” river, it is considered one of the main routes of cargo mobilization for Uruguay 🧉 and MERCOSUR 🌎. Over the past decades, it has established itself as a multipurpose port handling: containers, bulk, fishing boats, cruises, passenger transport, cars, and general cargo.
MERCOSUR or officially the Southern Common Market is a commercial and political bloc established in 1991 by several South American countries.
Moreover, only two companies concentrate all-cargo operations in this port: the company of Belgian origin Katoen Natie and the Chilean and Canadian capital company Montecon. Both companies have a great commitment to innovation and the adoption of cutting-edge technology. This philosophy was precisely what led one of these companies to want to incorporate AI into their processes and that led them to us.
The client needed a software product that would monitor security cameras in real-time, 24 hours a day. The objective was to detect and prevent potential accidents as well as alert them to the designated people. This AI supervisor would save lives by preventing workplace accidents while saving the company money in potential lawsuits.
In other words, this means real-time detection of people and vehicles doing things that can cause an accident. Until now, this was done by humans who, observing the images on the screens, manually detected these situations. But humans are not made to keep their attention on a screen for long periods; they get distracted, make mistakes and fall asleep. That is why AI is the perfect candidate for this job: it can keep its attention 24 hours a day, it never gets bored and it never stops working. […]
Read more: www.blog.idatha.com | <urn:uuid:ce3cff5c-e2cf-438c-8023-2c80b85b7add> | CC-MAIN-2022-40 | https://swisscognitive.ch/2022/09/05/how-deep-learning-can-help-save-human-lives-at-a-container-terminal/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00743.warc.gz | en | 0.960378 | 555 | 2.65625 | 3 |
When Jean-Jacques Rousseau wrote The Social Contract in 1762, he argued that only humans possess sovereign power, and that they alone may choose which freedoms they surrender in exchange for the benefits and stability of government. Now, for the first time in more than a century, we are debating amending or rebalancing aspects of the social contract in order to deal with a deadly pandemic.
One of the key challenges associated with containing the spread of the coronavirus that causes COVID-19 is contact tracing: identifying other individuals and groups with whom a COVID-19-positive individual may have been in contact. Under normal circumstances, the mere idea of using any form of mobile phone data to track users en masse for a purpose they never consented to would be anathema to the spirit of regulations like GDPR and CCPA. But, of course, these are not normal circumstances.
COVID-19 contact tracing is different in that complete anonymization is not possible when identifying COVID-19-positive individuals. To protect others, health systems already track COVID-19 cases and do everything in their power to perform contact tracing. The question is: How can technology help in a way that doesn't fundamentally violate our expectations around privacy?
Read the full article published May 12, 2020 here: https://www.darkreading.com/endpoint/coronavirus-data-privacy-and-the-new-online-social-contract/a/d-id/1337769 by Dark Reading. | <urn:uuid:1bf2fbe4-b5e8-4536-a7f7-c94aa74b40a9> | CC-MAIN-2022-40 | https://www.f5.com/labs/articles/bylines/coronavirus-data-privacy-the-new-online-social-contract | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00743.warc.gz | en | 0.912191 | 314 | 3.078125 | 3 |
A computer science engineer at Michigan State University has a word of advice for the millions of bitcoin owners who use smartphone apps to manage their cryptocurrency: don’t. Or at least, be careful. Researchers are developing a mobile app to act as a safeguard for popular but vulnerable “wallet” applications used to manage cryptocurrency.
“More and more people are using bitcoin wallet apps on their smartphones,” said Guan-Hua Tu, an assistant professor in MSU‘s College of Engineering who works in the Department of Computer Science and Engineering. “But these applications have vulnerabilities.”
Smartphone wallet apps make it easy to buy and trade cryptocurrency, a relatively new digital currency that can be challenging to understand in just about every way except one: it’s very clearly valuable. Bitcoin was the most valuable cryptocurrency at the time of writing, with one bitcoin being worth more than $55,000.
But Tu and his team are uncovering vulnerabilities that can put a user’s money and personal information at risk. The good news is that the team is also helping users better protect themselves by raising awareness about these security issues and developing an app that addresses those vulnerabilities.
The Bitcoin Security Rectifier
The researchers showcased the Bitcoin Security Rectifier. In terms of raising awareness, Tu wants to help wallet users understand that these apps can leave them vulnerable by violating one of Bitcoin’s central principles, something called decentralization.
Bitcoin is a currency that’s not tied to any central bank or government. There’s also no central computer server that stores all the information about bitcoin accounts, such as who owns how much.
“There are some apps that violate this decentralized principle,” Tu said. “The apps are developed by third parties. And, they can let their wallet app connect with their proprietary server that then connects to Bitcoin.”
How Bitcoin Security Rectifier works
In essence, these wallet apps can introduce a middleman that Bitcoin omits by design. Users often don't know this and app developers aren't necessarily forthcoming with the information.
“More than 90% of users are unaware of whether their wallet is violating this decentralized design principle based on the results of a user study,” Tu said. And if an app violates this principle, it can be a huge security risk for the user. For example, it can open the door for an unscrupulous app developer to simply take a user’s bitcoin.
Tu said that the best way users can safeguard themselves is to not use a smartphone wallet app developed by untrusted developers. He instead encourages users to manage their bitcoin using a computer — not a smartphone — and resources found on Bitcoin’s official website, bitcoin.org. For example, the site can help users make informed decisions about wallet apps.
But even wallets developed by reputable sources may not be completely safe, which is where the new app comes in.
Most smartphone programs are written in a programming language called Java. Bitcoin wallet apps make use of a Java code library known as bitcoinj, pronounced “bitcoin jay.” The library itself has vulnerabilities that cybercriminals could attack, as the team demonstrated in its recent paper.
These attacks can have a variety of consequences, including compromising a user’s personal information. For example, they can help an attacker deduce all the Bitcoin addresses that wallet users have used to send or receive bitcoin. Attacks can also send loads of unwanted data to a user, draining batteries and potentially resulting in hefty phone bills.
The app runs at the same time on the same phone as a wallet
Tu’s app is designed to run at the same time on the same phone as a wallet, where it monitors for signs of such intrusions. The app alerts users when an attack is happening and provides remedies based on the type of attack, Tu said. For example, the app can add “noise” to outgoing Bitcoin messages to prevent a thief from getting accurate information.
“The goal is that you’ll be able to download our tool and be free from these attacks,” Tu said.
The team is currently developing the app for Android phones and plans to have it available for download in the Google Play app store in the coming months. There’s currently no timetable for an iPhone app because of the additional challenges and restrictions posed by iOS, Tu said.
In the meantime, though, Tu emphasized that the best way users can protect themselves from the insecurities of a smartphone bitcoin wallet is simply by not using one, unless the developer is trusted.
“The main thing that I want to share is that if you do not know your smartphone wallet applications well, it is better not to use them since any developer — malicious or benign — can upload their wallet apps to Google Play or Apple App Store,” he said. | <urn:uuid:f0d220e8-538d-491d-aa91-2e26672bcd57> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2021/05/10/bitcoin-security-rectifier/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00743.warc.gz | en | 0.938476 | 1,009 | 2.828125 | 3 |
Knowledge graphs are becoming an increasingly important tool that organizations are using to manage the vast amounts of data they collect, store, and analyze. An enterprise knowledge graph’s representation of an organization’s content and data creates a model that integrates structured and unstructured data, and leverages semantic and intelligent qualities to make them “smart.”
Data Summit Connect 2020 featured a full day of pre-conference workshops, followed by a free 3-day series of data-focused webinars. As part of the virtual conference hosted by DBTA and Big Data Quarterly, Joe Hilger and Sara Nash presented a workshop, titled "Introduction to Knowledge Graphs."
Hilger, who is COO and co-founder of Enterprise Knowledge, LLC, and Nash, who is a technical analyst with the consultancy, covered what a knowledge graph is, how it is implemented, and how it can be used to increase the value of data.
The wide-ranging and interactive presentation covered how to build a business case for knowledge graphs and enterprise AI; the foundations and technical infrastructure to make knowledge graphs a reality—including commonly used terms and concepts such as triples, RDFs, and virtual mapping; practical use cases for knowledge graphs; and where to begin in knowledge graph development—developing an ontology.
What is a Knowledge Graph?
A knowledge graph is a specialized graph or network of the things we want to describe and how they are related, said Nash. It is a semantic model since we want to capture and generate meaning with the model. According to Nash, a simple way to think of a knowledge graph is: ontology + data/content = knowledge graph. "That, I think, is a really helpful way to understand what a knowledge graph is."
Knowledge graphs are built on graph databases, which are very good at modeling relationships, said Hilger, noting that, ironically, "relational" databases are not as effective at modeling relationships.
Where Knowledge Graphs Excel
"What knowledge graphs are best at is aggregating multiple different types of information—including datasets—categorizing them, identifying relationships, and then integrating them together, but not necessarily moving the information," Hilger explained. "This is something that is important and powerful: We are not moving information out of its core, original dataset. We are just describing how it comes together. So you can take information from a whole bunch of systems that you already have."
The way to approach it, said Hilger, is to figure out what your data sources are and define how information is categorized for efficient and effective reporting, including synonyms for terms. The power of knowledge graphs is that, in the long term, when people start to query the information, they can ask for it in a way that aligns with how they think about and perceive those information assets.
The knowledge graph and ontology maps information in a way that is much more aligned with the way people think and ask questions, he said. In this way, people can take a complex data lake or complex information set and then start to model it in a way that is much more aligned with the way people are going to ask questions of the dataset. "That is what we are doing: We are taking data from multiple sources, categorizing it in a way that makes sense, organizing or modeling it in a way that aligns with the way the people think about the business or the organization, and then storing it in a way that will point to the original sources—but have it organized in a way that makes sense. This is what we are talking about when we are pulling together a knowledge graph and this is why you hear all the talk about it these days." While some data can be pulled in directly to the knowledge graph, other times virtual mapping may be used, leaving the original set in place because, particularly when there are large amounts of data, pulling it in and moving it does not make sense, but mapping and organizing it in a way that allows you to pull from that set dynamically is extremely effective, said Hilger.
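The "ontology + data = knowledge graph" idea can be illustrated with plain (subject, predicate, object) triples. The sketch below is a minimal, illustrative model in pure Python; all the names are invented for the example, and a real system would use an RDF triple store queried with SPARQL:

```python
# A toy knowledge graph: an ontology plus instance data, stored as triples.
triples = {
    # ontology: classes and relationship definitions
    ("Engineer", "subclassOf", "Person"),
    ("worksOn",  "domain",     "Person"),
    # data/content: instances that could come from separate source systems
    ("alice", "isA",     "Engineer"),
    ("alice", "worksOn", "projectX"),
    ("bob",   "isA",     "Engineer"),
    ("bob",   "worksOn", "projectX"),
}

def query(s=None, p=None, o=None):
    """Match triples, treating None as a wildcard (a tiny SPARQL stand-in)."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# Relationship discovery: who shares a project with alice?
shared = {s for (s, _, o) in query(p="worksOn", o="projectX") if s != "alice"}
print(shared)  # {'bob'}
```

Note that the data stays as simple assertions about things and their relationships; the query traverses those relationships rather than joining tables, which is the property the use cases below rely on.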
Knowledge Graph Use Cases
Hilger and Nash presented four case studies in which the use of knowledge graphs helped prominent organizations they worked with achieve their goals.
Use case #1: Recommendation Engine—In the first case, a global bank that focuses on providing development financing for projects in Latin America and the Caribbean needed a better way to disseminate information and expertise to its staff so it could work more efficiently without knowledge loss or duplication of work. Using knowledge graphs based on a linked data strategy enabled the bank to connect all of its knowledge assets to increase the relevancy and personalization of search, allowed employees to discover content across unstructured data sources, and further facilitated connections between people who share similar interests, expertise, or locations.
Use case #2: Natural Language Querying on Structured Data—In the second example that Hilger and Nash highlighted, a large supply chain company needed to provide its business users with a way to obtain quick answers based on very large and varied datasets that were stored in a large RDBMS data warehouse with virtually no context available. "If you put all your data in a data lake without a strong metadata strategy, it can be very difficult to get data out," said Nash. The company implemented a knowledge graph, incorporating natural language querying on structured data using SPARQL to allow non-technical users to uncover answers to critical business questions.
Use case #3: Relationship Discovery through Unstructured Data—In the third case study that Hilger and Nash presented, a federally funded research and development center had an extensive project library with technical documents, certifications, and reports related to engineering projects but the library did not offer much metadata and the information was difficult to search. Using a knowledge graph that connected documents and individuals and a semantic search platform, the center is now able to browse documents by person, project, and topic, and analyze relationships between people and projects directly.
Use case #4: Data Management—Finally, when data scientists and economists at a federal agency were having difficulty connecting siloed data sources to access, interpret, and track data to provide context, the use of a knowledge graph and advanced semantic metadata modeling allowed them to access data in a way that is more intuitive, according to Hilger and Nash. Data scientists and economists can now access the agency's data resources through a single tool that makes data stored in multiple locations available without moving or copying data, and they spend less time tracking or processing data for non-technical users who can now directly access and explore the data for decision making.
For more information about Enterprise Knowledge, LLC, go to https://enterprise-knowledge.com.
Webcast replays of Data Summit Connect presentations are available on the DBTA website at www.dbta.com/DBTA-Downloads/WhitePapers.
Awareness is the key to making everything better in our lives. While everyone knows how technology and connected devices play a significant role in everyday routines, only a few are aware of the security implications of that technology when it is not appropriately managed. National Cybersecurity Awareness Month sheds light on various security vulnerabilities and offers actionable guidance to users and organizations defending against evolving cyberthreats.
Initiated by the National Cyber Security Alliance (NCSA) and the U.S. Department of Homeland Security (DHS) in October 2004, Cybersecurity Awareness Month continues to encourage users and organizations to own their role in protecting their part of cyberspace and to take proactive steps to enhance cybersecurity.
In its 18th year, Cybersecurity Awareness Month continues using last year’s theme – Do Your Part. #BeCyberSmart.
The Cybersecurity and Infrastructure Security Agency (CISA) and NCSA stated that 2021 Cybersecurity Awareness Month will focus on multiple areas, which include:
- Week of October 4: Be Cyber Smart.
- Week of October 11: Phight the Phish!
- Week of October 18: Explore. Experience. Share. – Cybersecurity Career Awareness Week
- Week of October 25: Cybersecurity First
The agencies also offered cybersecurity technical and non-technical resources to help users improve their cybersecurity posture and mitigate security risks. “Use the hashtag #BeCyberSmart before and during October to promote your involvement in raising cybersecurity awareness,” CISA said.
POTUS Proclamation on Cybersecurity Awareness Month
This year’s Cybersecurity Awareness Month has become crucial for the U.S. government as the country sustained a series of cyber and ransomware attacks affecting its critical infrastructures. POTUS Joe Biden asked people, businesses, and institutions in the U.S. to recognize the importance of cybersecurity and protect against cyberthreats in support of national security and resilience.
“Our Nation is under a constant and ever-increasing threat from malicious cyber actors. Ransomware attacks have disrupted hospitals, schools, police departments, fuel pipelines, food suppliers, and small businesses, delaying essential services and putting the lives and livelihoods of Americans at risk. During Cybersecurity Awareness Month, I ask everyone to Do Your Part. Be Cyber Smart. All Americans can help increase awareness on cybersecurity best practices to reduce cyber risks,” Biden said.
Snapshots are a well-known and mature technology used for data protection, historically found in storage arrays.
With the advent of virtualisation, the hypervisor now provides an alternative location from which to execute the snapshot process.
Having two places in which to protect data via snapshots raises an obvious question: what is the best location to perform a snapshot and what are the pros and cons of each?
A snapshot is a point-in-time copy of data that represents an image of a volume or a LUN (logical unit number) that can be used as a backup and for data recovery. There is continual debate within the industry as to whether a snapshot is a true backup, because an individual snapshot depends on the source volume from which it derives, and so does not protect against hardware failure.
However, a snapshot can be used to recover anything from individual files to an entire virtual machine or application server.
Snapshots work by manipulating the metadata that is used to map a logical LUN or volume to its physical location on disk or flash. A logical volume will typically be divided into blocks from 4KB upwards in size. The snapshot process copies these metadata pointers, allowing a snapshot to represent a point-in-time copy of the volume.
Snapshots fall into three types:
- Changed-block snapshots. These implementations fall into two categories: copy-on-write and redirect-on-write. A copy-on-write snapshot maintains the snapshot image by copying the original data to another location – typically a dedicated snapshot area – before it is overwritten. Volume updates are then made “in place”, updating the same physical disk location. Redirect-on-write snapshots direct updates to a block within a volume to unused space on disk. Updates are always written to free space.
- Clones. This implementation copies the entire volume to new physical space on disk. Although the snapshot is expensive in terms of the additional space required (and the overhead of moving the data), a clone does provide a degree of physical protection when copied to a separate set of physical media.
- CDP. Continuous data protection is a different approach to protecting data that tracks all updates to a volume. Theoretically, this means a volume can be reverted to any point in time, usually at the level of individual block updates. CDP systems can be expensive in terms of additional disk space, but do provide a high level of granularity on restores.
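The difference between the two changed-block techniques can be sketched with a toy volume whose metadata maps logical blocks to physical locations. This is purely illustrative – the class and method names are invented, not any vendor's implementation:

```python
# Toy block volume showing how metadata pointers preserve a point-in-time image.
class Volume:
    def __init__(self, nblocks):
        self.store = {}        # physical location -> data
        self.next_loc = 0
        self.map = {}          # logical block -> physical location
        for b in range(nblocks):
            self.map[b] = self._alloc(f"data{b}")
        self.snapshots = []    # each snapshot is a copy of the metadata map

    def _alloc(self, data):
        loc = self.next_loc
        self.next_loc += 1
        self.store[loc] = data
        return loc

    def snapshot(self):
        self.snapshots.append(dict(self.map))  # copy pointers only, not data
        return len(self.snapshots) - 1

    def write_redirect_on_write(self, block, data):
        # Updates always go to fresh space; snapshot maps keep pointing
        # at the untouched old location.
        self.map[block] = self._alloc(data)

    def write_copy_on_write(self, block, data):
        old_loc = self.map[block]
        holders = [s for s in self.snapshots if s.get(block) == old_loc]
        if holders:  # preserve the original data before it is overwritten
            preserved = self._alloc(self.store[old_loc])
            for s in holders:
                s[block] = preserved
        self.store[old_loc] = data  # the update itself happens "in place"

    def read(self, block, snap_id=None):
        mapping = self.map if snap_id is None else self.snapshots[snap_id]
        return self.store[mapping[block]]

vol = Volume(2)
s0 = vol.snapshot()
vol.write_redirect_on_write(0, "new0")
vol.write_copy_on_write(1, "new1")
print(vol.read(0), vol.read(1))          # new0 new1  (live volume)
print(vol.read(0, s0), vol.read(1, s0))  # data0 data1 (point-in-time image)
```

Either way, the snapshot continues to resolve to the original data even as the live volume changes – which is also why a lone snapshot is not a backup: both still depend on the same physical store.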
Snapshots in the hypervisor
Hypervisor-based snapshots provide a way to take an image copy of a virtual machine (VM), either to access and restore individual files, provide a rollback point to restore the VM, or to clone the VM to another virtual machine. VMs are simply files (VMDKs in the case of VMware vSphere and VHD files on Microsoft Hyper-V), which means creating and managing snapshots is a case of manipulating these image files.
Both vSphere and Hyper-V manage snapshots by using secondary files associated with the VMDK/VHD to store updates to a VM after a snapshot is taken. These updates accumulate until the snapshot is deleted, at which time the secondary files are integrated back into the original VMDK/VHD.
Snapshots in the storage array
As already mentioned, snapshots in the storage array are managed by manipulating metadata used to track the logical-to-physical relationship of LUNs/volumes to data on disk. When a snapshot copy is taken, the array replicates the metadata that maps the physical layout on disk/flash. At this point, one or more snapshots could reference the same physical data on disk.
As the source volume continues to be updated, changed blocks are either moved out or written to new free space, depending on the snapshot technique. When a snapshot is no longer required, the metadata is simply deleted and unique blocks “owned” by the snapshot are released.
Pros and cons
Array-based snapshots are typically very quick to take, as they are simply a copy of metadata, usually stored in memory, but there can be a small impact on I/O performance while the copy process executes. The number of supported snapshots varies by platform, with some suppliers providing support for thousands of snapshots per system. Most suppliers offer advanced scheduling to automate the snapshot process.
An array-based snapshot is a copy of the image of a running VM or application server at a specific point in time and, as a result, the snapshot will appear as a “crash copy” of that VM/application if it is fully restored and accessed. Remember also that snapshots on the array are based on a LUN or volume (which, in turn, will map to a datastore in the hypervisor).
This means that array-based snapshots may contain many VMs, making it difficult to build schedules around protecting individual virtual machines. This is expected to change with the introduction of VVOLs.
Hypervisor-based snapshots, on the other hand, operate at the VM level, allowing a snapshot policy to be applied to each VM individually. Also, where integration tools have been deployed to a VM, the snapshot process can be synchronised with quiescence or suspension of I/O at the VM/application level, to provide a more consistent image rather than a “crash copy”.
The disadvantage of using hypervisor-based snapshots is in the overhead of writing to separate VMDK files and integrating those updates back when the snapshot is deleted. This process can be time-consuming and have a direct impact on performance.
Choosing the right approach
Hypervisor-based snapshots are a good choice where application consistency is essential, and are the only choice if the underlying storage platform has no snapshot support. The hypervisor-based approach is more efficient where datastores are built from large LUNs because there is no additional data retained, as there is with array-based copies.
But array-based solutions have the performance edge and, as a result, one solution used by backup suppliers is to apply a combination of hypervisor- and array-based snapshots at the same time.
The process works by initiating a hypervisor snapshot to suspend I/O for consistency, followed by taking an array-based snapshot for flexibility/performance. The hypervisor snapshot can then be released almost immediately, resulting in very little data to reintegrate into the VMDK.
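The three-step workflow above can be sketched as follows. The `Hypervisor` and `Array` classes here are stand-ins with invented method names, not a real vSphere or storage-array API:

```python
class Hypervisor:
    def quiesce_snapshot(self, vm):
        print(f"hypervisor: quiesced snapshot of {vm}")
        return f"hv-snap-{vm}"

    def delete_snapshot(self, snap):
        print(f"hypervisor: reintegrated and deleted {snap}")

class Array:
    def snapshot_lun(self, lun):
        print(f"array: metadata-only snapshot of {lun}")
        return f"array-snap-{lun}"

def protect_vm(hv, array, vm, lun):
    hv_snap = hv.quiesce_snapshot(vm)     # 1. suspend I/O for app consistency
    array_snap = array.snapshot_lun(lun)  # 2. fast array copy of the datastore LUN
    hv.delete_snapshot(hv_snap)           # 3. release almost at once; little delta to merge
    return array_snap

print(protect_vm(Hypervisor(), Array(), "vm01", "lun7"))
```

Because the hypervisor snapshot exists only for the instant between steps 1 and 3, almost no delta data accumulates, avoiding the costly reintegration described earlier.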
This solution gives the best of both worlds – data integrity with the flexibility and performance of hardware-based protection.
Reducing Carbon Emissions through the Data Tape Ecosystem
If there was ever a time for industries and governments around the world to come together and finally take steps to mitigate climate change, now would seem to be it. The return of the United States to the Paris Climate Agreement and the recent U.S. – China talks on climate change are all positive signs when it comes to moving the needle forward on sustainability initiatives. While fighting COVID-19 took center stage in 2020 and early 2021, our future depends on what we do collectively to reduce our environmental impact now and in the immediate years ahead.
It’s Hard to Deny Global Warming and Climate Change
According to an article that appeared in the Wall Street Journal earlier this year, NASA has ranked 2020 as tied with 2016 for the warmest year since record-keeping began in 1880. In a separate assessment, NOAA (National Oceanic and Atmospheric Administration), which relies on slightly different temperature records and methods, calculated that the global average temperature last year was the second-highest to date – just 0.04 degrees Fahrenheit shy of tying the record set in 2016.
On top of the record number of hurricanes and the wildfires out west, the recent Texas deep freeze, which caused widespread power outages and other weather-related tragedies and calamities, seems to be just one more example of climate change. Weather patterns are becoming more unpredictable, which can result in extreme heat, cold and increased intensity of natural disasters.
It is widely acknowledged that global temperatures have been rising especially in the north polar region where we have seen a dramatic shrinking of the polar ice cap. When Arctic air warms, it sets off an atmospheric phenomenon that weakens the polar vortex (the normal jet stream of wind that keeps frigid air to the north) and allows cold air to fall…as far as Texas.
Data Center Energy Consumption and the Advantage of Modern Tape Technology
The key to mitigating the worst impacts of climate change is a reduction in the amount of greenhouse gases produced by humans. Producing energy is extremely resource-intensive, so reducing the amount of energy we consume in all aspects of our lives is of critical importance.
Data centers are significant consumers of energy accounting for as much as 2% of global demand and rising to 8% by some estimates. Data centers can do their part to reduce energy consumption in many ways by becoming more energy-efficient, including simply migrating the vast amounts of still valuable, but rarely accessed, “cold data”.
This means moving cold data from energy-intensive tiers of storage like constantly spinning HDDs to energy-efficient tape systems that consume zero energy unless actively writing or reading data tapes. Because data tape is inherently removable and portable, it can be easily exported from its automated library environment to a secure offsite location for long-term retention and protection against any disaster, or unauthorized network intrusion by hackers. This is commonly referred to as “air gap” protection against the constant threat of ransomware. Thanks to tape’s lowest cost per GB and lowest energy consumption, keeping a second or third copy of data offsite is not only a 3-2-1 data protection best practice but affordable as well.
When data tape libraries are used to keep data online in the role of an active archive, it has a tremendous energy consumption advantage over equivalent amounts of constantly spinning hard disk drives. In a recent white paper published by Brad John’s Consulting, entitled “Reducing Data Center Energy Consumption and Carbon Emissions with Modern Tape Storage“, Brad shows that storing 10 PB of cold data with a 35% annual growth rate for ten years on a tape storage solution consumes 87% less energy and produces 87% less CO2 emissions than equivalent amounts of HDD storage. That’s 3,013 tons of CO2 for disk, 383 tons of CO2 for tape. At the same time, total cost of ownership for tape is 86% less than HDD. That means tape is good for the environment and the bottom line.
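A quick back-of-the-envelope check, using only the tonnage figures cited from the white paper, confirms the stated reduction:

```python
# Figures cited above: ten-year CO2 emissions for equivalent 10 PB workloads.
disk_co2_tons = 3013
tape_co2_tons = 383

reduction = 1 - tape_co2_tons / disk_co2_tons
print(f"{reduction:.0%}")  # 87% – matching the white paper's claim
```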
The Hyper Scalers Embrace Climate Change Initiatives
We are all familiar with U.S. internet giants: Google, Amazon, Facebook and Microsoft, otherwise known as the “hyper scalers”.
The hyper scalers run massive data centers all over the world in support of billions of customers. If a traditional non-hyper scale data center is big, it might be the size of a football field. A hyper scale data center, however, can easily cover more than 18 football fields.
At this scale, the amount of energy consumed by hyper scalers to serve, store and cool exabytes of data, is staggering. In fact, hyper scalers have been the target of folks like Greenpeace to clean up their energy use and negative impact on the environment. The hyper scalers have been listening. Below are excerpts of recent sustainability targets declared by these internet giants:
“The science is clear: The world must act now if we’re going to avert the worst consequences of climate change. We are the first major company to make a commitment to operate on 24/7 carbon-free energy in all our data centers and campuses worldwide by 2030″. – Sundar Pichai, CEO
“Amazon is helping fight climate change by moving quickly to power our businesses with renewable energy. Amazon is now the biggest corporate buyer of renewable energy ever. We are on a path to running 100% of our business on renewable energy by 2025 — five years ahead of our original target of 2030″. – Jeff Bezos, Founder and CEO
“Climate change is real. The science is unambiguous and the need to act grows more urgent by the day. Beyond our goal of reducing our operational greenhouse gas emissions by 75% this year (2020), we will achieve net zero emissions for our operations in 2030″. – From Facebook News
“At Microsoft, we believe the science on climate change is clear, and that the world must reach “net zero” emissions, removing as much carbon as it emits each year. We will shift to a 100% supply of renewable energy by 2025″. – Noelle Walsh, Corporate VP, Cloud Operations
This is all good news for the environment since the hyper scalers are not only leaders in the IT industry, they are the leaders in the global economy and others will voluntarily follow their initiatives. Suppliers and service providers to the hyper scalers will be forced to follow. And it’s good news for tape technology, the greenest form of data storage, now widely adopted by the hyper scalers and growing, here in the U.S. and abroad!
With environmental benefits in mind, both Fujifilm and Iron Mountain have moved forward with many sustainability initiatives and have already implemented the use of renewable energy, LED lighting, recycled packaging materials, and environmentally friendly manufacturing components. In fact, Iron Mountain data centers have been powered by 100 percent renewable electricity since 2017 and offer a Green Power Pass program to help customers meet their environmental goals while Fujifilm ‘s LTO tape manufacturing facility located near Boston has been using solar energy since 2013.
Fortunately, both Fujifilm and Iron Mountain support the data tape ecosystem and provide data storage products that can help reduce energy consumption and CO2 emissions, all while reducing costs and fighting cybercrime. Implementing tape is one step that organizations can take as they look to implement sustainable practices.
Table Of Contents:
- What is monolithic architecture?
- What is microservices architecture?
- Pros and Cons of Monoliths
- Pros and Cons of Microservices
- Which approach is best for you?
- The bottom line
What is a Monolithic Architecture?
Monolithic architecture is the classic approach to software development. All of the application's functionality lives in the same codebase and is deployed as a single unit. Code updates are not hosted separately; every upgrade changes the shared codebase and requires redeploying the entire application.
What is Microservices Architecture?
Microservices architecture is the most popular software development option today. It consists of loosely coupled modules that interact freely with each other. Each module is responsible for its own task, and the interaction takes place through APIs. The key difference is that modules can be changed and updated independently, without the need to update other blocks. This makes scaling easier, since there is no need to provision capacity in advance.
In a monolithic architecture, components are tightly coupled, with layers such as business logic, data access, and the user interface all interacting within one application. In a microservices architecture, the client gains access through the UI to loosely coupled microservices.
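The difference in how components talk to each other can be sketched in a few lines. The pricing service below is a contrived example (the `/price` route, item IDs, and prices are all invented): the monolith reaches the pricing logic with an in-process call, while the microservice version reaches the same logic over an HTTP API so it could be deployed and scaled on its own:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_price(item_id):
    return {"item": item_id, "price": 9.99}

def checkout_monolith(item_id):
    # Monolith style: a plain in-process function call in one shared codebase.
    return get_price(item_id)["price"]

class PricingService(BaseHTTPRequestHandler):
    # Microservice style: the pricing module sits behind its own HTTP API.
    def do_GET(self):
        item_id = self.path.rstrip("/").split("/")[-1]
        body = json.dumps(get_price(item_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def checkout_microservice(item_id, port):
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/price/{item_id}") as r:
        return json.loads(r.read())["price"]

server = HTTPServer(("127.0.0.1", 0), PricingService)  # port 0 = any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

in_process = checkout_monolith("sku-1")
over_http = checkout_microservice("sku-1", port)
print(in_process, over_http)  # 9.99 9.99
server.shutdown()
```

Both calls return the same answer, but the second one crosses a network boundary – the source of both the flexibility (independent deployment) and the latency and communication overhead discussed in the pros and cons below.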
Pros and Cons of Monoliths
Pros:
- easier to deploy a single file;
- with a single codebase, there are no network latencies, so the application performs better;
- monolithic applications provide faster communication between software components as they share a common code base and memory.
Cons:
- you must redeploy the whole app for any change or update;
- over time, the codebase grows and becomes more difficult to manage and deploy;
- you are limited in the technology stack – adding a new technology could mean rewriting the entire application;
- any change, even a small one, requires thorough regression testing.
Pros and Cons of Microservices Architecture
Pros:
- Flexibility in the implementation of new technologies – when changing one service locally, you do not risk the operation of the entire system;
- Fault tolerance – failure of one unit will not affect the entire system;
- Simplicity – it is easier for programmers to understand separate blocks of code than a single codebase;
- Speed – the smaller the code, the faster the deployment;
- Scalability – you can easily add or expand the required services without affecting others.
Cons:
- Communication between services must be carefully built. Since each element is isolated, the interaction between them has to be configured correctly to exchange requests and responses, and the more services there are, the more difficult it is to build these connections;
- The more services, the more databases;
- Testing is more difficult – each service, and the interactions between services, must be tested.
Monolithic vs Microservices: Which approach is best for you?
You should choose a monolithic architecture if:
- If you have a small team or a startup, a monolith will be able to meet your needs. Microservices are not a good fit for small organizations because they are unnecessarily complex to operate. According to Forbes, the whole list of tasks involved in setting up microservices can be overwhelming for a team at the beginning.
- You have no experience with microservices, you do not understand this technology and you will not be able to correctly configure the interaction between modules.
- You have a simple application that doesn’t have much business logic. You don’t need scalability and flexibility.
- You need to get the app up and running quickly to test your business idea and don’t want to spend a lot of money initially.
You should choose a microservices architecture if:
- You have a team of experts in microservices, containerization technologies, and DevOps.
- You have a complex application with advanced business logic that will scale as you develop and add new features.
- Your team has a clear division of responsibilities, each responsible for specific tasks. The system itself can be divided into subject areas.
- You can not only configure the interaction between the blocks correctly, but also ensure that specialists across teams stay in the loop on the whole project. Effective work requires well-coordinated collaboration among all teams and departments.
The bottom line
The Monolithic vs Microservices architecture battle continues to this day. Overall, the key idea of this article is to say that even though microservices are trendy, they are not always the perfect choice. Sometimes, it is better to remain true to the classic approach. We hope that now you have a better understanding of both software architecture approaches and can make the right choice.
Cloud computing is the delivery of a computing resource via the Internet and as a service. Commonly associated with software, cloud computing could theoretically be any computing asset or object being delivered via the Internet and as a service.
Cloud computing has its roots going back to the mid 1990s when Application Service Providers (ASP) were popular and delivered hosted applications as a service via the Internet. ASPs were unique because their applications were delivered to customers on a one to many basis. So, instead of a customer buying an application, hosting it and maintaining it themselves, they contracted with the ASP to do it for them. These ASP companies were close cousins to Managed Service Providers who delivered other forms of computing resources on a one to many basis, also as a service.
Today, the “cloud” is often used to refer to the Internet or “online” by non-technical people, and has taken on a very generic meaning. Cloud computing today refers to both consumer and business grade solutions, from storing photos and music to performing complex computing services for businesses and organizations.
There are also large companies, like Apple, Microsoft, Google, Amazon, and others, all of whom have their own cloud computing environments. For example, Apple’s iCloud is a popular consumer platform used to provide email, photo storage and sharing, music stream, movie viewing, and other services. Amazon, on the other hand, has developed a cloud platform which is widely used by companies for cloud computing solutions like storage, security, etc.
In contrast to these “public cloud” vendors, there is a growing movement of MSPs who are developing “private cloud” platforms, which have a number of benefits and distinctions when compared to public cloud offerings.
The term private cloud computing may have multiple meanings to different people, but it is generally considered to be the opposite of public cloud computing. Private cloud typically means cloud computing behind a corporate firewall. This means that the same benefits of public cloud computing are present, but the model also has enhanced security because it is under the more watchful eye of the IT department or MSP who manages it.
Private cloud computing differs from public cloud in one key aspect: the ability to more closely and accurately audit the infrastructure supporting the private cloud. For example, public cloud vendors like Google, Microsoft, Apple, Amazon, and others cannot easily explain, much less prove, where customer data resides, how many people have access to the data, or how the data is secured. Any privacy or security controls for such an environment would naturally be abstract, almost to the point of being useless for any type of compliance or assurance purposes.
While there may be different variations of cloud computing, private cloud usually has a connotation of being more secure and private compared to public cloud, not to mention offering more control over the data, including but not limited to physical storage, access, audit controls, and redundancy.
Infrastructure as a Service (IaaS) is the practice of delivering computing resources as a service, including essential infrastructure solutions like computer processing, storage, and other components, upon which a cloud computing service can be delivered. If cloud computing is the house, the IaaS is the foundation upon which it sits.
For MSPs who find that public cloud solutions are too vulnerable to pricing pressures and do not afford customers enough security and privacy controls, utilizing Infrastructure as a Service can accelerate the time to market and reduce the costs of delivering a private cloud offering. In fact, there are many IaaS providers who currently sell their solutions both to end-user customers and to MSPs.
IaaS can also be sold by MSPs to their customers as a means of assisting an organization with developing its own private cloud infrastructure. For the same reasons some organizations do not want to use public cloud, developing a private cloud offering based on an IaaS platform may be more desirable. The primary benefits of this approach are that the customer has greater visibility into the infrastructure, does not have to make a large capital investment in it, and can leave maintenance of the technology to a third party, in this case, the MSP.
IaaS can most easily be understood as one of the building blocks of cloud computing. It is a component of cloud technology that can also be independently delivered to a customer (including an MSP) without the application or service, which ultimately may reside on top of that cloud infrastructure.
Application Service Providers (ASPs) were early versions of managed service providers, delivering applications via a hosted, one-to-many business model. Beginning in the mid-1990s, ASPs were largely vertically focused companies who delivered business-critical applications.
In the 1990s, ASPs were unique because their applications were delivered to customers on a one-to-many basis. So, instead of a customer buying an application, hosting it and maintaining it themselves, they contracted with the ASP to do it for them. These ASP companies were close cousins to Managed Service Providers, who delivered other forms of computing resources on a one-to-many basis, also as a service.
There are many similarities between ASPs and MSPs, most notably the one-to-many business model. Because ASPs delivered their applications to many customers from a common infrastructure, it is not difficult to see how MSPs emerged from this business model to not only manage and host applications, but to also manage the infrastructure.
ASP is not a term really used much anymore, but it did go through a few variations in terminology, including a few terms which are still used today. ASP quickly became Software as a Service (or SaaS), which is a term still in common usage, especially amongst technical people.
Today, the term ASP may not be used much anymore, but cloud computing frequently involves hosting and delivering applications on a one-to-many basis via the Internet. The point is that ASP, or application management, is still very much alive today as a business model. Although it may no longer be called ASP, the fundamental business idea is still a viable solution and still in high demand by many customers across the globe.
Copyright pirates, brand impersonators, patent flouters and trade secret thieves are a major threat to businesses, given their increased aggressiveness toward intellectual property (IP) theft. Original creative works that have economic value and are protected by law are categorized as IP.
IP laws reward the creators of original works by preventing others from copying, performing or distributing those works without permission. They provide incentives for people to produce scientific and creative works that benefit society by allowing them to profit on these ideas. Some types of IP are automatically protected by law from the moment of their creation, whereas others require a specific grant of rights from a government agency before they can be protected by law. Although nearly every nation has laws protecting IP, some do not vigorously enforce them. As such, counterfeiting is a major problem in these areas.
The principal types of IP are patents, copyrights and trademarks:
- Patent law protects inventions that demonstrate technological progress.
- Copyright law protects a variety of literary and artistic works, including paintings, sculpture, prose, poetry, plays, musical compositions, dances, photographs, motion pictures, radio and television programs, sound recordings, and computer software.
- Trademark law protects words and symbols that serve to identify different brands of goods and services in the marketplace.
Intellectual property includes certain related fields of law, such as trade secrets and the right of publicity. Trade secret law protects confidential information that belongs to a business and gives that business a competitive advantage. For example, the formula for making a soft drink is a trade secret protected by IP laws. Right of publicity law protects the right to use one's own name or likeness for commercial purposes. For example, a famous athlete may profit by using his or her own name to endorse a given product.
IP differs from other forms of property because it is intangible – a product of the human imagination. Because IP is intangible, many people may use it simultaneously without conflict. For example, only one person can drive a car at a time, but if an author publishes a book, many people can read the work at the same time.
IP is also easier to copy than it is to create. It may take months of work to write a novel or computer program, but with a photocopier or a computer, others could copy the work in a matter of seconds. Without IP laws, it would be easy to duplicate original works and sell them for very low prices, leaving the original creators without any chance to secure economic rewards for their efforts. As a result, it is against the law to reproduce various forms of IP without the permission of the creator.
Most IP rights expire after a specified period. This permits the rest of society to benefit from the work after the creator has had an opportunity to earn a fair reward. For example, after the inventor of a patented telecommunications device has profited from the work for a specified period, anyone may manufacture that same device without paying the inventor royalties, thereby encouraging competition that allows others to benefit from the invention as well. The one exception to limited periods of IP rights is in the field of trademark law. Trademark rights never expire, so long as a merchant continues to use the trademark to identify a given product.
The Intellectual Property Issue
This is the “golden age” for IP, with IP being the lifeblood of many companies. Companies are built around patented technology. "Innovate or perish" is the motto that defines our times. Patent filings and issuances are skyrocketing, so much so that there is talk of a patent "revolution," "explosion" or “frenzy." The courts are pro-IP, as is legislation – even the Antitrust Division of the U.S. Justice Department is pro-IP. Courts read the riot act to infringers – billion-dollar damages have been awarded as a result. Treble damages, once rare, are now the order of the day and injunctions are not stayed during appeals.
As more patents are issued, companies must be aware of potential setbacks that may arise. The risks that are associated with IP include:
- Availability Risk: It is necessary for a company to make information available, and yet it is necessary for all information to be well-protected against possible infringements.
- Compliance Risk: Due to the number of legal issues pertaining to IP rights, it is important to be aware of their legal implications.
- Brand Risk: A company’s brand is part of its IP and can be one of its largest assets. It is important to protect the company’s image and brand reputation.
- Access Risk: Access risk includes the risk that access to information (data or programs) will be inappropriately granted or refused. Managing access risk helps ensure the protection of trade secrets.
- Business Value: It is important to be aware of and track a company’s IP and know its associated business value.
The conventional wisdom is that the easiest way to stop social media companies like Facebook and Twitter from tracking and profiling you is simply by deleting your social media accounts. That, for example, was the basis for the #DeleteFacebook movement that gained momentum around the time of the Facebook Cambridge Analytica scandal in early 2018. But now a new study by researchers at the University of Adelaide in Australia and the University of Vermont in the United States suggests that even deleting your social media accounts might not be enough to protect your social media privacy.
Details of the social media privacy study
This research study, which was published in the journal Nature Human Behaviour, analyzed 30.8 million Twitter messages from 13,905 Twitter accounts to see whether it might be possible to profile an individual simply by examining the profiles and interactions with his or her friends. To test out that hypothesis, the researchers were able to sub-divide the 13,905 Twitter accounts into 927 “ego-networks” consisting of 1 Twitter user and 15 other accounts that interacted with that individual most frequently.
The researchers hypothesized that it might be possible to see if interactions and communication with those 15 social networking accounts somehow “encoded” information about a user and his or her interests, likes and behaviors. In fact, say the researchers, this was the first-ever study that analyzed how much information about an individual is encoded in interactions with friends.
From a social media privacy perspective, the study turned up some very concerning results. It turns out that the research team didn’t even need 15 accounts to figure out a person’s profile. All they needed were tweets from 8-9 accounts (i.e. the “friends” of the user), and they could start to create some startlingly accurate profiles. For example, machine learning algorithms could start to predict factors such as “political affiliation” or “leisure interests” simply by studying the tweets of someone’s friends. Often, they were able to do this with up to 95 percent accuracy.
Friends can put you at risk on social networks
In many ways, the study is an affirmation of the adage, “Tell me who your friends are, and I’ll tell you who you are.” Every day, say the researchers, your friends are leaving telltale clues about you, what you like, and even how you are likely to vote in any election. Thus, even if you decide to delete your social media account, your profile is still “encoded” in previous interactions with your friends. You can think of your friends as creating a “mirror image” of yourself – all a company or government entity needs to do is figure out who a person’s friends are, and it’s possible to predict how a person will act or behave.
This obviously has social media privacy implications. In a base case scenario, a clever brand would be able to craft marketing messages customized for you, simply by analyzing the people in your network. Search engines would be able to deliver search results geared to specific people based on what their friends are saying. And, in an even scarier worst-case scenario, an authoritarian government might be able to crack down on a group of political dissidents very quickly simply by putting a few machine learning algorithms to work. Even people suspected of having certain thoughts might be rounded up, solely on the basis of Internet users in their network.
The concept of privacy as an individual choice
And there’s another element to the research study on social media privacy that is perhaps more subtle, and that is the fact that social media privacy is not necessarily an individual choice. Friends are sharing personal information about you, even if you are doing everything possible to protect your social media privacy (even to the extent of deleting your Facebook account or restricting access to personal data in other ways). This would seem to fly in the face of conventional wisdom about online activity and how data is collected. This conventional wisdom suggests that each individual is in control of his or her social media privacy. All it takes is checking a few boxes, the thinking goes, and you can immediately move from “weak” social media privacy to “strong” social media privacy.
But this doesn’t seem to be the case. And it’s also particularly troubling for social media privacy advocates that some of the biggest tech companies, including Facebook, appear to be collecting “shadow profiles” of non-users. What this means is that Facebook is not only collecting data on its own users (which most people realize), but also that it is creating profiles of non-users simply by capturing all the ambient data that flows through the social network on a daily basis. For example, if you tag a photo of your grandmother on Facebook, and your grandmother is not on Facebook yet, is Facebook able to start assembling a “shadow profile” of your grandmother without her realizing it? Information is collected on social media sites in ways that might not be obvious to social media users.
Why the Facebook Cambridge Analytica scandal matters
Just 18 months ago, the idea of “shadow profiles” might have sounded like a plotline out of a conspiracy movie. But the Facebook Cambridge Analytica scandal really woke people up to the perils of information and data sharing in relation to social media privacy. By using the “friends of friends” approach to figuring out information, for example, a simple quiz app was able to vacuum up data about hundreds of thousands of people. In Australia, for example, only 53 Facebook users actually used the This Is Your Digital Life quiz app, but Cambridge Analytica was still able to gain access to over 310,000 people.
Thus, when you hear that a social media data breach impacts “X” number of people, you have to assume now that the figure could actually be much higher than that. Hackers and other cyber villains could use the same approach used by the researchers in Australia and Vermont – they could sub-divide a target population of social media users into a smaller number of ego-networks and then use AI and machine learning tools to start learning as much as they can about people, simply based on who’s in their network.
Still looking for a solution to social media privacy
As the researchers from the University of Adelaide and University of Vermont point out, “There is no place to hide on social networking platforms.” Your behavior is now predictable from the social media data of just 8-9 of your friends. Even when you have deleted your accounts, you can still be profiled based on personal information online derived from your friends’ posts.
If there is currently no way to completely hide your profile online, that might open the door to regulatory action in the future. The European Union has already won kudos from privacy advocates for its General Data Protection Regulation (GDPR), so don’t be surprised if a similar form of sweeping regulation comes to Silicon Valley as well.
This year has seen a rise in cyber attacks on government agencies and prompted official warnings. Notably, a recent joint statement from the FBI and CISA warned schools about probable attacks. And a data breach of federal court records further spotlighted the need for improved municipal data governance and security. A perfect storm of cyber security risks makes municipal agencies particularly vulnerable to attack. In the first place, schools, courts, utility departments and other government entities store a treasure trove of sensitive information. Those same agencies often use legacy systems and lack critical cyber security infrastructure and data governance resources.
In late 2020, a study showed that 80 percent of businesses worried about state-backed cyber attacks. Within weeks, the Russian-sponsored SolarWinds hack and the China-backed Microsoft Exchange hack came to light, confirming those concerns. Should businesses and individuals fear these attacks, or do they only affect government agencies?

Lawless Frontier Invites Multiple Players

In recent years, countries have increasingly turned to the cyber arena to conduct espionage and warfare. Unlike traditional warfare, very few international laws exist to govern cyber warfare. And, while Russia and China currently dominate the stage, smaller countries can also make a big impact.
|Introduction||Client/Server on the Web|
This chapter introduces some of the basic principles of client/server applications and explains their advantages over the more traditional monolithic architecture. It also suggests how to divide your application into modules that make the most of client/server architecture, and outlines the platforms on which NetExpress applications can be deployed.
Despite a rapid increase in the deployment of client/server applications, there is still a certain amount of mystique surrounding the term 'client/server', especially as the same term is often used to describe a number of different concepts.
In principle, a client/server application consists of a client program that consumes services provided by a server program. The client requests services from the server by calling functions in the server application. In a distributed computing environment, where the client program and the server program execute on different machines and possibly even on different platforms, the client and server communicate through a communications layer that is often called middleware.
Figure 2-1: Basic client/server architecture
Although applications that are running on different machines obviously require those machines to be physically connected in some way - usually by a network (LAN, WAN or Internet) - it is important to distinguish between the network architecture and the client/server application architecture. The client application might run on a network client or a network server. The client and server applications might run on the same machine, which could be a network client or a network server, or neither! A client/server application is described as such solely because of its own architecture, without reference to how it is deployed on a network. For example, the X system used for graphical front-ends on many UNIX systems is a client/server application. However, the server part of the application often runs on the network client machine, with the client part of the application running on the network server! The easiest way to remember which is the client part of an application is to remember that the client is always the requestor of services.
The following are typical features of a client/server application:
COBOL applications request services by using the CALL statement. The request for a service is actually a call to a function implemented in a procedure. Although CALL statements are usually associated with local functions - that is, procedures that execute on the same machine as the calling program - they can equally be associated with remote functions that execute on a different machine. When a CALL is used in this way, it is often referred to as a remote procedure call, or RPC. A key requirement for the rapid development of client/server applications is that remote procedure calls should be handled independently of the network protocol in use; this enables you to concentrate on coding your application rather than handling the underlying network. NetExpress is supplied with a simple RPC mechanism called client/server binding, which provides a straightforward network-independent communications layer between client and server programs.
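As an illustrative sketch only (not NetExpress client/server binding itself, and using Python's standard-library xmlrpc modules rather than COBOL), the following shows the essence of a remote procedure call: the client invokes a server function as if it were local, and the middleware handles the transport. The function name, port, and account data are all hypothetical:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def get_balance(account_id):
    # Server-side logic with hypothetical account data.
    accounts = {"1001": 250.75, "1002": 99.10}
    return accounts.get(account_id, 0.0)

# Server: register the remote function and serve in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 8765), logRequests=False)
server.register_function(get_balance)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client: the call reads like an ordinary local function call.
client = ServerProxy("http://127.0.0.1:8765")
print(client.get_balance("1001"))  # prints 250.75
```

The client code has no knowledge of sockets or wire formats; swapping the transport would leave the call site unchanged, which is the point of a network-independent communications layer.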
Most of the benefits of using client/server architecture for enterprise applications relate to flexibility of deployment and relative ease of maintenance. For example, using client/server architecture you can typically:
To maximize the potential value of using a client/server architecture, you should adhere to some basic design guidelines. These are outlined below.
To gain the most benefit from using a client/server architecture for new applications or as a conversion exercise when updating existing applications, it is essential to logically (and often, physically) separate the different layers of functionality in the application so that they are not indiscriminately mixed together. A typical approach is to split the logical functions of the application into three:
Conceptually, each of these three areas of functionality - or layers - are handled by separate programs. The user interface logic is always handled by the client application. If the client application handles only the user interface logic, it is called a thin client. Sometimes some, or even all, of the business logic is also handled by the client application; this is called a thick client.
When you create a client/server application, it makes a lot of sense to apply this conceptual division of functionality to the actual program code, so that you create physically separate programs for handling each of the three layers. In a distributed computing environment, each of these programs might run on different machines - but they would work equally well if they were all running on the same machine.
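A minimal sketch of that separation, with hypothetical function names and data, might look like this in Python; each layer calls only the layer below it, so any layer could later be moved to another machine behind a remote procedure call:

```python
def data_access_get_order(order_id):
    # Data access layer: the only code that knows where data lives.
    orders = {42: {"item": "widget", "qty": 3, "unit_price": 9.99}}
    return orders[order_id]

def business_order_total(order_id):
    # Business logic layer: applies rules, knows nothing of storage or UI.
    order = data_access_get_order(order_id)
    return round(order["qty"] * order["unit_price"], 2)

def ui_show_order(order_id):
    # User interface layer: formatting only, no business rules.
    return f"Order {order_id} total: ${business_order_total(order_id):.2f}"

print(ui_show_order(42))  # prints: Order 42 total: $29.97
```

Because the layers touch each other only through these narrow function interfaces, replacing the storage mechanism or the presentation front-end does not disturb the business logic in between.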
A Web application is the ultimate thin client. The user interface is handled entirely by the user's Web browser. Although the definition of the interface is provided as an HTML form which resides on the Web server, it is downloaded temporarily under the control of the Web browser.
NetExpress is designed to enable you to create 32-bit Web-based and network-based client/server applications that can be deployed on the following platforms:
(Applications created with Internet Application Wizard are not suitable for deploying on UNIX)
We have tested the following platforms and believe them to be generally compatible with NetExpress Web applications:
As Web applications created by NetExpress conform to the relevant international, platform-independent standards, it is highly likely that they will work correctly when deployed on other TCP/IP-capable client platforms with appropriate Web browsers. However, as each application has individual requirements, we advise you to test your application thoroughly on all target systems before deployment.
Copyright © 1998 Micro Focus Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law.
No resource would be complete without a comprehensive glossary of terms. We’ve compiled a list of terms and their definitions to better help you navigate.
Middleware Middleware describes a group of software products that facilitate the communications between two applications or two layers of an application. It provides an API through which applications invoke services and it controls the transmission of the data exchange over networks. There are three basic types: communications middleware, database middleware and systems middleware.
Message Routing A super-application process where messages are routed to applications based on business rules. A particular message may be directed based on its subject or actual content.
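As a hedged sketch, subject-based routing can be reduced to a rules table keyed on a message attribute; the subjects and destination names below are hypothetical:

```python
# Business rules mapping a message subject to a destination queue.
routing_rules = {
    "order.created": "billing",
    "order.shipped": "notifications",
}

def route(message):
    # Fall back to a dead-letter destination when no rule matches.
    return routing_rules.get(message["subject"], "dead-letter")

print(route({"subject": "order.created", "body": "..."}))  # prints billing
```

Content-based routing works the same way, except the rules inspect the message body rather than just its subject header.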
Message Queuing A form of communication between programs. Application data is combined with a header (information about the data) to form a message. Messages are stored in queues, which can be buffered or persistent (see Buffered Queue and Persistent Queue). It is an asynchronous communications style and provides a loosely coupled exchange across multiple operating systems.
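The loosely coupled, asynchronous exchange can be sketched with Python's standard-library queue module (an in-memory buffered queue, not a persistent one; the message layout is hypothetical):

```python
import queue
import threading

mq = queue.Queue()  # buffered queue: producer never waits for the consumer

def producer():
    # A message = header (information about the data) + application data.
    mq.put({"header": {"type": "invoice"}, "data": "INV-001,150.00"})

def consumer(results):
    msg = mq.get()  # blocks until a message arrives
    results.append(msg["data"])
    mq.task_done()

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer()
t.join()
print(results)  # prints ['INV-001,150.00']
```

The producer and consumer never call each other directly; the queue decouples them, which is what lets message queuing span different operating systems and uptimes.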
MOM Message-Oriented Middleware is a set of products that connects applications running on different systems by sending and receiving application data as messages. Examples are RPC, CPI-C and message queuing.
MIME Multipurpose Internet Mail Extension is an extension to the original Internet e-mail protocol that lets people exchange different kinds of data files on the Internet: audio, video, images, application programs, and other kinds, as well as the ASCII handled in the original protocol, the Simple Mail Transport Protocol (SMTP). Servers insert the MIME header at the beginning of any Web transmission. Clients use this header to select an appropriate "player" application for the type of data the header indicates. Some of these players are built into the Web client or browser (for example, all browsers come with GIF and JPEG image players as well as the ability to handle HTML files); other players may need to be downloaded. New MIME data types are registered with the Internet Assigned Numbers Authority (IANA). MIME is specified in detail in Internet RFC-1521 and RFC-1522.
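For illustration, Python's standard-library email package builds MIME messages directly; the subject line and attachment bytes below are placeholders:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Report"
msg.set_content("See the attached image.")  # text/plain body part

# Attaching binary data turns the message into multipart/mixed; the
# Content-Type header tells the receiver which "player" handles the part.
msg.add_attachment(b"\x89PNG...", maintype="image", subtype="png",
                   filename="chart.png")

print(msg.get_content_type())  # prints multipart/mixed
```

Each part carries its own Content-Type header, which is exactly the mechanism the glossary entry describes: the receiver dispatches each part to a handler based on its declared type.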
Message Broker A key component of EAI, a message broker is a software intermediary that directs the flow of messages between applications. Message brokers provide a very flexible communications mechanism providing such services as data transformation, message routing and message warehousing, but require application intimacy to function properly. Not suitable for inter-business interactions between independent partners where security concerns may exclude message brokering as a potential solution.
Message Delivery Notification (MDN) A document, typically digitally signed, acknowledging receipt of data from the sender.
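As a simplified sketch (real MDNs typically carry an asymmetric digital signature; an HMAC over a hypothetical pre-shared key is used here as a stand-in), the acknowledgement binds the receiver to the exact bytes received:

```python
import base64
import hashlib
import hmac

shared_key = b"partner-shared-secret"  # assumption: key agreed out of band

def make_mdn(received_payload: bytes) -> str:
    # Keyed digest of the payload: proves receipt and integrity of the data.
    digest = hmac.new(shared_key, received_payload, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

ack = make_mdn(b"PO-12345: 100 units")
```

The sender can recompute the same digest over what it transmitted; a mismatch means the receiver acknowledged different data than was sent.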
Master Data Synchronization It is the timely and 'auditable' distribution of certified standardized master data from a data source to a final data recipient of this information. The synchronization process is also known as the 'Master Data Alignment' process. The master data synchronization process is a prerequisite to the Simple E-Business concept (Simple_EB). Successful master data synchronization is achieved via the use of EAN/UCC coding specifications throughout the supply chain. The synchronization process is completed when an acknowledgement is provided to a data source certifying that the data recipient has accepted the data distributed. In the master data synchronization process, data sources and final data recipients are linked via a network of interoperable data pools and a global registry. Such an interoperable network is the GCI-Global Data Synchronisation Network.
Master Data Master data is a data set describing the specifications and structures of each item and party involved in supply chain processes. Each set of data is uniquely identified by a Global Trade Item Number (GTIN) for items and a Global Location Number (GLN) for party details. Master data can be divided into neutral and relationship- dependent data. Master data is the foundation of business information systems.
Market Group In UCCnet Item Sync service, a Market Group is a list of retailers or other trading partners, that the manufacturer communicates the same product, pricing, logistical and other relevant standard or extended item data attributes.
Mapping The process of relating information in one domain to another domain. Used here in the context of relating information from an EDI format to one used within application systems.
Chipmaker ARM has announced a tiny product with huge implications. The company unveiled its ARM Cortex M0+, a 1mm microchip it says can push the edge of the Internet beyond your laptop or PC. Everything is on the drawing board — from a network-enabled fridge to devices powered by your body’s heat.
“The main advantage is a balance between energy and performance,” Thomas Ensergueix, ARM’s CPU product manager, told TechNewsWorld. The new 32-bit processor consumes about one-third of the energy used by current 8-bit and 16-bit processors while at the same time delivering better performance.
The reduced energy usage is made possible by shrinking the number of transistors needed for the chip and eliminating a step required for data to travel. By shrinking the size of the chip, the cost to manufacturers also falls, Ensergueix said. Freescale and NXP Semiconductor are already among early licensees of the new product.
Flyweight Chip Advances
In the past, small processors — referred to as “flyweight chips” — have required batteries for power and held little intelligence. Used in situations such as industrial sensors, the micro CPUs required battery changes, since they stayed constantly powered.
ARM’s new chip, however, “makes it realistic to control LED light and sensors” which are intelligent enough to power down when unneeded, opening the prospect of little energy leakage, Gary Atkinson, ARM’s head of embedded segments, told TechNewsWorld.
ARM decided to base the Cortex M0+ on the standard 90nm chip manufacturing process to hold down costs, suggested Gartner Wireless Research Director Mark Hung. The move was made despite the 90nm format being “known to have power leakage issues,” he told TechNewsWorld.
50 Billion Devices Forecast
Cisco believes about 50 billion connected devices are possible, UBS analyst Gareth Jenkins told TechNewsWorld. “Most of these are totally new markets and so far have not needed microprocessors sitting alongside them. As they connect and need to process — e.g. remote diagnostics sitting under a stroke victim’s skin — they will need microprocessors,” he said.
The horsepower increase and consumption decrease makes possible what ARM calls the “Internet of Things,” wherein chips allow everything from your TV to your MP3 player to be online. But that ubiquity “requires extremely low-cost, low-power processors that can deliver good performance,” said Tom R. Halfhill, analyst with The Linley Group and senior editor of Microprocessor Report.
Examples of an Internet of Things include “cars networked to each other, the fridge connected to the TV remote, ” Jenkins told TechNewsWorld. However, Atkins sees other uses where reliability, size and power consumption are key. The device may be used in battery-operated body sensors, wirelessly connected to health monitoring equipment, according to ARM. Current microchips lack the intelligence for such tasks, the company contended.
Solar-Powered Medical Devices?
Although only in the conceptual stage, ARM foresees its chip promoting “energy harvesting.” The company envisions chips that convert a body’s motion, natural sunlight, or even ambient temperature into energy.
Although we see rudimentary devices now recharging smartphones via solar panels or radios powered by a walker’s motion, miniature chips such as the Cortex M0+ could do much more. In one case, glaucoma patients could have a sensor in their eye, the battery powered by the photons passing through it, Atkinson foretold. | <urn:uuid:5597eb42-01de-40c9-878b-1d35389793d4> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/internet-of-things-close-thanks-to-arms-reach-74627.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00343.warc.gz | en | 0.931024 | 760 | 2.5625 | 3 |
People hate passwords because they can’t remember them. Security experts dislike passwords because they can be stolen, brute-forced, or phished. The failure to solve these weaknesses gave birth to the “Kill the Password!” campaign.
In the meantime, multi-factor authentication was added as a layer of security, increasing the level of difficulty for hackers trying to breach an account. Key fobs, registered mobile devices, and SMS texts were referred to as the “Something You Have” factor. Biometrics such as a fingerprint scan, retina scan, facial recognition, and voiceprint represented the “Something You Are” factor.
Password-less authentication is the next emerging solution in play. Password-less authentication, as opposed to password-based authentication, doesn’t rely on passwords to verify a user’s identity. Instead, identity is based on a “Possession factor” which is then used to verify the user (e.g., One-Time Password, Mobile Device, Key Fob or Token).
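A one-time password is the classic possession factor: the server and the device share a secret, and each independently derives the same short-lived code. Below is a minimal time-based OTP sketch in the spirit of RFC 6238, using only the Python standard library; the shared secret is illustrative, not a real credential.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password from a shared secret."""
    counter = int(at // step)                       # index of the 30-second window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Any two times inside the same window yield the same code,
# so a stolen code expires within seconds.
secret = b"illustrative-shared-secret"
assert totp(secret, 30) == totp(secret, 59)         # same 30s window
```

Verification on the server side simply recomputes the code for the current window (and often the adjacent windows, to tolerate clock drift) and compares.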
“What You Know” credentials such as a PIN, password, or passphrase are a huge concern. Forgotten passwords lead to password resets, which rank among the unnecessary and preventable IT costs for an enterprise. Service providers and their users don’t know when credentials have been stolen, which allows a hacker to access an account undetected.
The key is to find a method that is convenient and more secure for the user but difficult and inconvenient for the hacker. The user should not bear the responsibility for security. As it is, we ask users to detect phishing campaigns that even the experts fall for. If the appropriate technology is developed and deployed correctly, user mistakes should not lead to compromise. Why not make credentials immune to phishing campaigns altogether? If the credential is dynamic, it is useless to the hacker even if phished.
Should we really blame the Password? Or is it the fault of the technology itself? If a password is something a user will never forget and doesn’t require any memorization, will users still hate them? If the password cannot be stolen, brute-force or phished, would that alleviate the concerns of the security experts?
Perhaps password-less authentication is not the solution. Is there a way to re-invent the password? Yes, there is, and it is patented with the U.S. Patent Office. The following is a helpful overview.
“Natural Memory” is treated as stories that are unique to each user. In addition, each story has MEMORIES associated with it. In security terms, each story is an alternate “Something You Know.”
These natural memories can be disaggregated and randomized such that they can be reconstituted only by you. Our software allows the system to intelligently challenge the user by requiring them to select the answer embedded in a set of words, displaying 4 memories (see blue box) and 14 associations. Only one of the displayed memories and associations is the user’s own, even though the user might have registered 3 to 7 stories. A few words from that specific story are randomly embedded among false words that serve merely as “noise”, making selection of the actual story extremely difficult for all but one person in the world.
Using the credential: instead of entering a password, the user merely clicks the correct words, or values (see arrows). The red thought bubble is the story name (or memory) that the user registered. The 4 arrows point to associations with that memory. These names and words are readily known and instantly recognized without effort by one person, yet improbable for hackers to guess.
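The mechanics described above can be sketched roughly as follows: pick one registered story, surface a few of its words, and bury them among decoy "noise" words. All story names, words, and counts below are invented for illustration; the patented scheme will differ in detail.

```python
import random

def build_challenge(stories, decoys, grid_size=14, seed=0):
    """Embed a few words from one registered story among decoy 'noise' words."""
    rng = random.Random(seed)
    story = rng.choice(sorted(stories))             # the one real memory in play
    real = rng.sample(stories[story], k=4)          # words only the owner recognises
    noise = rng.sample(decoys, k=grid_size - len(real))
    grid = real + noise
    rng.shuffle(grid)                               # hide the real words in the mix
    return story, grid, set(real)

stories = {"lake trip": ["canoe", "heron", "campfire", "mist", "paddle"],
           "first job": ["ledger", "elevator", "badge", "coffee", "memo"]}
decoys = ["violin", "glacier", "turbine", "walnut", "lantern", "compass",
          "anchor", "mosaic", "prism", "falcon", "harbor", "quartz"]
story, grid, answers = build_challenge(stories, decoys)
assert answers <= set(grid) and len(grid) == 14
```

The user passes by selecting only the words in `answers`; because each challenge draws a fresh combination of words and noise, a phished grid reveals nothing reusable.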
The user need not remember something they already know. Since the answer resides in the user’s thinking mind, device vulnerabilities are absent. Since it is useless when stolen or phished, it is more secure.
Don’t kill the password. Instead, make it easier for the user by not requiring any memorization and be more secure by making it useless if stolen or phished.
If you have any questions, please send an email to email@example.com. Alex Natividad MD, CEO/Founder NimbusID and author of this article. NimbusID is a registered trademark. | <urn:uuid:3397e941-a3a2-42d1-ac49-9ceda2d5e101> | CC-MAIN-2022-40 | https://enterprisetechsuccess.com/article/Don%E2%80%99t-Kill-the-Password/bUNkMEp2Ly9NSGxWeThXVVY4WGxlUT09 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00343.warc.gz | en | 0.941934 | 920 | 2.5625 | 3 |
With the advent of accessible technology, more students with disabilities are going on to higher education to pursue STEM degrees.
According to a study conducted by the National Science Foundation, students with disabilities are now just as likely to enroll in science, technology, engineering, and math fields when they pursue higher education. The study also found that 11 percent of the undergraduate population in STEM has a disability. Recent updates to technology and education endeavors that boost accessibility could help level the playing field and encourage everyone to pursue a STEM degree.
Increased Access to Tech Tools
Accessibility is key in order to inspire all children to pursue a degree in STEM, if they wish to do so. With accessibility in mind, the team at Microsoft continues to expand the capabilities and availability of the tools that help students achieve success.
STEM technology can help students with low-incidence disabilities pursue their dreams and not be limited by their disability. It helps students with things like social skills, hand-eye coordination, and STEM skills.
STEM technology also helps students with autism or attention-deficit hyperactivity disorder succeed because of their tendency to be hands-on learners.
Students with disabilities see the world in different ways. This means they have to problem-solve and think about things differently, something that science and math require for success. For kids with disabilities, math and science go hand-in-hand.
Technology created to assist these students can only benefit science and research. For example, Wanda Diaz Merced is an astrophysicist who studies the stars with one setback: she is blind. By converting the data into sound waves, a technique called sonification, she could hear the minute differences and detect patterns never before seen in graphs and visual representations.
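Sonification at its simplest maps each data value onto an audible pitch, so a dip in a star's light curve becomes an audible drop in tone. A toy sketch, where both the linear mapping and the sample values are invented for illustration:

```python
def sonify(values, f_min=220.0, f_max=880.0):
    """Linearly map data values onto frequencies in an audible band (Hz)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                        # avoid divide-by-zero on flat data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

# A dip in the light curve becomes an audible drop in pitch.
light_curve = [1.00, 1.01, 0.97, 0.60, 0.98, 1.00]
tones = sonify(light_curve)
assert min(tones) == 220.0 and max(tones) == 880.0
```

Real sonification pipelines add timing, loudness and timbre dimensions, but the core idea is this kind of data-to-frequency mapping.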
According to Wikipedia Grid computing is an emerging computer model which provides the ability to “perform higher throughput computing by taking advantage of many networked computers to model a virtual computer architecture that is able to distribute process execution across a parallel infrastructure”.
A grid computer system (grid) uses the resources of many separate machines connected by an intranet or the Internet. Grids provide the ability to perform computations on large data sets by breaking them down into many smaller ones. This models a parallel division of labor between processes and enables the system to perform many more computations at once than would be possible on a single server.
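The "break a large data set into many smaller ones" idea can be sketched as scatter and gather over local worker processes, which stand in here for real grid middleware:

```python
from multiprocessing import Pool

def chunked(data, n):
    """Split a large data set into n roughly equal pieces."""
    size = -(-len(data) // n)                      # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def work(chunk):
    return sum(x * x for x in chunk)               # stand-in computation

if __name__ == "__main__":
    data = list(range(1_000))
    with Pool(4) as pool:                          # 4 "grid nodes"
        partials = pool.map(work, chunked(data, 4))
    # Gathering the partial results gives the same answer as one big machine.
    assert sum(partials) == sum(x * x for x in data)
```

A real grid adds scheduling, fault tolerance and data movement across the network, but the scatter-compute-gather pattern is the same.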
A grid server environment needs secure resource control, a reliable network and consistent service quality in order to be sustainable. In a grid hosting model, independent, self-contained grid deployments run within isolated containers on shared resource-provider sites. Websites and hosted grids interact via an underlying resource control plane to manage a dynamic binding of computational resources to containers.
Netcraft says that this year’s hot web hosting trend is grid computing. The objective of the web hosts is to build server clusters and to bring the advantages of enterprise-level infrastructure to shared hosting services. Mediatemple is one of the companies that offer grid hosting. ServePath and Rackspace have also launched their own grid hosting services, offering UtilityServe and Mosso, respectively.
A New and Elegant Quantum Teleportation Scheme
(By Amara Graps, Staff Writer) A new, highly robust, quantum teleportation method, which is, in principle, unconditional, with no entanglement, was published by Stefan Langenfeld and his colleagues at the Max Planck Institute of Quantum Optics (MPQ) in the third quarter 2021. This method, aimed for communication networks, successfully demonstrated that a single photon can teleport the spin state of a single atomic qubit to a second atom 60m away, without losing the quantum information.
The robustness of this scheme is due to the temporal shape of the employed photon, determined by a predefined, incoming pulse. The shape can be quickly changed between different forms to counteract distortions in different channels of a quantum network, e.g. those caused by changes in the ambient temperature that can alter the length or refractive index of both fiber and free-space channels and would therefore lead to fluctuating photon arrival times. The shape of the coherent pulses need only meet the condition that the spectral width of the photon is smaller than the cavity line-width.
In their paper, the authors describe how the information is communicated deterministically, without entanglement, in principle: If the photon is lost on the way, Alice’s qubit is not affected, and the protocol can be repeated until a successful photon transmission is signaled unambiguously with downstream photo-detectors. Instead of pre-sharing the entanglement resource, the entanglement is generated as needed, between Bob’s qubit and the photon, when the latter interacts with his node.
Once their setup is improved with cavities having reflectivity close to unity, and negligible photon loss between Alice’s cavity and the downstream detectors, the researchers will achieve a unique flexibility. The authors suggest that their protocol would allow for teleportation between any combination of unknown matter and light qubits. It would even be possible to convert the wavelength of the ancilla photon during its passage from ‘Bob’ to ‘Alice’, in case the two communication partners employ different kinds of qubits.
Personal Toolkit: The metaphor drawer
Last time I suggested a metaphor for how our conversations reflect and refract the knowledge of others. This time, I want to expand on the idea of metaphors as a category of personal knowledge management (PKM) tools. We all use metaphors. Just try getting through a day without one. The word's Greek roots literally mean to "carry across" as we juxtapose one thing with another.
There are many useful metaphors in KM already. The more metaphors we keep sharpened in our toolkits the better, really. My favorite metaphor is knowledge as energy. Energy, as you may remember from grade school science classes, is "the ability to do work." Energy comes in different forms, such as potential and kinetic. And as I mentioned in my previous column, knowledge--as light--is focused when we act as apertures, imprinting the light as it passes through us. You could even say that knowledge, like light, is simultaneously particle and wave.
While metaphors have been recognized for thousands of years for their potential in learning, they have largely been dismissed as Baroque linguistic embellishments that blur effective communication (except as used by licensed poets adhering to strict guidelines). The assumption is that science and business require precision, not the artful ambiguity inevitable when we compare apples to oranges or reorganizations.
Metaphor is, in fact, how we deal with complex concepts, situations, actions and interactions. As such, it is not an artifact of language, but of cognition--part of a process of pattern recognition.
George Lakoff and Mark Johnson demonstrated that metaphors play a more fundamental role. "Metaphors not only affect the way we communicate ideas, but actually structure our perceptions and understandings from the beginning," they write in their 1980 book, "Metaphors We Live By."
Gareth Morgan, in his 1986 "Images of Organizations," showed how metaphors affect organizational beliefs and behaviors by changing collective perceptions. It matters whether you see yourself as part of an intricate machine or an insect colony. "Metaphor is the genetic code of management," he says in one essay. "It is the force that produces all the surface detail."
A tool for innovation, design and conversation
To be useful in organizations, metaphors don't have to have the same kind of correspondence reality as maps or models. In fact, it's better if they don't. Tihamér von Ghyczy of the University of Virginia's Darden School of Business warns that metaphors are typically misused or wasted when they are taken too literally.
"We tend to look for reassuring parallels in business metaphors instead of troubling differences--clear models to follow rather than cloudy metaphors to explore." He writes in a 2003 Harvard Business Review essay, "In fact, using metaphors to generate new strategic perspectives begins to work only when the metaphors themselves don't work, or at least don't seem to."
In "The Fruitful Flaws of Strategy Metaphors," Von Ghyczy contrasts "rhetorical" metaphors used to constrain our thinking and get us all on the same page with "cognitive" metaphors that stimulate us to think outside the box.
"The greatest value of a good cognitive metaphor--as it makes no pretense of offering any definitive answers--lies in the richness and rigor of the debate it engenders," he explains.
Dan Saffer explores the best and worst ways to use metaphors in the design of interactive systems--ways that would certainly apply to KM systems--in his master's thesis, "The Role of Metaphor in Interaction Design," submitted in May to the Carnegie Mellon School of Design (cmu.edu). "Metaphors can provide cues to users how to understand products: to orient and personify. In short, interaction designers can use metaphor to change behavior. It is not hyperbole to suggest that without metaphor, interaction design today would be severely limited, especially in the digital realm," he writes.
"To not provide metaphors seems to be an abdication of the designer's responsibility," he adds. "Metaphor's power to transform is too powerful a tool to ignore."
Saffer quotes Gary Shank and Conrad Gleber saying, "The human mind cannot tolerate a meaning vacuum; we have no choice but to leave our familiar preconceptions and engage in meaning exploration."
As such, metaphors have a social, as well as cognitive function. Because the associations and implications that metaphors trigger in the receiver cannot be completely controlled by the sender, they bring us together to negotiate meaning rather than fighting over different interpretations--or worse, assuming clarity and agreement where none exists.
Explicit definitions are low-bandwidth and put people in a passive mode. Metaphors, because they "unpack" so much imagery, operate at a much higher bandwidth. Ambiguity can stimulate or even demand an active mode of engagement and learning. That's why I often think that those interested in conveying knowledge across distances should be writing lines of poetry, not code.
The key thing (for PKM) is that a metaphor does not automatically emerge in a conversation; someone has to offer it. To keep the conversation going, knowledge workers should always keep a supply sharpened in their personal toolkits.
Steve Barth writes and speaks frequently about KM, e-mail firstname.lastname@example.org. For more on personal knowledge management, see his Web site Global-insight. | <urn:uuid:b21bb74e-0c0a-4b20-b7fd-2c7a0ea575ca> | CC-MAIN-2022-40 | https://www.kmworld.com/Articles/Editorial/Feature/Personal-Toolkit-The-metaphor-drawer-14344.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00543.warc.gz | en | 0.942604 | 1,121 | 2.53125 | 3 |
Security risk management is a broad and challenging topic. So what does it involve? It’s essentially the ongoing process of identifying security risks and then implementing plans to address them. It’s about considering the likelihood that known threats will happen, how these threats might exploit any vulnerabilities in your security protection, and the impact they could have on your organisation.
As ASIS International explains: “The term risk management has been in common use in other fields such as insurance and business R&D for many years. However, it has only recently been applied in security management and asset protection. The concept is a perfect fit, as security’s primary objective is to manage risks by balancing the cost of protection methods with their benefit.
To manage risk effectively, a security professional would reduce or limit the total number of incidents leading to loss. A goal of risk management is to manage loss effectively at the least cost. In fact, many professionals believe that risk is the most significant factor that drives the deployment of security.”
The 3 main security risk categories to consider
In risk management, every threat is put into one of three categories: physical threats, human threats or cyber threats. Let’s look at these in the context of access control.
- Physical threats – for example, a criminal smashes a door to gain entry.
- Human threats – for example, an employee makes a mistake or deliberately gives access permissions to someone who’s not authorised for them.
- Cyber threats – for example, someone hacks into your access control database and steals employee data or changes access control permissions.
The threat of natural disaster is, of course, another consideration to take into account when carrying out your risk assessments.
In security, protection methods are becoming more IT-orientated, so there’s often a convergence between physical security risks and cyber security risks. For this reason, it’s increasingly important for teams responsible for physical security to work closely with teams responsible for IT and cybersecurity.
Important areas of focus for your security risk strategy
Drawing up a strategy for security risk management can be a big undertaking, so it helps to break it down into manageable projects. Some of the key areas to focus on are:
- Emergency management – so you’re able to take the right actions, immediately, if an emergency happens.
- Business continuity – so you can identify what would affect your business continuity, how to mitigate the risks and how to protect business continuity if the worst happens.
- Security and asset protection – so you can adequately protect the physical and intellectual assets that are valuable to your organisation.
- Occupational health and safety – so you can consistently and accurately restrict unauthorised access to areas that present health and/or safety risks.
- Securing budget – so you’re able to invest in the security systems you’ve identified.
The latter can be particularly difficult for security professionals. In most industries, it’s possible to present a clear, predicted return on investment, whereas buying security technology is almost always seen by decision-makers as an outlay rather than a valuable investment.
The challenge is to demonstrate the size of each risk and the potential costs, losses and other repercussions if they’re not mitigated by security systems and processes. This isn’t always a direct monetary cost, but it can have a dramatic effect on the bottom line. A security breach can, for example, lead to reputation damage that affects customer loyalty and causes a drop in sales.
3 key requirements when creating your security risk strategy
1. Clear thinking
Every organisation faces different risks. So the first crucial step is to know what risks you face now and are likely to face in the future. Remember to consider physical security risks, human security risks and cyber security risks.
Then, it’s important to map out each risk and how you plan to mitigate it. To help with this, there are various useful concepts. One such concept is the five avenues, through which you consider risks in the following ways.
- Risk avoidance
This is the most direct way to remove risk. Most organisations can’t avoid risk altogether, however, because it would prevent them fulfilling their core offering or business objectives. A bank could, for example, avoid risk by not storing money on its premises – but storing money is one of its key business functions.
- Risk spreading
How can you spread your valuable assets across your estate to ensure they’re not collated in one area of vulnerability? Once you’ve spread your assets out, you can protect them through multiple forms of physical security systems and procedures, and your overall risk mitigation strategy.
- Risk transfer
Risk can be transferred by ensuring compensation for any loss or costs. An example of this is setting up insurance to mitigate against the cost of an incident or loss.
- Risk reduction
How can you reduce risk? For example by minimising the number of entrance points and communal areas that provide a journey towards your valuable assets.
- Risk acceptance
Not all risks can be mitigated against. So it can be helpful to acknowledge that there’s a potential risk but be willing to accept it. You might, for example, accept the risk of people gaining entry to your main reception, as the potential for loss there isn’t too high.
2. A layered approach
Next, consider how you’ll layer your security so your most valuable assets, or the assets whose loss would be greatest, are the best protected. View security like an onion: the perimeter of a site, for example, is the outer skin, while a vault would be at the very centre of the onion, protected by layers of security.
This may include, for example:
- Layer 1 – a barrier with vehicle recognition on the estate’s perimeter.
- Layer 2 – card entry at reception and communal areas.
- Layer 3 – card and PIN verification to enter higher-security zones.
- Layer 4 – a card and PIN and/or biometric reader for double or triple verification to enter the vault.
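The escalating layers can be modelled as cumulative verification requirements: to reach a zone, a person must satisfy its factors plus those of every layer outside it. A toy sketch, where the zone numbering and factor names follow the example above but the code itself is purely illustrative:

```python
# Factors required to pass each layer, from perimeter to vault.
LAYERS = {
    1: {"vehicle_recognition"},
    2: {"card"},
    3: {"card", "pin"},
    4: {"card", "pin", "biometric"},
}

def may_enter(layer: int, presented: set) -> bool:
    """Entry requires every factor of the target layer and all outer layers."""
    required = set().union(*(LAYERS[l] for l in range(1, layer + 1)))
    return required <= presented

assert may_enter(2, {"vehicle_recognition", "card"})
assert not may_enter(4, {"card", "pin"})          # biometric missing for the vault
```

Structuring the rules this way makes the onion model explicit: no inner zone can be reached without having legitimately passed every zone outside it.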
3. The right tools
Once you’ve mapped out your organisation’s specific security risks, and thought how to layer your protection, it’s time to select products and processes to mitigate and manage your risks. And also plan how you’re doing to use them for optimum effect.
When you’re doing this, remember to think about potential future risks as well as those you face now. And also take the following into account to reduce the number of incidents possible:
- Location – for example the surrounding geography, terrain and positioning on the site.
- Structural design – the size and shape of buildings and sites and the materials used.
- Security layers or zones – so you can ensure all assets gets the appropriate level of protection.
- Clear zones – for surveillance, threat detection and standoff.
- Access control – for controlling access to sites, buildings and the rooms and locations within them.
- Positioning of security equipment – strategic placement can dramatically increase performance.
Remember to mitigate strongly against insider human threats
The biggest type of threat to an organisation is always human threat. So remember that any products and processes you use must be user-friendly to ensure they’re operated correctly, efficiently and effectively. Alarm handlers, such as the AEOS graphical alarm handler, are really useful to identify and highlight threats, giving you as much time as possible to respond accordingly.
Training is key to make sure employees know how to identify threats, manage events and react appropriately to reduce the risks posed by each threat.
Also, bear in mind that it’s no good having a security system that offers an incredibly high level of protection if it’s operated by someone who can’t be trusted. This is why vetting is vital to ensure you’re working with trustworthy people who have the appropriate skills and capabilities. A system with the ability to hide sensitive data will help you to manage this threat, and the option to do a full system audit will also help with any post-incident analysis needed.
A futureproofing approach is essential
Cyber threats are increasingly apparent and raise additional risks, which is why it’s vital to take future risks into account when creating your strategy. From a system point of view, it’s crucial to choose one with no end-of-life, which can be upgraded to manage current and future threats. And ensure it works hand in hand with your constantly evolving security risk management strategy.
Want to talk about the role access control and AEOS can take in your security management strategy? Visit us at the ASIS Europe Online Congress on 2 March.
Be a security expert
Interested in security management technology trends? With our newsletter you will receive updates from our blog on a regular basis.
Frequently asked questions
At a very basic level, access control is a means of controlling who enters a location and when. The person entering may be an employee, a contractor or a visitor and they may be on foot, driving a vehicle or using another mode of transport. The location they’re entering may be, for example, a site, a building, a room or a cabinet. We tend to call it physical access control to differentiate it from access control that prevents people from entering virtual spaces – for example when logging into a computer network.
If you decide to use an access control system, it’s probably because you want to secure the physical access to your buildings or sites to protect your people, places and possessions. That’s just the start for access control systems though. The right system, used well, can add value in a range of ways. You can use it, and the data it generates, to boost not just security but productivity, creativity and performance.
Today, physical security is about so much more than locks and bolts. Many modern physical access control systems are IP-based, powered by smart software and able to process large quantities of data. This provides more functionality, flexibility, scalability and opportunities for integration. It also means they’re part of your IT network, so it’s essential they’re protected and upgraded – just like your other IT systems.
From our perspective, a centralised access control system is always preferable – whether you have just two locations in the same town or hundreds spread around the world. Centralising your access control brings a range of far-reaching benefits.
For the people using your building, biometrics can give a better experience compared to an access badge. These days, biometrics are used for both identification and verification – sometimes even both at the same time. Being allowed to enter your building just by scanning your hand or face makes access control more convenient than ever.
Mechanical keys are the simplest form of physical access control and the method many smaller organisations use. Even for a small company, however, using mechanical keys has several flaws and limitations – especially as an organisation gets bigger. Below are just some of the problems presented by using keys. | <urn:uuid:0be9c353-ea5c-4fed-bee7-20b513895ddb> | CC-MAIN-2022-40 | https://www.nedapsecurity.com/insight/how-to-make-sure-your-security-risk-management-strategy-covers-all-bases/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00543.warc.gz | en | 0.936606 | 2,281 | 2.671875 | 3 |
The rise in the data economy has meant that the most personal of consumer information, including passports, drivers’ license numbers, and dates of birth, are routinely being entered alongside online payments.
Cybercriminals are experts in monitoring consumer data – and data leaks and cybercrimes continue to increase. In 2017, data compromises hit an all-time high, exposing 2.3 billion data files on the internet that contained business IT system access credentials, customer passport data, bank records and medical information. Fast forward to 2022, when the Identity Theft Resource Center (ITRC) reveals that data leaks and cybercrimes associated with actions on the dark web have increased 23% over 2017’s record year.
While a credit card number can fetch $120 on the Dark Web, full medical records can garner $1,000. And once in a fraudster’s hands, the uses of personal data are endless, from identity theft to insurance scams to fraudulent loans and mortgages.
While tokenization has been around for over 20 years and was primarily designed to secure credit and debit cards, it has become an even more important technology as companies look to meet data privacy regulations and “mask” sensitive personal, financial and healthcare data.
In this two-part blog series, we first explored how tokenization applies to cardholder data (CHD) and Payment Card Industry (PCI) compliance. The second part of our blog details how tokenization applies to non-payment data, including Personally Identifiable Information (PII) and Protected Health Information (PHI), and helps to meet data privacy regulations.
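At its core, tokenization swaps a sensitive value for a random surrogate and keeps the real value in a guarded vault; unlike encryption, the token carries no mathematical relationship to the original. A minimal in-memory sketch follows; real systems use hardened vault services, not a Python dictionary:

```python
import secrets

class TokenVault:
    """Map sensitive values to random tokens; only the vault can reverse them."""
    def __init__(self):
        self._by_value, self._by_token = {}, {}

    def tokenize(self, value: str) -> str:
        if value in self._by_value:                # same input, same token
            return self._by_value[value]
        token = "tok_" + secrets.token_hex(8)      # random, not derived from value
        self._by_value[value] = token
        self._by_token[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._by_token[token]

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")
assert vault.detokenize(t) == "4111-1111-1111-1111"
assert t == vault.tokenize("4111-1111-1111-1111")  # stable mapping
```

Because the token is random, a breach of the systems that store only tokens exposes nothing useful; the vault remains the single, heavily defended point of reversal.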
Data privacy regulations – GDPR and CCPA
Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are two of the most well-known data privacy regulations. Enacted in 2018, GDPR’s goal is to protect the data and privacy of EU citizens. Under GDPR law, businesses whose processes handle personal data must be compliant with the proper safeguards to protect data (for example, using pseudonymization or full encryption where appropriate) and must use the highest possible privacy settings by default, so that datasets are not publicly available without explicit, informed consent and cannot be used to identify a subject without additional information (which must be stored separately). If businesses are not compliant and consumer data is exposed, they face steep fines.
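Pseudonymization, one of the safeguards GDPR names, can be sketched as keyed hashing: records about the same subject stay linkable, but re-identification requires a key stored separately from the dataset. A rough illustration, with an invented key:

```python
import hashlib
import hmac

def pseudonymize(identifier: str, key: bytes) -> str:
    """Derive a stable pseudonym; without `key` it cannot be reversed or linked."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

key = b"stored-separately-from-the-dataset"   # illustrative key material
p1 = pseudonymize("alice@example.com", key)
p2 = pseudonymize("alice@example.com", key)
assert p1 == p2                                # same subject links across records
assert p1 != pseudonymize("bob@example.com", key)
```

Keeping the key apart from the data is what satisfies the "stored separately" condition: the dataset alone no longer identifies anyone.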
GDPR was the first of the major privacy and protection laws to truly impact how companies globally collect, store, and protect consumer data, while also addressing the transfer of consumer data to businesses located outside of the EU.
Also introduced in 2018, the CCPA aims to enhance consumer privacy rights and consumer data protection for California residents, and it is considered one of the most expansive state privacy laws in the U.S. Among its many stipulations, the CCPA gives consumers the right to opt out of personal data sharing, the right to “remain anonymous,” the right to have their personal data protected from theft, and the right to know how their personal data is being used.
The U.S. does not yet have nationwide data privacy regulations in place, but they are on the horizon: the American Data Privacy and Protection Act is currently under draft legislation.
“We now have five states – California, Connecticut, Colorado, Utah, and Virginia – that have enacted a comprehensive privacy law. There is mounting concern from key stakeholders of the impact that this ‘patchwork’ of laws will have on consumers and businesses. At the same time, without a federal privacy law the United States is being left out of the conversation at a global level as Europe and China seek to lead the world in defining the privacy protection framework.” Lucy Porter, Brittney E. Justice, The National Law Review.
What information is defined as “sensitive” or “personal”?
Both GDPR and CCPA define personal information as information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household including, but not limited to:
- A real name or alias, signature, or physical characteristics or description
- Postal address or telephone number
- Unique personal identifier, account name, online identifier, Internet Protocol address, or email address
- Education and employment, including employment history
- Social security number, driver’s license number, state identification card number, passport number, or other similar identifiers
- Medical information or health insurance information
- Bank account number, credit/debit card number or any other financial information
It is important to note that GDPR maintains a much broader definition of “personal information,” which can even include attributes such as mental, cultural, or social identity. But the core differences between GDPR and CCPA involve the scope of the laws and the jurisdictional reach of both.
How does tokenization relate to PHI and PII protection?
As discussed in our previous blog, tokenization is the process of removing sensitive information from your internal system — where it’s vulnerable to hackers — and replacing it with a one-of-a-kind, unreadable token. Usually a random sequence of numbers and symbols, the token masks valuable card data, PII, PHI and banking information, rendering sensitive data useless even if hackers manage to breach your system.
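The replacement process can be sketched in a few lines of Python. This is a simplified illustration, not a production implementation: the `vault` dictionary stands in for a hardened, access-controlled token vault, and real systems add key management, auditing, and collision handling.

```python
import secrets

# The "vault" maps tokens back to original values. In a real vaulted
# tokenization system this lives in a separately secured data store,
# never in application memory alongside business data.
vault = {}

def tokenize(sensitive_value: str) -> str:
    """Replace a sensitive value with a one-of-a-kind random token."""
    token = secrets.token_urlsafe(16)  # 128 bits of randomness
    vault[token] = sensitive_value     # mapping kept away from the app
    return token

def detokenize(token: str) -> str:
    """Recover the original value -- only callers with vault access can."""
    return vault[token]

card = "4111111111111111"
tok = tokenize(card)
# The token reveals nothing about the card number...
assert card not in tok
# ...but an authorized system can still resolve it.
assert detokenize(tok) == card
```

Even if an attacker steals every token from the business system, the tokens are useless without separate access to the vault.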
While data privacy regulations do not mandate the type of technology adopted to secure data, they both discuss pseudonymization and encryption as relevant data security measures.
- Pseudonymization encodes personal data with artificial identifiers such as a random alias or code. While pseudonymization is a “false” anonymization because the data can be linked back to a person, the personal identifiers are stored outside of the company’s system or network. These personal identifiers would be required to re-identify the data subject, thus making it a secure practice. Tokenization is an advanced form of pseudonymization.
- Encryption renders data unintelligible to those who are not authorized to access it. Data encryption translates data into another form, or code, so that only those with access to the decryption key can read it.
One reason tokens are increasingly used for sensitive, personal information is their versatility – they can be engineered to preserve the length and format of the data that was tokenized. Tokens can also be generated to preserve specific parts of the original data values; by adapting to the formats of conventional databases and applications, tokens can eliminate the need to change the database schema or business processes. Organizations can treat tokens as if they were the actual data strings.
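A format-preserving token for a 16-digit card number might keep the length, the all-digits format, and the familiar “last four.” The sketch below is purely illustrative: production systems use vetted format-preserving schemes (such as NIST-approved FPE modes) rather than plain random digits.

```python
import secrets

def format_preserving_token(card_number: str, keep_last: int = 4) -> str:
    """Return a token with the same length and digit format as the input,
    preserving the trailing digits (e.g. the 'last four' shown on receipts)."""
    head_len = len(card_number) - keep_last
    random_head = "".join(secrets.choice("0123456789") for _ in range(head_len))
    return random_head + card_number[-keep_last:]

token = format_preserving_token("4111111111111111")
assert len(token) == 16 and token.isdigit()  # fits the existing column type
assert token.endswith("1111")                # last four preserved for display
```

Because the token looks like a card number, it drops into existing database columns, validation rules, and reports without any schema changes.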
What are the benefits of tokenization for data privacy?
By employing tokenization as part of their data security program, businesses can achieve a number of benefits:
- Secures Data. Tokenization solutions have expanded beyond their original use in securing credit card information. They are now used to protect any industry that handles sensitive data, including social security numbers, birthdates, passport numbers, and account numbers – only accessing clear-text values when absolutely necessary.
- No Storage Requirements. Tokenization systems remove sensitive data from a business system, replacing it with an undecipherable token. The original data is then stored, processed and transmitted in a secure cloud environment—separated from the business systems.
- Cloud-based tokenization. Vaultless tokenization solutions have made the implementation of tokens more accessible than ever before. A streamlined process maintains the highest levels of security while offering a seamless solution managed in the cloud.
- Meet Compliance and Regulations. Using tokenization, companies significantly reduce the amount of sensitive data they store internally. A smaller data footprint means fewer compliance requirements and faster audits.
How do I select a payment tokenization solution?
Many providers offer tokenization for payment security, but one of the biggest considerations is the type of system – vaulted or vaultless. Vaultless tokenization systems can handle large amounts of data and do so at a faster pace – in other words, they are much more scalable, with reduced latency. These systems are also generally considered to be more secure than their vaulted counterparts.
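The core idea behind a vaultless system is that tokens are derived cryptographically rather than looked up in a stored table. A minimal sketch of that idea uses a keyed HMAC; this is an assumption-laden illustration (the key below is a placeholder, and real products use standardized tokenization schemes with keys held in an HSM or KMS):

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-key"  # placeholder; not a real key

def vaultless_token(value: str) -> str:
    """Derive a deterministic token from the value and a secret key.
    No vault is needed: the same input always maps to the same token,
    so lookups and joins on tokenized data still work, and there is
    no mapping database to scale, replicate, or breach."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

t1 = vaultless_token("123-45-6789")
t2 = vaultless_token("123-45-6789")
assert t1 == t2                 # deterministic: no mapping table required
assert "123-45-6789" not in t1  # token reveals nothing without the key
```

With no vault lookup in the request path, tokenization and de-tokenization latency stays flat no matter how much data has been tokenized, which is the scalability advantage described above.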
Bluefin’s ShieldConex® offers a vaultless, cloud-based approach to tokenization, returning the tokenized data to the client for storage. With no limit to the amount of data that can be tokenized, ShieldConex secures all CHD while also providing tokenization for PII, PHI, and ACH account data entered online.
ShieldConex does not store any of the original data – it is always tokenized and returned to the client, mitigating any data sovereignty issues. Additionally, there is no vault to lead to performance issues, and de-tokenization requests are returned instantaneously to the client. | <urn:uuid:3933c969-2ae2-496d-b6da-ac541c1cf8af> | CC-MAIN-2022-40 | https://www.bluefin.com/bluefin-news/understanding-phi-pii-data-privacy-tokenization-part-ii/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00543.warc.gz | en | 0.912691 | 1,822 | 2.59375 | 3 |
Blockchain technology can power an open DNA data marketplace that drives a new wave of genomics research to transform precision medicine and the treatment of rare diseases.
Humanity is at the very beginning of a tremendously exciting era of precision medicine. The cost of genome sequencing a person has already fallen below the $1000 dollar mark. Soon it will be down to $100. Gradually, a growing percentage of the world’s population is being afforded the opportunity to receive health treatment and lifestyle advice relevant to their genetic makeup, and also to share their genomic data for the betterment of humanity.
This means researchers and health professionals could soon enjoy access to a vast resource of genomic sequencing data and health records that could help them investigate disease and transform patient outcomes.
However, there are many obstacles to overcome first.
Firstly, science needs access to extremely large numbers of genomic and other healthcare datasets in order to gather meaningful and potentially transformational information. Secondly, for the promise of precision medicine to be fulfilled, data must be easily sharable and interoperable across technological, geographic, jurisdictional, and professional boundaries.
There are many ongoing initiatives across the globe aiming to facilitate the sharing of genomic data and thereby enabling precision medicine progress. Health apps based on genomic and other health data are good examples. But frequently, they are addressing the sharing problem from different angles, or often simply competing against each other. This stifles research and innovation and prevents medicine and healthcare moving forward at the pace it should.
Ending the genomic data monopoly
In reality, a few large businesses currently hold the monopoly on the vast majority of genomic data, and make vast profits from selling it to third parties, usually without sharing the earnings with the data donor. Things have to change.
There needs to be a means by which patients, health professionals, governments, researchers and providers of health technology can access data, cooperate, collaborate, network, and form partnerships.
I believe the world needs a centralized health data hub – an open marketplace where health and genomic data can be shared, borrowed, or sold. Of course this platform would have to be secure. But by utilizing blockchain technology and next-generation cryptography, trust could easily be built around the ecosystem, alleviating consumer hesitations about leaving personal data online or in the hands of corporations.
Healthcare and wellness providers such as clinics, genomic counselors, pharmaceuticals, research organizations, governments, patient-support groups and insurance companies that joined such an ecosystem would no longer have to compete with each other to gather data. It would be there for them all to use – for example, to boost clinical trials or facilitate drug research and development.
But there would have to be incentives for people beyond donating their data for the betterment of mankind. Firstly, they should be empowered to share their data however they liked, whether donating, loaning or selling it. Blockchain technology would enable them to stay in absolute control of their data – in the knowledge it is totally secure. Individuals should also be able to benefit from access to applications that leverage their data and enhance their wellbeing and health – for example, nutritional and fitness advice, treatment plans, genealogy, disease predisposition, pharmacogenomics, and lifestyle management.
Looking into the future, as more personalized biological information becomes available, services could be offered that are based not only on genomic data, but also other health, biological, socioeconomic, and environmental information. When combining genomic data with other molecular data, such as epigenomic, metabolomic, transcriptomic, microbiome data, and clinical information, the resulting rich datasets enable integrative analyses to be carried out at unprecedented depth and scale, facilitating new insights into molecular disease processes.
By implementing an open, collaborative platform and marketplace, critical mass will be achieved faster in precision medicine, utilizing the magnifying power of network effects. Of course, this data hub has to be international. Today, many ethnic and geographical populations are still worryingly underrepresented in public databases.
This is an exciting time in healthcare: all the technologies are in place to transform the health of humanity. Just as the world was changed forever by the invention of the internet, the healthcare ecosystem is ripe to be revolutionized by giving data ownership back to the people, ushering in a new form of global healthcare. The destiny of world civilization may depend upon providing decent healthcare for all humanity; that is what civilization is all about.
Recent scientific research revealed that in one AI system, the words “female” and “woman” were more closely associated with the arts, humanities and the home, while “male” and “man” were linked to maths and engineering professions. The AI technology was also more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.
So how can we ensure that AI systems, which are going to have an increasing amount of influence over our lives, can shed themselves of these biases? A panel of experts at Dreamforce came together to offer advice on this issue, offering useful tips on how businesses can ensure that AI technology they develop or make use of is neutral rather than biased.
KG Charles-Harris, Quarrio CEO, noted that a core problem around understanding AI bias is that the maths that underlies the data science is usually opaque; most people don’t have an insight into how the different algorithms function. As evidence of this, he asked how many people in the room understood how the algorithms of Salesforce’s own Einstein AI technology work – just one person raised their hand, who happened to be a Salesforce employee. Charles-Harris added:
It’s very difficult for us normal people to understand why things are happening. Bias by its definition means error. If you have a data set with a group of customers or potential customers with certain types of characteristics, these characteristics are based upon the data set.
But what do we know about the data set - where does it come from, who collected it? There are a number of questions we as data scientists need to ask in order to understand where the data set will actually drive us in the wrong direction or overemphasize something that will be very negative for a certain population group.
Kathy Baxter, User Research Architect at Salesforce, said a first step for organisations should be researching the biases that exist within the business.
AI is a mirror that will hold up your biases. You need to be aware of what the biases are that exist in your data set or in your algorithms. Otherwise recommendations that are being made will be flawed.
Baxter said that a common problem with AI systems is that customers who are highly engaged with a company’s product will be overrepresented in the data set, meaning those who are not represented on a large enough scale are going to be ignored.
Charles-Harris encouraged businesses to revisit original data fed into AI systems to ensure fair representation for everyone, not just certain groups:
Technology by itself is a force multiplier. Anything that we want to happen or anything that has an unintended consequence is multiplied many times over. When we’re looking at the essence of how these things can go wrong, we have to understand that unless we look at it from the beginning, at the data set and algorithmic level, things will go wrong.
You may not explicitly enter race, for example, into your algorithms, but if you enter an income and zip code, that is a proxy for race. So biases can come in whether you intend them to or not, so it’s a matter of being aware of the factors being used and the factors being excluded.
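The proxy effect described above is easy to demonstrate on synthetic data: even with the protected attribute removed from the feature set, a trivial rule over the remaining fields can recover it. The records and zip codes below are invented purely for illustration.

```python
# Synthetic records: the protected attribute ("group") is *not* given to
# the model, but zip code and income remain in the feature set.
records = [
    {"zip": "94101", "income": 95000, "group": "A"},
    {"zip": "94102", "income": 88000, "group": "A"},
    {"zip": "60601", "income": 41000, "group": "B"},
    {"zip": "60602", "income": 39000, "group": "B"},
]

def infer_group(record: dict) -> str:
    """A trivial rule that uses only the 'neutral' features -- no group field."""
    return "A" if record["zip"].startswith("941") else "B"

recovered = sum(infer_group(r) == r["group"] for r in records)
accuracy = recovered / len(records)
assert accuracy == 1.0  # the excluded attribute leaks right back in
```

Any model trained on these features can learn the same shortcut, which is why auditing for proxies matters as much as excluding protected attributes.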
A core part of this is getting non-technical people to have conversations with the data scientists, to ensure these issues are considered.
Charles-Harris admitted that one of his downfalls in entering the AI domain is how narrowly focused he has been on the technology. He added:
This has caused errors for me, for my company, for the customers that were completely unintended. Literally half an hour’s conversation with a sociologist could have saved a couple of million dollars.
Ilit Raz, CEO at Joonko, an AI-powered diversity and inclusion coach, called for all stakeholders to get involved in the process of creating the data and shaping AI systems. She cited the example of Netflix, which likely bases its content decisions on its most engaged users, as that small vocal minority will be producing the vast majority of the data:
Every stakeholder in the process can contribute to the discussion even by just asking some questions. Just asking these questions is probably going to lead these data scientists to the right point.
However, the fact that AI bias is now being discussed is a positive step in tackling it, according to the panel. Baxter noted:
There’s much more discussion now because there’s much more awareness of these biased algorithms. People might not have been aware that they were a victim of these algorithms and now people are starting to ask questions. It’s a matter of going back to truth and values, and what are the factors that we use in the algorithms. | <urn:uuid:5c2d60fd-f9f7-4853-a583-f20bf138946b> | CC-MAIN-2022-40 | https://diginomica.com/dreamforce-17-getting-rid-bias-ai | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00743.warc.gz | en | 0.969213 | 988 | 2.9375 | 3 |
Editor’s note: This is Part II of helping kids manage digital risks this new school year. Read Part I.
The first few weeks back to school can be some of the most exciting yet turbulent times of the year for middle and high schoolers. So as brains and smartphones shift into overdrive, a parent’s ability to coach kids through digital drama is more critical than ever.
Paying attention to these risks is the first step in equipping your kids to respond well to any challenges ahead. Kids face a troubling list of social realities their parents never had to deal with such as cyberbullying, sexting scandals, shaming, ghosting, reputation harm, social anxiety, digital addiction, and online conflict.
As reported by internet safety expert and author Sue Scheff in Psychology Today, recent studies also reveal that young people are posting under the influence and increasingly sharing risky photos. Another study found that 20 percent of teens and 33 percent of young adults have posted risky photos, and about 8 percent have had their private content forwarded without their consent.
No doubt, the seriousness of these digital issues is tough to read about — but imagine living with the potential of a digital misstep each day. Consider:
- How would you respond to a hateful or embarrassing comment on one of your social posts?
- What would you do if your friends misconstrued a comment you shared in a group text and collectively started shunning you?
- What would you do if you discovered a terrible rumor circulating about you online?
- Where would you turn? Where would you find support and guidance?
If any of these questions made you anxious, you understand why parental attention and intention are more important today than ever. Here are just a few of the more serious sit-downs to have with your kids as the new school year gets underway.
Let’s Talk About It
Define digital abuse. For kids, the digital conversation never ends, which makes it easier for unacceptable behaviors to become acceptable over time. Daily stepping into a cultural melting pot of values and behaviors can blur the lines for a teenage brain that is still developing. For this reason, it’s critical to define inappropriate behavior such as cyberbullying, hate speech, shaming, crude jokes, sharing racy photos, and posting anything intended to cause hurt to another person.
If it’s public, it’s permanent. Countless reputations, academic pursuits, and careers have been shattered because someone posted reckless digital content. Everything — even pictures shared between best friends in a “private” chat or text — is considered public. Absolutely nothing is private or retractable. That includes impulsive tweets or contributing to an argument online.
Steer clear of drama magnets. If you’ve ever witnessed your child weather an online conflict, you know how brutal kids can be. While conflict is part of life, digital conflict is a new level of destruction that should be avoided whenever possible. Innocent comments can quickly escalate out of control. Texting compromises intent and distorts understanding. Immaturity can magnify miscommunication. Encourage your child to steer clear of group texts, gossip-prone people, and topics that can lead to conflict.
Mix monitoring and mentoring. Kids inevitably will overshare personal details, say foolish things, and make mistakes online. Expect a few messes. To guide them forward, develop your own balance of monitoring and mentoring. To monitor, know what apps your kids use and routinely review their social conversations (without commenting on their feeds). Also, consider a security solution to help track online activity. As a mentor, listening is your superpower. Keep the dialogue open, honest, and non-judgmental and let your child know that you are there to help no matter what.
Middle and high school years can be some of the most friendship-rich and perspective-shaping times in a person’s life. While drama will always be part of the teenage equation, digital drama and it’s sometimes harsh fallout doesn’t have to be. So take the time to coach your kids through the rough patches of online life so that, together, you can protect and enjoy these precious years.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:d5f3fed5-b2fc-414c-ae74-c93d9662a796> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/consumer/family-safety/how-to-help-kids-steer-clear-of-digital-drama-this-school-year/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00743.warc.gz | en | 0.930787 | 883 | 3.03125 | 3 |