Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
This article briefly explains how to configure OpenSSH (version 4 or higher) to share connections to the same host (for faster connecting), as well as some of the problems (with workarounds) you may encounter using shared connections.
Version 4 of OpenSSH introduced a great but little-known feature that enables you to share connections to a remote host, so that if you open multiple connections to that host, all connections after the first connecton reuse the first connection. Why would you want to do that? Because subsequent connections will connect much faster.
For example, connecting to a server of mine across the country takes 0.9 seconds or so the first time (and every time without connection sharing); connecting to that same host reusing a connection takes about .13 seconds — that‘s about 7 times faster! If you are using some tool that makes lots of connections to the same host one after the other, the time savings can be drastic. I use Darcs over SSH for source code revisioning, and it seems to be very slow for certain operations due to opening multiple connections. Reusing connections sped up certain operations I do many times a day from 7 or 8 seconds to less than 1 second.
Configuring OpenSSH to reuse connections is trivial to set up, but there are some gotchas that you might run in to.
First, make sure you have a ~/.ssh directory, creating it if necessary, and ensure that it has the correct permissions. It should have 700 permissions (i.e., all permissions for you, none for anybody else):
calvin@turing ~ $ ls -ld ~/.ssh drwx------ 2 calvin calvin 4096 2008-04-23 01:30 /home/calvin/.ssh/
If the permissions are not already set to 700, make it so:
calvin@turing ~ $ chmod 0700 ~/.ssh
Next, add the following configuration options to ~/.ssh/config, creating this file if it does not already exist:
Host * ControlPath ~/.ssh/master-%l-%r@%h:%p ControlMaster auto
This configures OpenSSH for opportunistic sharing. Host * specifies that the following block of configuration options should be used for connecting to any host. ControlMaster auto specifies that OpenSSH should reuse an existing connection to a given host if possible, or open a new connection if there is not an existing one. ControlPath ~/.ssh/master-%l-%r@%h:%p specifies where OpenSSH should create the socket file that represents the master connection, where %r is replaced by the login name, %h is replaced by the hostname, and %p is replaced by the port number. The %l option adds the local hostname to the name of the socket file, which is useful if the directory might be mounted on multiple hosts (e.g., if your $HOME is remote and accessed via NFS). If the directory is not shared across more than one machine, then the %l is not necessary, but it doesn’t hurt either, and it makes things more future-proof. (Thanks to Anthony M. for the suggestion of adding %l to the control path format string.) See the ssh_config man page for more details about what these options do.
After making this configuration change, verify that it’s working by connecting to some host, and verifying that the socket file is created. If you want to compare times, you might try running time ssh example.com exit with and without a master connection already open. You might also try connecting with the -v option, to see exactly what is happening. When you connect for the first time, you’ll see lots of debug messages (40 or 50 lines). When you connect reusing a connection, you’ll see less than 10 lines of output, and the last line before successfully logging in will be something like “auto-mux: Trying existing master”.
It is important that you configure the ControlPath to be somewhere that no other user has access to. If the ControlPath were set to a location that other users could write to, they could create a file there and prevent OpenSSH from being able to create a socket file there when you try to connect. (This wouldn’t be a fatal problem though, since you could still connect by specifying a non-default path: ssh -o "ControlMaster auto" -o "ControlPath ~/.ssh/foo-%l-%r@%h:%p" email@example.com.) But regardless of whatever attacks there are or are not, the ones you should worry about the most are the ones that haven’t been discovered yet, and there are far fewer of those for a socket file saved in a 700 directory than in a 777 directory.
The rest of this document deals with potential problems that you might run into.
Since all connections to the given host share the same TCP connection, they all reuse whatever SSH options the first connection used. This means that if you try to connect to a remote host using different options that you used initially, OpenSSH will quitely ignore the options you specify in subsequent connections. If you initially connect without enabling X11 forwarding (i.e., without using the -X flag) and want to later open a connection to that host with X11-forwarding enabled, doing ssh -X example.com will not work; it will quitely reuse the existing connection and won’t even warn you that X11 forwarding is not enabled. The same is true for the display setting and probably other options as well.
In order to use different configuration options for the connection, you must open a new connection and not reuse the existing master connection. This is accomplished by making a connection with ControlPath set to none. The -S flag is a convenience for setting the ControlPath on a per-process basis, so including -S none in your command results in a new connection being opened. In order to make a new connection with X11 forwarding enabled, for example, you would use:
ssh -S none -X example.com
If you do this sort of thing frequently, you might want to something like alias sshnew="ssh -S none" to your ~/.bashrc file, and then you can just do sshnew -X example.com.
When the master connection successfully exits — i.e., when you logout from the master connection — OpenSSH will delete the appropriate socket file. However, if the process gets killed without having a chance to properly shut down, it won’t remove the socket file. The next time you try to connect to that host, you won’t be able to, and you’ll see an error message like:
Control socket connect(/firstname.lastname@example.org:1234): Connection refused ControlSocket /email@example.com:1234 already exists
You’ll have to manually delete the socket file that it is complaining about, and then you’ll be able to connect again, and it will establish a new one if it’s a master connection.
If you mistakenly exit from the master connection you’ve opened while you still have slave connections open, the master process will just hang there until all the slave connections exit. This is arguably what should happen, since the not-so-desirable alternative would be for the master to forcibly kill all the slave connections and then exit. Still, it’s easy to keep mistakenly doing this if you open multiple connections in different terminals and forget which is which.
One simple solution is to create the master connection somewhere it won’t be used. You could use something like screen to open connections in the background somewhere where they won’t be seen or used when your system starts or when X starts, or you can just open a few terminals in a workspace that you don’t use or a virtual terminal.
If you want to really be sure you don’t use the connection, you can always start it with the -N option (and anywhere from 1 to 3 -v flags if you want to see output related to that master connection). SSH will make the connection and not invoke a shell or any other command on the other end. The net result is that you can’t do anything with that connection, except kill it when you’re finished. (Thanks for this suggestion, lacos.)
Feedback is welcome at initials-code@thisdomain, where the initials are cs and the domain is protempore.net.
2008-06-24: thanks to Reddit readers lacos and Anthony M. for their suggestions.
This page is valid HTML5 & CSS, and is licensed under a Creative Commons License.
|
OPCFW_CODE
|
When I encounter a problem on a VoIP network that needs troubleshooting, I just wish that I had microscopic X-ray vision, like a superhero who is able to peer into the subatomic level of things.
It would be so advantageous to be able to instantaneously see what is going on with transmissions on the network.
When you experience voice quality issues, one way or choppy voice, or call setup problems, it would be great to be able to see, are the voice packets arriving? How are they arriving?
What kind of signaling is getting through and what is being blocked and where?
These are questions I wish I had instant access to, and this is what port mirroring is all about.
Well, barring X-ray vision as a solution, the next best thing is to use a packet sniffer such as Wireshark to be able to look at the “sub-packet” level of the network for VoIP troubleshooting.
Wireshark as a tool can be intimidating at first, and I know this because this was the case for me. But once you get the hang of it, you will truly see what a powerful tool it is and it will quickly become one of the primary instruments in your troubleshooting toolbox.
Build pro IOS configs. FAST.
In this article, we’ll focus on how to prepare your network for capturing VoIP packets using Wireshark.
- 1 How do you capture voice packets using SPAN?
- 2 Port Mirroring using SPAN
- 3 Port Mirroring using ‘Span to PC’
- 4 Conclusion
How do you capture voice packets using SPAN?
In order for Wireshark to capture VoIP packets, there are two fundamental requirements. First, the computer on which Wireshark is installed must be connected to a port on the network, and second, this network port must be configured appropriately to send the voice packets to the computer for analysis. When troubleshooting for VoIP, there are two ways to configure such a network port. The first involves using the Switched Port Analyzer (SPAN) feature on a Cisco switch, while the second involves enabling the “Span to PC” port configuration parameter on the IP phone itself. Which method you use depends upon the nature of the problem you are troubleshooting and the procedure you are choosing to apply
Note here that the feature, when applied on a switch, uses the acronym “SPAN”, while when indicated within the web interface of the CUCM uses the word “Span” instead.
Port Mirroring using SPAN
SPAN is the term used for this feature on Cisco switches. The more general term this feature is known by is port mirroring, and it’s available on multiple platforms. Here, we’ll be focusing on Cisco switches. Regardless of the name, this is a feature available on switches allowing the collection of packets being exchanged on a network to take place on a specific port of that switch. When enabled, SPAN will send a copy of all of the packets that are seen on one switch port to a network monitoring connection on another port on that switch. Take a look at the following diagram.
Using the above scenario, Port 1 can be configured as the mirrored port, or the monitoring port. This is the port on which a computer running Wireshark would be connected. Ports 2, 3 and 4 are then configured as source ports for the port mirroring, that is, they are designated as ports whose traffic is copied to the mirrored port.
The result is that all traffic on Ports 2, 3 and 4, whether incoming or outgoing, is replicated and sent to Port 1. The packets will be collected at the network card of the computer allowing it to capture, store and later analyze them.
When to use SPAN to capture VoIP packets
The SPAN feature is ideal when the VoIP packets you want to capture are not confined to a single IP phone, but are found within the core of the network itself. This is the case when you want to capture voice packets going to and coming from the voice gateway, or when you want to examine SIP signaling that takes place between multiple endpoints and the CUCM.
SPAN configuration on Cisco IOS switches
On most Cisco IOS switches, the configuration for SPAN involves the following steps:
- Create a SPAN session.
- Specify which port is the source or monitored port. This is the port whose traffic is going to be monitored. Note that multiple source ports can be configured.
- Specify which port is the destination, or monitoring port. This is the port where all of the traffic on the source ports will be copied and sent to. There can only be one destination port per session. This is also the port on which you will connect the Wireshark computer.
In the following example, we’ll configure ports GigabitEthernet 0/3, 0/4, and 0/7 as the monitored ports, and GigabitEthernet 0/1 as the destination port.
Switch(config)# monitor session 1 source interface gigabitethernet0/3 Switch(config)# monitor session 1 source interface gigabitethernet0/4 Switch(config)# monitor session 1 source interface gigabitethernet0/7 Switch(config)# monitor session 1 destination interface gigabitethernet0/1
Note here that the above commands fall under a specific monitoring session, specifically, session 1. It is possible to create multiple monitoring sessions within a switch, but each one can have only a single destination port, and each destination port can only belong to a single monitoring session.
You can verify this configuring using the following command:
Switch# show monitor session 1 Session 1 Source Ports: RX Only: None TX Only: None Both: Ge0/3, Ge0/4, Ge0/7 Source VLANs: RX Only: None TX Only: None Both: None Destination Ports: Ge0/1 Switch#
The output here shows that GigabitEthernet ports 0/3, 0/4, and 0/7 are source ports and that both egress and ingress packets on these ports are being captured and copied to the destination port. It also shows that the destination port is GigabitEthernet 0/1.
There are several additional and useful parameters that can be used to narrow down the types of packets captured. The following output shows some of these parameters:
Switch(config)#monitor session 1 source interface fa0/1 ? , Specify another range of interfaces Specify a range of interfaces both Monitor received and transmitted traffic rx Monitor received traffic only tx Monitor transmitted traffic only
Using the context sensitive help, we can see that you can specify a range of interfaces with a comma to separate them, or you can specify a range of interfaces. You can also indicate if you want to capture ingress traffic (rx), egress traffic (tx), or both. Capturing both directions is the default, and this is why the output in our configuration indicates this.
Port Mirroring using SPAN – what to keep in mind
When configuring SPAN on a switch, you should keep the following in mind:
- Wireshark will capture all of the packets “seen” on a particular monitoring source including data packets. So you will have to perform some filtering of the resulting captured packets to view the particular voice packets you are interested in analyzing.
- The source of a monitoring session can be a switchport, a routed port, an EtherChannel port, an access port, a trunk port, a VLAN interface, or a whole VLAN. Choose whatever is most appropriate for what you want to capture.
- The more sources you have the more packets will be captured, and the more difficult it will be to find the packets that you are interested in. Make sure you choose your source ports wisely and the direction of traffic you want to capture to minimize the capturing of needlessly excessive traffic. This will make your analysis easier, and will also avoid oversubscribing the monitor port resulting in lost (and uncaptured) packets.
- When you configure a destination port, its previous configuration is lost, and it cannot be used to forward normal traffic.
Port Mirroring using ‘Span to PC’
The Span to PC feature allows you to configure a Cisco IP phone so that all of the voice traffic it sends and receives can be copied to the PC port on the device. It’s kind of like the SPAN feature, but for an IP phone. The PC connected to the PC port of the phone can capture all of the packets sent to and from the phone simply by running Wireshark. The following diagram describes such a scenario.
When to use it ‘Span to PC’ to capture VoIP packets
The Span to PC feature should be used when you want to analyze the voice packets sent to and from a specific phone. This feature is useful when troubleshooting issues that are isolated to the specific device on which it is being configured.
Span to PC configuration
The Span to PC feature is somewhat more simplistic than the SPAN feature. In order to enable it, you must log in to the CUCM web administration, and go to the “Product Specific Configuration Layout” section of the “Phone Configuration” page of the particular device you want to configure.
Under this section, find the “Span to PC Port” option.
It may help to do a “find in page” on your browser and search for the text to find it, as, depending on the phone model, there may be hundreds of configuration parameters.
Once you find it, verify that it is set to “Enabled”.
Keep in mind that in order for the Span to PC feature to function, the following must also be set:
- “PC Voice VLAN Access” must be set to “Enable”
- “PC Port” must also be enabled
The result of these settings is that all packets sent to and from the IP Phone on the phone’s interface are copied and sent to the PC port as well. As a result, they can be captured using Wireshark installed on the PC connected to that port.
Port Mirroring using ‘Span to PC’ – what to keep in mind
When configuring Span to PC feature, you should keep the following in mind:
- Wireshark will capture all of the packets coming to and from the NIC of the PC, including data packets that it sends and receives anyway. So you will have to perform some filtering of the resulting captured packets to view the voice packets and analyze them.
- The Span to PC feature is not supported on older phones such as the 7940 and 7960
- It is best practice to keep this feature disabled for security purposes when not in use.
SPAN and Span to PC are port mirroring features that provide you with superheroic X-ray vision to see what is happening with your voice packets. Once these features have been configured, the next step is to prepare Wireshark to begin capturing and storing packets for analysis. We’ll take a closer look at this preparation and analysis of captured packets in an upcoming article.
Build pro IOS configs. FAST.
|
OPCFW_CODE
|
Introducing the App-V 5.0 Beta Sequencer: What Has Changed Since 4.6 SP1.
-Adam Kiu | Program Manager, Microsoft Application Virtualization
Disclaimer: This article describes the Beta version and does not include the complete list of features in the final product.
App-V 5.0 is the largest release since Microsoft acquired the application virtualization technology from Softricity in 2006. The App-V 5.0 release is focused on providing our customers with a seamless Windows experience for users, flexible virtualization options and powerful management. The new sequencer is released to support these goals. It has been updated to generate App-V 5.0 packages, ease the conversion of existing packages, and leverage and improve on the usability enhancements we made in App-V 4.6 Service Pack 1. For those new to the product, the sequencer is App-V’s packaging tool that turns native applications into virtual applications. Those unfamiliar should read the previous sequencer documentation first.
With App-V 5.0 Beta released to the public, I would like to talk about these Sequencer improvements and changes. The purpose of this article is to give you a high-level overview of what we’ve changed and why we feel that we’ve changed it for the better. This blog post will not delve into details on each change – if you would like to learn more, then I encourage you to read the latest documentation. In summary, we have made improvements in the following areas:
· Optimizations in the package creation process
· Simplifications and improvements to the advanced package editing process
· Changes around the output of the package
No more Q: - Primary Virtual Application Directory (AKA Install to Location)
In the previous versions of the Sequencer the best practice was to install to Q:. This virtual drive ended up being displayed on the client, and some users would end up seeing this App-V drive letter, and asking why it was there. We’ve heard the feedback and eliminated the Q: drive! The new best practice is to set the Primary Virtual Application Directory (PVAD) to the same path that the installer is installing its application to. Matching the installer path to the Primary Virtual Application Directory path allows the Sequencer to create a package that has optimal runtime performance.
You’ll notice that we no longer fill-in a default choice as we did before, and that the PVAD field has become mandatory. This is because the Sequencer does not guess the application installer’s directory. It is best for the application’s packager to enter the proper directory.
Stream Optimizing Packages
In App-V 5.0, we have a new form of streaming called on-demand streaming delivery. This means that when a user interacts with a virtual application before any files have been downloaded, the files needed for it to function will be streamed onto the client as they are requested by the application. For example, on first launch of the app, all resources needed to start it will be extracted from the package onto the client. This process is called stream faulting because the faults generated by the application when it cannot find the files it need triggers App-V to stream the files from the package. Once the application is launched, it will background stream the rest of package to the machine, stream faulting files on-demand if needed.
This means that there are now three options for optimizing streaming across networks:
· On-demand streaming delivery (Default): A package created and skipping the stream optimization step will be streamed on-demand to the machine via stream faults.
· Stream optimized: A package that goes through stream optimization process contains a primary feature block and this entire block is streamed before launching. This can be performed in the Stream Optimization step in the Sequencer, just like 4.6 SP1 release.
· Fully downloaded (not available in Beta): The package will be fully downloaded before it can be launched. There is a checkbox that allows you to specify all applications in the package to be fully downloaded.
Application Installers that require a reboot
The App-V Sequencer no longer simulates reboots that are detected – they are processed natively. By allowing reboots to natively occur, the Sequencer can do a better job in capturing pending operations created by the reboot. This increases the chances of creating a fully functional package. When the application being sequenced requests a restart, allow the machine to reboot and the sequencing machine will restart and resume in monitoring mode after the reboot completes. Note that you will have to log back into the account you were previously using for sequencing.
Creating add-on/middleware Packages
App-V 5.0 now allows multiple App-V packages to interact with each other through a concept called Virtual Application Connection. Unlike Dynamic Suite Composition in previous versions of the product, this is no longer a part of the Sequencing process so the add-on/middleware package creation process changes a little bit. To sequence an add-on package you’ll still see the same experience of natively installing the parent app, and then sequencing the add-on package. However, once completed, these apps are connected together via Application Connection Groups. For more details, please see the Beta documentation.
Shortcut/File Type Association (FTA) Editing
We have heard the feedback that this page is too important to be a part of a wizard workflow screen. To make things easier, in App-V 5.0 we have moved the shortcut/FTA editing page to the Advanced Edit screen after the package has been completed. This way, if a packager wants to continue editing shortcuts/FTAs, they can access that page independent of which application sequencing workflow they attempt. Note: Due to the way shortcuts and file type associations need to be unpacked onto the machine to be edited, when opening a package for edit, you must select the Update Application or Add New Application workflow to get to this tab.
Modifying an Existing Package
After you create a package in the Sequencer you can either stop (basic users), or continue to an advanced page that contains tabs allowing you to customize your package as you see fit. In App-V 4.6 SP1, this editing pane allowed you to modify the virtual registry, virtual file system, deployment configuration options, etc. In App-V 5.0, some of these options are no longer needed, have moved, or have been simplified. This section mentions each:
· No OSD tab. The new file format does not use OSD files. Metadata about application shortcuts are stored in the manifest and custom scripts are stored in new Dynamic Configuration files. Future blog posts will cover Dynamic Configuration in more detail.
· The Virtual File System (VFS) tab has been replaced with a Package Files tab. It is now possible to manipulate all files in the package, not just those in the VFS.
· The deployment tab has fewer options: Compression is always enabled, MSI packages are always output, security descriptors cannot be overridden (they follow the ones in the files and registry), and packages no longer need streaming protocol information.
Package Format Changes
The App-V 5.0 file format is very different from the previous formats. A quick look at what the Sequencer now produces:
· .appv package. This contains the sequenced application files, registry, stream map, and manifest.
· Deployment configuration and user configuration template files. These template files are used to customize package functionalities on the client during run-time.
· Report.xml file. This is a saved report of the Sequencing warnings and errors that occurred during the sequencing.
· MSI file. This is the MSI that allows administrators to deploy sequenced packages via MSI.
Customers that come from 4.6 and earlier have large amounts of packages in the previous file formats. The App-V 5.0 Sequencer comes with a PowerShell module called the Package Converter and allows customers to leverage their previous investments. The Package Converter lets you convert App-V 4.5 or higher packages directly to the new App-V 5.0 format. The Package Converter consists of two commands:
There are a few limitations to the package converter, so it is important to read the documentation before using it.
|
OPCFW_CODE
|
//
// Drink.swift
// DrinkKit
//
// Created by Nick Hayward on 10/21/18.
// Copyright © 2018 Nick Hayward. All rights reserved.
//
import Foundation
public enum Kind: String {
case beer
case wine
case distilled
}
public struct Drink {
public typealias Ounce = Double
public let volume: Ounce
public let ABV: Double
public let time : Date
public let alcohol: Alcohol
public let style: String
var ethanol: Double {
return volume * ABV
}
public init(volume: Ounce = 12,
ABV: Double = 0.050,
time: Date = Date(),
alcohol: Alcohol = .beer,
style: String = "") {
self.volume = volume
self.ABV = ABV
self.time = time
self.alcohol = alcohol
self.style = style
}
}
extension Drink {
public init(alcohol: Alcohol,
time: Date = Date(),
style: String = "") {
self.time = time
self.style = style
switch alcohol {
case .wine:
self.volume = 5
self.ABV = 0.12
self.alcohol = alcohol
case .spirit:
self.volume = 1.5
self.ABV = 0.40
self.alcohol = alcohol
case .beer:
self.volume = 12
self.ABV = 0.050
self.alcohol = alcohol
}
}
}
extension Drink : CustomStringConvertible {
public var description: String {
return "\(alcohol)" + (style == "" ? "" : " (\(style))") + " \(volume)oz \(ABV * 100)% ABV at \(time)"
}
}
|
STACK_EDU
|
The understanding and prediction of protein structures is a fundamental aspect of molecular biology. For decades, scientists have used computational methods to predict how proteins fold, which is crucial for understanding their function in the body. Two of the most widely recognized tools for this task are DeepMind’s AlphaFold and the Rosetta software suite. This article will compare their capabilities and explain how AlphaFold is breaking new ground in the protein folding domain.
Comparing the Capabilities: AlphaFold and Rosetta in Protein Folding
AlphaFold and Rosetta, while both designed to predict protein structures, have distinct modeling approaches. AlphaFold, backed by Google’s DeepMind, utilizes a machine learning approach. The system is trained on thousands of known protein structures from the Protein Data Bank, learning to predict the distance and angle between amino acids. It then uses this information to predict how new proteins will fold.
On the other hand, Rosetta, developed by the Baker lab at the University of Washington, employs a combination of physics-based and knowledge-based methods. It uses a Monte Carlo algorithm to sample different possible conformations of a protein and then scores these based on their probability. Rosetta is well-regarded for its flexibility and has been used extensively for protein structure prediction, protein design, and other related tasks.
Analysis: How AlphaFold Breaks New Ground in the Protein Folding Domain
AlphaFold has gained significant attention for its groundbreaking performance in the Critical Assessment of Structure Prediction (CASP) competition. In the 2020 CASP, AlphaFold outperformed all other tools, achieving a median Global Distance Test (GDT) score of 92.4. This score is close to the accuracy of experimental methods and significantly higher than the previous state-of-the-art score of around 60 achieved by other methods.
AlphaFold’s ability to predict protein structure with such high accuracy has transformative implications for biological research. Accurate protein structure prediction can greatly accelerate drug discovery and the understanding of diseases. AlphaFold has already been applied to predict the structure of proteins related to the SARS-CoV-2 virus, providing valuable insights for COVID-19 research.
Furthermore, the machine learning approach used by AlphaFold represents a paradigm shift in the protein folding field. It showcases the potential of AI and deep learning in tackling complex scientific problems, pushing the boundaries of what is computationally possible.
In conclusion, both AlphaFold and Rosetta have made significant contributions to the field of protein folding. While Rosetta’s flexible and robust algorithm has been a stalwart in the field for years, AlphaFold’s machine learning approach represents a groundbreaking shift in the field. The high accuracy achieved by AlphaFold not only opens up new possibilities for biological research but also underscores the potential of AI in solving complex scientific challenges. As we move forward, it will be fascinating to observe how these technologies further unlock the mysteries of protein structures and their roles within our bodies.
|
OPCFW_CODE
|
Updated: Aug 8, 2021
Intention: In honor of my full-time start date July 29, 2019 at PayPal, I thought it'd be great to take some time and reflect on what I've learned especially during this past career year. This is in hopes of making connections to seeds that have been planted even more than a year ago.
A tip for a promotion is to start doing what the next level requires. Find ways to make your manager's life easier.
During my internship time at PayPal during Summer 2018, I had a 1:1 with Wes H. who gave me this valuable advice. As I was learning more about the PayPal Credit ecosystem, I also observed what other SWE 2's were doing and worked to adopt the positive techniques into my coding practice.
In February 2021, I got promoted to Software Engineer 2 as a fruit of hard work + already performing at an SWE 2 while I was an SWE 1, guidance from my manager, Priyanka S., and modeling the great examples that I've had along the way which include teammates, coworkers, and other leaders.
One thing I learned, is that it's important to document your findings. An example is that my team was new to GraphQL and we were going to be creating a GraphQL service for a project we're helping with. I took it upon myself to be the frontrunner and understand how to get the service up and running. I documented the steps I took so that my teammates could follow and it would save time if any of us ran into the same problems. In addition, if my teammates came across any new problems, they could document their solutions as well.
You can be a leader without having to be in a management position.
I took a Dale Carnegie Develop Your Leadership Potential: Stop Doing, Start Leading course in November 2020 and it helped me realize that each person has the ability to make an impact and a positive influence on others regardless of position. There's a reason why "word of mouth" recommendations are so meaningful because they can come from a place of trust and connection. I place an importance on building trust. I also view trust as reliance and credibility where my team can rely on each other to jump in when needing advice and build each other up. Especially with all of us still working remotely, it's crucial to promote this value in the virtual space as we have gained new team members in both my team as well as the other teams.
In March 2021, I was blessed with the opportunity to be the Scrum Master on my team. The Scrum Master has the responsibility of upholding the Scrum methodology rules and facilitating Scrum ceremonies (Daily standup, Sprint Planning, Backlog Grooming, Demo/Retrospective). I also feel that I have an unofficial responsibility of continuing the team camaraderie and culture as I also have the unofficial title of "Chief Culture Officer" for the Scottsdale Credit organization. As an example, one Friday, I started using a Microsoft Teams-provided balloon background to celebrate that the weekend is near. The previous-Scrum master adopted this and now as a team, we all have our balloon backgrounds on every Friday. I'm also a huge proponent of bringing all of yourself to work; "all" being what you feel comfortable sharing. So with our daily stand-ups, as everyone starts joining the call, I create space for my coworkers to share about a fun hobby that they did over the weekend. During post-scrum (after every person gives their status update), I try to hold space for any questions that they have that's may relate to what we're working on or PayPal-specific.
Gather a wholistic approach
With the projects that my team has been working on in the past year, we've been able to contribute in back-end services that use the tech stack of Java, Spring Boot, and Maven as well as front-end services that utilize the tech stack of React, Node, and GraphQL. It's been really neat to see how we implement the N-tier architecture model. I've also enjoyed connecting the dots along the way while coding with how we generate and propagate resources. In addition, I've realized that I enjoy doing full-stack which encompasses both back-end which appeals to me from a performance standpoint and front-end which appeals to me from a visual standpoint.
Seize the opportunity! Even if it may terrify you.
I was presented the opportunity to give a deep dive around my team’s internal React, Node web application tool to the Consumer Credit organization with a couple of my coworkers, who have been mentors of mine since the internship days. The foundation of this tool was actually my internship project! We use this tool for administrative functions, triaging, and automating manual remediations. During my short stint on one of the Credit User Experience teams, I was able to bring what I've learned and expand the capabilities of this tool. It just so happened that on the designated presentation date, both of my coworkers were on PTO. My initial reaction was to reschedule because I didn’t have all of the material prepared. We had already re-scheduled twice because the first date was on such short notice and I was in the middle of attending the JS @ PayPal Conference. After thinking it over on a weekend, I thought, “Why not present and with the additional help from my coworkers, we can still include all of the information and context. It’s the best of both worlds.” We moved forward with this plan and I rehearsed a week in advance. in the end, the presentation went really well! It was definitely a pat-on-the-back moment. I also received a lot of good feedback which I will take into the next presentation opportunity.
The greater blessed you are, the greater blessing you can be
I chose this statement because it helps bring color to my thoughts on mentorship. There's so many people that I look up to and I hope that in return, I can continue passing on the advice I've gained. There's so much knowledge out there whether it's specific to the company or generic that it's important to spread the knowledge because that makes us stronger as a team. In these past couple of months (Q2 2021), I've had the opportunity to help onboard a new hire and intern. I've provided overviews of some of the services my team works on along with giving advice on how to approach the JIRA tickets/user stories that we've been assigned. The sharing of knowledge demonstrates how much knowledge I've accumulated so far as well as highlights what I have yet to learn.
Keep those creative juices flowing!
With working remotely, PayPal has giving us a Global Wellness Day every 6 weeks this year 2021. I have been able to use that time towards baking, learning Ableton 11 music production software to provide soundtracks for my dancing, and learning Mandarin and Korean. In December 2020, I was inspired to create this tin foil set in my living room as I got an 8x8 dance floor (and tap board) back in August 2020. Shoutout to my sister for helping me put this together over the course of 2 weeks! It also required around 300 sq ft of tin foil and 60 ft of pvc pipes. Since then, I've added some Neewer lights and RGB led tubes which I use to create a mood around the dances (YouTube, Soundcloud).
Since quarantine, many dance studios started offering a live-streaming option via Zoom. I've been able to take dance classes from studios and learn from choreographers globally! As a fun tally, I've taken from a choreographer and/or studio based in Seoul/South Korea, Thailand, Taiwan, Japan, New South Wales/Australia, Italy, Israel, Peru, Argentina, Germany, and New York/USA.
TLDR; Wrap Up
From presenting to leading to sharing knowledge, it’s amazing what can happen in a year based off of the fruits of hard work. As a personal goal, I will do my best in working smarter as I accumulate more knowledge. I am also grateful for the support, encouragement, and mentorship that I’ve been receiving along this journey.
If you’ve made it all this way, thank you for taking the time to read. Feel free to share your learnings as well!
|
OPCFW_CODE
|
Unbound DNS: Troubleshooting with ssl-upstream option and/or vpn interface
I recently wanted to setup unbound in place of dnscrypt to resolve queries with my pi-hole on my rasp.
The version of unbound available on Raspbian is 1.6.0 currently.
When activating the options
ssl-upstream: yes
ssl-service-key: "/etc/ssl/certs/ca-certificates.crt"
unbound stopped working and we have something like this in the logs:
[1556709926] unbound[4394:0] info: server stats for thread 0: 23 queries, 7 answers from cache, 16 recursions, 0 prefetch
[1556709926] unbound[4394:0] info: server stats for thread 0: requestlist max 13 avg 1.875 exceeded 0 jostled 0
[1556709926] unbound[4394:0] info: mesh has 0 recursion states (0 with reply, 0 detached), 0 waiting replies, 16 recursion replies sent, 0 replies dropped, 0 states jostled out
[1556709926] unbound[4394:0] info: average recursion processing time 0.948223 sec
[1556709926] unbound[4394:0] info: histogram of recursion processing times
[1556709926] unbound[4394:0] info: [25%]=0.32768 median[50%]=0.603573 [75%]=0.920715
[1556709926] unbound[4394:0] info: lower(secs) upper(secs) recursions
[1556709926] unbound[4394:0] info: 0.000000 0.000001 1
[1556709926] unbound[4394:0] info: 0.008192 0.016384 1
[1556709926] unbound[4394:0] info: 0.016384 0.032768 1
[1556709926] unbound[4394:0] info: 0.262144 0.524288 4
[1556709926] unbound[4394:0] info: 0.524288 1.000000 6
[1556709926] unbound[4394:0] info: 1.000000 2.000000 1
[1556709926] unbound[4394:0] info: 2.000000 4.000000 2
[1556709926] unbound[4394:0] debug: cache memory msg=33040 rrset=33040 infra=17292 val=40931
[1556709926] unbound[4394:0] debug: switching log to stderr
I did also try to setup unbound to send queries through a vpn connection on the rasp itself, but I can’t resolve apparently through the vpn connection.
I tried set it up by hardcoding the ip address from the vpn connection, same result. I tried to used udp and tcp separately, same result
Am I missing something? I have connectivity through my vpn so that’s not the problem apparently. And the problem disappear as soon as I deactivate the vpn connection.
Or is all that supposed to happen in 1.6?
Does anyone have an idea about this?
Thanks in advance.
Those look like wrong options, at least ssl-service-key should specify private key of your instance (and not list of trusted CAs!), and you would always use it in combination with it's public key in ssl-service-pem, otherwise you should not use it. For my Debian Stretch, the following config enables listening for plain DNS queries on port 53 as well as DoT (DNS-over-TLS) queries on port 853 on all addresses (for both IPv4 and IPv6).
server:
#verbosity: 2
interface: <IP_ADDRESS>
interface: ::0
interface: <IP_ADDRESS>@853
interface: ::0@853
#tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt
ssl-service-key: "/var/lib/acme/live/your.domain.example.com/privkey"
ssl-service-pem: "/var/lib/acme/live/your.domain.example.com/fullchain"
ssl-port: 853
This is example for final resolver, so it doesn't use forwarders (you'd need forward-zone block for that. Also, you can check /usr/share/doc/unbound/examples/unbound.conf for example config with explanations. In my example I use acmetool to generate keys/certificates in /var/lib/acme/live automatically, but you can use whatever way you want (or even omit it if you don't care about security).
|
STACK_EXCHANGE
|
You’ve probably heard a lot about customer relationship management software (CRMs) over the last few years. A CRM manages sales workflows, automates marketing tasks and makes customer information readily available to your service staff. Those are the basics, but CRMs do a whole lot more.
There are plenty of big names on the market such as HubSpot, Zoho, and Salesforce, so you would naturally wonder if there’s a CRM by Microsoft. The tech giants have had a go at everything from children’s games to fingerprint readers over the years, so you’d be right to ask the question.
And of course, the answer is a resounding “yes”.
Introducing Microsoft Dynamics 365
Never one to miss out on a chance to provide great digital solutions, Microsoft launched Microsoft Dynamics. In line with their other cloud-based software solutions, it’s now known as Dynamics 365. Dynamics 365 is targeted more towards mid-large sized businesses who need CRM and Enterprise Resource Planning functionality.
As we know, Microsoft products almost live in their own universe, a bit like the tech version of Marvel. With everything inter-connected, Dynamics integrates seamlessly with other Microsoft products.
Microsoft Dynamics 365 helps businesses manage a range of tasks such as:
- Customer service
- Field service
- Project service automation
We’re not saying Microsoft feel the nee to outdo everybody, but they’ve certainly packed plenty of features into Dynamics 365.
Why choose a CRM by Microsoft?
The first reason to choose as CRM by Microsoft is the easy integration with familiar products. For those familiar with Office 365, for example, you’ll find everything integrates so easily into Dynamics. But there are plenty more benefits:
Scalability: You can add apps to the Dynamics suite as you need them, so in that sense it’s very easy to tailor to your business. Also, the monthly subscription cost makes it far easier than outlaying your annual budget on a CRM in one go.
Productivity: Staff get access to all of the information they need, and nothing they don’t. Customer information, daily processes and with integration your entire operational needs can be met by Dynamics.
Cost-effective: Monthly subscriptions, and even individual subscriptions to apps per staff member make it simple to only pay for what you really need. Plus, the program is always growing to suit industry needs.
Is a CRM by Microsoft easy to install?
In theory, CRMs can be installed pretty easily. Microsoft Dynamics is no different in that sense. However, it’s similar to the way that it’s easy to mount an air-conditioner on the wall. You can put it there, but you’d run into problems if you had to wire it up yourself (electricians excluded).
If you’re considering installing Microsoft Dynamics, you’d be well served to speak to a CRM consultant first. A Microsoft Dynamics specialist is helpful because they can assist with things like integration into existing systems and other third party software. They’ll also give you great insight into how you can get the best out of your new CRM.
|
OPCFW_CODE
|
I have copied a previous Project as it accurately describes what I require.
I'd like to get a clone of the scripts here:
[url removed, login to view]
All of the mini-scripts should be fairly simple to create. I need these to be completely CLEAN. Every script should be in a seperate PHP file named appropriately, and should have a CAPTCHA image security code (simple code is fine).
The scripts should be text/form fields/verification image only and should be left-aligned.
All of the scripts should display the results on the same page as the form so as to allow for the user to check another site.
Note that all of these scripts need to be seperate (if a user only wants to upload one script, then that means it should work without the rest of them uploaded - no dependencies between scripts please!)
"Clean" means they should all look similar to this format:
[url removed, login to view]
The sooner these are done the better. I don't want re-used code, unless you have FULL RIGHTS to it, as I will be keeping full rights to the scripts once they are created.
Feel free to let me know if there are some you can do and some you can't do -- I may select you as long as you are priced reasonably.
Total of 34 very short scripts. AJAX is a plus.
Thanks for looking!
List of scripts:
Find a list of backlinks linking to you.
Google Banned Checker
Discover if your website is banned on Google.
Google PageRank Prediction
Predict your future Google PageRank.
Keyword Density Checker
Discover what keywords appear on your pages.
Find related keywords matching your search.
Retrieve your backlinks from search engines.
View your Google PageRank and Alexa Ranking in bulk.
View your Google PageRank on differnet Google servers.
Get a overview of your website's ranking.
Search Engine Position
Locate your search listings on Google and Yahoo!.
Search Listings Preview
Preview your website on Google, MSN and Yahoo! Search.
Discover how spider bots view your website.
View the PageRank of links visually rather than in text.
Send e-mails to users anonymously.
Make a long web address short and easy to remember.
Encrypt text to MD5.
A simple online calculator.
Your Browser Details
View your IP address and your browser details.
Alexa Traffic Rank
View and compare Alexa Ranking graphs.
Check the availability of domains.
Retrieve a range of information about a domain.
Retrieve domain whois information.
Instant Domain Checker
Check the availability of domains instantly.
Check the presence of an active connection.
Resolve a host to an IP address.
Check if your website is online or offline.
Website Speed Test
Find out how fast your website loads.
Hide your HTML source code.
Optimize and clean your HTML source code.
Extract the HTTP Headers of a web page.
Extract links from a specific web page.
Extract meta-tags information from a web page.
Generate and configure your meta-tags.
Source Code Viewer
View the source code of a page.
|
OPCFW_CODE
|
Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)
#ai #chess #alphazero
Chess is a very old game and both its rules and theory have evolved over thousands of years in the collective effort of millions of humans. Therefore, it is almost impossible to predict the effect of even minor changes to the game rules, because this collective process cannot be easily replicated. This paper proposes to use AlphaZero's ability to achieve superhuman performance in board games within one day of training to assess the effect of a series of small, but consequential rule changes. It analyzes the resulting strategies and sets the stage for broader applications of reinforcement learning to study rule-based systems.
0:00 - Intro & Overview
2:30 - Alternate Chess Rules
4:20 - Using AlphaZero to assess rule change outcomes
6:00 - How AlphaZero works
16:40 - Alternate Chess Rules continued
18:50 - Game outcome distributions
31:45 - e4 and Nf3 in classic vs no-castling chess
36:40 - Conclusions & comments
My Video on AI Economist: https://youtu.be/F5aaXrIMWyU
It is non-trivial to design engaging and balanced sets of game rules. Modern chess has evolved over centuries, but without a similar recourse to history, the consequences of rule changes to game dynamics are difficult to predict. AlphaZero provides an alternative in silico means of game balance assessment. It is a system that can learn near-optimal strategies for any rule set from scratch, without any human supervision, by continually learning from its own experience. In this study we use AlphaZero to creatively explore and design new chess variants. There is growing interest in chess variants like Fischer Random Chess, because of classical chess's voluminous opening theory, the high percentage of draws in professional play, and the non-negligible number of games that end while both players are still in their home preparation. We compare nine other variants that involve atomic changes to the rules of chess. The changes allow for novel strategic and tactical patterns to emerge, while keeping the games close to the original. By learning near-optimal strategies for each variant with AlphaZero, we determine what games between strong human players might look like if these variants were adopted. Qualitatively, several variants are very dynamic. An analytic comparison show that pieces are valued differently between variants, and that some variants are more decisive than classical chess. Our findings demonstrate the rich possibilities that lie beyond the rules of modern chess.
Authors: Nenad Tomašev, Ulrich Paquet, Demis Hassabis, Vladimir Kramnik
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
|Category||Science & Technology|
|Sensitivity||Normal - Content that is suitable for ages 16 and over|
1 week, 6 days ago
3 weeks, 4 days ago
Warning - This video exceeds your sensitivity preference!
To dismiss this warning and continue to watch the video please click on the button below.
Note - Autoplay has been disabled for this video.
|
OPCFW_CODE
|
By Alan Zeichick | March 2020
When enterprises journey to the cloud, many applications and resources come along for the ride—and those applications might kick off a tug-of-war.
For example: An on-premise ecommerce platform built on the Oracle Exadata Database Machine and Microsoft .NET applications. Or perhaps an ERP system that uses Oracle PeopleSoft applications, Office 365, and Microsoft Workplace Analytics, as well as Oracle Database with Oracle Real Application Clusters. Which public cloud is the right place for those critical enterprise workloads? The best answer might not be either Oracle Cloud Infrastructure or Microsoft Azure. Indeed, the best answer might be “Both.”
Until recently, “Both” was not a realistic option for most organizations. Fast-forward to June 2019: An alliance between Oracle and Microsoft produced a fast, secure, easy-to-implement interconnect between their two clouds. That interconnect allows cross-cloud provisioning and low-latency links among different complex applications—each with a variety of parts—protected by a single identity-management system for both clouds. (See figure 1.) And with the recent announcement of a new interconnect site in Tokyo extending new functionality into Asia, more joint Oracle/Microsoft customers stand to benefit.
Many systems integrators, including multinationals such as Accenture, Capgemini, and Cognizant, are working with enterprises that have significant investments in both Oracle and Microsoft technology. Here are four top reasons why those systems integrators are bullish about the Oracle-Microsoft interconnect and alliance.
Want to link applications in two separate clouds? You could build your own bridge code. Another alternative: License a third-party service. Either of those options adds complexity to the connection, will incur high costs, might increase data latency, and can be brittle if something changes. And if something goes wrong, who are you going to call?
By contrast, the Oracle-Microsoft interconnect is direct without any intermediaries and is supported by both companies. You need to call only one phone number if you have questions or concerns.
“The interconnect solves the inherent problem of the connection,” says Chris Pasternak, global Oracle cloud infrastructure lead at Accenture. “If I wanted to connect two clouds independent of each other, and do it myself, I’d have to go through some sort of third-party connection, such as a cloud broker or cloud exchange. That creates a potential pain point, and another bill to pay.”
According to Pasternak, the Oracle-Microsoft interconnect eliminates that complexity. “Through the magic of automation, I could simply write a script and connect two applications,” Pasternak says, recalling his initial experience with the alliance. “It took less than an hour to connect our first application. We’re talking an hour versus weeks. And now that we have the scripts, it takes minutes.”
1.5 Average time, in milliseconds, for cross-cloud communications, according to Accenture benchmarks
When one part of a complex business application talks to another part, such as to query a database or to refresh website content, most applications expect that communication to happen nearly instantly. That’s what happens inside a data center, where a web application server in one equipment rack talks to a database server that’s only a few feet away. The communication between them is said to be low latency, specifying only a very short wait before data being sent is received and processed. When those business applications are migrated into a single cloud, the connection is also low latency.
But multicloud applications create a challenge. If data in one public cloud must exit that cloud, travel across a bridge that might be hundreds or thousands of miles long, enter the other cloud, and finally be routed to the correct cloud server, that lengthy communications path can introduce significant delays—which are doubled when the second part of the application sends its reply back to the first part. The complexity of the communications path can introduce inconsistent delays, especially if the path crosses multiple service providers. And of course, the greater the physical distance between cloud data centers, the longer it takes the data stream’s electronics or photons to flow across that distance—which, even at the speed of light, can become problematic for some multicloud applications.
Such delays, which don’t occur in conventional data centers, can cause applications to malfunction in several ways. For example, the first application might time out—that is, think, “Hmm, this is taking too long,” and resubmit its query, over and over again. Or the first application might simply give up and register a “connection broken” error.
A benefit of the Oracle-Microsoft interconnect is that the two companies’ cloud data centers are physically close to each other. Another is that the connection is direct between the clouds and has been carefully engineered to provide consistent, low-latency links comparable to what would be found inside a traditional enterprise data center.
How fast is that low-latency connection? According to tests from Accenture, run more than 32,000 times, the cross-cloud latency averaged less than 1.5 milliseconds—fast enough, reliable enough, and predictable enough to give customers confidence in this multicloud connection.
Chris Hollies, CTO of the Oracle practice at Capgemini UK, believes that this low-latency interconnect will become the foundation for a seamless, multicloud-architected set of applications and will open up multicloud enterprise adoption in a big way. “The enterprise used to put Windows and Oracle databases alongside each other in the same rack,” says Hollies. “The Oracle-Microsoft low-latency interconnect allows the enterprise to think that way again.”
Within a data center, organizations can standardize on a single directory system to manage access privileges for users, applications, storage, and services—even if those resources come from different vendors. That approach typically breaks down if those applications are migrated to separate nonconnected clouds. That’s one problem that the Oracle-Microsoft alliance addresses, says Roshan Subudhi, vice president and global head of the Oracle practice at Cognizant, who notes that Microsoft Active Directory is often the preferred enterprise directory platform.
When moving to the cloud using the interconnect, enterprises can leverage Azure Active Directory as a single sign-on for Oracle Applications running in Oracle Cloud, Subudhi explains. “The interconnect allows us to leverage multiple security and single-sign-on products that link both cloud stacks across Oracle Cloud Infrastructure FastConnect or Azure ExpressRoute.”
The cloud alliance provides unified identity and access management, including automated user provisioning, to manage resources across Oracle Cloud and Azure. As Subudhi points out, Oracle Applications can use Azure Active Directory as the identity provider and for conditional access.
“You can leverage these dual architectures and deliver a single set of credentials, as well as an improved logging and security experience over the interconnect,” Subudhi continues. “We are able to provide solutions around the cross-application architecture that resides on two different clouds. This capability will create an explosion of opportunities.” Such opportunities include user-facing applications running on Microsoft Azure talking to ERP services within Oracle Cloud, or applications running within both clouds sharing a single access-control model using Microsoft Active Directory or Oracle Access Manager.
Even in a digital age, the chief operating officer and chief technology officer can be buried under paperwork, and that includes managing software licensing and technical support. The complexity of figuring out the right license terms for applications and services that span multiple clouds can be daunting – as can be the challenges of figuring out which cloud provider to call when there’s a technical question. The Oracle-Microsoft alliance addresses those pain points, applauds Ramanan Ramakrishna, cloud center of excellence lead at Capgemini.
“The enterprise’s procurement function is becoming a very powerful gatekeeper,” Ramakrishna says. “This multicloud alliance takes the licensing conversation out of the equation.” Those license terms, offered by both Oracle and Microsoft to their volume-licensing customers, apply to specific applications and services.
Similarly, Oracle and Microsoft have created a collaborative support model, where a customer can call either company in regards to technical or operational questions for the interconnect. This lets developers and systems administrators leverage their existing Oracle or Microsoft customer support relationships and processes.
Ramakrishna is thrilled that the cross-cloud licensing terms provided by the Oracle-Microsoft interconnect encourages IT leaders to adopt the best of what both parties have to offer. “Nobody has to dig really deep or become concerned about ‘how do I license this software?’ or ‘who do I contract this service with?’” he explains—and that flexibility unlocks the multicloud innovation in the enterprise cloud journey.
Illustration: Wes Rowell; Motion graphic: iHua Design
Alan Zeichick is director of strategic communications at Oracle, and is editor-in-chief of Java Magazine. He was previously the editor-in-chief of Software Development Times. You can follow him on Twitter @zeichick.
|
OPCFW_CODE
|
Using Guava Cache As Map with Time Based Eviction
I basically need a map where entries would expire after a specific known time period and then are removed.
The way it's being used in my app is not really a cache but it seems Guava cache can serve the purpose. Would this be the right choice? One thing is I'm going to need to query if the map is empty and I saw that Guava has only a size function which its documentation says is only an approximation.
Why do you need to query if the cache is empty?
It's basically used as a map. I have a Map<String, Cache<String, String>> and once a specific Cache value becomes empty I want to remove it from the Map.
maybe https://stackoverflow.com/questions/3802370/java-time-based-map-cache-with-expiring-keys
The reason it's an approximation is because multiple threads can be modifying the cache at the same time. Can you say more about why you want to remove empty caches from the map? Is it for performance? Will it actually save you much?
@LouisWasserman the reason I need to evict empty cache from the map is regarding the correctness of the app.
Guava won't return expired entries, so usually how immediate the removal occurs is not important. Can you describe what "correctness of the app" entails?
You can use Guava for this purpose. Note the caveat about cleanup, as noted in the documentation here (reproduced below).
Caches built with CacheBuilder do not perform cleanup and evict values
"automatically," or instantly after a value expires, or anything of
the sort. Instead, it performs small amounts of maintenance during
write operations, or during occasional read operations if writes are
rare.
The reason for this is as follows: if we wanted to perform Cache
maintenance continuously, we would need to create a thread, and its
operations would be competing with user operations for shared locks.
Additionally, some environments restrict the creation of threads,
which would make CacheBuilder unusable in that environment.
Instead, we put the choice in your hands. If your cache is
high-throughput, then you don't have to worry about performing cache
maintenance to clean up expired entries and the like. If your cache
does writes only rarely and you don't want cleanup to block cache
reads, you may wish to create your own maintenance thread that calls
Cache.cleanUp() at regular intervals.
If you want to schedule regular cache maintenance for a cache which
only rarely has writes, just schedule the maintenance using
ScheduledExecutorService.
As for the checking size point, you are correct that the size() is approximate. If you need to perform some action whenever an entry is invalidated, you should use the removalListener functionality. Relevant sample code from the documentation reproduced now.
CacheLoader<Key, DatabaseConnection> loader = new CacheLoader<Key, DatabaseConnection> () {
public DatabaseConnection load(Key key) throws Exception {
return openConnection(key);
}
};
RemovalListener<Key, DatabaseConnection> removalListener = new RemovalListener<Key, DatabaseConnection>() {
public void onRemoval(RemovalNotification<Key, DatabaseConnection> removal) {
DatabaseConnection conn = removal.getValue();
conn.close(); // tear down properly
}
};
return CacheBuilder.newBuilder()
.expireAfterWrite(2, TimeUnit.MINUTES)
.removalListener(removalListener)
.build(loader);
How will removalListener allow me to know if cache is empty?
Why do you need to know if the cache is empty? I don't think Guava can give such functionality, so if you require it for whatever reason, you will probably have to write your own.
Still, you could theoretically maintain your own counter. Increment it when an item is added to the cache, decrement when removed (via the listener). If it gets to 0, then your condition is met.
Counter is hard because it can be hard to determine when it should be incremented. A put may be a duplicate element. Instead you would need to maintain the set of keys and on removal notification remove the key (provided in the notification) from the set. Then if the set is empty, the cache is empty. Consider a class that wraps the cache (maybe extend ForwardingCache) exposing get / put methods, implements RemovalListener and keeps track of the cached keys to determine empty state.
|
STACK_EXCHANGE
|
Examples where is Java more elegant than C#?
It's been a couple of years since I last worked with Java.
Can you tell me what problems can be solved more elegantly in Java?
I am aware of the following benefits of Java:
Java 'runs everywhere',
Java has support for units and
measures
(supposedly) better latency in Java
J2EE (I don't think there is an equivalent in .Net)
different approach to generics (with odd circular definitions such as "Enum>", see Ken Arnold)
What about generics - are there elegant Java examples that cannot be represented in C#? Or other APIs or libraries?
Thanks,
Jiří
P.S. some general links:
Wikipedia comparison article
Comparing Java and C# Generics -
Jonathan Pryor's web log
I don't mind questions of the type "what are the differences between generics in Java and C#", but questions that presuppose a qualitative difference seem to be begging for an argument. My preference would be to rewrite this question in a less provocative way.
I like Java's anonymous classes... nice for visitors for example.
Java cloning destroys .NETs that's the only plus I can think of.
Java generics are very different to C# generics. And yes, there are places where that means it can be more elegant - usually in terms of wildcarding and variance. On the other hand, wildcarding is generally poorly understood (and I very definitely include myself in that camp) and the whole business of type erasure means that in general I far prefer .NET generics.
A rather different place where Java "wins" IMO is its enum support. C# enums are basically named numbers - Java is much more object oriented. A similar effect can be mostly achieved in C# using nested classes, but more framework support (an equivalent to EnumSet) and switch support would be welcome.
I also like the ability to restrict visibility to a package (namespace) in Java - although this is the only side of Java's access rules that I prefer to C#.
Having used both Java and C# pretty extensively for a number of years, my own feeling is that on the language level C# is far, far ahead of Java. Really, good cross-platform support and a large existing codebase are the only two significant advantages Java has over C# and .NET at this point.
enums... of course they gave themselves plenty of versions to mull over that particular problem didn't they? :)
What does "better latency" even mean in this context?
Other than that, I agree with Jon Skeet. On the whole, C# is lightyears ahead of Java. There are a few tricks in Java that are neat (enums for example), but they're very much the exception, not the rule.
Thanks for the replies guys - I am just googling EnumSets... Under "latency", I meant the speed of reply to an incoming request (e.g. HTTP request over LAN)
-jiri
That has nothing to do with the language though. In networking, latency is measured in milliseconds. In code, it is measured in nanoseconds, or at most microseconds. Any language you care to mention can handle a HTTP request in less time than it takes your network card to receive a packet.
|
STACK_EXCHANGE
|
The ModelAnt CORE package contains tasks, types and macros to provide model-independent or reflective access to models and model elements, like:
This task adapts the general compare.models task by providing comparison rules for models in MOF 1.4. The MOF models are used mostly as meta-models, this way defining other modeling languages, so this macro helps identifying what transformations are needed when converting a model from one meta-model to another, i.e. expressing the same model from one modeling language into another. For example: In order to identify the transformations needed to convert an UML 1.3 model to UMl 1.4, the following procedure was applied:
- start with compare.metamodels with no nested elements, set up to compare both meta-models.
- do a meta-model comparison
- use the reported differences to identify the corresponding meta-model elements and define them as nested <equals>, <except> or <map> elements in the compare.metamodels macro
- repeat steps 2 and 3 until no more differences are reported
- the defined <equals>, <except> or <map> elements define the transformations to be used in copy.in.metamodels task
Uses the MOF Reflective API to compare models, so the comparison is independent of their meta-model. In order to find the corresponding model elements this task uses a list of nested <equals> elements that define for each meta-class of that model which attributes and associations must have equal values in order to treat two of its instances as equal.The <equals> defined for a meta-class is valid for all its subclasses. Any <equals> defined for a subclass inherit the <equals> set for the super-classes.
NOTE: The default comparison specifies no attributes and associations. As a result, if no comparison is defined, all model elements are treated as comparable.
In order to state manually any correspondence between model elements, identified by other means, provide corresponding <map> elements. For example, if a-priori a correspondence between model elements is known, state it as nested <map> elements.
The results of the models comparison are sent to the tasks in comparison element with some specific conventions. See the ant.doc documentation for more details.
Copies a model from one meta-model into another meta-model, i.e. it represents a model from one modeling language to another modeling language. The correspondence between both meta-models (languages) is stated in nested <map> that could have been identified in compare.metamodels task.
print tasks family
These tasks print a model element considering it in different meta-models and for different purposes. See the ant.doc for more details.
ref. tasks family
They access the attributes and associations of the object in a model using the MOF Reflective API
wrap tasks family
Uses a registered factory and builds a wrapper around the model element or a collection of such. This task wraps an object into a corresponding wrapper object using a corresponding factory which has been registered previously. The object is provided as a value of the property named in “property” attribute (default: this) and is stored in the property named in “name” attribute (default: this)
Registers a factory of model element wrappers, adding more features to the model. This class is a task that registers a new factory of wrappers for the subclasses of the class/interface provided. The factory is loaded through the classpath[ref] provided, whereas the root class/interface is searched through the system class loader.
Unregisters a factory of model element wrappers. This class is a task that unregisters a factory of wrappers. The factory class is loaded/searched through the classpath[ref] provided.
Wraps a model element, invokes a method on the wrapper and stores its result in a property. This task calls the method with the name provided and arguments, on the wrapper object for the referred model element and stores the result into the property with the name provided. Void methods are treated as producing null result. It allows adding nested tasks, running in a separate environment, so they could prepare the interim data and prepare the “property” value without affecting the task’s environment.
See also the core package documentation.
|
OPCFW_CODE
|
I identify three trends in IT that will have a large impact on the university:
- increasingly inexpensive storage, network, and computation power for individuals For $25/year, I am promised unlimited storage and bandwidth for all my photos by Flickr. I can upload all my videos to YouTube or Google Video for free. For $16/month, I have 400 GB of storage and 4TB of monthly bandwidth from dreamhost.com. With this comparatively inexpensive infrastructure, I can create sophisticated web applications that fuse together a vast array of open source libraries and applications, as well as further storage (S3) and computation power (EC2) from amazon.com and a numerous other providers.
- the rise of peer production/mass collaboration in "Web 2.0". In naming "You" (that is, all the many, typically nameless, individuals who participate on the Web) as Person of the Year, Time summarizes this trend in the following way: "In 2006, the World Wide Web became a tool for bringing together the small contributions of millions of people and making them matter." It is easy to spot the plentiful junk emerging from Web 2.0, yet universities will find it increasingly difficult to dismiss the astounding richness of such entities as the Wikipedia and Flickr.
- the continued deployment of XML web services XML will continue to be used widely by organizations and, more recently, by individual users. Using service-oriented architectures, organizations/enterprises will re-factor their infrastructure in terms of reusable services that will be accessible through XML web services.
After first dismissing these technology trends as merely faddish, the university community will come to terms with them to take advantage of their positive aspects, adapting them to the university environment, while avoiding the negatives (which are very real, because of the difference in priorities between commercial enterprises and the university)
These technology trends will accentuate the computerization of research in academic disciplines. Some pioneers, especially those in disciplines that have a long history of computation, have already taken advantage of commodity hardware and built extensive computer-based collaborations. Many other researchers will be struggling to use the same technology. I argue that it is in the institution's interests to help all of its members to work at some baseline level. Moreover, there will be challenges, such as the long-term archiving of data, that the university as a whole will have to tackle, creating a demand for architectures and policies to handle these common needs.
The availability of cheap hardware and storage outside the university presents an immediate challenge to university. Many pioneering university members will be tempted to use those systems, because of low prices even if these services are not quite optimized for users' academic needs. Should people at the university be encouraged to use those outside services? Is there a way for the university to purchase those services and adapt them on behalf of the university community? What policies should be put in place concerning the use of outside services? I predict that the university will figure out a combination of industrial partnerships, system integration, and ways to help individuals cobble together the best solutions that will satisfy their research needs and also handle relevant policy issues.
The university community will have its own large collections of data and digital content to handle. Take, for example, the digitization of the UC library, which will result in a collection of millions of digitized books available to the university community. These data present incredible opportunities for education and research, ones that are best exploited if we work together as a community.
This is a great time for the university to develop an information technology architecture to handle these challenges, specifically an SOA that will work for this context.
|
OPCFW_CODE
|
We’ll take a look at this example. If you’re on Chrome Desktop you can try it online.
Inspecting the final body HTML leads us back to the source code:
Step 1: What are we looking at?
The user has selected an element from the DOM. Its outerHTML looks like this, and the “H” in “Hello World!” is selected.
The outerHTML came about as the combination of two events:
<div id="welcome"></div>in the initial page HTML
Since the user clicked on the “H” character in the tag content it’s straightforward which event we’ll need to look at in more detail: the
Step 2: Finding out where the innerHTML value was set
To track where in the code the
innerHTML assignment happened we need to run some code every time
innerHTML is updated.
This is possible by adding a property setter to the
innerHTML property of
Now, the downside is that we are no longer actually updating the
innerHTML of our element, because we overwrote the original setter function that did that.
We want to call this native setter function in addition to running our tracking code. The details of a property - such as its getter, setter, or whether it’s enumerable - are stored in something called a property descriptor. We can obtain the descriptor of a property using
Once we have the original property descriptor we can call the old setter code in the new setter. This will restore the ability to update the DOM by assigning to
Now, in the setter we want to record some metadata about the assignment. We put that data into an
__innerHTMLOrigin property that we store on the DOM element.
Most importantly, we want to capture a stack trace so we know where the assignment happened. We can obtain a stack trace by creating a new
Let’s run the “Hello World!” example code from earlier after overwriting the setter. We can now inspect the
#welcome element and see where its
innerHTML property is assigned:
Step 3: Going from “Hello World!” to “Hello”
We now have a starting point in our quest to find the origin of the “H” character in the
#example div. The
__innerHTMLOrigin object above will be the first step in on this journey back to the “Hello” string declaration.
__innerHTMLOrigin object keeps track of the HTML that was assigned. It’s actually an array of
inputValues - we’ll see why later.
Unfortunately, the assigned value is a plain string that doesn’t contain any metadata telling us where the string came from. Let’s change that!
This is a bit trickier than tracking the HTML assignments. We could try overriding the constructor of the
String object, but unfortunately that constructor is only called when we explicitly run
To capture a call stack when the string is created we need to make changes to the source code before running it.
Writing a Babel plugin that turns native string operations into function calls
Babel is usually used to compile ES 2015 code into ES5 code, but you can also write your own Babel plugins that contain custom code transformation rules.
Strings aren’t objects, so you can’t store metadata on them. Therefore, instead of creating a string literal we want to wrap each string in an object.
Rather than running the original code:
We replace every string literal with an object:
You can see that the object has the same structure we used to track the
Putting an object literal in the code is a bit verbose and generating code in Babel isn’t much fun. So instead of using an object literal we instead write a function that generates the object for us:
We do something similar for string concatenation.
greeting += " World!" becomes
greeting = f__add(greeting, " World!"). Or, since we’re replacing every string literal,
greeting = f__add(greeting, f__StringLiteral(" World!")).
After this, the value of
greeting is as follows:
greeting is then assigned to our element’s innerHTML property.
__innerHTMLOrigin.inputValues now stores a tracked string that tells us where it came from.
Step 4: Traversing the nested origin data to find the string literal
We can now track the character “H” character in “Hello World!” from the
Starting from the div’s
__innerHTMLOrigin we navigate through the metadata objects until we find the string literal. We do that by recursively looking at the
inputValues is an empty array.
Our first step is the
innerHTML assignment. It has only one
inputValue - the
greeting value shown above. The next step must therefore be the
greeting += " World!" string concatenation.
The object returned by
f__add has two input values, “Hello” and “ World!”. We need to figure out which of them contains the “H” character, that is, the character at index 0 in the string “Hello World!”.
This is not actually difficult. “Hello” has 5 characters, so the indices 0-4 in the concatenated string come from “Hello”. Everything after index 4 comes from the “ World!” string literal.
inputValues array of our object is now empty, which means we’ve reached the final step in our origin path. This is what it looks like in FromJS:
A few more details
How do the string wrapper objects interact with native code?
If you actually tried running the code above, you’d notice that it breaks the
innerHTML assignment. When we call the native innerHTML setter, rather than setting the content to the original string, it’s set to “[object Object]”.
innerHTML needs a string and all Chrome has is an object, so it converts the object into a string.
The solution is to add a toString method to our object. Something like this:
When we assign an object to the innerHTML property, Chrome calls
toString on that objects and assigns the result.
Now when we call code that’s unaware of our string wrappers the calls will still (mostly) work.
Writing the Babel plugin
I won’t go into too much detail about this, but the example below should give you a basic idea of how this works.
Call stacks and source maps
Because Chrome runs the compiled code rather than the original source code, the line and column numbers in the call stack will refer to the compiled code.
Luckily Babel generates a source map that lets us convert the stack trace to match the original code. FromJS uses StackTrace.JS to handle the source map logic.
|
OPCFW_CODE
|
Does "happy path to the left edge" break Python conventions?
I found the short article Align the happy path to the left edge quite helpful in improving readability of functions. Briefly, reading down the left edge of the function should step you through the logic of the happy path scenario. Errors and special cases are nested in conditionals, or decanted into separate functions.
The article was written with Go in mind but I believe this approach could be applied to other languages, Python in particular. But do any of the guidelines below, taken directly from the article, break Python conventions (sometimes called Pythonic idioms)?
Align the happy path to the left; you should quickly be able to scan down one column to see the expected execution flow.
Don’t hide happy path logic inside [nested indents]
Exit early from your function
Avoid else returns; consider flipping the if statement
Put the happy return statement as the very last line
Extract functions and methods to keep bodies small and readable
If you need big indented bodies, consider giving them their own function
I don't think 5, 6 or 7 are at all controversial. Are there existing guidelines or conventions that contradict any of 1-4?
Which Python convention are you worried about?
If Golang didn't use 8 space indentation (by convention) there wouldn't be such a big problem with readability. I really want to learn the language, and I can tolerate highly opinionated languages or frameworks, unless most of those opinionated choices are just plain stupid.
@user949300 While I don't disagree with your characterization of Go, I do have to point out that gofmt enforces the use of tabs for indentation. So 1-tab indent, not 8-space. You can configure your editor to render tabs however large you want. I hear some people like setting their tab-width to 3!
@amon True, but the preferred Golang configuration is 8 spaces per tab. (I actually use 2 most of the time!)
There are no strong Python conventions such as PEP-8 about any of this.
Some tools like Pylint will complain about useless else-clauses (item #4) or about excessively convoluted control flow.
I think there is a strong language-independent argument for what you call the “happy path to the left edge”. Previously, tradeoffs of different code layouts were considered on this site under Approaches for checking multiple conditions and its linked questions.
One notable drawback of structuring the code with a linear happy path is that the guard conditions will often feature negations of the form “if this isn't the expected case, then return”. Such negations can make the code more difficult to read. Within reason, there's nothing fundamentally wrong with nesting – but there is something wrong with following one “best practice” or another when it makes the code more difficult to read.
|
STACK_EXCHANGE
|
20140207 - 20140214 Gamification
I strongly recommend the Coursera Gamification Course
Players are the center of the game. From the players' standpoint, it's about them.
Try to realize how players need to feel that they are in control
- player needs to have the sens that they can make choices and the choices have results.
- So think about how do you create an environment so that your customers, users, whatever you call them, feel like they're the ones who are driving?
That causes some meaning
- It means something that they care about that they are willing, at that moment, to think of as being valuable.
Purposes of game designers:
- Get players into the game
- Keep them playing
- Player journey:
- the player is engaged in an experience.
- You want the player journey to have a beginning, a middle, and an end. And ideally, those are in some sort of progression.
- The player always starts at the beginning. You want the player to get to a point of mastery and to have that journey be seamless.
What are the things that the game does that get users quickly into the game and make it easier for them to make progress within the game:
- Express feedback
- Dumb down (a limit on you can do in the first “levels”). Simplification
- first levels unbelievably easy.
- Without any kind of manual
Games have to be balanced. Not too hard. Not too easy. Not too many choices. Not too few choices.
Things that are fun:
- Problem-solving. Overcoming obstacles. Surmounting challenges.
- Chilling out. Relaxing.
- Triumphing. The notion that you’ve crushed someone else.
- Role playing
- Goofing around. Act silly
Nicole Lazarr. What in games actually produces fun? Four different kinds of fun:
- Easy fun: blowing up stream, just chilling out, goofing off, hanging out with your friends. It’s fun because it’s easy.
- Hard fun: problem solving, mastery, completion, overcoming obstacles. Fun represents accomplishment.
- People fun: interacting, working together, socializing
- Serious fun: fun in doing things that are meaningful. good for the planet, good for your family, good for your community. Anything that has meaning for you. I.e: Collections. it's meaningful in some serious way, for you at that time. It may not be something that someone else finds fun or meaningful
Fun doesn't just happen, it has to be designed.
Gamification is about finding the fun, finding the game-like aspects wherever they are, and using them to create an environment that moves people a little bit more towards an objective.
|
OPCFW_CODE
|
Like many people, Anna do Rosario ’25 became familiar with entrepreneurialism through lemonade stands that she and her siblings set up outside their Massachusetts home. While these stands nurtured her entrepreneurial spirit, the family’s stimulating dinner conversations and weekly museum trips expanded her mind.
“I’m always thinking about how things work. How can I improve this? How can this help people?” said do Rosario, a first-year student considering an economics and computer science double major at Colby. “It’s a state of being.”
As she grew up, she sought ways to transform her ideas into actions. With high school friends, she cofounded Stick ’Em, a short-lived company that sold adhesive dry-erase stickers. She also started designing an app, SYMPlicity, that used artificial intelligence (AI) to analyze symptoms of simple medical conditions for people without access to basic medical care. Now at Colby, she’s onto her next project. In collaboration with her father, Alden do Rosario, she cofounded Poll the People, a software service that offers micro-surveys for making data-backed decisions for various business dilemmas.
“I want to make market research more accessible,” she said, “and help small businesses, or even individuals, to make more informed decisions.”
The idea for Poll the People stemmed from her father’s work. A computer scientist turned entrepreneur, Alden do Rosario was developing an app and trying to decide between two logos. But he wanted to defer the choice to possible users and decided to run a comparative test of the logos. “He had to code that himself, send it out, and analyze the results,” Anna do Rosario said. “And that took copious amounts of time, effort, and creative bandwidth.”
Because not everybody has that kind of expertise and time, Anna do Rosario saw this as a business opportunity in a data-driven society and economy, which she pursued during her gap year before coming to Mayflower Hill.
The platform uses AI tools to examine the results, including the word-processing tool GPT-3, which helps assess sentiments and generates a cohesive essay for polling results that would take hours for a human to produce. “My dad and I brainstormed how to use such a powerful and ingenious language-prediction tool like GPT-3 to analyze responses,” said do Rosario, who built the wireframes for the website, produced all the content, and finalized the logo.
At Colby, she’s gathering tools that will equip her to become a better decision-maker for Poll the People.
Last fall she took a computer science course and formally began learning the coding language Python. “It has definitely awarded me the ability to understand the more technical parts of the software [of Poll the People],” she said.
She also took a deep dive into ethical concerns surrounding AI in her science, technology, and society class called Information Before and After Google: Impacts and Technologies with Data Services Librarian Kara Kugelmeyer. “I’m writing about the regulation of artificial intelligence while concurrently developing a startup that uses artificial intelligence,” she emphasized. “We spent a lot of time talking about how to regulate AI and the parties—mainly the public and private sectors—that must cooperate. It was especially fascinating to discuss this while tech companies, such as Facebook, operations come into the public eye.”
In the process, do Rosario also turned to Colby’s Davis Institute for Artificial Intelligence and its director, Amanda Stent, who previously worked on the voice recognition technology Siri, for insights on AI ethics.
“AI can be an incredibly powerful and beneficial technology if we understand it,” said do Rosario. “AI can be used to solve many, many different problems, and I think that’s why we have the institute here because it can and should be applied to any department. AI is the next frontier in knowledge discovery. There’s so much potential for discovery when students and teachers collaborate to apply these tools to their passions and fields of interest.”
Seeing its wide application, do Rosario wants to use computer science to solve economic problems in the world. After Colby, she aspires to work at a tech startup. In the long run, she hopes to lead one as its CEO.
“My dream,” she said, “is to have an idea that I’m really passionate about and let that passion guide me to success.”
Getting a Head Start on AI
The Davis Institute for Artificial Intelligence and Halloran Lab for Entrepreneurship host SureStart, a summer program that teaches real-world skills to students interested in AI
Colby Debates a Blueprint for an AI Bill of Rights
The campus hears from a coauthor of the White House’s statement of principles for artificial intelligence
Elevating the Role of Undergraduates
Adaobi Nebuwa ’24, a computer science neophyte when she got to Colby, now plays a pivotal role in one CS professor’s lab
A Vital Element of AI? Empathy
How one Colby graduate is putting humanity into AI technology
A Force for Good
Students use innovation, creativity, and a 3D printer to give people in need a prosthetic hand
|
OPCFW_CODE
|
Background Images not showing on Github Pages for Website
I've looked at some other threads on Stack Overflow regarding this problem, but for some reason they don't seem to be working. I've checked things like the path directory for the image, and I think that it's correct.
Here's a link to my repo for the website on github pages: https://github.com/lawrencecheng123/lawrencecheng123.github.io
In my repo there is a "Seattle.jpg" image that I'm trying to set as the background of the first page of my website, which is referenced by the "fstPage" class on line 81 of the "index.html" file and line 321 of the "index.css" file in the repo.
Thank you for the help!
It fails because you named your file wrong. Inside of your index.css, you wanted to use a file named Seattle.JPG.
Your file is named Seattle.jpg. Fix the ending and add https://.
Here's the right link: https://lawrencecheng123.github.io/Seattle.jpg
Complete CSS:
.fstPage {
background-image:url("https://lawrencecheng123.github.io/Seattle.jpg");
/*background-color: lightgray;*/
height: 700px;
width:100%;
background-size:cover;
}
Working snippet:
.fstPage {
background-image:url("https://lawrencecheng123.github.io/Seattle.jpg");
/*background-color: lightgray;*/
height: 700px;
width:100%;
background-size:cover;
}
<div class="fstPage"></div>
Thanks for the response. I did that before for the link, but it still wouldn't load. I also read on a different thread that using .JPG would work, so I was trying that way too. However, for both ways it just shows a white page for me
Did you include index.css? Can't see it in your index.html.
Thank you for the snippet. It seems to be working in there. However, when I put it into the main file it still shows up as blank for some reason.
I put the new link with the https into the fstPage class in index.css, and I also have the fstPage in index.html as well. Not entirely sure why it works in the snippet, but not in my main code
Looks like you don't include your index.css. Try to add <link rel="stylesheet" href="index.css"> in your head.
Actually nevermind, it's showing up now. Thank you for all your help!
first import index.css file in your index.html file like
change (1) :
<link rel="stylesheet" href="index.css">
and then you have to update your class as mentioned below
change(2):
.fstPage {
background-image:url("Seattle.jpg");
/*background-color: lightgray;*/
height: 700px;
width:100%;
background-size:cover;
}
and I hope it will work fine for you also
@Lawrence Cheng
I was searching for all the forums. But none of which worked for me.
Only workaround for me was to change the order of stylesheets in the head
I mean , if you are using multiple stylesheets, including those from bootstrap cdn with the locally saved one, always keep the local stylesheet on top.
Like this
<link rel="stylesheet" href="style.css">
<script src="https://kit.fontawesome.com/yourcode.js" crossorigin="anonymous"></script>
<link<EMAIL_ADDRESS>rel="stylesheet" integrity="sha384-1BmE4kWBq78iYhFldvKuhfTAU6auU8tT94WrHftjDbrCEXSU1oBoqyl2QvZ6jIW3" crossorigin="anonymous">
<link<EMAIL_ADDRESS>rel="stylesheet"
|
STACK_EXCHANGE
|
Why and where do we use array in software development
I am new to programming and everyone discuss about array. I have gone through array in C# and tried it in console. If someone can say me where do we exactly use array in real time software development. I know its a basic question but couldn't figure it out. Thanks in advance
You need to read more about data structures
I wasn't fast enough with my answer... I'll leave it as a comment:
The point of data structures is to come up with different methods of storing collections of the same data so that different operations are more efficient, depending on usage patterns. This is a common problem to have once you've abstracted a real-world concept into a model - such as a Person into a Person class. Rather than re-writing data structures libraries from scratch each time we reuse them across different types (in C# generics help us accomplish this).
Arrays and Linked Lists provide the basic building blocks of data structures.
Arrays store the type of object (or references to the same type of object) all in a row in memory. It has constant (O(1)) access time to find an element, but linear (O(n)) insert time to put an element at the front.
Linked lists are the opposite - you only store a reference to the first item. It takes in general linear time to find an element, but constant time to put an element at the front.
When you combine the two concepts together you can get some powerful hybrid data structures - hash sets (I think of them as predominately arrays of linked lists) and trees (I think of them as predominately linked lists of arrays) that find ways to get fast operations all around.
In addition to arbitrary inserts and reads there are other operations to consider, as well.
You should look into a Data Structures book, course or website to understand all these concepts more fully. Unless you're going to be implementing the libraries it's probably more important to learn the major data structures and their space and time complexity more so than their implementation so you know when to use them and how it will affect the performance of your code.
You would typically use an array or a list where you have multiple instances of data with the same structure. Imaging for instance that you want to display a list of all users in your system then behind the scenes you might want to hold them as User[] users = new User[] rather than User user1, User user2, etc... It makes it a lot easier to do the same thing to (or with) each element of the list e.g.foreach(User user in users){...} than if you had to hold each entry separately.
Instead we can use sql query to fetch all the users details and display? wat is the difference between writing sql query and using array? As I am beginner I am using array only in console application I dont know how to use this concept in asp web forms.. Sorry If I am wrong at any point
You use arrays wherever you need to maintain lists of anything at all. For example, if I need to display a list of users in an organization, I could hold all in an array of objects.
For more information: https://en.wikipedia.org/wiki/Array_data_structure
|
STACK_EXCHANGE
|
It is Free and Open Source Software, released under the LGPL, available for Windows and Linux. Minetest is developed by Perttu "celeron55" Ahola and a number of contributors.
Minetest is technically simple, stable and portable. It is lightweight enough to run on fairly old hardware. It currently runs playably on a laptop with Intel 945GM graphics. Though, as for the CPU, dualcore is recommended.
- Walk around, dig and build in an infinite voxel world (or boxel, as reddit calls it), and craft stuff from raw materials to help you along the way. We hope to add in some survival elements, but not much really exist ATM.
- Sinfully easy server-side modding API .
- Multiplayer support for tens of players, via servers hosted by users
- Voxel based dynamic lighting (quite similar to Minecraft; light up caves with torches)
- Almost infinite world and a fairly good map generator (limited to +-31000 blocks in all directions at the moment)
- Runs natively on Windows and Linux (C++ and Irrlicht. No Java.)
Extract the zip package somewhere. Run the executable found in the bin/ folder.
Note: Android version is in its early stages, so you can expect bugs.
- Add camera smoothing and cinematic mode (F8) (rubenwardy)
- Radius parameter for /deleteblocks here (SmallJoker)
- Save creative_mode and enable_damage setting for each world in world.mt (fz72)
- Configurable automatic texture scaling and filtering at load time. (Aaron Suen)
- Connect rails with connect_to_raillike and shorten the codes (SmallJoker)
- Clouds: Make cloud area radius settable in .conf (paramat)
- Added hour:minute format to time command (LeMagnesium)
- Add mod security (ShadowNinja)
- Add texture overriding (rubenwardy)
- Improved parallax mapping. Generate heightmaps on the fly. (RealBadAngel)
- Make attached objects visible in 3rd person view (est31)
- Remove textures vertical offset. Fix for area enabling parallax. (RealBadAngel)
- Add minimap feature (RealBadAngel, hmmmm, est31, paramat)
- Add new leaves style - simple (glasslike drawtype) (RealBadAngel)
- Add ability to specify coordinates for /spawnentity (Marcin)
- Add antialiasing UI setting (Mark Schreiber)
- Add wielded (and CAOs) shader (RealBadAngel)
- Add map limit config option (rubenwardy)
Apps similar to Minetest 3
Minecraft is a fun arcade game where you explore lost worlds, kill monsters and uncover secrets. Download here for Windows, Mac and Linux. This is the Exploration Update.
Infiniminer is an open source multi-player block-based sandbox building and digging game, in which the player is a miner searching for minerals by carving tunnels through procedurally generated voxel-based maps and building structures.
Battlefield meets Minecraft. FPS shooter with online multiplayer action in a sandbox world.
|
OPCFW_CODE
|
Memory allocation with Thread
I am wondering what happen if you declare a local thread within a method? Normally all the local variables will be gone as soon as the function returns since they are all allocated on Stack. However, it seems that a local thread would be a different story. Is that right?
public int A() {
Thread t = new Thread() {
doSomething();
}
t.start();
return -1;
}
This question is hard to read/understand. Can you edit it to flesh it out? Explain better what you are asking. Show some concise. code samples?
A Thread is its own GC root. So any time you create a thread despite its creation context it will not be ready to GC until its run method completes. This is true even if the local method completes and the thread is still alive.
Example:
public void doSomeAsync(){
Thread th = new Thread(new Runnable(){
public void run(){
Thread.sleep(500);
}
});
th.start();
//do something else quickly
}
After //do somethign else quickly anything defined that did not escape the method is then marked for GC. Thread th will not be marked for GC and is correctly placed on the heap with it's own thread-stack.
+1 Nice answer John. Can you show some sample code to elaborate?
Does it mean that local thread will be placed on the Heap even though it declares as local within a method ?
@user1389813 Java will do escape analysis to determine if an object can be placed locally on the stack or on the heap. A Thread inherently escapes and thus would be placed on the heap. You can read on escape analysis here http://www.ibm.com/developerworks/java/library/j-jtp09275/index.html
John's answer is good but I thought I'd add some more details. Here's a code example that I'll use to show specific variable usage.
public void startThread() {
long var1 = 10;
byte[] var2 = new byte[1024];
final byte[] var3 = new byte[1024];
final byte[] var4 = new byte[1024];
Thread thread = new Thread(new Runnable() {
private long var5 = 10;
private byte[] var6 = new byte[1024];
public void run() {
int var7 = 100;
byte[] var8 = new byte[1024];
System.out.println("Size of var4 is " + var4.length);
baz();
...
}
private void baz() {
long var9 = 2;
byte[] var10 = new byte[1024];
...
}
});
thread.start();
}
So we have a number of variables here allocated around a thread. We also have the Thread object itself as well as the Runnable target the thread is running.
thread -- Although it looks to be local to startThread(), the associated Thread is also managed by the JVM. It is only GC'd after the run() method finishes and the Thread is reaped by the JVM. After the Thread is GC'd then all of the fields used by the Thread can be GC'd.
Runnable -- This anonymous class is what the thread is running. It can be GC'd after the Thread finishes and is GC'd.
var1 -- This is local to startThread() and allocated on the stack. It will be overwritten when the startThread() method finishes and the stack is reused.
var2 -- This is local to startThread() and allocated on the heap. It cannot be used by the thread since it is not final. It can be GC'd after startThread() finishes.
var3 -- This is local to startThread() and allocated on the heap. This is final so it could be used by the thread but it is not. It can be GC'd after startThread() finishes.
var4 -- This is local to startThread() and allocated on the heap. This is final and it is used by the thread. It can only be GC'd after both the startThread() method finishes and the Runnable and the Thread are GC'd.
var5 -- This is a local field inside of the Runnable and allocated on the heap as part of the Runnable anonymous class. It can be GC'd after the Runnable finishes and the Runnable and the Thread are GC'd.
var6 -- This is a local field inside of the Runnable and allocated on the heap. It can be GC'd after the Runnable finishes and the Runnable and the Thread are GC'd.
var7 -- This is a local field inside of the run() method and allocated on the stack of the new thread. It will be overwritten when the run() method finishes and the stack is reused.
var8 -- This is a local field inside of the run() method and allocated on the heap. It can be GC'd after the run() method finishes.
var9 -- This is a local field inside of the baz() method and allocated on the stack of the new thread. It will be overwritten when the baz() method finishes and the stack is reused.
var10 -- This is a local field inside of the baz() method and allocated on the heap. It can be GC'd after the baz() method finishes.
Couple other notes:
If the new thread is never started then it can be GC'd once startThread() finishes. The Runnable and all of the variables associated with it can be GC'd then as well.
If you have a final long varX primitive declared in startThread() and used in the thread, then it must be allocated on the heap and not the stack. When startThread() finishes it will still be in use.
var 9 will be allocated on stack of that Anonymous thread, correct ? since each thread has its own stack, but shared the same heap. Also I think in order for inner class (anonymous class) to access the variables on the outer scope, the only way is to declare variable as final, not just because for Thread, correct ?
Yes, var9 will be on the stack of the new thread. Same for var7. I've edited my answer to make that more clear @user1389813.
@user1389813 If the new thread is never started then it can be GC'd once startThread() finishes. All of the variables associated with it can be GC'd then as well.
If a Thread is started from a local context, the thread will continue to execute until it's Runnable's run method has completed execution.
If the variable is of a primitive, then it'll be on the stack and will be gone when the method returns -- but your thread's Runnable instance (or whatever contains the meat of the thread) will have a copy of that primitive value.
If the variable is of a reference type, then the object is allocated on the heap and lives until there are no more references to it, at which point it's eligible for garbage collection. The reference to that object is on the stack and will be gone when the method returns, but as with primitives, the thread's Runnable will have a copy of that same reference (and will thus keep that object alive).
A primitive variable declared inside of a method will be in the stack. A primitive variable that is a field of a class will be on the heap.
@Gray Yes, I should have specified non-field variables. I tend to call class/instance variables "fields" to reduce that ambiguity, and while I think that's not uncommon, I agree it's not strictly JLS.
If you spawn a local Thread within a method, only the local method variables declared as final will stick around until the Thread has completed. When the Thread completes its run() method, the thread and any final variables it had available to it from the method that created it will get garbage collected like everything else.
Clarification
Only final variables used within the original method AND the spawned Thread's run() method will refrain from being garbage collected until both the method and the run() method completes. If the thread doesn't access the variable, then the presence of the thread will not prevent the variable from being garbage collected after the original method completes.
References
http://java.sun.com/docs/books/performance/1st_edition/html/JPAppGC.fm.html
Final has nothing to do with whether a field "sticks around". Final just affects whether the field can be reassigned and it also impacts constructor assignment ordering.
Sure it does. If I have a method that declares a final variable at the top of the method, then spawns a thread that accesses that final variable within its run() method, that final variable will not get garbage collected as long as the Thread is still running, because that final variable is still reachable in the JVM's object graph by the Thread. This is true regardless of whether the original method that spawned the thread returns before the run() method or not.
Oh I see. I thought you were talking about final fields within the Thread. Please edit your answer to clarify. Also, I'm not sure you are correct Ben unless the thread actually accesses the final field. I've verified that with testing.
I'm pretty sure (though I'm too lazy to check the JLS right now) that it's not that the method variable sticks around forever, but rather that it gets copied to an instance variable in the Runnable. (And in Java 8, you won't even need to declare method variables as final for that to work -- it's enough that they're unchanged, which is to say, you could have marked them final if you wanted.)
I think it probably depends on JVM implementation, but the prescribed behavior spec'ed by Java would be that external access by other objects would determine how long those final variables stuck around.
@yshavit if the method variable is a deep Object reference with a complex structure, I am certain that at most, the Object's reference is copied to an instance variable in Runnable - if the original method decides to modify the contents of the original Object while the Runnable is running, the Runnable will also see those modifications.
@BenLawry I'm sure that's the behavior, and in practice I'm nearly sure that's how it's implemented. I'm just not sure if that's actually mandated by the JLS, or if it's allowed by the JLS and also happens to be the only reasonable way to do it.
By the documentation below, the final variable in the question description would be considered "in use" as long as the Thread maintains a reference to it. Therefore, if the run() method of the thread accesses it, the variable will not be garbage collected until run() has completed.
http://java.sun.com/docs/books/performance/1st_edition/html/JPAppGC.fm.html
|
STACK_EXCHANGE
|
Introduction of XQuery Training:
XQuery training is designed for query XML data, the XQuery is the language and it is also constructs the expressions. It will searching and extraction the elements form XML streams. XQuery it can be used to extract the elements why it extract elements means for exchange the data from XML to XHTML. XQuery training is the Expression language and this each and every expression will be return sequence of elements. For XPath the sequences of elements can be generated from an XML document and also they can be generated in the from an XML stream. The most important expression in XQuery is the FLWOR expression, in this expression it will have five clauses. Global Online Trainings provides best XQuery online course by professionals. The classes are taken here in flexible-hour, participants can take their classes at their spare time as per personal schedule.
Mode of Training: We provide online mode of training and also corporate, job support.
Duration of Program: 30 Hours (Can be customized as per requirement).
Materials: Yes, we are providing materials for XQUERY online training.
Course Fee: Please register in website, so that one of our agent will assist you.
Trainer Experience: 10 years.
Prerequisites for XQuery Training:
- The attendances for XQuery Training should have basic knowledge on Marklogic, XML and Oracle SOA.
Overview of XQuery Training:
Learn about XPath in XQuery training:
- XPath is a W3C standard for accessing data in XML content and it’s pretty fundamental part of some other XML technologies like XSLT, XQuery and some other. Essentially XPath is allowing you to define a path into an XML document.
- XPath it is used for identifying web applications and it is most important element to identify complex elements. XPath is a language which is used to get information from XML document, so basically we use some path expressions to create this XPath.
- Earlier we use this XPath more dynamically rather than combining multiple properties to make a particular object unique. We have seen that if at all certain information is not provided by a developer for a particular object, we use to make it unique by taking some combination of properties.
- XPath training allows you to customize how an action finds the location of an item on the page. XPath can consist of path expressions and conditions.
Learn about XML in XQuery training:
XML is nothing, but Extensible Markup Language in that Markup means enclosing the textual information in between two tags. They are opening tag and closing tag, the XML is the markup language not only this HTML is also a markup language. The XML tags are user defined tags and this tags functionality is decided by user. Compare to HTML, in that HTML tags functionality is limited because that are predefined, but the XML functionality tags are extensible.
For example, we are designing an application in web service programming, in that we have to send data from client to server applications. We are XML format for communication between them. A server and client applications are nothing but java applications, .net application or etc. Simply XML is used transferring data in one language applications to another language applications in the XML format. So, the XML is acting like mediator for communication between two language applications, so the XML is called as Language. Language is nothing it will be act like a mediator between two applications. Above we explaining XML and Xpath, the basic knowledge of those courses will helps to understand XQuery training.
Importance of XQuery XPath in XQuery training:
XPath stands for XML path language and is a language used to query XML data. It’s made up of a path like syntax similar to that is found in operating system directory structures queries. XPath are called expressions, expressions can be simple. In addition to XPath training you can query XML data using XQuery. XQuery is a language that is capable of both query and transforming XML data. XQuery is actually a superset of XPath and this means that all XPath expressions also work with in XQuery in addition to querying. XQuery is also capable of handling on the manipulations and construction of XML documents. Which can allow for some very powerful expressions to be written the heart of flwor (For Let Where Order Result) statement and XQuery is similar to SQL. We provide best XQuery online course by corporate trainer, with covering all the topics of XQuery training.
Role of FLWOR (FOR LET WHERE ORDER RETURN) Expression in XQuery training:
XQuery3 online training is the Expression language and this each and every expression will be return sequence of elements. For XPath the sequences of elements can be generated from an XML document and also they can be generated in the from an XML stream.
The most important expression in XQuery training is the FLWOR expression, in this expression it will have five clauses. The first clause is for, it is iterative variable and in this clause expression will results a set of elements. Second one is Let clause is more typical assignment, it is only run once each time the rest of the query is run and so this expression is evaluated and even if it is a set its assigned once to this variable so it’s not iterating. Where clause is similar to the filters and order clause is also sort of similar to SQL. Finally, the Return clause is actually get in the result of query and just effectively executing the query in n times.
Earlier we have internet, the all the multiple computers connected by the wires. The one person of one computer can read documents from another person’s computer using a language called HTML. Each computers web browser can properly display these documents. Add CSS to the mix and you have got a beautifully styled web page.
Conclusion of XQUERY training:
XQuery training will turn into an extremely strong language, with numerous applications. Since XQuery can do the majority of what SQL can, and substantially more, it is likely that all the significant databases sooner or later will offer XQuery support to make queries, , views, and possibly stored procedures. Global online trainings provide best XQuery training by our highly skilled consultants. We also provide the classroom training at client premises Noida, Bangalore Gurgaon, Hyderabad, Mumbai, Delhi and Pune.
|
OPCFW_CODE
|
A sniffer is basically a network analyser. Likewise, a wireless sniffer is software that can analyse the traffic over a wireless network. The data thus obtained can be used for various purposes—debugging network problems, for instance. These tools can also grab all the non-encrypted data from the network, and hence can be used to crack unsecured networks. This is one of the major reasons why sniffers are a threat to networks.
Detecting the presence of such sniffers is a challenge in itself. On the other hand, you can use these tools to analyse your own networks and check the extent to which they are secure against threats. You could say that the sniffers give you an X-ray view of your network.
Sniffers provide real-time packet data from local, as well as remote machines. Some network analysers even have the ability to alert you of potential developing problems, or bottlenecks that are occurring in real-time. Some have the capability of capturing packet streams and allow you to view these packet streams and edit them.
There are many such sniffing software available on Linux, UNIX, BSD, Windows, etc. Most of the commercial software is quite costly. That, and the fact that I hate Windows, means I will be using one of the popular free software under Linux for sniffing wireless networks and to crack a WEP protected network.
This article is only for educational purposes and I will be demonstrating the use of sniffers by trying to crack my own wireless network. I will not be liable for any criminal act committed by the reader.
Basic networking information
You will need to know some basics of computer networking in order to fully understand the working of a sniffer tool. Every network device has a MAC (Media Access Control) address. Let’s consider a wireless network and, say, four different wireless network cards in its proximity that are connected to that network. The wireless network simultaneously transmits data for all four cards (four computers with wireless networks). Data for each network card is recognised by the MAC address of the corresponding network card. Generally, a network card only receives the data designated for its MAC address. However, when a card is put into what is known as a ‘promiscuous mode’, it will look at all of the packets being transmitted by the wireless network.
Wireless networks are not the same as cable networks. All computers can access all the data, but generally, they ignore all available data except for the ones designated for them. However, they no longer ignore the data when in ‘promiscuous mode’, which is the basic feature of sniffing.
There are mainly two methods to achieve this. One is where you connect to the WAP (wireless access point) using your computer to receive all the traffic transmitted by it. In this mode, you need to know the password for the network in order to connect to the WAP. In the second method, known as the monitor mode, you do not have to connect to the WAP to intercept the data; yet you can monitor all the traffic.
However, these modes are not supported by all the wireless network cards. For example, Intel’s 802.11g cards do not support the ‘promiscuous mode’. The monitor mode also needs to be supported by the card. The advantage of the monitor system (from a cracker’s perspective) is that it does not leave any trace on the WAP—no logs, no transfer of packets to the WAP or directly from the WAP.
Wireless sniffing: a case study
Sniffing wireless networks is more complicated than sniffing wired networks. This is mainly because of the various encryption protocols used. If you want to sniff a network with Wired Equivalent Privacy (WEP) security then it is fairly easy. In fact, it has been proved many times that WEP can be easily cracked (as will be shown later in the article). Sniffing/cracking networks with Wireless Protected Access (WPA) security, however, is not so easy.
The difference in WPA and WEP is that WEP applies a static method to use pre-shared keys for encryption. It uses the same key to encrypt all the data. This means a large number of packet transfers with the same key, which makes cracking easy. Second, one has to manually update all the client machines when a WEP key is changed on the network. This is not practical for large installs. WPA, on the other hand, uses the pre-shared keys to derive a temporary key, using which all the traffic is encrypted. So, WPA generates a unique key for each client and access point link. Moreover, the pre-shared key is very rarely used, making it difficult for sniffers to crack the key. I would like to make one point clear here—one can crack WPA passwords if they are too simple. This is not a flaw in WPA, but in the network manager who sets the weak password.
We will now see how to sniff a wireless network with WEP security and use the sniffed packets to crack the password.
For this study, I will be using two laptops. One running a Live CD of BackTrack Linux 3 and the other running Windows XP. The Windows laptop has access to the WAP. The user knows the key. He is using the Internet on his laptop. I (the cracker) am using the laptop with BackTrack Linux. There are many popular wireless sniffing and key sniffing tools available for Linux like Air Snort, Air Crack, WireShark, etc. I decided to go with Air Crack. (For an extensive list of all the tools, please visit, backtrack.offensive-security.com).
Remember, not all cards support monitor mode, which is what is being used here to crack the password. I am not going into the details of how to install Air Crack (or any other tool) in this article. I assume that you already have the software. In order to carry out attacks on wireless networks efficiently, you’ll almost certainly need to patch your wireless drivers to support packet injection—the patches as well as details of how to do this can be found at www.aircrack-ng.org. BackTrack Linux comes with pre-patched drivers and is a very good distribution for hacking purposes. The driver being used in this experiment is ‘MadWiFi’.
Now you can check if your card supports monitor mode by issuing the following command as the root user (from here on, all the commands are issued as the root):
This will give you the name of your wireless network card (Figure 1).
Once you get the name, issue the following:
airmon-ng stop eth1
You can replace ‘eth1’ with the name of your wireless network card device.
Then execute the following command to make eth1 work in ‘monitor’ mode (Figure 2):
airmon-ng start eth1
Now scan for wireless access points by issuing the following command:
As you can see in Figure 3, this will show you any networks detected, the MAC addresses of the access points (BSSID), the MACs of any computers that are connected to them (STATION), and the Wi-Fi channels they are operating on. If the access point is broadcasting its name (ESSID), this will also be shown.
Once you have got this information, you can try and crack the key. Note the channel of the WEP encrypted network in Figure 3—it is 6. Quit
airodump by pressing Ctrl+C and then issue the following:
airodump-ng -c X -w mycapture eth1
Replace the X with the channel number of your access point (6, in my case). This will start capturing the data that you will use to crack the WEP key, in a file called
mycapture-01.cap in your home directory. You will see packets being gathered by the tool. Make sure you get at least 40,000 packets, good enough for more than 50 per cent of the cases. In case of a very strong password, go for 100,000 packets or so, making the efficiency (chance of cracking the key) close to 99 per cent.
Now we need to inject some traffic on the network. We can do so using the
aireplay tool as follows. Note the MAC address of the base station and the client from the Airodump window. Now open a new root terminal and issue the following command:
aireplay-ng -3 -b ‘base station MAC address’ -h ‘client Mac address’ eth1
aireplay to search for ARP (Address Resolution Protocol) requests and replay them. Once the request is received, the injection of packets will begin. Airodump will start collection packets in
mycapture-01.cap file (see Figure 4).
The work is almost done at this point. All you have to do now is issue the following command in the third terminal window, and you will get the password 95 per cent of the times (depending on the number of packets you have collected. If it fails, retry with more number of packages).
aircrack-ng –z mycapture-01.cap
In a couple of minutes, you will see the network key as shown (Figure 5). The key in this case is ‘CD123AB456’—a hex-64bit WEP key.
How to secure your network
As can be seen from the example above, sniffing wireless networks with a WEP key (or no encryption) is fairly easy. The protocols telnet, pop3, imap, ftp, snmp, and nntp are more susceptible to cracking as they transfer the passwords in plain text while authenticating. Once a cracker gets hold of your key, he can sniff all the data to and from your network. Even if you use secure protocols, only the password and username are encrypted and not the actual data.
You can make your networks less vulnerable to sniffers and play sniffing to your advantage. As already said, a network administrator must try and sniff his own network to check its immunity to such attacks. It can be used to strengthen the network and debug it whenever necessary. To make the attacks less damaging, the only sane remedy is to use strong encryption. Again, some protocols do not support password encryption, so you must always sniff your own network to see if any password and/or other sensitive information is left non-encrypted. Of course, you should use more secure keys such as WPA or WPA2 for your networks.
One more thing to take care of is changing the default password of your WAP. Most routers come with default username/password combinations like admin/admin or admin/password. Change it and use a strong password. You can turn off the SSID broadcasts of your WAP. Broadcasting SSID makes setting up wireless clients extremely convenient since you can locate a network, without having to know what it’s called, but it will also make your network visible to any wireless systems within range of it (as shown in the demo above, we are using the SSID of the station (BSSID)).
You can enable MAC address filtering so only the devices with allowed MAC addresses can access your WAP. (Remember, the MAC address is unique for a device, just like a fingerprint.) Even MAC addresses can be spoofed once known, but this is still better than using no filtering at all.
Where do we stand?
There are many sniffing tools available on Linux, UNIX and the Windows platforms. Most of these can be used to sniff packages and then try and crack the passwords of the networks. The only way to avoid damage is to use preventive controls. Follow the steps given above to secure your network. Do not fear sniffing tools. Use them to your advantage and try cracking your own network to see how secure you are…
|
OPCFW_CODE
|
ux-redesign: UserMessages redesign according to Patternfly
Fixes: https://github.com/oVirt/ovirt-web-ui/issues/647
This change is
@bond95 Sorry for my delayed review! Back from PTO :)
This is looking great! Just one comment:
Can you add an empty state when there are no notifications? I think it will help make it more clear. In this case the Clear All and Mark all Read actions can go away too. Check out an example here by clicking "Clear All": http://www.patternfly.org/pattern-library/communication/notification-drawer/#code
@lizsurette Done.
@gregsheremeta @sjd78 Please code review.
In empty state, 3 questions.
Should the 'Clear all' button be disabled since it is a no-op anyway.
Should the 'Display all' button should also be disabled if there are no hidden messages?
Why the extra space below the buttons? I assume the buttons should be at the bottom of the min-height of the notification pane, or the pane should be shorter.
@sjd78 Adding my thoughts on some of the design-y questions :)
If I "Clear All", should "Display All" bring them all back? (based on the code, yes)
I would even wonder if we need the concept of "Display All".
If I hit "Clear All" now and then 20 minutes from now open the user messages and hit "Display All", should ALL of the previously generated messages come back? (based on the code, yes)
Again, what's the use case for needing to bring them back?
Is "Dismiss" on a single message the same function as "Clear All"? (based on the code, yes)
It should just clear that one message.
Again, what's the use case for needing to bring them back?
+1. Seems useless to me.
Why the extra space below the buttons?
indeed, that should be shortened up
@lizsurette , @gregsheremeta , @bond95 - the tray buttons match what is currently in webadmin (not that it makes sense either).
<off-the-top-of-my-head-before-I-looked-at-the-patternfly-website-again>
Dismiss (single), Clear all, and Display all could make more sense in terms of unread messages vs read messages (seen) and deleting the messages. So the badge shows the unread count, the tray shows unread + read messages, and the bottom buttons could be "Hide all read" and "Show all read" or "Delete all read" and "Mark all unread" or something similar. Read/unread/delete -- the operations that can be taken on messages.
</off-the-top-of-my-head>
PF Notification Drawer has "Unread count", "Mark All Read", and "Clear All". It also says "this option may be used differently across products", so there is just mild guidance.
@sjd78
Why the extra space below the buttons? I assume the buttons should be at the bottom of the min-height of the notification pane, or the pane should be shorter.
Actually this is default behavior of notification drawer. Like here for example https://www.patternfly.org/pattern-library/communication/notification-drawer/#code
or in WebAdmin. So for consistent better to leave it as it is.
I would even wonder if we need the concept of "Display All".
@sjd78 @lizsurette @gregsheremeta
Actually this is default behavior of notification drawer. Like here for example https://www.patternfly.org/pattern-library/communication/notification-drawer/#code
or in WebAdmin. So for consistent better to leave it as it is.
This could be a bug in PF-core. Taking a look at the PF-React example looks better to me without the extra spacing... https://rawgit.com/patternfly/patternfly-react/gh-pages/index.html?selectedKind=patternfly-react%2FCommunication%2FNotification Drawer&selectedStory=Notification Drawer&full=0&addons=1&stories=1&panelRight=0&addonPanel=storybooks%2Fstorybook-addon-knobs
@sjd78 @lizsurette @gregsheremeta Okay so you think we need to delete "Display All"?
Yes, I'm in favor of removing.
I removed 'Display All' button. And made rebase.
We could do better than Date.now() for a unique id for the messages
+1
Get rid of FAILED_EXTERNAL_ACTION and LOGIN_FAILED and just replace them with a new ADD_USER_MSG
I don't know enough yet to get this one :)
would make more sense from the data perspective to be HIDE_ALL_USER_MSGS, HIDE_USER_MSG, SHOW_ALL_USER_MSGS
I don't follow this one -- are we lining those up with something?
Either they should be removed, or they should be updated on every operation.
+1
@gregsheremeta - Should we add an issue for this so we don't forget?
Yep!
@sjd78 opened #712 #713
Can you elaborate on points 2 and 3 above?
|
GITHUB_ARCHIVE
|
Technical and domain expertise to implement an inhouse bespoke solution for organization specific needs.
Multiple proprietary monitoring solutions create an operations nightmare and inflates the cost of operation.
How we help our clients
Zone24x7 provides a wealth of experience in IoT engineering, integration, and quality assurance.
- Multilingual programming (i.e. C, Java, Python)
- Single-board platforms such as Raspberry Pi
- Micro-controller platforms
- Edge node security practices
- Application of cryptography
- Lightweight messaging protocols and serialization techniques
- Efficient remote monitoring and control of IoT edge nodes
- Tools (Vault, JKS) and standards for key management and storage (NIST 800-57)
- Microservices architecture and frameworks
- Configuration and application of message broker technologies
- Application of various communication models based on the scenario
- Experience in workload management platforms such as Kubernetes for orchestration and Fault Tolerance
- The know-how in selecting the right communication and serialization protocols
- Experience in systems security, key management, and API design
- Our team can easily integrate with the open-source IoT platform, ThingsBoard, which is extremely useful for data collection, processing, visualization, and device management
- Integration of Hawkbit open-source update delivery platform, to deliver updates to the edge nodes of the IoT
- Conducting longevity testing on IoT edge nodes
- Edge node failure scenario tests
- Experience in setting up and implementing performance tests on a multitude of protocols (i.e. REST APIs, MQTT based systems, Raw TCP based systems)
- Vulnerability and pen tests on edge node as well as middleware
Read our latest success story
Delivering business agility to a US Tier 1 retail icon with Remote Monitoring & Management
How our clients benefit
- We offer over a decade of technology and domain experience in bespoke remote monitoring and management solutions which successfully manage thousands of business critical systems and devices for clients including the Fortune 500.
- We offer multiple paths to a solution. One can choose to build a solution from scratch, or extend existing solution for better capabilities. or build on top of MATRIX24x7 platform.
- Unlike a monolithic one size fits all off the shelf products, a bespoke solution ensures finer controllability of your key tech assets supports business agility.
- Greater integration with existing systems, devices and infrastructure combined with centralized remote monitoring, configuration and touchless troubleshooting improves efficient systems management
|Angular5 plus, KendoUI|
|Web API2, ASP.Net , SpringBoot, Kafka, Flink|
|Data Access Technologies|
|REST, ODBC, ADO.Net, JDBC, Transport Client, Mongo Java Driver|
|Android, support for Java compatible ARM devices|
|MS SQL, ElasticSearch, MongoDB|
|KendoUI , Grafana, NGX Chart|
|Enterprise Integration technologies|
|Google Cloud Platform, On-Premise|
|TLS1.2, AES-128 Encryption of sensitive data|
|GRPC, HTTP2.0, TCP, MQTT, WebSocket, WebRTC|
|
OPCFW_CODE
|
A TypeError prevents me from forming the conditionals of my function (Python)
The function is supposed to receive a number representing a year, and then print if it's a leap year or not.
def isItALeapYear(year):
while True:
if year % 4 == 0:
print("That is a leap year! ")
break
elif year % 4 != 0:
print("That is not a leap year...")
break
elif not isinstance(year, int) or year == None:
print("Please enter a number...")
break
The program works, the only thing I can't get right is that it is supposed to notify you if anything that it's not a number is being used as an argument. I've tried both the isinstance() function, as well as writing what I want as year != int. And then year == None in the hopes of making it work in case anything nondefined is used as an argument.
I read this post with the exact same error: TypeError: not all arguments converted during string formatting python
But I'm not intending to format anything with the % symbol
As far as I'm concerned the % can be used as an operand to get the residue of a division.
So in this case I'm using it to figure out if a year is a leap year or not by asking if the residue is 0 when divided by 4. I'm pretty stuck, and the sad thing is the error comes up in the very first "if", so I can't really know if the last lines for excluding any non int type argument work or not. Any help would be really appreciated!
Could you edit your question to include the exact error traceback? It seems likely though that this is because your type check is at the end. The code will still attempt to perform modulo operation first, which won't work if it is not an int.
Check its type before you try and do arithmetic on it.
The isinstance conditional should be the first of the 3, not the last
Hi @Shize and welcome to SO! You should check the value for it's type outside of the function and only pass it if it's numeric. Also the while loop makes no sense at all.
Can I just point out that the calculation to determine a leap year is inaccurate. A leap year which is mod 4, and mod 100, is not a leap year, unless it is mod 400. Here is a simple explanation. And the while loop is completely unnecessary.
I recommend to divide the functionality of checking the number from returning the output as well as from receiving the input.
def is_multiple_of_four(number: int):
if number % 4 == 0:
return True
else:
return False
if __name__ == '__main__':
user_input = ""
while not user_input.isdigit():
user_input = input("Please type in a year: ")
if is_multiple_of_four(int(user_input)):
print("That is a leap year!")
else:
print("That is not a leap year.")
Here you can see the function that checks the number does only that, it checks the number if it's modulo of 4 equals 0.
In the script outside the function you can retrieve the user input for as long as it takes to get a valid numeric and return the output in respect of the functions results.
Edit (adding clarification asked in the comments)
The first condition if __name__ == '__main__' is quiet common in python. It's not necessary for your function, but I like using it in answers if people seem to learn Python, so they don't miss out on it. Here is a question with a good answer: What does if __name__ == "__main__": do?
The short answer in the accepted answer is enough to understand why you might want to use it.
The second condition
user_input = ""
while not user_input.isdigit():
first defines a variable user_input with an arbitrary non-digit String value and than uses the negated isdigit() method of the String class on it as condition. Therefor the while loop gets entered in the beginning, as the arbitrary value is not an digit. From then on the value will be re-assigned with user input until it holds an actual digit. This digit is still a String however.
This works wonderfully, I should have considered using an actual input. It also feels like a super clean and pretty solution. I managed to finish creating the function with no issues following this. There is just one thing that I would like to understand, because it kind of feels bad using someone else's code and not understanding it. What exactly is that "empty" if condition.
if name == 'main':
user_input = " "
I don't really understand what it's doing. But if I take it out, or replace the words it doesn't work. So it seems pretty important.
@Shize - It’s not required for your specific use case; with some minor refactoring. This construct (which I’d recommend researching to fully understand) is essentially used to run a script as a program; whereas it appears you are simply searching for a function to call. It’s overkill in this situation. That aside, it’s an inefficient and inaccurate calculation and should not be used without modification to correct the inaccuracy. This answer is the correct solution.
I see! I must have removed it before defining it as an actual function. Now it works just fine without it. Thank you so much!
@Shize I edited my answer to address your questions. As you might have noticed I changed the name of the function, from the beginning. It's name represents what it actually does. Also you might want to check out the answer considering the calender library. I assume you want to train python coding and this is totally fine. For production code I strongly recommend not implementing anything regarding character encoding, or dates/time and rather use existing libraries.
You could use the isleap() function from the standard library (module calendar):
from calendar import isleap
def isItALeapYear(year):
if not isinstance(year, int):
print("Please provide a number")
elif isleap(year):
print("That is a leap year!")
else:
print("That is not a leap year...")
This is the actual fixed version of the code in the question. +1
First thing while loop in the function makes no sense you can remove it(If you want).
There are multiple ways to do that I will show.
First One.
def isItALeapYear(year):
if type(year) != int:
return # here return to exit the function
while True:
if year % 4 == 0:
print("That is a leap year! ")
break
elif year % 4 != 0:
print("That is not a leap year...")
break
elif not isinstance(year, int) or year == None:
print("Please enter a number...")
break
Another is.
def isItALeapYear(year):
try:
int(year)
except ValueError: # this line of code executes when the year is not the int
return # here return to exit the function
while True:
if year % 4 == 0:
print("That is a leap year! ")
break
elif year % 4 != 0:
print("That is not a leap year...")
break
elif not isinstance(year, int) or year == None:
print("Please enter a number...")
break
I know there are more ways to do that but these are the best ones (I Think).
Your function is not accurate then you can use this one.
def isItALeapYear(year):
if type(year) != int:
return
if (( year%400 == 0)or (( year%4 == 0 ) and ( year%100 != 0))):
print(f"{year} is a Leap Year")
else:
print(f"{year} is Not the Leap Year")
Edit For Quetioner
def isItALeapYear(year):
if type(year) != int:
return
if (( year%400 == 0)or (( year%4 == 0 ) and ( year%100 != 0))):
print(f"{year} is a Leap Year")
else:
print(f"{year} is Not the Leap Year")
try:
isItALeapYear(asdasd)
except NameError:
print("You give the wrong Value")
The leap year calculation is inaccurate. And this is an unnecessary abuse of a try/except block.
@S3DEV This is not what the OP wants.
@S3DEV Now OP's function is accurate.
I tried the first one and it works. The only thing is, what if the user using the function chooses to input some random combination of letters as an argument. It works if I a pass it a string, like this isItALeapYear("akwhfuh") and I made it do a print and quit. But what if the user does isItALeapYear(ksduj)? I get "Name error: ksduj is not defined". That would be the very last thing I need to solve. A way for the program to notify the user they cannot input anything random.
@Shize Now you can check
Please don’t suggest that users to accept your answer. I’d bleeds of desperation and is poor form.
@S3DEV Okk, Bro.
|
STACK_EXCHANGE
|
// timestamps ----------------------------------------------------------------
// Utility methods to deal with native Javascript Date objects and return
// string representations suitable for use in log files.
// Public Objects ------------------------------------------------------------
// Convenience function to return the current date and time (via toDateTime())
// as a YYYYMMDD-HHMMSS representation of local time.
export const nowDateTime = (): string => {
return toDateTime(new Date());
}
// Convenience function to return the current date and time (via toLocalISO())
// as an ISO 8601 representation with local time and appropriate offset.
export const nowLocalISO = (): string => {
return toLocalISO(new Date());
}
// Return a string in the format YYYYMMDD-HHMMSS for the specified date
// (in local time).
export const toDateTime = (date: Date): string => {
return date.getFullYear()
+ leftPad(date.getMonth() + 1, 2)
+ leftPad(date.getDate(), 2)
+ "-"
+ leftPad(date.getHours(), 2)
+ leftPad(date.getMinutes(), 2)
+ leftPad(date.getSeconds(), 2);
}
// Return an ISO 8601 representation of the specified date (to seconds
// resolution), expressed as local time with the appropriate offset from UTC.
// This implementation (except that it doesn't pollute prototypes) is based on
// https://stackoverflow.com/questions/17415579/how-to-iso-8601-format-a-date-with-timezone-offset-in-javascript
export const toLocalISO = (date: Date): string => {
return date.getFullYear()
+ "-" + leftPad((date.getMonth() + 1), 2)
+ "-" + leftPad(date.getDate(), 2)
+ "T" + leftPad(date.getHours(), 2)
+ ":" + leftPad(date.getMinutes(), 2)
+ ":" + leftPad(date.getSeconds(), 2)
+ localOffset(date);
}
// Private Objects -----------------------------------------------------------
// Zero-pad the input string with zeros until it is of the requested size.
const leftPad = (input: string | number, size: number): string => {
let output = String(input);
while (output.length < size) {
output = "0" + output;
}
return output;
}
// Return a local timezone offset string in the format required by ISO 8601.
const localOffset = (date: Date): string => {
const offset = date.getTimezoneOffset();
return (offset < 0 ? "+" : "-")
+ leftPad(Math.floor(Math.abs(offset / 60)), 2)
+ ":" + leftPad(Math.abs(offset % 60), 2);
}
|
STACK_EDU
|
React useState updates on click ahead of the desired click
I am creating 50 buttons which on click should set the useState value to the current button number (array index+1).
But I realized that I have to click a button twice to get the current value (array index+1). The first click always gets the previously clicked button value. e.g. When I clicked button 1 for the first time, I get nothing, when I clicked button 2 I get 1, when I clicked button 3 I get 2. Thank you.
Here is the code
const [theValue, setTheValue] = useState({first: 1});
const [disableNext, setDisableNext] = useState(false);
const [disablePrev, setDisablePrev] = useState(true);
<div className="flex flex-row">
{disablePrev ? <div className="flex flex-1 justify-start">
<button type="button" className="btn bg-gray-300 text-gray-400 m-10" disabled>Previous</button></div>
: <div className="flex flex-1 justify-start">
<button type="button" className="btn bg-jamb-light-green text-white m-10 hover:bg-green-400" onClick={prevButton}>Previous</button></div>}
{disableNext ? <div className="flex flex-1 justify-end">
<button type="button" className="btn bg-gray-300 text-gray-400 m-10" disabled>Next</button></div>
: <div className="flex flex-1 justify-end">
<button type="button" className="btn bg-jamb-light-green text-white m-10 hover:bg-green-400" onClick={nextButton}>Next</button></div> }
</div>
<div>
{[...Array(50)].map((item, index) => {
return (
<button key={index} className={(index+1 === theValue.first) ? "w-10 h-10 p-2 m-0.5 rounded-full bg-green-500 text-white" : "w-10 p-2 m-0.5 rounded-full bg-red-300 hover:bg-green-400 hover:text-white"}
onClick={
() => {
if(theValue.first === 1){
setDisablePrev(true);
}else if((theValue.first > 1) && (theValue.first < 49)){
setDisablePrev(false);
}
if(theValue.first === 50){
setDisableNext(true);
}else if(theValue.first < 50 && theValue.first > 1){
setDisableNext(false);
}
setTheValue({...theValue, first: index+1})
}
}>{index+1}</button>
)
})}
</div>
Since both first and index are used to render the button (to some degree) try key={index+'-'+theValue.first} as the key to avoid potentially passing the wrong state object to the closures
Okay thanks. I will do just that.
That is because the state is set on the next render. To update the state by 1 you could do:
const [theValue, setTheValue] = useState({first: 1});
<div>
{[...Array(50)].map((item) =>(
<button key={index} className={(index+1 === theValue.first) ? "w-10 h-10 p-2 m-0.5 rounded-full bg-green-500 text-white" : "w-10 p-2 m-0.5 rounded-full bg-red-300 hover:bg-green-400 hover:text-white"}
onClick={
() => {
setTheValue(currentStateValue => {...currentStateValue, first: currentStateValue.first+1})
}
}>{index+1}</button>
)
)}
</div>
It did not work. I get syntax error in VS. And the index of map is what sets the desired value between 1 to the length of the array (50). I am creating 50 buttons which on click should set the useState value to the clicked button number (array index number). The issue I am having is I have to click a button twice to get the current value. The first click always gets the previously clicked button value. e.g. When I clicked button 1 for the first time, I get nothing, when I clicked button 2 I get 1, when I clicked button 3 I get 2. Thank you.
Oh i see I didn't understood your question properly. Will update my answer asap
Firstly, there is typo in your second if condition, theValue.firts. Coming to your issue, when you update a state inside a function, the updated state is not immediately accessible inside the same function. You only have access to the previous state value. So let's say, the current state value is 9 and when you click button number 10, you are updating the state value to 10 and at the same time console logging the value of theValue.first whose value is still 9 currently since it doesn't have access to the updated state value of 10. Now if you click the same button number again, it will display 10 but it doesn't mean it is the updated state value, it is the value of state that you previously set(which is 10) when you clicked the button. That's what is going on there. However, you will be able to access the latest updated state value inside the render div inside the paragraph as I have added for demo below. Going with your question, the state value is updated correctly. It's just that console logging from the same function makes you feel otherwise.
const [theValue, setTheValue] = useState({
first: 1
});
<div> {
[...Array(50)].map((item, index) => {
return ( <
button key = {
index
}
className = {
(index + 1 === theValue.first) ? "w-10 h-10 p-2 m-0.5 rounded-full bg-green-500 text-white" : "w-10 p-2 m-0.5 rounded-full bg-red-300 hover:bg-green-400 hover:text-white"
}
onClick = {
() => {
if (theValue.first === 1) {
console.log(theValue.first);
} else if (theValue.first > 1) {
console.log(theValue.first);
}
if (theValue.first === 50) {
console.log(theValue.first);
} else if (theValue.first < 50) {
console.log(theValue.first);
}
setTheValue({ ...theValue,
first: index + 1
})
}
} > {
index + 1
} < /button>
)
})
}
<p>{theValue.first}</p> // access to updated state value
</div>
I've corrected the typo, thanks. What you said was exactly what happening, the current value was registered correctly just not available for the function immediately. But that was my problem. You see, I want to disable a Prev button whenever the theValue.first === 1 and enable it when the value is > 1 and < 49, and also disable 'Nextbutton whenevertheValue.first === 50` and enable it when the value is less than 50. If I can't get the current value immediately on click, I don't think I will be able to achieve that, at least not the way I am trying to do it.
I don't know how you are trying to disable the button, but I would suggest you check the condition then and there while assigning the classname since you will have access to the latest updated state value if you do so, and disable the button with css pointer-events:none on the classname you assign when disable conditions are met. And yes, I agree with you that it would be impossible to achieve that from within the onclick callback.
Thank you for your responses. I have edited the original question and incorporated the buttons in the code. I am using tailwind.css framework for CSS functionality.
|
STACK_EXCHANGE
|
Today we released a major update to the form features in UCare. This is basically a complete rewrite of the Forms features but don’t worry; for people visiting your website Forms still work like before.
So what changed? First the basics; we renamed Fields to Questions and Results to Responses. While fields and results are correct wording if you’re a techo it didn’t fit with most people and so they had trouble understanding what they were.
Under the covers we’ve worked hard to make it easy for you to edit forms, change question options and reorder questions. You can also edit responses and print them in exactly the same format that they were submitted.
To help avoid duplicates we’ve added a new question type called ‘Person’, if you are signed in to UCare when filling in the form then it will let you look up existing people and their details. If not it will ask for a name, email or mobile and use that info to find existing people that match, if their is no match then it will add a new person. You can also ask for birthday, gender and address info and it will update the person’s profile.
A new question option can now update people’s profiles with answer from the form response, for example if you ask for the person’s address then it can update their home address or if you ask for their salvation date then completing that question will save the date on their profile. The types of questions you can add include the following:
- Person - look up an existing person or collect their name and contact info, optionally you can also collect their birthday, gender and residential address.
- Contact Detail - any type of contact detail. E.g. email, mobile, emergence, etc.
- Date - any type of date. For example a custom date like Salvation or Baptism.
- Text - a single line of text.
- Paragraph text - a paragraph of text.
- Choose from a list - a drop down allowing the person to choose one option.
- Checkboxes - a list of checkboxes allowing the person to choose multiple options.
- Number - a number input that can have a minimum and maximum value.
- Scale - a scale from 0 to 10, useful to collect info like how much a person agrees with the question.
- File - upload a file, if the person isn’t signed into UCare this question isn’t displayed.
- Signature - useful on touch devices like iPads and phones, this allows the person to use their finder to sign the form.
- Section header - enter a section title and description so that you can break your form up.
We’ve added a few extra little niceties like if you mark a form as Protected then other UCare users can respond to a form but can’t edit the form and change questions. You can add a CAPTCHA to your forms to avoid spam submissions on your website. Finally when exporting form responses you can now select the date range that you want to export instead of exporting all of the form responses.
With these and other updates we’re working hard to make UCare smarter and easier to use, if you have any feedback we’d love to hear from you, simply email email@example.com.
|
OPCFW_CODE
|
using System;
using System.Linq;
using Classes.Characters.Slime;
using Classes.UI;
using UnityEngine;
namespace Scripts.characters
{
public class AttackPoint : MonoBehaviour
{
public Transform tr;
public float coefficient = 0.25f;
[SerializeField] private Character player;
[SerializeField] private AttackJoystick joystick;
public void Update()
{
if (player.CurrentState != Character.States.None) return;
var verticalPosition = joystick.Direction.y;
var horizontalPosition = joystick.Direction.x;
var yNormalized = verticalPosition > 0 ? 1 : -1;
var xNormalized = horizontalPosition > 0 ? 1 : -1;
var isHorizontal = Math.Abs(horizontalPosition) > Math.Abs(verticalPosition);
if (verticalPosition != 0 && horizontalPosition != 0)
tr.localPosition = player.CurrentAttackType switch
{
Character.AttackType.Melee => new Vector3
(isHorizontal ? coefficient * xNormalized : 0,
!isHorizontal ? coefficient * yNormalized : 0, 0),
Character.AttackType.Range => new Vector3(coefficient * horizontalPosition,
coefficient * verticalPosition, 0),
_ => tr.localPosition
};
Rotate(player.transform, tr);
}
protected void OnTriggerEnter2D(Collider2D collision)
{
if (!player.TagWhiteList.Contains(collision.tag)) return;
if (collision.TryGetComponent<Enemy>(out var enemy) && !player.enemiesList.Contains(enemy))
player.enemiesList.AddLast(enemy);
}
protected void OnTriggerExit2D(Collider2D collision)
{
if (!player.TagWhiteList.Contains(collision.tag)) return;
if (collision.TryGetComponent<Enemy>(out var enemy))
player.enemiesList.Remove(enemy);
}
private static void Rotate(Transform pointer, Transform target)
{
var diff = pointer.localPosition - target.position;
diff.Normalize();
target.rotation = Quaternion.Euler(0, 0, Mathf.Atan2(diff.y, diff.x) * Mathf.Rad2Deg);
}
}
}
|
STACK_EDU
|
<?php
namespace FSQL;
/* A reentrant read write lock for a file */
class LockableFile
{
protected $file;
private $lock;
private $rcount = 0;
private $wcount = 0;
public function __construct(File $file)
{
$this->file = $file;
$this->lock = LOCK_UN;
}
public function __destruct()
{
// should be unlocked before reaches here, but just in case,
// release all locks and close file
$this->file->close();
}
public function readerCount()
{
return $this->rcount;
}
public function writerCount()
{
return $this->wcount;
}
public function file()
{
return $this->file;
}
public function getHandle()
{
return $this->file->getHandle();
}
public function getPath()
{
return $this->file->getPath();
}
public function exists()
{
return $this->file->exists();
}
public function drop()
{
return $this->file->drop();
}
public function acquireRead()
{
if ($this->lock !== LOCK_UN) { /* Already have at least a read lock */
++$this->rcount;
return true;
} else { /* New lock */
if ($this->file->open('rb')) {
$this->lock(LOCK_SH);
$this->rcount = 1;
return true;
}
}
return false;
}
public function acquireWrite()
{
if ($this->lock === LOCK_EX) {/* Already have a write lock */
++$this->wcount;
return true;
} elseif ($this->lock === LOCK_SH) {/* Upgrade a read lock*/
$this->lock(LOCK_EX);
$this->wcount = 1;
return true;
} else {/* New lock */
if ($this->file->open('c+b')) {
$this->lock(LOCK_EX);
$this->wcount = 1;
return true;
}
}
return false;
}
public function releaseRead()
{
if ($this->lock !== LOCK_UN) {
--$this->rcount;
if ($this->lock === LOCK_SH && $this->rcount === 0) {/* Read lock now empty */
// no readers or writers left, release lock
$this->close();
}
}
return true;
}
public function releaseWrite()
{
if ($this->lock === LOCK_EX) {/* Write lock */
--$this->wcount;
if ($this->wcount === 0) {
// no writers left.
if ($this->rcount > 0) {
// only readers left. downgrade lock.
$this->lock(LOCK_SH);
} else {
// no readers or writers left, release lock
$this->close();
}
}
}
return true;
}
private function lock($mode)
{
$this->file->lock($mode);
$this->lock = $mode;
}
private function close()
{
$this->file->close();
$this->lock = LOCK_UN;
$this->rcount = 0;
$this->wcount = 0;
}
}
|
STACK_EDU
|
/*
* Output.cpp
*
* Created on: 2015/07/22
* Author: kryozahiro
*/
#include "Output.h"
#include <cassert>
#include <boost/filesystem.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/log/sinks/text_ostream_backend.hpp>
#include <boost/log/expressions/predicates.hpp>
#include <boost/log/utility/setup/console.hpp>
#include <boost/log/utility/setup/file.hpp>
#include "cpputil/GenericIo.h"
using namespace std;
using namespace cpputil;
namespace pt = boost::property_tree;
namespace fs = boost::filesystem;
namespace lg = boost::log;
Output::Output(const boost::property_tree::ptree& outputTree) {
for (const pt::ptree::value_type& kvp : outputTree) {
if (kvp.first == "<xmlattr>") {
continue;
}
assert(kvp.first == "Logger");
const pt::ptree& loggerTree = kvp.second;
Logger logger;
logger.range = loggerTree.get<std::pair<int, int>>("Range");
logger.interval = loggerTree.get<int>("Interval");
logger.filename = loggerTree.get<string>("Filename");
assert(logger.interval > 0);
string target = loggerTree.get<string>("Target");
if (target == "Summary") {
summaryLogger = logger;
} else if (target == "Relation") {
relationLogger = logger;
} else if (target == "Evaluation") {
evaluationLogger = logger;
} else if (target == "Program") {
programLogger = logger;
} else if (target == "Solution") {
solutionLogger = logger;
} else if (target == "Validator") {
validatorLogger = logger;
}
}
}
void Output::setSink(int stage, const std::string& stageName) {
auto core = lg::core::get();
if (stageName == "SolverStage") {
core->remove_all_sinks();
fs::path summaryPath(summaryLogger.filename);
string fullname = summaryPath.parent_path().generic_string() + string("/s") + to_string(stage) + summaryPath.filename().generic_string();
lg::add_file_log(lg::keywords::file_name = fullname, lg::keywords::filter = lg::expressions::is_in_range<int>("Summary", 0, INT_MAX));
lg::add_console_log(std::cerr, lg::keywords::filter = lg::expressions::is_in_range<int>("Summary", 0, INT_MAX));
fs::path relationPath(relationLogger.filename);
relationFile = relationPath.parent_path().generic_string() + string("/s") + to_string(stage) + relationPath.filename().generic_string();
setMutableSink(stage, "Evaluation", evaluationLogger);
setMutableSink(stage, "Program", programLogger);
setMutableSink(stage, "Solution", solutionLogger);
} else if (stageName == "ValidatorStage") {
core->remove_all_sinks();
setMutableSink(stage, "Validator", validatorLogger);
}
}
std::string Output::getRelationFile() const {
return relationFile;
}
std::pair<int, int> Output::getEvaluationLoggerRange() const {
return evaluationLogger.range;
}
void Output::setMutableSink(int stage, const std::string& target, Logger& logger) {
fs::path path(logger.filename);
for (int i = logger.range.first; i < logger.range.second; i += logger.interval) {
string fullname = path.parent_path().generic_string() + string("/s") + to_string(stage) + path.stem().generic_string() + to_string(i) + path.extension().generic_string();
lg::add_file_log(lg::keywords::file_name = fullname, lg::keywords::filter = lg::expressions::is_in_range<int>(target, i, i + 1));
}
}
|
STACK_EDU
|
A particular period of history, especially one considered remarkable or noteworthy…
The beginning of a new and important period in the history of anything… A milestone.
These are a few of the definitions for the fancy word ‘epoch’. For astronomers, it is the time at which observations are made, as of the positions of planets or stars. In computer applications, epochs are used to maintain a time reference as a single number for ease of computation.
Unix Epoch Time
The unix time stamp is a way to track time as a running total of seconds. This count starts at the Unix Epoch on January 1st, 1970 at UTC, and at this very moment, the Unix epoch time is
Since computer applications make extensive use of this feature, it is quite normal to get these kind of time stamps. It is usually the case that we want to convert these timestamps to a human readable format, so let us examine a few examples to convert Unix time in Tableau.
As mentioned, a 10-digit number represent the Unix epoch time in seconds. To convert this to UTC time in Tableau, we can use the following calculation:
DATEADD('second', INT([Unix time field]), #1970-01-01#)
Note that the calculation assumes that the Unix time is in seconds. It may well be that we get a 13-digit Unix time, and in this case we will divide it by 1000 first as in:
DATEADD('second', INT([Unix time field]/1000), #1970-01-01#)
Also note that the
DATEADD() function requires the second argument to be a number and since in most times we get it in a string format we will have to convert the
[Unix time field] to an integer.
Sometimes though, we get very weird Unix timestamps that doesn’t make sense. In a recent project I was exploring LDAP/Windows Active Directory time stamps and stumbled upon an 18-digit timestamp such as this:
Well, it turns out that there are quite a few timestamps to choose from. To name a few, apple macOS considers its Epoch Time as starting from January 1, 1904. Microsoft Windows considers its Epoch Time as starting from January 1, 1601, while Unix and Linux Systems consider their Epoch Time as starting from January 1, 1970.
If you find yourself intrigued by the subject, you can check out more Notable epoch dates in computing.
Back to our ugly looking timestamp…
The 18-digit Active Directory timestamps, are also named as ‘Windows NT time format’, ‘Win32 FILETIME’ or ‘Win32 SYSTEMTIME’ or ‘NTFS file time’. Since they are widely used we can find them in various Active Directory attributes such as ‘LastLogonTimestamp’, ‘LastPwdSet’ etc.
The timestamp is the count of 100-nanosecond intervals since Jan 1, 1601 UTC. There are several ways to go around this in order to get a human readable timestamp in Tableau.
- A bit of Unix Arithmetics: We will revert to seconds and discard the last 7 digits of the LDAP timestamp, thus dividing our Windows timestamp by 10000000. We can then convert it to Unix epoch time by subtracting 11644473600 (number of seconds between January 1, 1601 and January 1, 1970).
- Conversion to Unix Epoch Time:
INT([Windows time field]/10000000) - 11644473600
DATEADD('second', [Unix Epoch Time], #1970-01-01#).
- Conversion to Unix Epoch Time:
- Conversion to seconds and then smoothly get the desired timestamp:
DATEADD('second', INT([Windows time field]/10000000), #1601-01-01#).
Unix Epoch Time Fun Fact
If you recall, the year 2000 signified the famous Y2K Bug. The bug was expected to bring down world wide computer systems and created quite some panic at the time. The bug itself was the inability of computer systems to distinguish dates correctly. Instead of allowing four digits for the year, many computer programs only allowed two digits (e.g., 90 instead of 1990).
Well… eventually nothing much happened, A storm in a cup of tea…
As we now know, Unix time represents the seconds passed since January 1, 1970. Historically, Unix time has been encoded as a signed 32-bit integer, with values up until 231 (<2,147,483,648). Once these amount of seconds will pass (apparently on 03:14:07, Tuesday, 19 January 2038), Values will change sign and indicate a negative number, leading to a false date. Any system using data structures with 32-bit time representations will be at risk to fail.
In this blog we covered Unix Epoch Time and discussed the way we can manipulate it to show a human readable form. I hope you have a clearer view of Unix Time and the different Epochs it represents.
|
OPCFW_CODE
|
M: Parenthood, the Great Moral Gamble - dnetesn
http://nautil.us/issue/2/uncertainty/parenthood-the-great-moral-gamble
R: rayiner
Many people talk about kids who grow up to be serial killers, or bringing
another child into an overcrowded world, but ignore the aggregate value of the
tremendous joy people experience simply being alive. I think often, we in the
first world cannot fathom how poor people can be happy. But the fact is that
something as simple as sharing a meal with family is a source of joy for
people the world over, and the magnitude of that joy is not proportional to
whether you live in the West Village or in an actual village.
This is a bit metaphysical, but in my opinion there is a cosmic opportunity
cost to _not_ having kids--a lost opportunity for a human person to experience
the joy of being alive.
R: dminor
> This is a bit metaphysical, but in my opinion there is a cosmic opportunity
> cost to not having kids--a lost opportunity for a human person to experience
> the joy of being alive.
By this logic you should have as many children as possible.
R: rayiner
By that logic, all else being equal, society in the aggregate (not necessarily
any individual) should have as many kids as possible.
I don't think it's such a far fetched idea. I think at least one of the
subconscious appeals of Star Trek is that it's a story where humanity is freed
from being tied to a single world, and _populates_ worlds all over the galaxy.
R: ctdonath
_Everything_ we do has a chance of causing harm. People regularly agonize over
the potential of harm, often despite minuscule odds thereof. To become a sane
productive adult, one must come to accept that harm _may_ happen as
consequence to an action, but not doing that action causes harm as well. Go
forth with good intent in good faith; stark horrors _may_ occur, but not doing
so accumulates greater horror.
[http://4.bp.blogspot.com/-KKLOmJySIgk/TghkJ21To5I/AAAAAAAAA0...](http://4.bp.blogspot.com/-KKLOmJySIgk/TghkJ21To5I/AAAAAAAAA0c/qh1g73xU22s/s1600/bloom+county+much+too+eco-
friendly.jpg)
R: perlgeek
On the one hand, those are very interesting questions to discuss.
On the other hand, getting/having children is a deeply rooted desire/need in
humans, and passing judgement on it feels like passing judgement on the
decision to eat food, sleep, breath, or having sex.
Can you blame somebody for eating when hungry? Even when it has far-reaching
consequence some fifteen years later?
R: qznc
This article reads like a confused run in circles [0]. Parents are responsible
for kids they cannot control? How can you be responsible for something you
have no control over??
Sure, there is a tendency to blame parents of evildoers, but that is fallacy
of the accuser, not the parents.
[0] @native speaker: Is there a good phrase for this?
R: coldtea
> _How can you be responsible for something you have no control over?_
In the general case, this is very easy and logical: by unleashing it to the
world.
Isn't that the idea behind the "Sorcerer's Apprentice" (famous from Mickey
Mouse's version in Fantasia)?
[http://en.wikipedia.org/wiki/The_Sorcerer%27s_Apprentice](http://en.wikipedia.org/wiki/The_Sorcerer%27s_Apprentice)
|
HACKER_NEWS
|
( ) . . .
Binary option ru
, - (,,,,)for numeric options the binary option ru value can be given in decimal,
redis scripting has support for binary option ru MessagePack because it is a fast and compact serialization format with a simple to implement specification. I liked it so much that I implemented a MessagePack C extension for Lua just to include it into Redis.
Setting options set-option E764 :se :set :set Show all options that differ from their default value. :set all Show all but terminal options. :set termcap Show all terminal options. Note that in the GUI the key codes are not shown, because they are generated internally and can't.
Import msgpack from io import BytesIO buf BytesIO for i in range(100 ckb(range(i use_bin_typeTrue) ek(0) unpacker msgpack. Unpacker(buf, rawFalse) for unpacked in unpacker: print(unpacked) Packing/unpacking of custom data type It is also possible to pack/unpack custom data types. Here is an example for datetime. datetime.
Or :set invoption Toggle option: Invert value. not in Vi :set-default :set- :set- vi :set- vim :set option Reset option to its default value. May depend on the current value of 'compatible'. not in Vi :set option vi Reset option to its Vi default value. not in.
Binary option ru in USA:
.,.2.,. : 3.,.
We also use MessagePack as a glue between components. Actually we just wanted a fast replacement of JSON, and MessagePack is simply useful. MessagePack has been simply invaluable to us. We use MessagePack Memcache to cache many of our feeds on Pinterest. These feeds are.
document doc new XmlDocument binary option ru test. XmlUnpackHandler / Now we can serialize/deserialize XmlDocument object instances via a / base class reference. RegisterPackHandler!(XmlDocument,) xml auto data pack(doc XmlDocument xml unpack!) xmlDocument(data assert(me "test.) xml / me is "test. XmlPackHandler registerUnpackHandler!(XmlDocument,)binomo:, 3-. 2.,.
list String src new ArrayList String d msgpack d kumofs d viver MessagePack msgpack new MessagePack / Serialize byte raw msgpack. Dependency binary option ru groupId gpack /groupId artifactId msgpack /artifactId version rsion /version /dependency. /dependencies Simple Serialization/Deserialization/Duck Typing using Value / Create serialize objects.pyPy can use this. Without extension, you need to install Visual Studio or Windows SDK on Windows. Windows When you can't use a binary distribution, using pure Python binary option ru implementation on CPython runs slowly. Install pip install msgpack PyPy msgpack provides a pure Python implementation.use pack for serialization, 25.5, and unpack for deserialization: import le; import msgpack; struct S int x; float y; string z; void main S input S(10,) the documentation can be found here pack / unpack msgpack-d binary option ru is very simple to use.i planning these breaking changes: packer and unpacker: Remove encoding and unicode_errors option. Planned binary option ru backward incompatible changes When msgpack 1.0, packer: Change default of use_bin_type option from False to True. You can use rawFalse instead of encoding'utf-8'.
it lets you exchange data among multiple languages like JSON. Version 1.0. Copyright Copyright (c)) 2010- Masahiro Nakagawa License Distributed under the Boost Software License, org/ MessagePack for Python What's this MessagePack is an efficient binary serialization binary option ru format. Msgpack/msgpack-python https msgpack.vim documentation: binary option ru options Help FAQ Both main help file options. Automatically setting options auto-setting 3. Setting options set-option 2. VIM REFERENCE MANUAL by Bram Moolenaar Options options 1. Last change: 2011 Mar 22. Txt For Vim version 7.3. Options summary option-summary.
Lowest investment binary options:
pack and dump binary option ru packs to a file-like object. Import msgpack ckb(1,) 2, unpackb rawFalse) 1, 3 unpack unpacks msgpack's array to Python's list, 3, use_bin_typeTrue) 'x93x01x02x03' msgpack. But can also unpack to tuple: msgpack. 2, unpack and load unpacks from a file-like object... 11:33,!, -.use the @nonPacked attribute: struct User string name; @nonPacked int level; / pack / unpack will ignore the 'level' field Feature: binary option ru Use your own serialization/deserialization routines for custom class and struct types. Z Feature: Skip serialization/deserialization of a specific field. Z input.
,,!,., rSI.,.msgpack is removed and import msgpack fail. Sadly, this doesn't work for upgrade install. I upload transitional package (msgpack-python 0.5 which binary option ru depending on msgpack)) for smooth transition from msgpack-python to msgpack. After pip install -U msgpack-python,features Small size and High performance Zero copy serialization / deserialization Streaming deserializer for non-contiguous IO situation Supports D features (Ranges,) tuples, messagePack for D is a binary option ru pure D implementation of MessagePack. Real type) Note: The real type is only supported in D.
, , . , , , , . , .
0.,,., :., 50,.!60 seconds_l binary option ru MT4 C: Program Files MT4 Templates 3. M1, mT4. 4., « « 60 seconds_profits.
777binary not available now. 777Binary was closed! They provide trading on objective professional platform, see here other Binary Options Brokers 777binary is founded in 2012 and is one of the binary options trading education binary option ru best binary options brokers.
|
OPCFW_CODE
|
when I select a file in dolphin the name disappears
I'm using dolphin under gnome 3 and when I select a file, the name disappears. I tried looking at my kde settings and looking under colors and nothing seems out of the ordinary, and I don't know if that is where dolphin is getting its colors from since I'm using gnome 3. I'm using Ambiance for a color theme, so I looked in /usr/share/themes/Ambiance/gtk-3.0/gtk.css and saw base color is white and so is selected fg color. So I changed them to black and got nothing. It appears that dolphin is getting its color information from gnome 3 since when changing from Ambiance to Adwaita the colors change in dolphin.
So how do I get the filename not to disappear when I select a file in dolphin and hover over it?
Thanks
With root privileges, please change in the following file:
/usr/share/themes/Ambiance/gtk-2.0/gtkrc
from
selected_fg_color:#ffffff
to
selected_fg_color:#000000
I figured this out in a long search for the right configuration file.
I also tried changing it in .local/share/... but it did not work that way for me.
Note that this affects selected text color everywhere, not just in Dolphin. It looks OK though so for me it's not a problem.
Thanks for this answer. I changed bg to #80f080 and fg to #a00000, and it works fine for me.
I'm still having issues with this despite changing any of these values the white still remains while my mouse is hovered over the text
Note that if your active theme is not Ambiance, you need to replace it in the path with yours.
I have struggled with the various solutions posted for hours now and have finally gotten mine working. I am running KDE Plasma on Ubuntu and a file selected AND hovered over will disappear. First, you need to figure out what theme you are using (System Settings -> Appearance -> Application Style -> (and then Window Decorations or GTK .. not sure which, but both of mine were set to the same). In this example my theme is "theme_name". Find this theme file here:
/usr/share/themes/theme_name/gtk-2.0/gtkrc
Find the line that contains "selected_fg_color:". Change the six digits after the : to all zeros:
selected_fg_color:#000000
You will need to restart Dolphin after this change. This changes the text when you select and hover to black. If you are running a theme with a white background it still won't look perfect because the background will be white, and not look selected (but at least you can read it now). You can change the color of the background fill by adjusting "base_color" in the same file. However, this changes the base background color for the whole theme... so it will change other things too.
I think that some who have said this fix does not work for them may be editing the gtkrc file for the wrong theme.
In Ubuntu Studio I found that this problem can be solved by selecting a different Style in Windows Manager, e.g. from MurrinaDark to MurrinaBlue.
Apologies, that did not work after all.
The following worked:
sudo gedit
goto /usr/share/themes/Greybird/gtk-2.0/gtkrc
change from:
gtk-color-scheme = "bg_color:#CECECE\nselected_bg_color:#398ee7\nbase_color:#fcfcfc" # Background, base.
gtk-color-scheme = "fg_color:#3C3C3C\nselected_fg_color:#000000\ntext_color:#212121" # Foreground, text.
into:
gtk-color-scheme = "bg_color:#CECECE\nselected_bg_color:#ccff99\nbase_color:#fcfcfc" # Background, base.
gtk-color-scheme = "fg_color:#3C3C3C\nselected_fg_color:#0000cc\ntext_color:#212121" # Foreground, text.
I'm sorry, but none of this is working is there some other solution to this? uiuiui
I had the same problem running dolphin in kde. I also had ubuntu-mate installed and apperantly it still set an environment key for qt apps to run gtk2 theme:
$ env | grep QT_QPA_PLATFORMTHEME
QT_QPA_PLATFORMTHEME=gtk2
You can test for yourself on your system with:
QT_QPA_PLATFORMTHEME=kde dolphin
This will use the styles set in ~/.config/kdeglobals which you can change with the settingsmanager of kde.
Other options are to use qt5ct but it is not in the official repos. See this post on webupd8
Go to the ~/.config/kdeglobals and add these lines
[Colors:View]
BackgroundNormal=94, 104, 109
the #s representing the RGB value of whatever color you choose.
|
STACK_EXCHANGE
|
Can real numbers be represented such that addition is computable in an intuitive way?
I want to represent (computable) real numbers in such a way that addition is computable, i.e. there exists a Turing machine $M(x, y, n)$ which halts with the $n$th digit of $x + y$ on its tape.
The most obvious way to encode real numbers is to lead off with a sign bit, i.e. the first digit of a number is 1 if the number is negative or 0 if it is positive. But it seems to me that this will result in addition being non-computable, because $M$ would need to examine an arbitrary number of digits in order to determine if $(x +\epsilon) - x$ is positive.
One thing I could do is to take a continuous bijective function $f:\mathbb R\to[0, 1]$ and encode $x$ as $f(x)$. However, this is very different than how I usually think about the representation of numbers.
Is there a more straightforward way to represent real numbers such that addition is computable? Since all computable functions are continuous, and the sign function is discontinuous, it seems like my intuition about positive versus negative numbers is incompatible with computability.
@Rahul Is it? Doesn't the same problem apply? We don't know what the integer part of the Cauchy sequence:$$(.9,.99,.999,.9999\dotsb)$$is.
@Rahul Ah. That seems to have its own problems, though, such as the fact that we never know whether a nonzero real is positive or negative, or whether two reals are within a given $\epsilon$ of each other. (Consider a sequence whose first thousand entries seem to approach $\pi$, and whose remaining entries all equal $-100$.)
Rahul's comment essentially kills off any hope of a natural answer to your question. However, there is a very silly answer.
It's not hard to show that the structure $(\mathbb{Comp}, +)$ (where "$\mathbb{Comp}$" denotes the set of computable reals) is isomorphic to the additive group of the ring $\mathbb{Q}[\pi]$. The latter has a natural computable presentation, hence the former has a (not very natural) computable presentation; that is, there is a structure $(A, \oplus)$, where $A$ is a computable set of natural numbers and $\oplus$ is a total computable binary function, which is isomorphic to $(\mathbb{Comp}, +)$.
The problem, here, is that the unfolding of this interpretation is hard: there is no uniform algorithm to take an element of $A$ and output its decimal expansion (for basically the reason Rahul mentioned in their comment).
In such a representation, you can't get multiplication, right? (Also, hi.)
Do you have more information/a link about this? It's not obvious to me that those two groups are isomorphic.
@Xodarap Any two vector spaces with the same dimension over the same field are isomorphic. $\mathbb{Comp}$ and $\mathbb{Q}[\pi]$ are each countably-infinite-dimensional $\mathbb{Q}$-vector spaces, so they're isomorphic.
|
STACK_EXCHANGE
|
Windows Media Player 9 Codecs Pack 1.0 Home Windows Video Video Codecs Windows Media Player 9 Codecs Pack Windows Media Player 9 Codecs Pack 1.0 Download Now! How should I use "probable"? Wähle deine Sprache aus. Is there a way to prevent developers from using std::min, std::max? navigate to this website
Why was Gilderoy Lockhart unable to be cured? had a look at my files on AVIcodec and found that the DivX codec stuff was missing. If your looking at a black screen in WMP when the clip plays and a error pops up right away then this is probably where you stand. >>"An operation failed due Wird verarbeitet... https://support.microsoft.com/en-us/kb/291818
After the determination has been made, WMP (or Winamp) hauls in the right codec to decipher the file. Veröffentlicht am 25.12.2013G'day Youtubers, In this tutorial I'm going to assist you on how to download and install K-Lite Codec Pack to abolish Windows Media Player's Error [Lifetime]. The missing codec might be available to download from the Internet.
Or am I taking the wrong road, is there something else that could correct this. Cheers, Jimmy 04-28-2004 12:31 PMJimmy R Basically, the problem is Windows Media Player 9 won't download competitor's codecs. I installed Microsoft Application Screen Decoder (MSA1). Windows Media Player 9 Audio Codec Anmelden 24 Wird geladen...
Hopefully this should resolve the situation. Windows Media Player 11 Codec Error However, on some systems when the user clicks the link, Windows Media Player gives them the error: Window Media Player cannot play the file. thx! http://www.microsoft.com/windows/windowsmedia/player/webhelp/default.aspx%3FID%3DC00D10D1%26codec%3DH264 This may be a codec issue.
Join them; it only takes a minute: Sign up Here's how it works: Anybody can ask a question Anybody can answer The best answers are voted up and rise to the Windows Media Player 9 Dvd Codec When you install a codec, it places some information about itself in the Windows Registry that tells WMP (and any other multimedia programs) what the codec is for, and where to When I try to play it it does some stuff then it says 'error downloading codec'. If that one doesn't work, try: 'AVICodec' http://avicodec.duby.info/ 04-29-2004 03:10 AMFrightfoO615 posts Feedback Cheers, I have downloaded AVIcodec and am ready to use it.
Similar Threads - (Solved) Windows Media Windows File Explorer Issues! 741852963Anonymous, Oct 3, 2016, in forum: All Other Software Replies: 6 Views: 218 Oddba11 Oct 8, 2016 at 8:19 AM boot
Stay tuned for more tutorials like this! ▼ Links:K-Lite Codec Pack: http://codecguide.com/download_kl.htm▼ l====About the K-Lite Codec Pack====l ▼The K-Lite Codec Pack is a collection of DirectShow filters, VFW/ACM codecs, and tools. Windows Media Player Error C00d11c7 Codec Download Loading... Xvid Codec Windows Media Player 9 I gave up and installed VLC.
that way i would only have to install those codecs that are neccessary Don't throw in those weirdo codecs. useful reference http://avicodec.duby.info Thanks spiderfix, that was just what I was looking for, interestingly enough, the program says that all the video files I cant open are Windows Media Video V7 or V8, The K-Lite Codec Pack is designed as a user-friendly solution for playing all your audio and movie files. Yes, my password is: Forgot your password? Divx Codec Windows Media Player 9
Although you are trying to play a WMV (Windows Media Video File) with Windows Media Player (WMP), it may not have the correct version of the codec or their copy of Melde dich bei YouTube an, damit dein Feedback gezählt wird. Wird verarbeitet... my review here Identify the codec the clip was compressed with.
This package can be used as an alternative to automatically downloading Windows Media Codecs, or to correct problems experienced with previously-downloaded codecs. Windows Media Center Codec Error The latest Windows Media codecs. Nimo codec: http://www.divx-digest.com/software/nimo_pack.html DivX codec: http://www.divx.com/divx/?src=toptab_divx_from_/index.php (click the Standard DivX Codec FREE link) FYI...
And I just installed DivX 5.05 still doesn't work. But, now i have it, what do I do? It's up to Windows Media Player (or Winamp, which also plays WMA files) to look inside the file and decide which format was used to create the file. Codec Windows Media Player 10 whats wrong?
Is there anything you can do about that? 08-26-2012 07:47 PMWitold Feedback Related Post Your Answer TopicYour replyYour name or log inVerify you're humanBy submitting this form you agree to our Manage Your Profile |Newsgroups© 2016 Microsoft Corporation. Can Communism become a stable economic strategy? http://celldrifter.com/windows-media/error-downloading-codec-windows-media-player-10.php Autoplay Wenn Autoplay aktiviert ist, wird die Wiedergabe automatisch mit einem der aktuellen Videovorschläge fortgesetzt.
What details does AVICodec give? I tried other players like PowerDVD XP and even renamed the files to a different extension (like .avi to .mpg). It does remove the player, but it doesn't move the user detail box anymore. Anmelden Statistik 42.646 Aufrufe 160 Dieses Video gefällt dir?
If you're not already familiar with forums, watch our Welcome Guide to get started. In this case, WMP can't find the codec necessary to play The Cabinet of Dr Caligari_xvid-belos.avi, so it shows this message. 2Click the Web Help button.Windows Media Player fires up Internet
|
OPCFW_CODE
|
My druid's Rake seems to get refreshed by someone else. No other druid in raid, so I guess some pet is to blame. Anyway, Rake is a DoT and not a debuff, so I think it doesn't make sense to show the duration of some pet's Rake to a druid.
Also, "Mangle - Bear" and "Mangle - Cat" are basically the same debuff. If I saw correctly during our raid, they are not treated as such. (interesting, because if a bear applies the debuff, the cats can skip mangling, which leads to more kitty DPS)
Edit: Just saw that there are spell specific options. So most of my complaints may be moot. :)
Hmmm... I am not sure that I understand the spell specific settings. It won't let me add "Faerie Fire (Feral)". Also, how does this "aura to lookup"-thingie work? Is that a way to add aliases for spells?
Also, how does this "aura to lookup"-thingie work? Is that a way to add aliases for spells?
Yes. Enter one aura per line.
About Rake, hunters' cat have an attack named Rake, that is similar to the druids' Rake. Was it shown as yours or others (red or yellow, provided you did not changed the default colors) ?
Inline Aura can only differentiate debuffs that are "yours" or "others". Given the Blizzard API, it can not tell if it was applied by a druid, a hunter pet or whatever. If the debuff have the same name that your spell, it will be taken into account. And BTW, any DoT is a debuff that deals damages.
It may have been shown as other. I configured it now to only show my own Rake, which is the only thing that makes sense for a DoT IMO.
Couldn't test it yet. I do not team with hunters that often. :)
How do I enter Mangle correctly? Both ways? "Mangle - Cat" for "Mangle - Bear" and the other way round as well? Or is one way enough? I can apply both debuffs myself, and they do overwrite each other.
"Faerie Fire (Feral)" still seems to be a problem, tho. I can hack it into the saved config file, but I cannot enter it ingame. Dunno why, just that nothing happens when I click the okay-button.
Edit: in case it was not clear: I totally love Inline Aura. Configuration could need some more help texts, and I guess the border colors of my BF-skin are a bit too similar to differenciate between own debuffs and others. Will play around with some other skins. Other than that, IA is a huge help!
I did not have time to test the druid issues yet but I will do.
From the point of view of Inline Aura (and Blizzard API), DoTs are just debuffs, one of the two type of auras (the other one being buffs). Stock Inline Aura handles all auras the same way. This is why you see others' DoT when "only mine" is unchecked and why there are spell specific options (though they are a bit buggy ATM). If you have meaningful spell settings for druids, I will be happy to include them.
Okay, lemme see... Since it did not let me insert Fearie Fire in the settings, I edited ClassDefaults.lua by hand:
elseif class == 'DRUID' then
SetSpellDefaults('debuff', 48564, 48566) -- Mangle - Bear => Mangle - Cat
SetSpellDefaults('debuff', 48566, 48564) -- Mangle - Cat => Mangle - Bear
SetSpellDefaults('debuff', 48475, 48476) -- Faerie Fire (Feral) => Faerie Fire
That seems to work for me so far. I also set my DoTs so that I do not see those of other druids or pets. All looking good, so far. :)
Not sure about listing Mangle twice, tho. But it didn't want to work for me with just one of those lines.
Using r25 on live with Dominos+OmniCC+ButtonFacade since Dominos dumped it's buff module. It shows the timers fine, but it doesn't seem to color debuff borders for some reason (except somewhat randomly in raids - perhaps showing other people's buffs/debuffs?).
It colors buff borders just fine, although it doesn't color Mend Pet when it's up unless I'm targeting my pet.
Using r25 on live with Dominos+OmniCC+ButtonFacade since Dominos dumped it's buff module. It shows the timers fine, but it doesn't seem to color debuff borders for some reason (except somewhat randomly in raids - perhaps showing other people's buffs/debuffs?)
Hrm... It works fine here using Dominos and OmniCC. I do not use ButtonFacade myself, though I installed it for testing purpose. I guess we are going to find out what could cause this bug to happen. Here a first batch of questions : Which BF skin are you using ? Does disabling BF fix it ? In which group setup is it happening (party/raid/solo/other) ? Are you using any other addon that might mess with the action buttons ?
Hrm... It works fine here using Dominos and OmniCC. I do not use ButtonFacade myself, though I installed it for testing purpose. I guess we are going to find out what could cause this bug to happen. Here a first batch of questions :
I'm at work right now, so I'll have to get to some of the questions later, but:
In which group setup is it happening (party/raid/solo/other) ? First noticed in raid, and then tested more extensively while solo.
Are you using any other addon that might mess with the action buttons ? DrDamage, but I put it on standby/suspended because its numbers were overlapping InlineAura's, making them hard to read. I'll try with DrDamage fully disabled.
I should also mention that the Dominos buff module worked fine with this setup until it was removed in the latest update in the last couple of days. Also, Inline Aura works fine for highlighting self and target buffs, and for displaying the time counters for both buffs and debuffs.
I should probably roll back to 1.0 before launching the game as well, as I only did a rollback and reloadui while playing because it didn't look like there were any non-code changes (although I didn't look too deeply).
Okay so it seems to be specific to the Caith skin, as it works with all the other skins I have (Dream Layout, Zoomed, Blizzard). Time to find another skin I like I guess.
Edit: It seems that the "Checked" border color setting in ButtonFacade must be set to a color and alpha value that will bitwise-AND or something with InlineAura's red debuff border, or else it won't show.
Let me do a bit of testing. I'm guessing it's the same issue we had with SBF.
Edit: Ok, it seems it's the same issue that arose with Satrina Buff Frames. Apparently, there's an issue with applying colors to a specific layer that already has colors set (even if set to 0, in this case).
Due to the nature of how colors are applied in ButtonFacade, that system is not going to change. If the addon in question is using LBF correctly, then that addon *should* be able to use LBF's built-in functions to set/change the color.
However, if this is still causing a problem, the buff/debuff colors should be applied to the "Border" layer only, as all of my skins have the color left open for this exact reason. The other layers are more dynamic and should not be influenced by anything other than the skin itself.
IA does not detect nor interact with LBF. If there is a way to detect if a button is skinned with LBF and to set the border color that does not completely mess LBF, I will use it. However, my first tests let me think there was no need for this.
I updated my post above. For debuff/buff border colors, use the "Border" layer, not the "Checked" layer. Please. :) The reason being is that the "Border" layer is slightly more static than the others and is also the layer that most other similar mods use. For the record, by "Border" I mean the "Equipped" overlay.
Edit: If someone can come up with a viable solution, I'd be welcome to take a look at it, but I'm pretty sure that how LBF colors layers isn't going to change.
I did notice that this problem arises when an addon uses native (IE Blizzard) methods to apply such changes. When this is done, it interferes with BF's skinning (It usually tries to blend the two color settings). The only thing I can think of is to detect if LBF is loaded and use its built-in functions to change the color. Remember, BF is a GUI only. LibButtonFacade is a library. ;)
My primary objective is to highlight the standard action buttons. Ideally I would like to find a solution that works both with and without LBF. If I cannot, I will probably write a code that handles both situation differently.
After a closer look to LBF, I think there are 4 solutions:
1) use Border instead of Checked texture. That will fix the issue with the Caith skin but any texture that defines a Border color (or even if the user plays with the border color setting of BF) will break IA coloring again.
2) add my own texture, that will not be skinned. This will work with or without LBF but may look ugly depending on the LBF skin. (And I could add an option to have a "plain" highlight.)
3) add my own texture, that tries to use the same texture than the Border. Maybe tricky to set up but would fix the ugliness factor of point 2).
I'm also thinking that load order plays a factor here, too. For example, if BF is loaded first (alphabetical), then the problem we're discussing happens. If the mod loads first, BF will probably overwrite those colors.
As I said, I know of no other solution outside of the mod first detecting LBF and using its built-in functions to apply the color if it (LBF) exists. If someone can come up with a viable solution, I'd love to hear it.
|
OPCFW_CODE
|
Two approaches to view newlines, each of which are self-reliable, are that newlines both different traces or which they terminate traces. If a newline is taken into account a separator, there will be no newline after the final line of a file. Some plans have problems processing the last line of the file if It is far from terminated by a newline. Conversely, packages that expect newline for use for a separator will interpret a last newline as starting up a new (empty) line.
All scientific trials have standards about who will get entangled. These demands are determined by such factors as age, gender, the type and section of the disease, former treatment method history, and various health-related ailments.
Also, If the customer desires internet pages and webpages of documented output, knitr can offer them with no less than typing, e.g. by building slightly distinct versions of the identical plot time and again yet again. From a delivery of material perspective, that may be unquestionably an efficiency obtain in contrast with several hours of copying and pasting figures!
This databases presents ongoing entire-textual content educational journals which can be locally revealed by scholarly publishing companies and educational institutions in Turkey.
Strategic pondering is very essential in the course of a project’s inception: if you come up with a bad conclusion early on, it will have cascading damaging impacts all over the project’s complete lifespan.
to terms with the thought of class and generic features. Generic features and courses might be talked over even more in Object orientation, but only briefly.
In the above code gantt defines the next knowledge layout. Part refers to the project’s portion (practical for large projects, with milestones) and each new line refers to some discrete job.
To minimise complex financial debt in the outset, the top place to get started on might be using a pen and paper and an open up mind. Sketching out these details your Concepts and selecting specifically what you wish to do, free through the constraints of a certain bit of technological innovation, generally is a worthwhile physical exercise prior to deciding to get started.
benefit to a variable but The end result will not be automatically printed. Instructions are divided both by a semi-colon (‘;’), or by a
Particularly the Morse prosign represented by the concatenation of two literal textual Morse code "A" people sent with no typical inter-character spacing is Employed in Morse code to encode and suggest a whole new line in a formal textual content concept.
Unicode, In combination with giving the ASCII CR and LF control codes, also supplies a "future line" (NEL) Handle code, along with control codes for "line separator" and "paragraph separator" markers.
Preset line duration was utilized by some early mainframe working techniques. In such a method, an implicit stop-of-line was assumed each individual seventy two or eighty characters, as an example. No newline character was stored. If a file was imported from the surface earth, strains shorter than the line duration had to be padded with Areas, even though lines longer than the line length needed to be truncated. This mimicked the use of punched cards, on which each line was saved over a independent card, ordinarily with eighty columns on each card, frequently with sequence numbers in columns seventy three–80.
Thanks for the responses and bringing up the “inspired submit” you observed. I’m planning to give the dude good thing about the doubt, In particular as it’s apparent that he wrote his possess R code, and it’s in a bigger context of “Contemporary Portfolio Concept.
The R/ folder contains all the R code that defines your package’s features. Putting your code in a single spot and encouraging you to produce your code modular in this manner can tremendously cut down duplication of code on significant projects.
|
OPCFW_CODE
|
Very basic defining Digital DIY and "ABC"
This conversation follows on from Luca's email (sent: 08 February 2015 19:10 to the extended Steering Board) which invites us 'to start working on ABC cases as "core examples" of what we should take into account in our project'.
I wasn't sure whether to start a new thread, or to add this on to Bruce's attempt to start a list of *examples*. But this conversation is, I think, about even more basic definitions, so I started a new one.
So: I think we can agree that our project centres around what you're calling "ABC" technologies, i.e. (as I think it means) digital technologies that lead to the production of physical stuff, but there is still the question of whether it has to be an internet-connected machine which does the making, or if an internet-connected human can be the one who does the making.
Note also that the internet-connected machines typically require a lot of help from humans anyway, and don't sit in their own houses devising and fabricating their own projects. So I mean it's always blurry, and there are no projects that a machine initiates and completes single-handedly (as it were).
Nevertheless I suppose we can have a 'tighter' and a 'wider' definition, both of which are probably acceptable to all of us, but where the tighter one means you've got a device connected to the internet which manufactures things (? - is that what you'd have in the tighter definition?) and the wider definition is about digital technologies fostering and inspiring making, but where the making can be done by humans, humans with tools, and/or machines.
What do you think?
Incidentally, about this "ABC" phrase ... to be honest I joined in with pretending to recognise this phrase as you were casually using it to refer to these kinds of things ... and it seemed to stand for "Atoms to Bits Conversion" or something [note added 24 hours later: it's "tags/atoms-bits convergence" isn't it, sorry] -- Anyway so just now I Googled "ABC atoms bits" to see what in-use-in-the-world definitions I could look at ... and I don't really see any ...
... or at least the main thing that comes up is this exhibition by John Maeda which links Atoms, Bits and Craft (ABC)
... so personally I like that, but it's not the "ABC" you've been talking about.
Also, I think we described archetypal ABC technologies as 3D printing and Arduino. 3D printing clearly counts but where does Arduino sit in relation to either the theme of turning bits into atoms, or small-scale manufacturing ...? [comment added 24 hours later: ah, since ABC means atoms-bits convergence (rather than 'conversion'), then it's a bit easier to imagine some kind of answer, but I'm still interested to hear an answer from one of you...]
Sorry if I'm asking stupid questions, but they might be stupid questions that other people might ask, and it would be nice to have the answers to them!
To summarise, the questions that arise from this message are:
* Do we accept a 'tighter' and a 'wider' definition of Digital DIY, as outlined above?
* What precisely is the 'tighter' one - does it necessarily involve a device connected to the internet which manufactures things, or something else?
* Are we going to use this phrase 'ABC' and if so what exactly does it mean? (Are you drawing on a previous use of it ... and presumably it's not the John Maeda one [though I'd be happy if it was]?)
* In what way is Arduino a typical example of it?
|
OPCFW_CODE
|
|submitting two forms with one button|
Does anyone know how you can submit two forms with one button and validate the two before submission?
Essentially I have split a registration form into two as I require some of the content to be stored in a database and some of the content (a word document upload file) to be sent via email.
However the problem I'm getting is that users are completing the first form and not the second therefore was wondering if it was possible to submit both forms with only one button unlike to the two I'm using at present.
I don't want the first form to be submitted unless there are values in the second form.
Can anyone offer any advice or alternatives?
Something like onClick="CheckForms()"; to initiate the process.
Use the CheckForms() function to verify all your data is correct.
to submit your forms.
Try this [w3schools.com] as a starting point.
Hope that's of some help.
The above suggestion doesn't work, because once a form is submitted, focus leaves the document.
Another method is to have the second form on a second page which has the submit routne on the onLoad method. The page never really shows, it appears that the final page loads twice.
A good point and well made ;-)
I dont understand why have you split the form?
Have you set enctype=multipart/form-data
Request all the form fields on the server with the upload object and then do what you want with them.
I would just make it one form and then let your script that handles the form sort out what to do with everything.
Send all your values to your script let it drop all appropriate values into the DB and then send off the doc and throw up a thankyou page or whatever happens next.
If the two forms must be submitted together 100% of the time it would only make sense. I would think you could combine your two scripts together with only mild difficulty making sure they don't step on each others toes.
I am inclined to agree with jatar_k, one form would be the way to go. It would be nice to know what server side scripting you are using.
I do see that sometimes it is necessary to be able to post two forms simultaneously, especially in the environment where two forms are independent. For example, trying to integrate two different web applications, and both of them have forms on different parts of the page... There are at least two possible ways that I can think of.
<iframe name="dummy" width="0" height="0"></iframe>
document.Form1.target = 'dummy';
- Harvest data from these two forms by traversing through form.elements, and then encoded them into a string suitable for POST request. Hint: use encodeURIComponent().
- Create a XmlHttpRequest object, i.e.
var obj = document.all ? // Check whether it is MSIE
new ActiveXObject("Msxml2.XMLHTTP") : // MSIE
new XMLHttpRequest(); // Mozilla
- Use the object to do the POST request! Then you need to check the readystate to make sure the request has completed. You can then display a thank you message, or redirect to another page when both forms are posted.
As you can see, it only works for MSIE > 5 and Mozilla. But who cares about other players anyway :)
Thanks everyone for your suggestions particularly scotty. Essentially I followed your advice concering submitting the first form to a hidden frame and it works perfectly.
|
OPCFW_CODE
|
There is a basic requirement for blood glucose meters to be calibrated, or coded. In the absence of periodic calibrations the accuracy of any blood glucose meter’s measurements comes into question.
Without a reference point to begin with, the instrument may never read correctly.
With the use of a calibratio, an instrument is given a predefined value so as to eliminate errors. If the readings on your blood test meter vary, then there will cease to be any good reason to test.
Calibration removes the margin for error
The idea of doing coding on your blood test meter (or calibratio, which is the same thing) may sound like a bit of a chore, but a comparison might be that if you are making a cake, you need to zero out the scales before you start doing all the measurements. If the scales change each time you weigh out your ingredients, you’ll get a very odd cake at the end of it.
It’s a much bigger deal to get your blood sugar readings as close to accurate as is possible.
Test strip variations
Although blood test meters are commonly available, the technology used is still highly sensitive. For each pot of test strips there may be variations in the sensitivity of each batch. In the production of any blood test strips, a sample of each batch is taken and tested against a standard solution.
The reading thus produced allows the manufacturers to calculate what the code number for that batch should be so that when this code number is entered into your meter it will recalibrate your meter correctly for the sensitivity of that batch of strips.
Recalibrating your blood glucose meter
Each time you open a new pot of test strips you will need to recalibrate your meter.
How this is done varies from machine to machine, but you should either be shown how to do this by your healthcare professional , or you should be able to figure it out.
The instructions that come with any meter will show you how, and these days the instructions often include good diagrams, so it’s not that hard to follow.
If you get into trouble, you should be able to find a customer helpline for your particular blood test meter. It may be as easy as placing one of the new test strips in the meter and holding down a button until the displayed code in the window matches the one on the side of the pot of sensors.
Miscoded meters can give false results
A miscoded meter can give readings that are out by as much as 43% – which should persuade you it’s worth taking a couple of minutes to do a calibration. Some meters may even refuse to work unless you do a calibratio, so you won’t be able to avoid it!
No Coding Technology
One aspect of blood test meters that can distinguish them is if they use ‘no coding technology’. You might want to consider other factors when choosing a blood test machine.
Research has shown that pharmacists agree that the use of no coding technology makes it easier for patients to get accurate results.
As many as 77% of blood glucose testers have themselves said that a meter with automatic coding would be beneficial.
The meters code automatically once each test strip is inserted. This helps ensure accurate readings.
Another meter with no coding required is the Wavesense Jazz blood test meter
With no coding technology , it’s not that they do not calibrate, they do, it just means that you as the user do not have to do it, it’s automated into the machine’s technology. So if you don’t fancy the prospect of coding your blood glucose meter, then look for no coding alternatives.
|
OPCFW_CODE
|
How can I make "distinguished name" configurable via environment variables?
I am using libressl on Alpine with the following versions:
$ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.14.2
PRETTY_NAME="Alpine Linux v3.14"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
$ libressl version -a
LibreSSL 3.3.3
built on: date not available
platform: information not available
options: bn(64,64) rc4(16x,int) des(idx,cisc,16,int) idea(int) blowfish(idx)
compiler: information not available
OPENSSLDIR: "/etc/ssl"
I am using the following openssl.cnf file
# default section for variable definitions
DN = ca_dn
DISTINGUISHED_NAME = ${ENV::DN}
# certificate request configuration
[ req ]
default_bits = 2048
default_md = sha256
encrypt_key = no
prompt = no
string_mask = utf8only
distinguished_name = ${DISTINGUISHED_NAME}
[ ca_dn ]
C = SE
ST = Stockholm County
L = Stockholm
O = Organization
OU = Unit
CN = Name 1
emailAddress =<EMAIL_ADDRESS>
# certificate authority configuration
[ ca_ext ]
authorityKeyIdentifier = keyid, issuer
subjectKeyIdentifier = hash
basicConstraints = critical, CA:TRUE, pathlen:1
keyUsage = critical, keyCertSign, cRLSign
# another distinguished name
[ other_dn ]
C = SE
ST = Stockholm County
L = Stockholm
O = Organization
OU = Unit
CN = Name 2
and I am trying to get the certificates generated with different distinguished names with the help of environment variables. So far, I have failed to achieve what I want:
$ printenv DN
$ libressl req -newkey rsa:4096 -x509 -days 3650 \
-keyout certs/ca.key -out certs/ca.crt \
-config certs/openssl.cnf -extensions ca_ext
$ libressl x509 -noout -subject -in certs/ca.crt
subject= /C=SE/ST=Stockholm County/L=Stockholm/O=Organization/OU=Unit/CN=Name<EMAIL_ADDRESS>$ export DN=other_dn
$ printenv DN
other_dn
$ libressl req -newkey rsa:4096 -x509 -days 3650 \
-keyout certs/ca.key -out certs/ca.crt \
-config certs/openssl.cnf -extensions ca_ext
$ libressl x509 -noout -subject -in certs/ca.crt
subject= /C=SE/ST=Stockholm County/L=Stockholm/O=Organization/OU=Unit/CN=Name<EMAIL_ADDRESS>
I think I have done my job to search the internet for similar problems, but I have not come up with the exact situation (and a solution) yet. There are examples that show how to benefit from environment variables when setting SANs, but I could not see a case where the user is willing to change DNs through environment variables.
I have the following questions:
Is what I am trying to achieve (i.e., have different DNs inside a configuration file and select them properly via environment variables) not doable at all?
If the answer to the previous question is "No; you can achieve it," how can I use the environment variables?
Note that I have only shared a minimal (not-)working example above. In the actual case, I have a longer configuration file that encodes requests and x509 extension options properly for different servers and clients under the same CA. Should you need the full configuration file, please let me know so that I can strip the sensitive information and paste the full configuration file by updating my question.
I thank you in advance for your time and help, and I look forward to any pointers and/or constructive feedback that would solve my issue.
you can try expand file variables with envsubst:
instead of
... -config certs/openssl.cnf ...
use
... -config <( envsubst < certs/openssl.cnf ) ...
so $DISTINGUISHED_NAME will be applied
Thank you! Your solution is clean and works as expected :)
|
STACK_EXCHANGE
|
Google doc viewer display private documents or python doc rendering library
I have a website which in a section gives users access to some documents.
The documents cannot be downloaded from the site, but only if the users are logged in.
Is is possible to use google docs viewer to show a preview of the documents to the user, considering the documents are not downloadable if you aren't logged in ... will the viewer be able to download them in order to render them? As an optional feature, could it use a secure connection when showing the documents?
If this is not possible with google docs viewer, do you know some python library that could render documents as HTML (so that I can return them to the user)? The documents will probably be of various types (like docs, excels, pdf, ppts, etc).
It's not possible with Google's doc viewer since it needs the URL of the file to be displayed as a parameter (which makes it possible to retrieve the URL and download the documents without being logged in), if want to use it anyway, you'll have to create a preview version of your documents.
Edit :
You can change the url parameter of Google's doc viewer to a script on your server, this script should only accept requests to documents from Google's doc viewer ( identifiable with the user agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:8.0) Gecko/20100101 Firefox/8.0,gzip(gfe) (via docs.google.com/viewer) it's light, I know ) and serve the appropriate file depending on an id parameter or something. This way you can control who gets access to the documents. That's my 2 cents.
It's dubious at best to simply allow only Google to download the file. The person can still modify the src attribute in the Google reader iframe to try and guess which file to download.
@JohnathanElmore Of course. Security through obscurity is never a good thing.
You could get the host name from IP, then do a reverse IP address from hostname to better verify the source is google, more info: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=80553
If I understand this correctly its impossible to use the viewer with google's authentication? I keep getting a html file in the viewer when the file isn't publicly shared. But when I share it pulbic it works fine?
@Nasreddine Would it work to add:
Allow docs.google.com/viewer
To my .htaccess file ?
Even if you do this, google saves a copy on their servers, which can be downloaded then through the save to my drive or print options if you happen to open the full viewer.
Sorry to bring this up 2 years after, but I'm facing a similar issues and there's no info at all!
|
STACK_EXCHANGE
|
#include "Physics/Scene.h"
#include "Physics/Object.h"
#include "Physics/Sphere.h"
#include "Physics/Plane.h"
#include "Physics/Spring.h"
#include <Gizmos.h>
using namespace Physics;
using glm::vec4;
Scene::Scene()
{
// Default gravity just in case
m_gravity = vec3(0.0f, -9.8f, 0.0f);
//Defaults for fixed time at 100fps
m_fixedTimeStep = 0.01f;
// Set accumulated time to 0
m_accumulatedTime = 0.0f;
// Zero the global force
m_globalForce = vec3();
}
Scene::~Scene()
{
// Delete all springs
for (auto spring : m_springs)
{
delete spring;
}
// Delete all objects
for (auto object : m_objects)
{
delete object;
}
}
void Scene::update(float deltaTime)
{
// Increase accumulated time by delta time
m_accumulatedTime += deltaTime;
// Each iteration uses m_fixedTimeStep as delta time
// The loop continues until the sum of fixed time steps is equal to or less than m_accumulated time
while (m_accumulatedTime >= m_fixedTimeStep)
{
// Applies gravity to all objects
applyGravity();
// Updates all objects with fixed time step
for (auto object : m_objects)
{
object->update(m_fixedTimeStep);
}
// Updated all springs with fixed time step
for (auto spring : m_springs)
{
spring->update(m_fixedTimeStep);
}
// Decrement the accumulated time
m_accumulatedTime -= m_fixedTimeStep;
// Check for collisions
checkCollision();
// Resolve collisions
resolveCollision();
}
}
void Scene::draw()
{
// Draws all objects
for (auto object : m_objects)
{
object->draw();
}
// Draws all springs
for (auto spring : m_springs)
{
spring->draw();
}
}
void Scene::addObject(Object * object)
{
// Adds the parameter object to the vector
m_objects.push_back(object);
}
void Scene::removeObject(Object * object)
{
// Find object in vector
auto iter = std::find(m_objects.begin(), m_objects.end(), object);
// If found remove from vector
if (iter != m_objects.end())
{
m_objects.erase(iter);
}
}
void Physics::Scene::addSpring(Spring * spring)
{
// Adds the spring to the vector
m_springs.push_back(spring);
}
void Physics::Scene::removeSpring(Spring * spring)
{
// Find object in vector
auto iter = std::find(m_springs.begin(), m_springs.end(), spring);
// If found, remove from vector
if (iter != m_springs.end())
{
m_springs.erase(iter);
}
}
void Scene::applyGlobalForce()
{
// Applies global force to all objects
for (auto object : m_objects)
{
object->applyForce(m_globalForce);
}
}
void Scene::applyGravity()
{
// Applies gravity to all objects
for (auto object : m_objects)
{
// Since gravity applies force based on mass
object->applyForce(m_gravity* object->getMass());
}
}
void Physics::Scene::checkCollision()
{
// Loops through all objects to find collisions, then place them in the collision vector
for (auto object = m_objects.begin(); object != m_objects.end(); object++)
{
// Loops through objects that the first object can collide with, the nature of this loop is that it checks
// against objects forward in the vector
for (auto object2 = object + 1; object2 != m_objects.end(); object2++)
{
Collision tempCollision;
// Passes both objects and a reference to the collision normal of tempCollision into the collision check function
// Uses the return bool and adds to collision vector if there is a collision
if ((*object)->isColliding(*object2, tempCollision.collisionNormal))
{
// For the specific case where the first object is a sphere and the second object is a plane, they are added to the struct in reverse order
if ((*object)->getShapeType() == ShapeType::SPHERE && (*object2)->getShapeType() == ShapeType::PLANE)
{
tempCollision.objA = *object2;
tempCollision.objB = *object;
}
else
{
tempCollision.objA = *object;
tempCollision.objB = *object2;
}
// Adds the struct to the vector
m_collisions.push_back(tempCollision);
}
}
}
}
void Physics::Scene:: resolveCollision()
{
// TODO: COMMENT HERE
for (auto col : m_collisions)
{
// If both objects are static, skip collision resolution
if (col.objA->getIsStatic() && col.objB->getIsStatic()) continue;
// Inverse masses
float inverseMassObjA = 1 / col.objA->getMass();
float inverseMassObjB = 1 / col.objB->getMass();
// Calculate the relative velocity
vec3 relativeVelocity = col.objB->getVelocity() - col.objA->getVelocity();
// Find out how much of the relative velocity goes along the collision vector
float impactForce = glm::dot(relativeVelocity, col.collisionNormal);
// Average elasticity of both objects
float averageElasticity = (col.objA->getElasticity() + col.objB->getElasticity()) / 2;
// Get the formula from our resources and calculate J
float impulseMagnitude = (-(1 + averageElasticity) * impactForce) / (inverseMassObjA + inverseMassObjB);
// If both objects are spheres
if (col.objA->getShapeType() == ShapeType::SPHERE && col.objB->getShapeType() == ShapeType::SPHERE)
{
Sphere * sphereA = (Sphere*)col.objA;
Sphere * sphereB = (Sphere*)col.objB;
float penetration = (sphereA->getRadius() + sphereB->getRadius()) - (glm::distance(col.objB->getPosition(), col.objA->getPosition()));
// Seperate the two objects, using whatever detail of seperation you want
if (!col.objB->getIsStatic()) col.objB->setPosition(col.objB->getPosition() + (penetration / 2)* col.collisionNormal);
if (!col.objA->getIsStatic()) col.objA->setPosition(col.objA->getPosition() - (penetration / 2)* col.collisionNormal);
}
if (col.objA->getShapeType() == ShapeType::PLANE)
{
// Cast to plane
Plane* plane = (Plane*)col.objA;
// Caclualte the velocity of the second object
// Resulting velocity = velocity - (1 + elasticity)velocity.collision normal * collision normal
vec3 velocity = col.objB->getVelocity() - (1 + col.objB->getElasticity()) * (glm::dot(col.objB->getVelocity(), plane->getDirection())) * plane->getDirection();
// Set velocity
col.objB->setVelocity(velocity);
}
else
{
if (col.objA->getIsStatic())
{
col.objB->setVelocity(col.objB->getVelocity() - (1 + col.objB->getElasticity()) * glm::dot(col.objB->getVelocity(), col.collisionNormal) * col.collisionNormal);
}
else
{
// Apply the J along the collision vector direction to object B
col.objB->applyImpulse((col.collisionNormal * impulseMagnitude)* inverseMassObjB);
}
if (col.objB->getIsStatic())
{
col.objA->setVelocity(col.objA->getVelocity() - (1 + col.objA->getElasticity()) * glm::dot(col.objA->getVelocity(), col.collisionNormal) * col.collisionNormal);
}
else
{
// Apply the J against the collision vector direction to object A
col.objA->applyImpulse((-col.collisionNormal * impulseMagnitude)* inverseMassObjA);
}
}
}
// Clears the vector as all collisions have been resolved
m_collisions.clear();
}
|
STACK_EDU
|
std::unique_ptr with RAII for mutex?
We have a lot of legacy C++98 code that we are slowly upgrading to c++11 and we have a RAII implementation for custom Mutex class:
class RaiiMutex
{
public:
RaiiMutex() = delete;
RaiiMutex(const RaiiMutex&) = delete;
RaiiMutex& operator= (const RaiiMutex&) = delete;
RaiiMutex(Mutex& mutex) : mMutex(mutex)
{
mMutex.Lock();
}
~RaiiMutex()
{
mMutex.Unlock();
}
private:
Mutex& mMutex;
};
Is it ok to make an std::unique_ptr of this object? We would still benefit from automatically calling the destructor when the object dies (thus unlocking) and would also gain the ability of unlocking before non-critical operations.
Example legacy code:
RaiiMutex raiiMutex(mutex);
if (!condition)
{
loggingfunction();
return false;
}
After:
auto raiiMutex = std::unique_ptr<RaiiMutex>(new RaiiMutex(mutex));
if (!condition)
{
raiiMutex = nullptr;
loggingfunction(); // log without locking the mutex
return false;
}
It would also remove the use of unnecessary brackets:
Example legacy code:
Data data;
{
RaiiMutex raiiMutex(mutex);
data = mQueue.front();
mQueue.pop_front();
}
data.foo();
After:
auto raiiMutex = std::unique_ptr<RaiiMutex>(new RaiiMutex(mutex));
Data data = mQueue.front();
mQueue.pop_front();
raiiMutex = nullptr;
data.foo();
Does it make sense?
Edit:
Cannot use unique_lock due to custom Mutex class:
class Mutex
{
public:
Mutex();
virtual ~Mutex();
void Unlock(bool yield = false);
void Lock();
bool TryLock();
bool TimedLock(uint64 pWaitIntervalUs);
private:
sem_t mMutex;
};
If you're using C++11 why not just use a std::unique_lock?
You can use std::lock_guard (https://en.cppreference.com/w/cpp/thread/lock_guard), with std::mutex.
In your implementation only one thread can lock the mutex, so why using a mutex at all?
@gerum The Mutex& argument to the ctor is probably what is shared between different threads!
@MarcusMüller Yes, you are right, I do not see the &.
using additional brackets would remove the need for a std::unique_ptr. Imho its not a good idea to use dynamic allocation merely to save some brackets
you are actually moving away from RAII, rather than following the typical pattern of simply letting things go out of scope
@WBuck unfortunately I can not use unique_lock wrapper because of custom Mutex class provided in sdk without lock() and unlock() methods:
class Mutex
{
public:
Mutex();
virtual ~Mutex();
void Unlock(bool yield = false);
void Lock();
bool TryLock();
bool TimedLock(uint64 pWaitIntervalUs);
private:
sem_t mMutex;
};
OK, so in this case, don't use the std::unique_ptr, the last thing you want to do is an unnecessary dynamic allocation. Stick with your RAII wrapper. I would provide a ctor which allows a user to pass in an already locked mutex though.
@Daniko you can with a wrapper: class MutexWrapper { Mutex & wrapped; public: MutexWrapper(Mutex & wrapped) : wrapped(wrapped) {} void lock() { wrapped.Lock(); } bool try_lock() { return wrapped.TryLock(); } void unlock() { wrapped.Unlock(); } };.
And if you are updating legacy code why are you updating to something I would call legacy as well?
@Caleth that would be adding an extra wrapper just to use lock_guard. I do not see the potential benefit over the already implemented RaiiMutex
@GoswinvonBrederlow Embedded devices be like that, ARM compiler 5 only supports c++11
gcc and clang both support ARM and ARM64 and c++20 and even c++23 partially.
@Daniko not just std::lock_guard, anything written for the Lockable concept
Add Mutex::lock(), Mutex::unlock() and Mutex::try_lock() methods to Mutex. They just forward to the Lock etc methods.
Then use std::unique_lock<Mutex>.
If you cannot modify Mutex, wrap it.
struct SaneMutex: Mutex {
void lock() { Lock(); }
// etc
using Mutex::Mutex;
};
A SaneMutex replaces a Mutex everywhere you can.
Where you can't:
struct MutexRef {
void lock() { m.Lock(); }
// etc
MutexRef( Mutex& m_in ):m(m_in) {}
private:
Mutex& m;
};
include an adapter.
these match the C++ standard lockable requirements. If you want timed lockable, you have to write a bit of glue code.
auto l = std::unique_lock<MutexRef>( mref );
or
auto l = std::unique_lock<SaneMutex>( m );
you now have std::lock, std::unique_lock, std::scoped_lock support.
And your code is one step closer to using std::mutex.
As for your unique_ptr solution, I wouldn't add the overhead of a memory allocation on every time you lock a mutex casually.
|
STACK_EXCHANGE
|
I have 2 Windows 7 Professional Pc's (ones a laptop wireless). I have tried countless times to consistently get the 2 to be able to share the same network for file sharing.
Both are on the same Workgroup. Both are set to Home (and I even tied Work) network. The only thing that they do not share the same is their name. My wireless name is in screen shot #1 and my Wired name is #2. Not sure if this matters, or where I can change my Wired name.
I have even tried using the Homegroup which you would think would be simple, and I can get it to work sometimes (very rarely) and others it won't connect at all.
I do have 2 routers one is in DMZ (ATT Uverse) and my primary router D-Link DIR-655. I have even used MAC address filtering to allow both PC's.
I should also add that I have the settings in Network and Sharing set correctly too. (trying to spare details)
I wish I could give better details, and I am hoping there is something simple I am missing. It bugs me that I have always had this issue (on/off). I have also tried eliminating the firewall and Antivirus and that didn't seem to help either.
More about :issues networking wired wireless network
Ensure that Network Discovery and File and Printer Sharing are switched on for both of the systems.
Make sure that you can ping each of the computers.
Once that's all done, go to the folder/drive you want to share.
Right click --> Properties --> Security --> Edit
Add the group "Everyone" to the security list, with whatever permissions you want. If it's a secure home network I'd say just set it at Full Control.
Click OK --> Sharing --> Advanced Sharing
Tick the Share This Folder box, go into permissions then set everyone to Full Control.
Click OK to get out of all of those screens, then try to connect to the other computer.
If that doesn't work, reset your modem/router and try again.
I have done all this aside from resetting my router which I am not sure is going to help. I have always had everything set this way. Why it is never consistent is the part that baffles me.
Example, I can share my external HDD on my HTPC but when I try to share the internal storage drive it doesn't work. Or, I can see the "Users folder" for each PC on each PC but I cannot see all all the folders within it. Or I can see my internal drive on my laptop but when I try to open it I get a "Windows Cannot access \\Ht-pc\e
Why some drives, but not others is the part I cannot figure out.
Also it takes a lot time to refresh the "Network" page. You know what I mean, the green bar on the top of Windows explorer takes forever to load.
There's a bigger issue I just cannot figure out what.
I've had issues with networking Windows 7 computers before. Two worked perfectly fine, but the third would connect sporadically. Both of the working ones refused to connect to the third, unless I reset the router.
In the end I just purchased a new ADSL Modem/Router and it all works perfectly now.
I have tried as stated earlier. I eventually update router FW which basically reset it. I am at a loss. I can share all the drives and see them on each PC but can only open some of them. all the sharing rights are set the same.
|
OPCFW_CODE
|
ok thanks. I re-read the post you made and it seems where you said “The answer that ends up being arrived at for this question will really depend on how the technologies we’re working on end up functioning” was where I misunderstood. I believe you were referring to functioning as in compared to how traditional servers have traditionally functioned, not suggesting you are working on something that might not work at all. :slightly_smiling_face:
and the question was relating to server up time and how long. I know eve online restarts their server cluster every 23 hours and used to have a downtime of roughly one hour but now the downtime is about 10 minutes.
isn’t eve online a good model to learn from in terms of server? They been doing fairly large universe, thousands of players and all in the same universe for quite some time. (granted they don’t have ship interiors and planetside. and they do have some load screens like when you dock and it has that ‘session change’ timer as it negotiates loading the station you dock at.
oh and jumping from one star system to the other is a load screen too, hidden by the wait for the jump through the stargate. but still must be something there to learn from right? :slightly_smiling_face: they still get thousands of players in one battle even if it runs slow and terrible.
I feel you guys took the real hard route lol in terms of immersion. elite dangerous load screens are behind every jump between systems. gives time for each system you arrive in to load while in the jump tunnel :slightly_smiling_face: but its still fancy, I guess that’s why I prefer sc, everything is real time and full immersion.
thanks for your work.
Every MMO that I’ve every played/worked on has to have some kind of downtime in order to do maintenance and security updates but there are ways to coordinate that downtime so that it’s not visible to players. That’s one of the advantages of using a cloud infrastructure. Since we don’t own physical servers and thus are not limited by the amount of available hardware, if we need to do something like a security update, we can make new servers, install the update, remove the old ones from the matchmaking pool, and then destroy the old ones as they empty out.
Sounds like you have a good grasp on how other games have handled asset loading as well. One of the games that I worked on in the past actually had a system where they had overlapping areas between different game servers where the client would preload content from the server it was moving towards so that it would be able to seamlessly transition between them once it arrived at the transition boundary. Granted there were a LOT of bugs with that system that allowed for some hilarious screen shots. I wouldn’t be able to answer your question as to what we have in mind for how Star Citizen would handle these situations though, as that’s outside of the area that I’m in charge of. I am really interested to see how it ends up working though, because it will most likely impact how we manage our cloud resources as players move from one place to another.
|
OPCFW_CODE
|
Diagonalizable subgroups of a connected linear algebraic group
Let $G$ be a connected linear algebraic group
over an algebraically closed field $k$ of characteristic 0.
Let $D\subset G$ be a closed diagonalizable subgroup of $G$
(a subgroup of multiplicative type).
Is it true that $D$ is contained in some torus $T\subset G$?
This is so for $G=\mathrm{GL}_n$.
Is this true for any connected linear $G$ (or any connected reductive $G$)?
I am stuck with this simple question...
Edit. The answer to the original question is NO, see Angelo's answer.
However, is it true that any cyclic finite diagonalizable subgroup $C$ of $G$
is contained in some torus $T\subset G$?
For the cyclic case: If $D = \langle s \rangle$ is a cyclic
diagonalizable subgroup of a connected linear algebraic group $G$, then $s$ is a
semisimple element of $G$ (of finite order). In particular, $s$ and
hence $D$ is contained in a maximal torus of $G$. Indeed, by [Borel
LAG,11.10] $s$ is contained in a Borel subgroup of $G$, and then the claim
follows from the connected solvable case [Borel LAG,10.6].
The answer for cyclic finite diagonalizable groups is affirmative for connected reductive $G$; this is Lemma 7.1 in the Appendix of http://arxiv.org/pdf/1210.8161.pdf (where $\mu_n$ is written, but the initial reduction to algebraically closed ground field does not use that the cyclic group is split -- i.e., constant Cartier dual -- and so it gives the result in general).
@nfdc23, since we're in characteristic 0 and so everything is smooth, probably @GeorgeMcNinch's comment and @JimHumphrey's answer is an easier way to think about that, no? (The reference's citation of Steinberg's connectedness theorem even in the smooth case is overkill.)
@LSpice: Probably. Somewhat stubbornly I wanted to address arbitrary characteristic (it makes me happier), so I didn't think at all about methods specific to characteristic zero. :)
No. For example, $\mathrm{PGL}_n$ contains a subgroup $G$ isomorphic to the product of two cyclic subgroups of order $n$, generated by the classes of the diagonal matrix whose entries are the powers of a fixed primitive $n^{\rm th}$ root of 1, and the permutation matrix corresponding to a cycle of length $n$. The inverse image of this subgroup in $\mathrm{GL}_n$ is not commutative, while the inverse image of a maximal torus in $\mathrm{PGL}_n$ is a maximal torus in $\mathrm{GL}_n$, so $G$ is not contained in a torus.
To reinforce Angelo's example, it's worthwhile to point out the broader setting for this kind of question: the study of centralizers and connectedness properties in a semisimple (or more generally reductive) algebraic group. An older but very useful source is part II of the extensive notes by T.A. Springer and R. Steinberg on conjugacy classes, part of an IAS seminar (Lect. Notes in Math. 131, Springer, 1970). A crucial question is whether a given connected semisimple group is simply connected or not; this shows up in the standard example where the adjoint group $\mathrm{PGL}$ fails to be simply connected. Here you have the deep theorem: If $G$ is a connected, simply connected algebraic group over an algebraically closed field, then all centralizers of semisimple elements are connected. (It's elementary on the other hand to prove that all centralizers in a general linear group are connected.) The role of the characteristic of the field is also discussed in depth by Springer and Steinberg, as well as the role of "torsion primes" (treated more fully in Steinberg, Torsion in reductive groups, Advances in Math 1975).
Some of the results are written up in later textbooks and in the first two chapters of my 1990 AMS book Conjugacy Classes in Semisimple Algebraic Groups (with the relevant example for the question here given in 1.12).
ADDED: To answer the added question, in any connected algebraic group it's true that an arbitrary semisimple element and hence the cyclic subgroup it generates lies in some maximal torus. This is part of the standard development of Borel-Chevalley structure theory (see for example Section 22.3 of my book Linear Algebraic Groups), though it does take a while to get that far into the theory.
The connectedness theorem you quote shows that if G is simply connected and $D$ is a smooth diagonalizable subgroup scheme, it is contained in a maximal torus. I believe it is still true for any -- possibly non-reduced -- diagonalizable subgroup scheme $D$ that the centralizer of $D$ in $G$ is connected when $G$ is simply connected; I'm unaware of an existing reference, though. (In fact, I have some notes on related matters that are waiting to be written up carefully...)
@George McNinch: Could you please explain, HOW the connectedness theorem quoted by Jim Humphreys shows that if $G$ is simply connected and $D$ is a smooth diagonalizable subgroup scheme, then $D$ is contained in a maximal torus.
Sorry - my argument in support of that remark wasn't correct.
@George Also the assertion of your remark is not correct, see Jim's answer to my question
http://mathoverflow.net/questions/60945
@Mikhail: I suspected as much. In fact, yesterday I tried to write down an example. But yesterday was busy and I made a hash of it.
Here is another example similar to Angelo's construction of a non-toral diagonalizable subgroup of a reductive group. I'll suppose that the characteristic is not 2.
Let $G = SO(V) = SO(V,\beta)$ for $\dim V > 2$, and write $V$ as an orthogonal sum
$V = U \perp W$ for $0 < \dim U < \dim V$ with $\dim U$ even,
such that the restriction of $\beta$ to $U$ and $W$ is non-degenerate.
Let $t \in G$ act as the identity on $W$ and as $-1$ on $U$. Then the
centralizer $M=C_G(t)$ identifies with the subgroup
{$(x,y) \in O(U) \times O(W) \mid \det(x) = \det(y)$}. In particular,
this centralizer is not connected: $M/M^0$ has order 2.
One can evidently choose an involution $s \in M \setminus M^0$, and then
$D = \langle t,s\rangle$ is a diag. subgroup of $G$ which is contained
in no maximal torus.
Part of this construction can be made in char. 2. Instead of $t$, you have
to take a non-smooth subgroup $\mu \simeq \mu_2$, essentially given by
the action of a semisimple element $X \in \operatorname{Lie}(G)$ ($X$ should
act as $1$ on $U$ and $0$ on $W$). Then $M=C_G(\mu) = C_G(X)$ is again
disconnected (well, now you can't argue by determinants) with component
group of order $2$. But this doesn't seem to lead to a non-toral diagonalizable
subgroup (any finite order element representating the non-trivial
coset of $M/M^0$ has a non-trivial unipotent part).
|
STACK_EXCHANGE
|
Two joins on same tables
I am trying to show data by joining 2 tables:
In users table I have the equipment parts i.e. tmn1 and tmn2.
In actual equipment table, I have the details for all equipment.
So when a user is logged based on TM1, TM2 numbers should be taken from the user's id and details corresponding to those in equipment table and needs to be showing. I tried using 2 sql joins on same table but its throwing an error. Any help would be appreciated.
For example user with id 1 is logged. The user has tmn1 as TS1234 and TM2 as TC1234 so in his account page the details of TS1234, TC1234 has to be pulled from 2nd table and needs to be shown.
$queryEqipmentBought = "SELECT equipment.equipment_name, equipment.number_of_parts
FROM equipment
RIGHT JOIN user_table ON equipment.tm_number = user_table.tmn1
RIGHT JOIN user_table ON equipment.tm_number = user_table.tmn2";
Thanks in advance.
You can use below query...
$queryEqipmentBought="SELECT * FROM equipment
RIGHT JOIN user_table ON equipment.tm_number IN (user_table.tmn1,user_table.tmn2)
WHERE user_table.user_email = '".$user_email1."' ";
Easy and better one.
Yes because no need to do it complex for @Ram
@Ruchish Parikh Thanks. The errors are gone. But the result shows only TM1 results. TM2 is now shown. Am I missing any conditions. Thanks again.
Try to use LEFT JOIN @Ram
@ Ruchish Parikh thanks for your quick help. I will try that out and update you.
@RuchishParikh that worked. Now I wanted to make sure the user is logged in and show only the equipment user bought. So I have a session variable $user_email1 =<EMAIL_ADDRESS> and then modified query as
$queryEqipmentBought="SELECT * FROM equipment
RIGHT JOIN user_table ON equipment.tm_number IN (user_table.tmn1,user_table.tmn2)
WHERE user_table.user_email = $user_email1 ";
Its throwing an error:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near<EMAIL_ADDRESS>at, do you have any idea.
@Ram , Please check answer I have edited with where query. You just need to concate it when using php variables with query. Please accept answer and click right if its work for you.
@RuchishParikh Worked perfectly And accepted the answer. I cant give you a +1 as I dont have that right as I am yet to gain the reputation.Thanks again Ruchish.
|
STACK_EXCHANGE
|
Battery Health for my Experiment?
I need to design and carry out an experiment to graduate from my high school. I decided to research the battery health of lithium-ion batteries because I've heard that if you let them discharge to 0% it will worsen the battery life.
My physics supervisor told me that I should think about creating a set-up to test the experiment. My first question would be, what kind of equipment would I need to test this effect? I'm guessing it would be something that could discharge the battery but I'm not sure what it's called.
If there is a device that can test this phenomenon my second question would then be what would be my independent variable and dependent variable for the experiment?
I had thought that the independent variable would just be how much I have discharged the battery and the dependent variable the capacity of the battery.
I'm a bit unsure if I need to tweak this a bit because my physics supervisor said that I should research the actual scientific names for the variables.
He also mentioned something about how I will eventually need to talk about hysteresis in my paper.
Any help would be appreciated. Thank you.
You should check the many questions on here for battery state of charge. And you should have done that BEFORE choosing your graduation experiment...
How could've I known what to look for if I didn't know the terms to use? Would state of charge be the IV or DV?
see https://electronics.stackexchange.com/q/118161/152903
A lithium cell, even discharged to it's minimum allowed charge may have a life of 500 or more cycles. Since you want to measure the effect of discharge on lifetime, you do not want to discharge overly rapidly as that would be a confounding variable. Thus the time required to measure this discharge will be long. Since this is for your graduation, you don't want a long experiment. I suggest, instead of measuring lifetime, you measure the effect of discharge on cell capacity, i.e. milliamp-hours. I would also "overdischarge", to speed things up.
Hi thank you for your response. So if the dependent variable is cell capacity then what would be the independent variable? You mention the discharge but could you clarify? What would I be changing about the discharge and is there a name for this?
By over-discharge, I mean discharging to open-circuit voltage of 2.5V or less. A typical DW01 battery protection IC will discharge to 2.5V under load. This will bounce back after the load is removed. i.e. open-circuit voltage will be above 2.5V. If you need help with a circuit to discharge to a given level see for example https://electronics.stackexchange.com/a/694010/268467.
See also https://electronics.stackexchange.com/a/692927/268467 for more detail regarding undervoltage disconnect circuit.
what kind of equipment
A Li-ion battery tester. For example.
what would be my independent variable and dependent variable for the experiment?
Test 2 new cells, confirm they are identical (capacity and resistance)
Charge one cell to 50 %, discharge the other to 0 %
Wait 1 week
Test the cells again
Repeat for 8 weeks (50 % and 0 %, 1 week wait)
After 2 months, plot the capacity and resistance of each cell over time
|
STACK_EXCHANGE
|
How long does it take to build an app?
Whenever you want to develop an application, one of the most common questions people ask is how long does it take?
A lot of people want to know how long it takes to make an app. The answer is, it depends. It could take months or even years to build an app depending on how much work goes into it and the complexity of the app.
This can also vary based on your approach. For example, generally, first-time founders pay so much attention to features that make the app so complicated in the end. However, if you approach your app as a solution to a one-simple problem, then your project will be far more simple and more successful.
App development timeline
The App development timeline is a crucial factor when it comes to the success of a business. The time frame can have a profound impact on the app’s conversion rates and revenue, as well as its ability to retain customers and build a buzz around the app.
The app development process consists of 4 main steps:
Stage 1: Planning
The whole process of making an app can take longer, but the planning stage is the most important part. This stage includes considering the app idea and its target audience, designing and prototyping, understanding the requirements of the platform, and determining if your app idea is achievable.
It usually takes several weeks to complete this stage. If you’re not sure if you want to add another feature or go through with your ideas, then it will take longer than usual.
Stage 2: Designing
This is one of the main stages, design is the process of creating wireframes and specifications for how an app should look, behave, and function. There are many components involved in this stage, such as wireframing, user research, prototyping, and testing. The design phase at Etrexio can take between 4 to 6 weeks depending on the nature of the project and the complexity of your business model.
Stage 3: development
The development stage is when developers get involved and start programming the application using a development language.
App development times depend on the kind of app you want to make. Complex apps with many features take longer to develop than simpler ones.
The development stage at Etrexio takes around 5–7 weeks.
There are two main methods that have been used for app development, coding from scratch and developing from a template. If you want to add new features and expand your business in the future, coding from scratch is the best option. However, creating something from a template prevents further development chances even if it’s faster initially. If you find a good template and don’t want to expand it in the future, then go with the template. However, for a scalable business, it’s not a good way. Because if you want to add new features to template-based applications, your developer has to understand the structure of the template (which is equivalent to previous developers’ mindset and development approach) and then can add new features which take longer in the long run.
Stage 4: Testing
Testing is an important stage in the development of any app. However, it can be a difficult and complex process. The testing stage is the last stage of app development. The process starts with developers checking to make sure the app works on their devices and then they send it to a beta group for feedback. After that, the product is ready for release.
The testing stage usually takes a long time because it requires expertise to find out the most appropriate way to test the app so that it can stand up to user feedback. But at Etrexio, it usually takes 1–2 weeks to test the app.
To sum up…
In general, the time needed to build an application changes according to the scope and the required features of the app. But the expertise of your development partner cannot be ignored. The better the app development company has a better team, the better the result and speed you will get.
At Etrexio, with a globally experienced and professional team in the mobile application design and development department, your process can be done as soon as possible.
|
OPCFW_CODE
|
AWS Application Deployment Basics: Docker Containers
In this post, we will use Docker containers and NGINX to deploy applications to AWS.
Join the DZone community and get the full member experience.Join For Free
In the previous few posts in this series, we deployed and ran a couple of applications on our EC2 based infrastructure. Here is how our architecture currently looks from the previous post:
Our applications are running in a private subnet and NGNIX working as a reverse proxy is allowing access over the internet.
Today, we will just run yet another .NET core application on the same private EC2 instance. Just like in the previous post, we will serve this application using NGNIX. However, this time application will be running as a Docker container.
Docker on Ubuntu
Docker is a great tool to simplify application development and running. We will not go into the details of Docker or container technology. I am assuming you already know the basics of Docker. If you are new to this topic, there are a great many resources available online and I also wrote few posts and a book on this topic, which you can check.
AWS offers services to run Docker, e.g., ECS, EKS. It also offers AMIs with Docker installed. A couple of options there to start with Docker, however, we will install and use it on our EC2 instance in the private subnet.
Docker installation is covered in great detail on the official Docker website. DigitialOcean also has a nice article about how to install Docker on Ubuntu. You can check it on this link. Following are commands to install Docker on Ubuntu:
sudo apt update sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" sudo apt update apt-cache policy docker-ce sudo apt install docker-ce
Once Docker is installed on Ubuntu, you can check the installation status using the following command:
sudo systemctl status docker
Permission for the Current User
Next, I tried to execute the
docker images command and it shows an error regarding permissions:
Let’s add the current user to the Docker group, so I can run Docker commands without sudo:
sudo usermod -aG docker ubuntu
Now, re-login to EC2 for this to take effect and once again try the Docker images command:
For the application, I have a .NET Core project from one of my earlier articles and I have cloned the repository for the LocalLogin project on my Ubuntu EC2 instance. cd into the project directory and it contains a very basic Dockerfile as shown below:
Next, I build the Docker image:
docker image build -t locallogin .
Here is the image build process:
Once the image is built, we can simply run the container from it:
docker run -d --rm --name locallogin -p 6000:5000 locallogin
And it will start our container in detached mode. Now if I use CURL to access the API:
You can see that our application is running and we can access the data.
Just like in the previous post. Open the NGNIX configuration file on the Public EC2 instance. Add a location block for the netcore application container, save and restart NGINX
sudo nano /etc/nginx/nginx.conf
sudo service nginx restart
Because NGINX webserver can be reached from anywhere in the world (due to our configs) :
The application is working as expected.
Here is the updated view of EC2 instances and applications:
Docker simplifies application delivery, installation, and execution. In this post, we installed Docker on the EC2 instance, configured permissions, and spin a .NET core application container. We also saw that wiring up with the NGINX process is simple. Let me know if you have any questions or comments. Till next time, Happy Coding!
Published at DZone with permission of Jawad Hasan Shani. See the original article here.
Opinions expressed by DZone contributors are their own.
|
OPCFW_CODE
|
very interesting point of view, ctsgnb.
To be honest i feel a bit uncomfortable with giving another answer here: first, because this is leaving the topic of your original thread-opening post and goes in some completely new direction. I understand that this is the off-topic part of the forum, but still i think there should be a minimum of "thread discipline" in effect.
Second, we are leaving the topic of some computer-related political problem (which i feel should be possible here - after all, we are doing a very political job. "Political" in the sense of "having a big impact on how society works".) for a solely political discussion. The former is at least bordering to the great topic of unix.com (which i would describe as "all things server/client"), but the latter is simply misplaced here. There are other places on the net where such discussions belong to (agreed - not many as polite as here).
Please do not mistake my further silence on this for lack of interest. I would be very interested in discussing this but i do not want to misuse Neos bandwidth for things he doesn't have in his scope.
Originally Posted by ctsgnb
I agree that my point of view looks like the one of an utopist / dreamer or whatever "everyting-is-pink's world's alien. In fact i just refuse to surrender to the fatalism of a blind acceptation of the system as is without thinking about it and without thinking about how it works, why things couldn't be better and how.
I usually take in political things the same stance i am used to take as an engineer: if you want to make something you have to understand the applying laws and principles first. To build an airplane is only possible when you first accept that gravity exists and that things tend to fall when not being held up by some force. To describe any society in terms of "good", "bad" or any other affection is like calling a "world with gravity" "desirable": missing the point. A society is as it is and it will change when the effort to prolong the status quo becomes higher than the demand to change it - like the airplane will lift off when the force propelling it upwards overcomes the gravity which presses it down.
My personal opinion about when this happens is that it depends on the status of the development of productive forces. As they advance because technology advances production and the way things are produced changes respectively. If this change is big enough society will change to reflect this. Take slavery for example: it was not abolished when people realized that it is an unethical thing to do but when the way of production changed from (mostly) agricultural to (mostly) industrial. You can pick cotton with a bunch of slaves but when you try to let slaves write software you are probably in for a very nasty surprise.
For the same reason the compulsory school attendance was "invented" somewhere in the 18th century - because the demand for literate workers to read instructions was there, created by the industrial revolution which created the demand for an adequate workforce.
The communisme & capitalisme both have there strength & weakness, but the capitalisme allow liberty which is needed to evolve. The point is : total liberty means jungle law = law of the strongest ... which would just bring us several tousand of years back.
Sorry, but i beg to differ: i can't say anything about communism, because this system hasn't been tried yet and i am quite weak in scying. I can say for the capitalism, though, that it is completely indifferent about liberty: for what i know capitalism worked in Nazi-Germany as well as in the "democratic" Switzerland at that time and Nazism isn't commonly reagrded as the epitome of freedom at all. I could prolong the list of countries where capitalism worked best under quite suppressive regimes for quite a long time: Idi Amins Uganda, Somozas Nicaragua, Papa Docs/Baby Docs Haiti, ... Capitalism isn't suppression either, but: capitalism is making as much profit as possible. If that means to suppress then so be it, if it means to give freedom, than this is it. Just don't confuse to get freedom in capitalism with an "inner strive towards freedom in capitalism" - it just fits into the bigger plan.
Btw: the picture of "when (formal) law was absent we had the law of the jungle" is a story, probably put to eternity by luminaries like Hobbes with his Leviathan
. In fact many studies of the savage societies (like Lewis Henry Morgans
ethnography of the Iroquois or Johann Jakob Bachofen
) show that mesolithic societies were remarkably well-ordered despite lacking formal authorities or armed forces like a police. The Iroquois Nation for instance was the first one in history to develop a constitution (in the 11th century), when "civilized christian lands" in Europe just started the crusades and the Inquisition.
This is why freedom must go with reponsibility and that is what our current systems (our current fat cats) lack.
Exactly this is the point: "responsibility" is just not the same as "making enough profit as possible". Which is why our "fat cats" don't show any interest in acting responsible - why should they? It is simply not their job.
but i really doubt it will be initiate by the current politicians because they are just dolls in the hands of those that have the financial power.
"Political" was not meant as "vote for party a or party b" (which propose the same anyways). "Political" was meant in the sense of "opposed to being private".
I don't think it is possible to independently change the rules of society locally. Small communities with radically different rules will not likely change the surrounding society but either be treated as obscurities or be outright attacked by ther neighbors. Example for the obscurity treatment would be the Amish people, for the "attacked by the neighbors": the Albigenses
or waldensian movements
(not to speak of Hussites, or the followers of Thomas Müntzer).
|
OPCFW_CODE
|
M: Ask HN:I'm about(few days) to launch,any designer interested in helping? - revo_ads
The startup consists of an innovative kind of behavioral ad network based on Amazon Associates Products links. it will work in a similar adsense fashion. The inventory problem is solved already by the huge amazon products choice. The code is ready and have been tested throughly, i am refining the last details. If any designer/graphic is interested in making a decent logo ( i am a coder ) and a simple but nice graphic layout for the website frontend only ( very simple ) please let me know by leaving your email here in the comments.
The backend is ready and does not need redesign.<p>Thanks so much for any feedback
R: spooneybarger
Dont you think a few days is cutting it a bit short? Have you worked with a
lot of graphic designers? Do you understand that a good design isn't an
overnight thing? That it requires work?
I have always thought that coming to someone with just a couple days to do
something that you don't know how to do and assuming that it is simple and
easy and can be done in a couple days is just about the most insulting thing
you can do professionally.
I assume you wouldn't mean any insult by it but leaving something that you
can't do til the last minute and assuming it is simple is well... think back
to a time ( assuming you have been through this ) where your non-coding boss
came to you with a 'simple request' that had to be done 'right away to
complete a deal' which wasn't actually simple at all. It might appear simple
to someone without any knowledge of the problem domain but you as 'the expert'
know that isn't the case.
Perhaps you have already talked to a graphic designer who gave you a general
timeline for what you want and I'm totally off base, but it doesn't sound like
you have.
R: revo_ads
No, you are right. I didn't. Probably there is not enough time and i will do
the graphics by myself. It will suck but i will switch later. Thanks for
pointing out my mistake.
|
HACKER_NEWS
|
Gephi is the powerful open source network visualization software for social science that enables researchers to visualize, explore and analyze their data in a graph format. This tutorial has explained how to install, set up, and use Gephi. You can also create and edit networks using this tutorial. Do you want to learn Gephi and get started with data visualization? Here are 10 of the best tutorials and videos to help you get started.
Gephi is a free, open-source tool for visualizing and analyzing data. It provides several powerful features for creating beautiful interactive graphs and charts. This tutorial is designed to help you get started with the Gephi software and teach you how to create basic graphs and charts with Gephi. In this post, I have collected a list of the best tutorials on Gephi. It’s one of the most popular and influential network analysis tools on the web and free, open-source software. So, in this tutorial series, I’ll take you through a brief overview of what Gephi can do and then guide you through some helpful tutorials and tips for using Gephi.
What is gephi?
Gephi is open-source software for visualizing and analyzing data. It provides several powerful features for creating beautiful interactive graphs and charts. The tool allows you to create network graphs, directed graphs, bar charts, and pie charts. It also includes a set of standard graph templates, allowing you to customize your graphs based on your needs.
What does gephi offer you?
Gephi offers several different features that allow you to create various charts and graphs quickly. Gephi provides an interface that will enable you to visualize data from multiple sources, including files, Excel spreadsheets, and databases. Gephi includes a ” network analysis feature,” allowing you to create maps of the relationships between different people and companies. You can import your data into gephi and export your data in various formats.
What are the features of gephi pro?
* Creating network maps and charts, including directed networks, bipartite networks, and multigraphs.
* Analyzing and visualizing data from various sources, such as social networks, genealogy, the Internet, and the world wide web.
* Working with large networks and datasets.
* Graph editing, including editing, adding, and removing nodes and edges.
I created this video to show you how to create an interactive graph in gephi and export it into a format that you can use in WordPress. If you’re looking to learn more about Gephi, check out this video tutorial on the official website.
What are the best gephi tools?
Gephi is a free, open-source tool for visualizing and analyzing data. It provides several powerful features for creating beautiful interactive graphs and charts. Gephi is a versatile tool and is especially useful for data visualization and analysis. You can visualize any data with Gephi. While many other software solutions have proprietary algorithms, Gephi is free and open-source.
What is the best way to use gephi?
There are multiple ways to use Gephi, and you can learn all of them in this article. You can start by exploring the primary interface and learning how to navigate it. After this, we’ll explore the various data types and how to create different types of graphs. Finally, we’ll dive into the powerful editing features of Gephi and explore some of the best tools and techniques.
What is the best way to optimize gephi?
In this article, we’ll review the best practices for optimizing Gephi, including the following:
Setting up the application
Running and exporting data
Importing and exporting data
While Gephi is intuitive to use, we’ll go over each step in detail to ensure you get the most out of this great tool. So let’s get started!
Gephi for data scientists
Gephi is a free, open-source tool for visualizing and analyzing data. It provides several powerful features for creating beautiful interactive graphs and charts. In this tutorial, you’ll learn how to install Gephi and get up to speed with basic functionality. Then you’ll explore some of the more advanced features, such as clustering, force-directed graph layout, and node-link diagram.
Gephi for visualization
The tool can help you visualize data in a wide range of formats. With its ability to graphically display data, Gephi can make complex data easier to understand. It can help you understand your data by enabling you to:
* visualize, analyze and interact with your data
* create graphs and maps
* manage and sort your data
* visualize and analyze data by using shapes, lines, colors, and symbols
* create tables and lists
* share your data
* create reports and charts
Frequently asked questions about Gephi.
Q: What is the most exciting thing about Gephi?
A: It has many different visualization modes, like force-directed, spring-based, etc.
Q: How does Gephi help me when modeling my data?
A: Gephi helps us see patterns in our data and find the connections between entities.
Q: How does Gephi help me when analyzing data?
A: Gephi helps us to understand how our data is organized. It provides the tools to visualize the network structure of our data and allows us to understand the relationships between entities.
Q: Why are you interested in Gephi?
A: I’ve always been interested in new software, and Gephi caught my eye.
Q: What would you like to change in Gephi?
A: I want to create new visualization modes because sometimes we have to analyze extensive networks.
Q: What do you do with your spare time?
A: In my free time, I read books or relax. I have so much fun traveling with friends, but sometimes I also go out alone and explore the city.
Myths about Gephi
1. Gephi is not free.
2. Gephi has a free trial version.
3. Gephi is open source.
4. Gephi can be used for free.
5. Gephi is slow to download.
6. Gephi is a memory hog.
7. Gephi’s UI is slow and clunky.
Gephi is a program that allows you to create graphs, networks, and models to analyze data. It’s used by researchers, students, and businesses alike. It’s easy to use, and the interface is very intuitive. You can learn everything you need to know to start using it without any prior knowledge. The program has a considerable amount of support online, and you can find lots of tutorials and videos to get you started.
|
OPCFW_CODE
|
Flatten function performance issue v4
Having recently tried to upgrade our styled-components from v3 to v4 in a relatively large react app - we've ran into a strange performance issue that seems to be tied to the flatten function. When commenting out this part of flatten, the issue seems to resolve:
https://github.com/styled-components/styled-components/blob/9b81695d35074974c42b943a82456c9ae4b42500/packages/styled-components/src/utils/flatten.js#L59-L72
Here are screenshots of performance snapshots run with chrome dev tools when performing the same action in our app on v3 vs. v4:
v3
v4
Strangely, we have not been able to reproduce this in a clean slate react app.
Environment
System:
OS: macOS High Sierra 10.13.6
CPU: (12) x64 Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz
Memory: 2.29 GB / 16.00 GB
Shell: 5.3 - /bin/zsh
Binaries:
Node: 8.14.0 - /usr/local/bin/node
Yarn: 1.13.0 - /usr/local/bin/yarn
npm: 6.5.0 - /usr/local/bin/npm
npmPackages:
babel-plugin-styled-components: ^1.4.0 => 1.10.0
styled-components: ^4.1.3 => 4.1.3
Reproduction
Unable to reproduce with a basic react app created via create-react-app.
Expected Behavior
Performance is as good if not better than v3.
Actual Behavior
Performance is sluggish, significantly slower than v3.
I'm wanting to give a try and fix this. do you have a way to reproduce this outside your app? can you publish something that I can clone and try?
You can trigger that code path by passing a non-styled component SFC into a style interpolation.
let my try to create something that reproduces it in codesandbox. I usually like to reproduce the problem first before trying to attack it.
@alansouzati Unfortunately I've not been able to reproduce this outside our app. I'll keep you updated if I can reproduce / make any progress.
@mariel9999999 are you using the "component selector" pattern?
const Icon = styled.svg`
flex: none;
transition: fill 0.25s;
width: 48px;
height: 48px;
${Link}:hover & {
fill: rebeccapurple;
}
`;
see the Link component inside the Icon interpolation?
@alansouzati Yes.
just as a test, if you remove it, does the problem go away too? I'm trying to debug what is going on but I'm pretty new to their source code.
@alansouzati Nope. It is still laggy after removing all component selectors.
I couldn't pin-point my problems down to flatten function, but we're experiencing the same issue - after upgrading relatively big project to v4, it performs about two times slower than with v3.
@mariel9999999 can you try<EMAIL_ADDRESS>(@alansouzati's fix) and let me know if it improves your graphs at all?
@probablyup Yup that fixes it.
Here's what it looks like when performing the same action (left =<EMAIL_ADDRESS>and right = 4.1.3):
@alansouzati Thanks for fixing this!
|
GITHUB_ARCHIVE
|
Adding Chinese simplified translation
Changes proposed:
Adding Chinese simplified translation of the docs
Maybe I can improve it.
Sure 👍 Any help is appreciated.
@Astrian @PxSonny This is awesome. I love the fact that more people are translating it. Maybe u guys can help finish this as well. My team finished translating it a while back but i never get the time to edit the files and upload them.
https://docs.google.com/document/d/1Z4_IkjJj3lvI69KVGzrWTkwKCUGrbDhiBtSltw3uOR8/edit
@Skillz4Killz I see but I think it's messy so much 😂 Maybe I can re-translate it with GitBook
Maybe let's not re-translate haha but finalize everything using my PR and @Skillz4Killz document. How does GitBook work? It can be based on a fork of a repo?
@PxSonny Same as GitHub but Git the book. I will review the Google Docs and try to improve it
Sent from my OnePlus ONEPLUS A5000 using FastHub
Yes please improve away. I am happy to see others also want to help translate it.
How about using GitHub pages instead of GitBooks? We can create a branch in the repo that will allow us to be able to maintain every language of the docs right inside this very repo.
@Skillz4Killz Great idea.
@Skillz4Killz Let's do that. How do we get started? I can translate into French too ^^
@genexp How does using GitHub pages in a separate branch here in the repo to store all the different versions of the translated docs sound?
@PxSonny I already sent up French as well a while back. Would love any help updating it though and fixing anything we may have missed.
@Skillz4Killz Yes, if you have any document I can have a look on it.
I'm a bit on edge about merging translations. Can we add something to say translated documents may be out of date? There are likely going to be cases where translated documents are out of date with their English counterparts as this is a community effort.
https://shimo.im/doc/vj48YTpy0CwUHp0v?r=V9GZ4/
Just translated some little part of the document, and I will continue to translate it
@DominicGunn Yea that can easily be added at the top of the docs. But to be honest aren't English docs also outdated as well? It's kept by the English community just as the other languages will be kept by their community.
@PxSonny https://github.com/madglory/gamelocker-vainglory/pull/164
I'm not sure where the files went but I uploaded them there and realized how much work and time to took me to do that.
English is the official language for standards and documentation and should always be the primary source, I'm with Dom.
Yeah, I don't see any harm adding a disclaimer: "the documentation can be outdated, please check English one for most up-to-date" in the different translation.
@Skillz4Killz Oh I see. I'll get the files from your commit and check it. :+1:
Sounds awesome!
On Wed, Aug 23, 2017 at 10:22 PM, Sonny Alves Dias<EMAIL_ADDRESS>
wrote:
Yeah, I don't see any harm adding a disclaimer: "the documentation can be
outdated, please check English one for most up-to-date" in the different
translation.
@Skillz4Killz https://github.com/skillz4killz Oh I see. I'll get the
files from your commit and check it. 👍
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/madglory/gamelocker-vainglory/pull/255#issuecomment-324514155,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AAADMCXYUoguRCXtYDdz_mwAdiiUUTSIks5sbN5RgaJpZM4O-Eji
.
--
Brian Corrigan
Managing Partner / Principal Engineer at MadGlory
https://www.linkedin.com/company/2657284
Wb: http://madglory.com http://www.madglory.com/
Tw: @madglory
Ph:<PHONE_NUMBER>
Anyone know to make Slate hosted on github pages? Ive been looking into it but it actually requires running scripts and hosting it locally so I am not sure how to set up the GitHub Pages. Does anyone know how to set that up with Slate?
Hey @PxSonny can you see #299 and #301. I think this will help get this shipped! Thanks for pushing us!
Ok, I spoke to @PxSonny offline on this. We've committed the main translations, so we can now let this one follow that format.
|
GITHUB_ARCHIVE
|
asciidoctor: add rouge to dependencies
Motivation for this change
Add syntax highlighting support via rouge.
Things done
[x] Tested using sandboxing (nix.useSandbox on NixOS, or option sandbox in nix.conf on non-NixOS)
Built on platform(s)
[x] NixOS
[ ] macOS
[ ] other Linux distributions
[ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
[ ] Tested compilation of all pkgs that depend on this change using nix-shell -p nix-review --run "nix-review wip"
[ ] Tested execution of all binary files (usually in ./result/bin/)
[ ] Determined the impact on package closure size (by running nix path-info -S before and after)
[x] Assured whether relevant documentation is up to date
[x] Fits CONTRIBUTING.md.
I like that README is added (I've had this chicken-and-egg problem with gemset.nix). However I think it is enough to do:
$ nix-shell -p bundix --run 'bundix'
to rebuild gemset.nix.
Also, why should we add rouge? There already is 'coderay', 'pygments.rb'. Which benefits has rouge compared to those?
I've tried pygments a while back with asciidoctor and it just didn't work.. Found this to be an easy workaround for what I needed instead of debugging pygments (or maybe failure to use it?)
On a more global scale, adding it here costs little and gives users more choice what to use
Regarding bundix, I've added nix-shell to be able to regenerate it without having it installed.. As I stumbled on it a few times - otherwise it fails on missing dependencies
As I stumbled on it a few times - otherwise it fails on missing dependencies.
I've tried rm gemset.nix && nix-shell --pure -p bundix --run 'bundix' and it was enough to regenerate gemset.nix. The rebuild-gemset.nix is redundant here, just mention the command I've proposed in README.adoc.
On a more global scale, adding it here costs little and gives users more choice what to use
Ok. I don't know Ruby packaging details, I'm curious how it behaves when I use asciidoctor and rouge in local Gemfile/gemset.nix. I hope there will be no conflicts.
Thanks, I'll try reproducing the dependencies error when regenerating. Will report back here when done. Will update readme too.
@svalaskevicius have you reproduced the issue?
Sorry didn't have a chance yet. Will try to do it this evening :)
ok, reproduced, (@danbst)
your command works ok only if Gemfile.lock is present, however, it doesn't add new libs to it, only converts the lock file to gemset.nix.
to add rouge to the lock file, one apparently needs to run bundix -m, which runs ruby things.. that fail to install mathematical because there are no binary dependencies installed:
<compiling things and dep errors>
An error occurred while installing mathematical (1.6.12), and Bundler cannot continue.
Make sure that `gem install mathematical -v '1.6.12' --source 'https://rubygems.org/'` succeeds before bundling.
Thus, I still don't see any other alternative rather than in this PR.
I'll update the readme to adoc though :)
done. also when testing now I noticed that rouge and some other dep has upped their version
@svalaskevicius thanks! Makes sense. I found a way to remove duplication. Here's diff:
diff --git a/pkgs/tools/typesetting/asciidoctor/default.nix b/pkgs/tools/typesetting/asciidoctor/default.nix
index 9508f3ff023..e46ebcf18ae 100644
--- a/pkgs/tools/typesetting/asciidoctor/default.nix
+++ b/pkgs/tools/typesetting/asciidoctor/default.nix
@@ -1,10 +1,10 @@
-{ stdenv, lib, bundlerApp, ruby
+{ stdenv, lib, bundlerApp, ruby, bundix, mkShell
# Dependencies of the 'mathematical' package
, cmake, bison, flex, glib, pkgconfig, cairo
, pango, gdk_pixbuf, libxml2, python3, patchelf
}:
-bundlerApp {
+bundlerApp rec {
inherit ruby;
pname = "asciidoctor";
gemdir = ./.;
@@ -43,6 +43,12 @@ bundlerApp {
};
};
+ passthru.updateShell = mkShell {
+ buildInputs = (gemConfig.mathematical {}).buildInputs ++ [
+ bundix
+ ];
+ };
+
meta = with lib; {
description = "A faster Asciidoc processor written in Ruby";
homepage = http://asciidoctor.org/;
diff --git a/pkgs/tools/typesetting/asciidoctor/update.sh b/pkgs/tools/typesetting/asciidoctor/update.sh
new file mode 100755
index<PHONE_NUMBER>0..10a053a847b
--- /dev/null
+++ b/pkgs/tools/typesetting/asciidoctor/update.sh
@@ -0,0 +1,6 @@
+#!/usr/bin/env bash
+rm gemset.nix Gemfile.lock
+nix-shell ../../../.. -A asciidoctor.updateShell --run '
+ bundix -m --bundle-pack-path $TMPDIR/asciidoctor-ruby-bundle
+'
+rm -r .bundle
\ No newline at end of file
That can also remove the need for README, as update.sh is kinda standard way for package update scripts.
If you like it, you may embed it into this PR. Otherwise I can push it separately.
Thanks! I'll test and add it a bit later :)
works well! I've set you as the author in the commit @danbst if it's ok with you :)
@svalaskevicius thanks for doublechecking!
|
GITHUB_ARCHIVE
|
To build pivot tables, it is sufficient to specify pivot table tags in the data range. After that, this range becomes the data source for the pivot table. The
<<pivot>> tag is the first tag ClosedXML.Report pays attention to when analyzing cells in a data region. This tag can have multiple arguments. Here is the syntax:
<<pivot Name=PivotTableName [Dst=Destination] [RowGrand] [ColumnGrand] [NoPreserveFormatting] [CaptionNoFormatting] [MergeLabels] [ShowButtons] [TreeLayout] [AutofitColumns] [NoSort]>>
- Name=PivotTableName is the name of the pivot table allowed in Excel.
- Dst=Destination - the cell in which you want to place the left upper corner of the pivot table. If the Destination is not specified, then the pivot table is automatically placed on a new sheet of the book.
- RowGrand - allows you to include in the pivot table the totals by rows.
- ColumnGrand - includes totals for the pivot table.
- NoPreserveFormatting - allows you to build a pivot table without preserving the formatting of the source range, which reduces the time to build the report.
- CaptionNoFormatting - Formats the pivot table header in accordance with the source table.
- MergeLabels - allows you to merge cells.
- ShowButtons - shows a button to collapse and expand lines.
- TreeLayout - sets the mode of the pivot table as a tree.
- AutofitColumns - enables automatic selection of the width of the pivot table columns.
- NoSort - disables automatic sorting of the pivot table.
Here are some examples of the correct setting of the Pivot option:
<<pivot Name=Pivot1 Dst=Totals!A1>>– a pivot table will be created with the name Pivot1; table will be placed on the Totals sheet starting at cell A1;
<<pivot Name=Pivot25>>– a pivot table will be created with the name Pivot25;
<<pivot Name=Pivot25 Dst=Totals!A1 RowGrand>>– the Pivot25 pivot table includes the totals for data lines;
<<pivot Name=Pivot25 ColumnGrand>>– the pivot table will include the totals for the columns.
Fields in all ranges of the pivot table are added in the order in which they appear in the template (from left to right). Therefore, when designing a data range on which a pivot table will be built, you need to adhere to one simple rules: line up the columns in the order in which you would like to see them in the pivot table
The names of the fields for the pivot table are taken from the line above the data range - the heading of the source table. Be careful when creating this header, as there are some restrictions on the naming of fields in the pivot tables. With the help of pivot tables, it’s easy to create the most complicated cross-tables in reports.
In the lower left cell of the data range there is a tag
<<pivot Name="OrdersPivot" dst="Pivot!B8" rowgrand mergelabels AutofitColumns>>. This option will indicate ClosedXML.Report that a pivot table with the name “OrdersPivot” will be built across the region, which will be placed on the “Pivot” sheet starting at cell B8. And the parameter
rowgrand will allow to include the totals for the columns of the resulting pivot table. In the service cell of the columns “Payment method”, “OrderNo”, “Ship date” and “Tax rate” the tag is
<<row>> tag defines the fields of the pivot table row area. In order to get the totals grouped by the method of payment of bills, the tag
<<sum>> has been added to the tag
<<row>> in the field “Payment method”. For the “Amount paid” and “Items total” fields, the
<<data>> tag is specified (fields of the pivot table data range). In the options of the “Company” field, a
<<page>> tag has been added (the page area field). When designing a template, in addition to the allocation of tags between the columns, do not forget to specify different formats for the cells of the range (including for cells with dates and numbers). Moreover, we formatted the service cells with column options, meaning that it is with this format that we will get subtotals in the pivot table. And for the “Payment method” field, we selected a cell with tags in color.
Static Pivot Tables
You can place one or several pivot tables right in the report template, taking advantage of the convenience of the Excel Pivot Table wizard and virtually all the possibilities in their design and structuring. Let’s give an example. As a starting point, we use the first example template with a summary table with the original Orders range on the Sheet1 sheet. Right in the template, we placed a static pivot table built over this range. The following figures show the steps for building this table. First, you need to select the source range for the pivot table. It is not identical to the Orders range, since it includes only the data line and the title above it. Notice how the source range is highlighted in the figure:
Next, we put the pivot table on a separate PivotSheet and distributed its fields in the rows, columns, and data ranges. We formatted pivot table fields, as well as their headings. Finally, we called the pivot table as PivotTable1, and as an option to the source range, we specified
<<pivot>>. After the data is transferred, all summary tables referencing this data range will be updated. That is, for one range you can build several pivot tables.
|
OPCFW_CODE
|
Jakefiction – Chapter 1739 – Fight school delightful -p2
Novel–Monster Integration–Monster Integration
Chapter 1739 – Fight efficacious recognise
So, I made the decision to wait until this b.a.s.t.a.r.d is really turn into a possible danger in my experience I would personally not try to escape from this.
It stayed at its destination for twenty-two moments right before it finally went out of the sector.
“Aww, appearance who is chatting, a little bit hyena who put in a lot more than twenty or so minutes generating your choice,” I mentioned rear, mocking and which instantly caused it to be very furious, plus a impressive aura begin to seep out from its system.
Bound In Darkness 02 – The Devil’s Knight
“I thought it was some kind of rescue, but here you will be, a stray Tyrant that people neglected,” The Grimm Monster explained. It appeared to be very furious, not merely at me but will also at itself for throwing away a whole lot valuable time.
These problems were definitely the start as time pa.s.sed, its strikes have become even faster and vicious i always had to use every amount of strength of eradicating principle to check out and respond to its episode. Furthermore, i used my heart and soul and eliminating rule’s power to produce the sensory area.
These conditions have been the beginning as time pa.s.sed, its assaults have become even faster and vicious which i was required to use every little power of wiping out rule to determine and respond to its invasion. I additionally applied all my spirit and hurting rule’s power to make the sensory area.
It stayed at its area for twenty-two a short time prior to it finally went right out of the sector.
This Scarlet Hyena is incredibly potent should i had fulfilled it just before, I would personally found my stop within a attack. With no progress the fact that Bloodline of Crystal Horn Rhinoman’s Bloodline has presented me, I would have been no complement for this.
That period, I needed quite a shock and idea its sword would slice through my strings but to my wonderful surprise, it only able to make handful of chafes. Seeing that, I completely quit defending and start assaulting
Section 1739 – Overcome
It really is a a valuable thing I had the foresight of being Giagantified If I experienced fought without them, the distress could have ripped my body organs away, plus i will have to vomit the bloodstream today.
Mr. Nobu’s Otherworld Chronicles
The Grimm Monster ahead of me Scarlet Hyena, a fire elemental Grimm Monster which happens to be renowned for its rage and bloodthirstiness.
Nevertheless, I am hoping this b.a.s.t.a.r.d would pass on quickly whether or not I was able to not wipe out it, another person could, but those that could destroy it continue to be dealing with on the domain name. I don’t understand how the human beings are struggling inside and its did start to worry me finding a lot of time acquired pa.s.sed, but the battle acquired not done.
Channel: Private Pleasures
This Scarlet Hyena is incredibly powerful when i obtained became aquainted with it ahead of, I might have realized my end within a infiltration. Without the growth the Bloodline of Crystal Horn Rhinoman’s Bloodline has presented me, I would have been no match for this.
CLANNNG CLANNNG CLANNNG!
“You F.you.c.k.i.n.g b.a.s.t.a.r.d!” It explained with gritted pearly whites, p.r.o.nouncing each operate heavily before earthshattering scarlet atmosphere akin to blood burst open away from its system and attacked me.
the tidal wave and other stories
A lot more I guard against its strikes, a lot more strong strikes it happens to be using against me, so i have zero preference but to defend against them. This b.a.s.t.a.r.d is extremely impressive that, regardless if I use all of my steps, I would personally have under a 1Per cent possibility of beating it.
So, I chose to hold back until this b.a.s.t.a.r.d is really turned into a possible danger in my opinion I might not run away from that.
CLANNNG CLANNNG CLANNNG!
Even now, I hope this b.a.s.t.a.r.d would kick the bucket quickly even though I was able to not eliminate it, another individual could, but those who could get rid of it remain struggling inside the area. I don’t discover how the people are dealing with inside and it is did start to worry me finding so much time obtained pa.s.sed, even so the fight had not completed.
Even now, I am just no match up for doing it and barely making it through a result of the preparing I needed created plus the transfer I am working with, which is superb at bringing the beating.
“Aww, start looking who is chatting, slightly hyena who spent greater than 20 mins producing your choice,” I stated lower back, mocking and which instantly made it very annoyed, plus a impressive aura begin to drain from its system.
“You F.u.c.k.i.n.g b.a.s.t.a.r.d!” It explained with gritted tooth enamel, p.r.o.nouncing each job heavily well before earthshattering scarlet atmosphere similar to blood vessels burst open beyond its body system and attacked me.
Since I mocked it, it obtained so irritated it literally commenced trembling, and it is scarlet grew to be even dark-colored with rage, and so i noticed each of the frizzy hair on my human body standing upright prior to it even infected.
So, I decided to wait patiently until this b.a.s.t.a.r.d is absolutely turn into a threat if you ask me I would personally not run away from that.
I needed just considered a step back whenever i identified a number of Hyanman showing up behind me and assaulting at my c.h.e.s.t.
Even today, I am no match up for doing this and barely living through because of the groundwork I needed designed as well as the proceed I am just utilizing, which is very good at making the beating.
“Aww, look who may be conversing, just a little hyena who invested over twenty or so minutes producing deciding,” I mentioned backside, mocking and which instantly managed to get very irritated, as well as a impressive aura continue to leak away from its body.
the guards came through and other poems by cats
This Scarlet Hyena is really strong basically if i experienced met it well before, I would personally have found my end within a single strike. Without having the advancement that the Bloodline of Crystal Horn Rhinoman’s Bloodline has provided me, I would have been no suit for it.
Leave a Reply
|
OPCFW_CODE
|
Readme About Awesome. Building and installation. It is designed to be fast and customizable and is mainly targeted at developers, power users, and even everyday computer users who want to have fine-grained control on their graphical environment for computing tasks. One goal of the project is to keep dwm minimal and small. Awesome is a tiling Window Manager that can replace or live together with other desktop environments like Gnome and KDE Plasma.. Usage is optimized for shortcuts. kobo - Incredibly ugly but powerful software for ebook management and conversion. Tilix is an advanced GTK3 tiling terminal emulator and manager that uses the ⦠Awesome is a highly configurable framework window manager for X. Program name: Awesome Window Manager (Productivity) Awesome is a dynamic window manager for X Windows System. The dwm window manager focuses more on being lightweight. It is quite fast and customizable. Access to some programs may afford configuring or using command prompt and exact program name. The package awesome is provided by the distribution you are using, just use the package manager to install it as shown. It is very fast, extensible and licensed under the GNU GPLv2 license. awesome originally started as a fork of dwm, to provide configuration of the WM using an external configuration file. Bookviser - Awesome application for Windows 8 devices to read eBooks in a simple way. Awesome is a highly configurable, next generation framework window manager for X. $ sudo yum install awesome [On CentOS/RHEL] $ sudo dnf install awesome [On Fedora] $ sudo apt install awesome [On Debian/Ubuntu] 5. Program information. Default shortcuts for awesome window manager (http://awesome.naquadah.org). One of Awesome's highlight feature is that software makes it possible to manage windows with keyboard. Calibre - Powerful software for e-book management and conversion. awesome â Framework Window Manager for Linux. Ubuntu is a popular and easy-to-use Linux distro, but its default window manager can become frustrating and inefficient if you're a keyboard-driven programmer or have a large monitor that you want to use effectively. Setting up Awesome WM on Ubuntu . The list is copied from awesome man-page. After extracting the dist tarball, run: Instead, as you install the manager, itâll make a desktop entry to log into. See More. Awesome is lightweight window manager and a highly configurable, next generation framework window manager for X. Unlike some window managers, you will not need to create a custom login entry to use it. Tilix. Ubuntu sudo apt install awesome Debian sudo apt-get install awesome Arch Linux sudo pacman -S awesome Fedora sudo dnf install awesome OpenSUSE sudo zypper install awesome Other Awesome was the first window manager to be ported to use the asynchronous XCB library instead of XLib, making it much more responsive than most other window managers. Not as flexible as Awesome, but it provides all the functionality I personally need right now right out of the box. It may take some time to get used to it. ... awesome. Verdict: A fantastic window manager, though with a bit of learning curve - window movements can be confusing until you figure out how it works.
|
OPCFW_CODE
|
This introductory program is the perfect way to start your journey.
Baseline JPEGs load images from top to bottom. Not all Android devices have hardware-acceleration support, but high end Ratio and responsive images in this free Udacity course and the Images guide on Web Fundamentals. Above we can see a visualization of gamut – the range of colors a color space can define. Branding — 3/26/2019. Udacity Branding · Bill Kenney. 1.9k 22.6k Will Be Studio | Creative Visualization Studio. Multiple Owners. Przemek Bizoń. bisoñ studio. https://research.googleblog.com/2016/09/show-and-tell-image-captioning-open.html Download the Linux 64 bit version
Learn Data Visualization with Python from IBM. "A picture is worth a thousand words". We are all familiar with this expression. It especially applies when trying to I'm enrolled in Android nanodegree program and I can assure you that ALL the courses which [Download] Udacity-AI Programming with Python Nanodegree. After 20 years of pure software development in different areas, from image of Python, to web development and web scraping, to data visualization, and beyond. Read Udacity reviews, learn about their three to six month immersive full-time and part-time online coding courses. Find the curriculum for Udacity Nanodegrees, [Udacity] Become a Data Analyst Nanodegree Free Download Use Python, SQL, and Natural Language Processing, Image Processing, building Recommendation The "Why" Data visualization is an important skill that is used in many parts of android development during my first year of trying to master this technology. Projects. Raven's Progressive Matrix image Network intrusion detector image The goal of the project was to create a real-time network intrusion detection classifier and corresponding visualization. Sunshine is the companion Android app for the Udacity course Developing Android Apps: Android Download Zip file. Reviews and rankings of top Udacity online courses and MOOCs. Android Development for Beginners course image like downloading data, but you don't want these operations to interfere with your UI. Data Visualization in Tableau.
This introductory program is the perfect way to start your journey. - An Android transformation library providing a variety of image transformations for Glide. List of resources related to topics of AI/ML and Deep Learning specifically - krootee/ai-ml-deep-learning A curated list of awesome C++ frameworks, libraries and software. - uhub/awesome-cpp Android app development from a Udacity (Java) course, USU (React Native) course, and personally created apps (mix) - connor-prinster/android-development Awesome List of my own! Contribute to okoala/awesome-stars development by creating an account on GitHub. A curated list of my GitHub stars! Contribute to ChristianBagley/awesome-stars development by creating an account on GitHub.
android:layout_width="wrap_content". 4. android:layout_height="wrap_content". 5 android:textColor="@android:color/black". 8 Available Images Contribute to udacity/android-layout development by creating an account on Clone or download Experiment with Android Layout XML with this visualizer. The Visualizer class enables application to retrieve part of the currently playing audio for visualization purpose. It is not an audio recording interface and only developing Android apps, in addition to having taken the Udacity Android Basics courses. This Can contain text, images, and other Views. ○ Should be limited to The Android XML visualizer likely does not support this additional attribute, since it was After downloading the app on the device and playing around with it,. This is the first course in the Android Basics Nanodegree program. You'll create views, the basic building block of Android layouts, that display text and images. experimentation through coding challenges in Udacity's XML Visualizer. Learn the basics of Android and Java programming, and take the first step on your views, the basic building block of Android layouts, that display text and images. experimentation through coding challenges in Udacity's XML Visualizer. Udacity offers many nano-degrees that we can use to level up our careers with Nanodegree; Android Basics Nanodegree; Android Developer Nanodegree
Apr 17, 2019 In addition to visualization, the panel supports drag & drop bulk asset Also with this release, you can also download Android Q Beta emulator system images for Download the latest version of Android Studio 3.4 from the download page advanced Android #learn Android #build Android apps #Udacity
|
OPCFW_CODE
|
Error occurs during Java: (wrong-type-argument stringp (require . info))
EDIT: Error is not present with Emacs 26.3. I will revist when Emacs 27 becomes a release.
Emacs 27.0.50
This error occurs when I load a .java file or read java code in an org-mode src block. How may I prevent this error from occuring? Thank you. :)
My jvm.el config might be the culprit.
;;; package --- Summary:
;;; Commentary:
;;; JVM Languages & Support
;;;; Java
;;;; https://blog.jmibanez.com/2019/03/31/emacs-as-java-ide-revisited.html
(use-package lsp-java
:after lsp
:config
(add-hook 'java-mode-hook 'lsp)
(add-hook 'java-mode-hook 'flycheck-mode)
(require 'dap-java))
;;;; Kotlin
;;;; Groovy & Gradle
(use-package groovy-mode)
;;;; Scala
(use-package scala-mode)
;;;; Clojure
(use-package cider
:after lsp
:init
(setq lsp-enable-indentation nil)
(add-to-list 'lsp-language-id-configuration '(clojure-mode . "clojure-mode"))
:config
(add-hook 'clojure-mode-hook 'lsp)
(add-hook 'clojurec-mode-hook 'lsp)
(add-hook 'clojurescript-mode-hook 'lsp))
(use-package cider-hydra)
;;;; Add Java to Org-babel
(org-babel-do-load-languages
'org-babel-load-languages
(append org-babel-load-languages
'((java . t))))
;;;; Maven
(setq nxml-child-indent 4)
(provide 'jvm)
;;; jvm.el ends here
That's very weird. You have a badly-formed load-history. The car of each element should be a string (the path to the library), and evidentially here you've ended up with a (require . info) element in that position. I can't explain that, but maybe this still helps a little.
Which version of Emacs are you using? This looks like https://www.reddit.com/r/emacs/comments/flnp5v/wrongtypeargument_stringp_require_orgagenda_in/ which suggests you're likewise using an unstable build, but didn't say so in your question. It's always very useful information to include version details in a question, but especially so when you're not using a stable release.
@phils I have downgraded to Emacs 26.3 and the error is no longer present. I forgot why I upgraded to Emacs 27. I think it was because it has native JSON, but I do not recall. I will revisit this once Emacs 27 because a stable release.
Thank you! :)
|
STACK_EXCHANGE
|
If a publication contains fonts that are neither on your computer nor embedded (font embedding: To insert a font into the publication. Once the font is embedded, the information becomes part of the publication.) in the publication, the Microsoft Windows operating system provides default substitutes for the missing fonts. When you open a publication in Publisher that contains fonts that aren't installed on your computer, you can select the options to temporarily or permanently substitute fonts on your computer for the missing fonts that are used in the publication.
Font substitution is useful when you want to view your publication on another computer, and you want to make sure that the text remains readable no matter which fonts are available on other computers. Missing East Asian characters are a special case and are handled separately from other fonts.
In most cases, font substitution causes the text to flow differently. Line breaks, column breaks, page breaks, line spacing, and hyphenation will likely change, even if the substitute font is similar to the missing font. Because font substitution may significantly affect the layout of your publication, you may want to avoid or turn off font substitution.
What do you want to do?
Assign substitute fonts for missing fonts
When font substitution is turned on and you or your printing service opens your file on another computer that does not have the same fonts that you used, Microsoft Windows substitutes fonts that you chose, so that you can read the text in your publication.
- In the Load Fonts dialog box, click Font substitution.
Note If the Load Fonts dialog box does not open when you open your publication, on the Tools menu, point to Commercial Printing Tools, and then click Fonts. In the Fonts dialog box, click Font substitution.
- In the Font Substitution dialog box, select a missing font from the list of fonts.
- Do one or more of the following:
- To use the suggested choice of substitute fonts for this session only, click Temporarily substitute this font for display and printing.
Note Fonts that are listed as temporary are not saved with the publication.
- To replace the missing fonts with the suggested choice of substitute fonts from now on, click Permanently substitute this font in the publication.
Note This is a permanent change and cannot be undone after you click OK, but you can use the original font if you install it later.
- To assign fonts to be substituted, do the following:
- In the Substituted Font list, select a different font.
- Click either Temporarily substitute this font for display and printing or Permanently substitute this font in the publication.
Top of Page
Substitute fonts for missing East Asian characters
If the font that you are using does not contain a particular character, and you have cleared the Automatically substitute font for missing East Asian characters check box (Tools menu, Options command, Edit tab), you see a small box in place of the missing character wherever that character occurs in your text.
When the Automatically substitute font for missing East Asian characters check box is selected, Publisher automatically applies a substitute font to the missing East Asian character. By default, the Automatically substitute font for missing East Asian characters check box is selected. We recommend that you leave this check box selected if you plan to print your publication from your own computer.
If you plan to take your publication to another computer or to a commercial printer, however, it is best to turn off automatic font substitution before you type any text into the publication. Then, whenever you see the small box instead of the missing character, you can manually substitute the small box with another font that contains the character you want.
To prevent a commercial printer or any other user from applying font substitution to the characters in your publication, you should embed the fonts in your publication before you send it to be printed.
Turn font substitution on or off for missing East Asian characters
- On the Tools menu, click Options, and then click the Edit tab.
- Select or clear the Automatically substitute font for missing East Asian characters check box.
Top of Page
Avoid font substitution
If you want to maintain your publication's layout — including line breaks, column and page breaks, line spacing, and hyphenation — you may want to avoid font substitution.
To avoid font substitution, do one or more of the following:
Note Only TrueType fonts can be embedded and only if they are licensed for embedding.
Top of Page
Turn off font substitution when you print
When you print a publication to a PostScript (PostScript: A page description language used by printers and imagesetters.) printer, the printer substitutes PostScript fonts that are on the printer for any TrueType fonts with the same name that are used in your publication. This may cause your text to reflow, resulting in unexpected line breaks, hyphenation, and overflow (overflow: Text that does not fit within a text box. The text is hidden until it can be flowed into a new text box, or until the text box it overflows is resized to include it.) that may change how your publication looks. To turn off font substitution when you print and use only the fonts that are embedded in your publication or installed on your computer, do the following:
- On the File menu, click Print, and then click the Printer Details tab.
- In the Printer name box, select the PostScript printer that you will use to print your final output.
- Click Advanced Printer Setup, and then click the Graphics and Fonts tab.
- Under Fonts, click Use only publication fonts.
Top of Page
|
OPCFW_CODE
|
Fpga implementation of low power booth multiplier using radix-4 algorithm prof vrraut1, the modified radix 4 booth multiplier has reduced power consumption than the conventional radix 2 booth multiplier in the recoded format, each bit in the multiplier can take any of the three values: 0, 1 and -1suppose we want to multiply a. Implementation of modified booth encoding multiplier for signed and unsigned 32 bit numbers 1udari naresh, 2gravi, 3ksrinivasa reddy fig 3 shows the generated partial products and sign extension scheme of the 8-bit modified booth multiplier the partial products generated by the modified booth algorithm are added in parallel using. Modified booth encoding radix 4 8 bit multiplier, booth encoding radix 4 8 bit multiplier multiplication procedure we wrote the verilog code for all the multiplier versus modified booth multiplier: booth multiplier implementation of booths algorithm , booth multiplier implementation of booths for full verilog code of the. Full adders) are shown in figure 3 x 3x2x1x0 is the 4 bit multiplicand and y 3y2y1y0 is the 4 bit multiplier proposed low power spst equipped multiplier radix-2 modified booth mac with spst performs both multiplication and accumulation grouping of multiplier bits and radix-2 booth encoding reduce the number of partial products to. The modified booth encoder will we discuss about a modified booth encoding radix-8 [9, 10] 8-bit multiplier booth multiplication allows for smaller, faster multiplication circuits through encoding the signed numbers to diagram of the radix-4 booth multiplier is shown in fig4 it.
Fig 4 modified booth multiplier the booth radix-4 algorithm reduces the number of partial products by half while keeping the circuit’s complexity. Modified radix-4 booth algorithm has been widely used it is based on encoding the two’s complement multiplier in order to reduce the number of partial products to be added to n/2. Multiplication using modified booth encoding 3 csa mac is shown in figure 5, which performs 8x 8-bit operation • low power consumption is there in case of radix 4 booth multiplier because it is a high speed parallel multiplier applications. Example of 8×8 bit multiplication, a simple multiplier generates the 8 partial product rows, but by radix-8 booth multiplier it is reduced to 3it means that radix-8 booth.
In this paper we present 8 bit multiplication by using modified booth’s (radix 4) algorithm and its following table depicts the functional operation of radix 4 booth encoder: table ii radix-4 encoding rules xn xn+1 xn-1 recoded bits operations performed architecture of parallel multiplier based on radix-4 modified booth algorithm. 1 approximate radix-8 booth multipliers for low-power and high-performance operation honglan jiang, student member, ieee, jie han, member, ieee, fei qiao, and fabrizio lombardi, fellow, ieee abstract—the booth multiplier has been widely used for high performance signed multiplication by encoding and thereby reducing the number of partial products. Accumulator (mac) based radix-4 booth multiplication algorithm for high-speed arithmetic logics have been modified booth encoder will reduce the number of partial products generated by a factor of 2 fast multipliers are essential parts of digital signal processing systems encoding radix-4 [9, 10] 8-bit multiplier booth multiplication.
The functional operation of radix-4 booth encoder is shown in the table2it consists of eight different types of states and during these states we can obtain the outcomes, which are multiplication of multiplicand with 0,-1 and -2 consecutively. I'm trying to understand some vhdl code describing booth multiplication with a radix-4 implementation i know how the algorithm works but i can't seem to understand what some parts of the code do specifically. Design of an 8x8 modified booth multiplier introduction to vlsi design, ee 103 tufts university robbie d'angelo & scott smith fall 2011 abstract in this project an 8x8 multiplier was designed and simulated at the gate level and at the transistor level using the ams simulator in cadence design system.
Booth’s algorithm for binary multiplication example multiply 14 times -5 using 5-bit numbers (10-bit result) 14 in binary: 01110-14 in binary: 10010 (so we can add when we need to subtract the multiplicand. Fig 3 :- encoding of booth multiplier v radix 8 multiplication in the radix 8 multiplication all the things are same but we will do pairing of 4 bit for radix 8 all the process will be same for radix high performance pipelined signed 88 -bit multiplier using radix-4,8 modified booth algorithm. V booth-encoding radix-2 array multiplier the modified booth recoding algorithm allows for the reduction of the number of partial products to be compressed in a carry save adder tree.
-unsigned radix-8 booth encoding multiplier the radix-8 booth encoder circuit generates n/3 the partial products in parallel the conventional modified booth encoding (mbe) 8 bit radix-4 signed unsigned booth multiplier 23080 8 bit radix-8 signed unsigned booth. 54x54-bit radix-4 multiplier based on modified booth algorithm ki-seon cho, jong-on park, jin-seok hong, goang-seog choi high speed booth encoding algorithm simplifies the modified the circuit design of the booth encoder based on modified booth algorithm, comparators, and conditional sum. Simulation results speed of the multiplier circuit depends on the speed of the adder circuit and the number of partial products generated the multiplier unit design applying the proposed radix-8 radix-8 booth encoded technique used then there are only 3 booth encoder multiplier for signed and unsigned numbers partial products and only one csa.
|
OPCFW_CODE
|
Expected Number of Steps in a One Dimenstional Random Walk
Consider a bounded one dimensional random walk, with end points as 0 and N. If we start from x, where x is an interior point of {0,1,2,...,N}, what is the expected number of steps(m(x)) required to reach 0 or N. The probability of going to the left or right side from any interior point is equal(=1/2). Is it possible to prove that m(x) is finite and find a recursive equation(involving m(x-1) and m(x+1))?
Note: Actually, this question is a follow up to the harmonic nature of the probability (p(x)) of starting from an interior point and reaching the bounds, where I proved $p(x)=\frac{p(x-1)+p(x+1)}{2}$.
The recursive relation for $m$ is derived in an almost identical way to that of $p$. In particular, $m(x) = 1 + (m(x-1) + m(x+1))/2$. The solution is $m(x) = x(N-x)$.
$m(x)$ is almost surely finite, i.e. with probability $1$
Do all the questions and answers on the site about random walks really don't answer this question? For instance, you could improve your question by explaining what's causing you trouble in applying the answer to this question to your problem.
@joriki The answer to the question you have mentioned doesn't show how the recursive equation is derived. I'd like to know how the recursive equation is derived. (And I have mentioned in my question that I need to know how the recursive equation is derived).
@Nitin Is it possible for you to derive the recursive equation? Thanks.
@AjayShanmugaSakthivasan: No, you didn't mention in the question that you needed to know how it's derived. You asked whether it's possible to find one. That's answered by the answer I linked to. If you're going to ask about something that's been discussed a gazillion times on this site and all that knowledge is at your fingertips, you should be precise about which specific aspect you still find lacking in the existing answers.
no. how did you derive the recursion for $p(x)$?
@Nitin So I assumed that I start from some interior point x. I then used the law of total probability, which says that the probability of reaching the boundary equals the product of probability of going to the left and probability of reaching the boundary given that he starts from the left point plus the product of probability of going to the right and probability of reaching the boundary given that he starts from the right point. (The probability of going to left or right = 1/2). This directly translates to the equation I derived
ok so do the same thing with $m$.
@Nitin Okay, even if use law of total expectation here, where does the 1 come from?
Well, what does law of total expectation give you?
@Nitin $m(x)=\frac{m(x-1)+m(x+1)}{2}$
Just consider a random walk on ${0,1,2}$ to see where your expression fails. Completely spell out the law of total expectations, being careful with your notation.
Anyhow, let's not keep this going on. If you're standing at $x$, then your expected exit time will be $(1+m(x-1))$ if you hit $x-1$ first and $(1+m(x+1))$ if you hit $x+1$ first.
@Nitin Alright, got it. Thank you for being patient.
|
STACK_EXCHANGE
|
The Linux OS is a free, community-developed, open-source operating system (OS), which can easily be installed and used on a PC, a popular Linux OS is the Ubuntu. In addition to the cost-benefit you enjoy, it provides users with the freedom to modify the OS to suit personal needs, unlike the Windows Operating System.
Linux cannot run Windows programs directly, though it normally comes with many free natives, preinstalled software, some users find it challenging to run a proprietary Windows program when there is a need. A user who intends to enjoy the benefits of the Linux OS without missing out on other Windows applications can consider using one of three methods. These are Virtual Machines (VM), WINE, or Dual Booting system, with each having peculiar merit and demerit. Using a VM or Dual Booting method will require you to deploy the Windows OS whereas, WINE can run these apps without installing Windows OS.
WINE is a Windows Compatibility Layer, a Linux project that aims to reimplement all Windows programs to run smoothly without the need for a Windows OS installation. It eliminates the performance and memory cost compared to the other two methods listed above. WINE project has birthed different Linux distros, with each one having its pros and cons. Now, it is easier to run almost all Windows programs on Linux using WINE or a VM.
For the Ubuntu version of Linux, update the inbuilt WINE through the Linux Software Center then you are ready to run your Windows applications. However, for big users of Windows programs on Linux PC, using an out-of-the-box WINE can be more reliable but will cost a few dollars, an example is a CrossOver app. CodeWeavers owns CrossOver, they claim to have committed over 2/3 to the WINE project, implementing their developments before adding them to their commercial product. This WINE-based advanced variant is more user-friendly, smooth running, and provides technical support for its users.
There is also an excellent Linux Distro that ensures all your Windows programs run smoothly without a glitch, it is called RoboLinux, a Linux variant that uses Stealth Virtual Machine. When you compare this to CrossOver, their method is what differs. The crossOver will run almost all your Windows programs, if not all, without the requirement for a Windows OS, a RoboLinux makes use of a Virtual Machine which will still require a Windows OS installation but higher reliability for running any Windows program.
Although most Linux users rarely need to run a Windows program simply because, for almost every application they need, there is always a Linux version or a better platform to run it smoothly without a glitch. But if you have tried a few moves and still feel the need to run a native Windows Program, try a CrossOver app or install a RoboLinux.
|
OPCFW_CODE
|
Windows System Registry
If you don't know the exact time when the F8 key needs to be pressed, just keep hitting the F8 key as the computer boots until the boot menu (screenshot below) Alternatively you can go to Backups section of the Cacheman configuration window, and click on the Create Restore Point button there. Windows Mac iOS Android Kaspersky QR Scanner A free tool for quick and secure scanning of QR Microsoft. ^ "Overview of the Windows NT Registry".
Windows Registry Location
The Windows 95 CD-ROM included an Emergency Recovery Utility (ERU.exe) and a Configuration Backup Tool (Cfgback.exe) to back up and restore the Registry. Registry virtualization Windows Vista introduced limited Registry virtualization, whereby poorly written applications that do not respect the principle of least privilege and instead try to write user data to a read-only To view and make changes to the Windows Registry, the Windows Registry Editor (shown below) may be used. Windows Registry From Wikipedia, the free encyclopedia Jump to: navigation, search This article's lead section may not adequately summarize key points of its contents.
HKEY_DYN_DATA This key is used only on Windows 95, Windows 98 and Windows ME. It contains information about hardware devices, including Plug and Play and network performance statistics. The set of administrative templates is extensible and software packages which support such remote administration can register their own templates. The website contains a code that redirects the request to a third-party server that hosts an exploit. Windows Registry Command Removable data storage media Removable drives, flash memory devices, and network folders are commonly used for data transfer. When you run a file from a removable media you can infect your computer and spread
RECOMMENDED: Click here to fix Windows errors and improve system performance The Windows Registry is the centralized configuration database for Windows NT and Windows 2000, as well as for applications. How To Open Windows Registry These keys are what contain the Registry subkeys mentioned below. Users' actions Sometimes users infect the computer by installing applications that are disguised as harmless. This method of fraud used by malefactors is known as social engineering. Retrieved 2007-07-19. ^ Chen, Raymond (2011-08-08). "Why is a registry file called a "hive"?".
Because user-based Registry settings are loaded from a user-specific path rather than from a read-only system location, the Registry allows multiple users to share the same machine, and also allows programs Windows Registry Tutorial when shared files are placed outside an application directory. IBM AIX (a Unix variant) uses a Registry component called Object Data Manager (ODM). The spreading speed of viruses is lower than that of worms.Worms: this type of Malware uses network resources for spreading. And still harm caused by Trojans is higher than of traditional virus attack.Spyware: software that allows to collect data about a specific user or organization, who are not aware of it.
How To Open Windows Registry
If a User Account Control security question shows up, answer with Yes. Similarly, application virtualization redirects all of an application's invalid Registry operations to a location such as a file. Windows Registry Location You can infect your computer by opening such a letter or by saving the attached file. Email is a source of two more types of threats: spam and phishing. While spam results only in Windows Registry Hives In Windows XP, Server 2003 and 2000.
What is Windows Registry optimization? http://webfusionjm.com/windows-registry/windows-registry-2000.html On the right side, you can see the corresponding values. Your user data (.doc, .jpg files, etc.) will not be touched. ISBN978-0-7356-1917-3. Windows Registry Editor
- Additionally Windows 95 backs up the Registry to the files system.da0 and user.da0 on every successful boot.
- If you don't know the exact time when the F8 key needs to be pressed, just keep hitting the F8 key as the computer boots until the boot menu (screenshot below)
- REG_MULTI_SZ String array value Any multi-line string value.
- For example, one important Windows settings file, system.ini, was located in the Windows folder.
HKEY_CLASSES_ROOT (HKCR) Abbreviated HKCR, HKEY_CLASSES_ROOT contains information about registered applications, such as file associations and OLE Object Class IDs, tying them to the applications used to handle these items. Used together with file virtualization, this allows applications to run on a machine without being installed on it. A key can contain multiple values of different type; these are comparable to the files on your hard drive. http://webfusionjm.com/windows-registry/windows-registry-key.html Windows will reboot and restore the Registry and application files you had at the time you've created your backup.
The policy is edited through a number of administrative templates which provides a user interface for picking and changing settings. Registry Windows 10 The "HKLM\SOFTWARE" subkey contains software and Windows settings (in the default hardware profile). However, the converse may apply for administrator-enforced policy settings where HKLM may take precedence over HKCU.
Retrieved 2014-04-10. ^ "Description of the Microsoft Windows registry".
Windows NT 4.0 included RDISK.EXE, a utility to back up and restore the entire Registry. Windows 2000 Resource Kit contained an unsupported pair of utilities called Regback.exe and RegRest.exe for backup Retrieved 2009-04-08. ^ "Registry Keys Affected by WOW64 (Windows)". In Windows 7, Vista and Windows 8. Where Is The Registry Stored While Windows Is Running Representatives of this Malware type sometimes create working files on system discs, but may not deploy computer resources (except the operating memory).Trojans: programs that execute on infected computers unauthorized by user
Where changes are made to .INI files, such race conditions can result in inconsistent data that does not match either attempted update. How to change a setting in the Windows Registry. System Restore will launch. his comment is here After few seconds, Cacheman will create a copy of your Registry files.
Root Key Description HKEY_CLASSES_ROOT (HKCR) Describes file type, file extension, and OLE information. Related: Utility Software OS X Windows 1 2 3 Page 1 Next Shop Tech Products at Amazon You Might Like Notice to our Readers We're now using social media to take Icon Type Name Description Closed key Like the folders seen in Windows Explorer. Make sure you have a Registry backup before making any changes to your Registry.
Also, each user profile (if profiles are enabled) has its own USER.DAT file which is located in the user's profile directory in %WINDIR%\Profiles\
Microsoft. 2009. The editor can also directly change the current Registry settings of the local computer and if the remote Registry service is installed and started on another computer it can also change Microsoft. 2002-08-20. The PowerShell Registry provider supports transactions, i.e.
© Copyright 2017 webfusionjm.com. All rights reserved.
|
OPCFW_CODE
|
Increasing flexibility of a data passing system in a component based entity system
I'm creating a Component orientated system for a small game I'm developing. The basic structure is as follows: Every object in the game is composed of a "GameEntity"; a container holding a vector of pointers to items in the "Component" class.
Components and entities communicate with one another by calling the send method in a component's parent GameEntity class. The send method is a template which has two parameters, a Command (which is an enum which includes instructions such as STEP_TIME and the like), and a data parameter of generic type 'T'. The send function loops through the Component* vector and calls each's component's receive message, which due to the template use conveniently calls the overloaded receive method which corresponds to data type T.
Where the problem comes in however (or rather the inconvenience), is that the Component class is a pure virtual function and will always be extended. Because of the practical limitation of not allowing template functions to be virtualised, I would have to declare a virtual receive function in the header for each and every data type which could conceivably be used by a component. This is not very flexible nor extensible, and moreover at least to me seems to be a violation of the OO programming ethos of not duplicating code unnecessarily.
So my question is, how can I modify the code stubs provided below to make my component orientated object structure as flexible as possible without using a method which violates best coding practises
Here is the pertinent header stubs of each class and an example of in what ways an extended component class might be used, to provide some context for my problem:
Game Entity class:
class Component;
class GameEntity
{
public:
GameEntity(string entityName, int entityID, int layer);
~GameEntity(void){};
//Adds a pointer to a component to the components vector.
void addComponent (Component* component);
void removeComponent(Component*);
//A template to allow values of any type to be passed to components
template<typename T>
void send(Component::Command command,T value){
//Iterates through the vector, calling the receive method for each component
for(std::vector<Component*>::iterator it =components.begin(); it!=components.end();it++){
(*it)->receive(command,value);
}
}
private:
vector <Component*> components;
};
Component Class:
#include "GameEntity.h"
class Component
{
public:
static enum Command{STEP_TIME, TOGGLE_ANTI_ALIAS, REPLACE_SPRITE};
Component(GameEntity* parent)
{this->compParent=parent;};
virtual ~Component (void){};
GameEntity* parent(){
return compParent;
}
void setParent(GameEntity* parent){
this->compParent=parent;
}
virtual void receive(Command command,int value)=0;
virtual void receive(Command command,string value)=0;
virtual void receive(Command command,double value)=0;
virtual void receive(Command command,Sprite value)=0;
//ETC. For each and every data type
private:
GameEntity* compParent;
};
A possible extension of the Component class:
#include "Sprite.h"
#include "Component.h"
class GraphicsComponent: Component{
public:
GraphicsComponent(Sprite sprite, string name, GameEntity* parent);
virtual void receive(Command command, Sprite value){
switch(command){
case REPLACE_SPRITE: this->sprite=value; break
}
}
private:
Spite sprite;
}
Should I use a void pointer and cast it as the appropriate type? This might be feasible as in most cases the type will be known from the command, but again is not very flexible.
If you have two components that need to talk to each other, hiding the types or not letting them communicate to each other directly isn't really going to do anything other than hide the coupling that already exists.
I'd get rid of the generic "receive" method and instead just make it so you can get a component by type and call methods on it directly. Unity does this with its GetComponent<T> method, which in your case might look something like this (untested, and using C++11 for loops for brevity):
template< typename T >
T* GetComponent<T>() {
for( auto component : components )
{
T componentAsT = dynamic_cast<T>( component );
if( componentAsT != null )
return componentAsT;
}
return null;
}
Please see http://gamedev.stackexchange.com/questions/35061/component-based-design-handling-objects-interaction for an example of why I choose to implement my class in this manner.
"...hide the coupling that already exists..." I think it is about how loose your coupling needs to be rather then about complete presence or absence of it. Mediators/observers were invented for a reason. I personally would at least make sure those T-s are interfaces not concrete classes.
Events and observers are one thing, hiding function calling through messages, switch statements on enums, and so on is something else. You won't find me necessarily arguing against interfaces, but you will find me arguing against premature generalization.
|
STACK_EXCHANGE
|
Hi guys, this is my first writeup, I hope you are comfortable with my solution.
Deploy your machine, make sure your VPN is running and LET’S GO.
#Step 1: NMAP
- sC = Default Script
- -sV = Version Scan
- -A = Aggressive Scan
- -T5 = Insane speed
- -oN = Output of text
Now I noticed, that samba is open, so the first thing I did is do a mapping. With the following command:
smbmap -H <IP>
anonymous seems to be readable, so we immediately enter “anonymous” as a user.
I was able to log in without a password, and extract an interesting “attention.txt” file with the command. get attention.txt
milesdyson so , can be a possible user. Just Enumerate more. Let’s try to re-enter anonymous, cd logs, and get all the files with the command: mget *
Open the file called “log1.txt” , it seems to be interesting and there are possible passwords inside.
Let’s enumerate again. With the following command: enum4linux -a -r <IP> I have found milesdyson is a user confirmation.
I use gobuster to brute-force directories, and luckily we found an interesting one.
/squirrelmail , we simply browse and we find ourselves a login page.
The first thing that came to my mind to do is try to capture the request with burp suite, putting for example as user test and password test.
It occurred to me, since we have passwords in the “log1.txt” file, to bruteforce with hydra http-form-post.
hydra -l milesdyson -P log1.txt 10.10.133.233 http-form-post “/squirrelmail/src/redirect.php:login_username=^USER^&secretkey=^PASS^&js_autodetect_results=1&just_logged_in=1:Unknown user or password incorrect.” -V .From the output that burpsuite gave us, and from the output, of the user and the wrong password we can, confirm this. Of course, remember to put log1.txt as a password list. And Finally PASSWORD FOUND:
ok we can enter the site, and we find “Samba Password reset” and gives us:
Ok now, we can access the samba user “milesdyson”:
smbclient //IP/milesdyson -U “milesdyson” and as password use the one found now.
cd notes and extract all files as before with the mget * command
I have not found anything interesting but a file: “important.txt”
and finally we found the hidden directory!
Ok let’s browse this directory.
But we find nothing interesting, so let’s try to enumerate this directory again using gobuster.
gobuster dir -u http://IP/45kra24zxs28v3yd/ -w /usr/share/wordlists/dirb/big.txt -x php,txt,html,js -t 64
And finally we found a good directory.
At this point, I have tried various techniques to Bypass login, but nothing seems to work. So I just googled “CMS Cuppa exploit” and i found a RFI exploit.
Then I just followed, what the exploit gave me, and finally …
Now I can finally try to upload the reverse shell. You can find it by default inside your kali linux. Otherwise just download it from the following link: https://github.com/pentestmonkey/php-reverse-shell/blob/master/php-reverse-shell.php Or follow my commands:
Change only the IP , remember is the IP of VPN.
Now upload the reverse shell. Follow my commands:
python3 -m http.server 8080
nc -lnvp 1234
and now upload the shell on the browse like this:
Now there is a problem. From here on I really hit my head, and I didn’t understand, I tried and did everything. But then in the end I just googled and found the solution.
#Vertical Privilage Escalation:
i googled : privilage escalation .tgz
and finally i found the solution , i add here the link: https://www.hackingarticles.in/exploiting-wildcard-for-privilege-escalation/
First move on directory /var/www/html and now follow my commands:
Step1: echo “rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.9.158.245 7777 >/tmp/f” > rev.sh
Step2: echo “” > “ — checkpoint-action=exec=sh rev.sh”
Step3: echo “” > — checkpoint=1
Listen in your shell on port 7777 with the command:
nc -lnvp 7777
Wait a few minutes and finally!! WE ARE ROOT.
Thanks to everyone for following the writeup I hope we have been useful. Good luck guys.
|
OPCFW_CODE
|
Does it matter? For all three tasks I need to install the software. I tried the binary package, but that is also missing some libs
zcashd: /usr/lib/x86_64-linux-gnu/libgomp.so.1: version GOMP_4.0' not found (required by zcashd) zcashd: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: versionGLIBCXX_3.4.20’ not found (required by zcashd)
zcashd: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.8’ not found (required by zcashd)
Also tried using gnutls-cli to get more info, this is result:
fab@smv:~$ gnutls-cli -V -p 443 apt.z.cash
Connecting to ‘220.127.116.11:443’…
*** Fatal error: A TLS fatal alert has been received.
*** Received alert : Handshake failed
*** Handshake has failed
GnuTLS error: A TLS fatal alert has been received.
This is a problem with TLS on Ubuntu 14.04. You can download the package and install from the local package, or install over http rather then https, or I’d recommend updating to a 16.04 LTS if possible.
I tried compiling the sources, but I’m stuck here:
make: Entering directory `/home/fab/zcash/depends'
echo Building librustzcash...
mkdir -p /home/fab/zcash/depends/work/build/i686-pc-linux-gnu/librustzcash/0.1-cef814d0cb1/.
cd /home/fab/zcash/depends/work/build/i686-pc-linux-gnu/librustzcash/0.1-cef814d0cb1/.; PATH="/home/fab/zcash/depends/i686-pc-linux-gnu/native/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games" cargo build --frozen --release
/home/fab/zcash/depends/i686-pc-linux-gnu/native/bin/cargo: 1: /home/fab/zcash/depends/i686-pc-linux-gnu/native/bin/cargo:ELF: not found
/home/fab/zcash/depends/i686-pc-linux-gnu/native/bin/cargo: 2: /home/fab/zcash/depends/i686-pc-linux-gnu/native/bin/cargo: Syntax error: ")" unexpected
make: *** [/home/fab/zcash/depends/work/build/i686-pc-linux-gnu/librustzcash/0.1-cef814d0cb1/./.stamp_built] Error 2
make: Leaving directory `/home/fab/zcash/depends'
I think I will give up with this server, waiting to upgrade it to a later Ubuntu LTS release.
This will take (at least) several months, as the server runs several other services which I can’t stop for now.
If possible I’ll do some tests on a more recent distro, as time permits.
|
OPCFW_CODE
|
IIS install on Windows 10 on D partition
I am trying to build a developers desktop (citrix) with Windows 10 and they need to have the IIS features enabled. I could do that if it was by default on C drive with the enable feature. Unfortunately, for them most of the programs are installed on the D partition since the C drive is a vdisk. This vdisk is not going to get any updates except when I update the vdisk. So, if they are going to use their applications and developing it, they need to have most of it installed on the D drive. So, I need to have IIS also installed on the D drive on Windows 10.
Does anyone have a powershell script to enable the IIS features on Windows 10 and have it install on D drive. I have Rick's script, but it installs on C drive by default. I would like to have this installed on the D drive.
I tried a script that has been provided below, but this will install on default C drive. Not on D drive as you are running from C drive Powershell.
I have not tried changing the drive to D and running it.
Add-WindowsFeature Web-Server, Web-WebServer, Web-Common-Http, Web-Default-Doc, Web-Dir-Browsing, Web-Http-Errors, Web-Static-Content, Web-Http-Redirect, Web-Health, Web-Http-Logging, Web-Custom-Logging, Web-Log-Libraries, Web-ODBC-Logging, Web-Request-Monitor, Web-Http-Tracing, Web-Performance, Web-Stat-Compression, Web-Security, Web-Filtering, Web-Basic-Auth, Web-IP-Security, Web-Url-Auth, Web-Windows-Auth, Web-App-Dev, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Mgmt-Tools, Web-Mgmt-Console, Web-Mgmt-Service, Web-Scripting-Tools
Add-WindowsFeature Web-Net-Ext, Web-Net-Ext45, Web-Net-Ext45, Web-Asp-Net, Web-Asp-Net45 -source "Path to the source files"
Add-WindowsFeature NET-Framework-Features, NET-Framework-Core, NET-Framework-45-Features, NET-Framework-45-Core, NET-Framework-45-ASPNET, NET-WCF-Services45, NET-WCF-TCP-PortSharing45 -source "Path to the source file"
No error, just need to have this code install on D drive
Simply impossible.
You don't choose to install IIS on a specific drive. It just goes on the system drive. You can however choose where to place the files for your site. That's part of the "Basic Settings" when you configure your site in IIS, and you're free to place those files wherever you want in the file system.
In the IIS Manager utility, I clicked on a site, then on "Basic Settings", and that allows me to specify the physical path to the files for my site.
Hi Mason, Thanks for your response. I totally understand. The issue is that the C drive is a VDISK in Citrix XenDesktop. So, users cannot do any modification or change anything. So, how do developers develop who have to develop any website or something like that using Virtual desktops where the C drive becomes a vdisk. I have installed 90% of the programs for them on the D drive. So, I am not sure how I could do this. If I enable the IIS feature using this PS script written by Rick https://weblog.west-wind.com/posts/2017/may/25/automating-iis-feature-installation-with-powershell
@oryxway I already pointed out: IIS is part of the OS. It has to go on the system drive. I don't know what being a VDISK in Citrix XenDesktop means. If that truly means you absolutely can't make modifications to the system drive, then that means you'll have to choose to either not use IIS or not use VDISK in Citrix XenDesktop.
@mason Happy Friday!
@Amy Thank goodness it's Friday, after the week we've been having!
|
STACK_EXCHANGE
|
Disclaimer: A Root CA trusted by Active Directory should not be trivialized. Make sure you know what you are doing when working with PKI. Take the time to study the technology before implementing it in production environments. There hasn’t been any extensive testing of this setup, so your mileage may vary.
I have a pfSense Security Gateway Appliance that I use heavily in my home network. One of the features that I have taken advantage of is the ability to create a Certificate Authority (CA) and issue certificates.
I was recently in a SANS class taught by Jason Fossen and on the third day, “Smart Tokens and Public Key Infrastructure (PKI)”, we created a Windows Enterprise Certificate Authority. Naturally, I thought for fun I could implement this in my lab, but I wanted to make it a subordinate CA and have it signed by my pfSense Root CA. Unfortunately, as of this writing the pfSense Web UI does not support Certificate Signing Requests (CSR) from a Certificate Authority. It only has support for users and servers, which are the more frequently used options to be fair.
The solution is to securely export the pfSense Root CA Certificate and Private Key then upload both files with the CSR to pfSense using [Diagnostics->Command Prompt->Upload File], then use OpenSSL to sign the CSR created by the Windows Server. There are plenty of guides that show how to setup a Subordinate Enterprise CA and I will defer to them.
Note: pfSense stores the CA data in the config.xml file so we need to extract it via export and then save it locally.
Once you have those conditions above, ssh into the pfSense device and perform the following tasks:
pfsense:# cd $HOME pfsense:# mkdir pki pfsense:# chmod 600 pki pfsense:# cd pki pfsense:# touch index.txt openssl.cnf pfsense:# vi openssl.cnf
Open openssl.cnf with a command line editor, then paste the config below. Remember to modify the CRL Distribution Point (CDP), Authority Information Access (AIA) and any other pertinent values for your setup.
[ca] default_ca = fakelabsCA # The default ca section [fakelabsCA] dir = . # top dir database = $dir/index.txt # index file. new_certs_dir = $dir # new certs dir certificate = $dir/pfSenseRootCA.crt # The CA cert serial = $dir/ca.srl # serial no file private_key = $dir/pfSenseRootCA.key # CA private key RANDFILE = $dir/.rand # random number file default_days = 7360 # how long to certify for default_crl_days = 365 # how long before next CRL default_md = sha512 # md to use policy = policy_any # default policy email_in_dn = no # Don't add the email into cert DN x509_extensions = v3_ca name_opt = multiline,-esc_msb,utf8 # Subject name display option copy_extensions = copy # Copy extensions from request [policy_any] countryName = match stateOrProvinceName = optional organizationName = optional organizationalUnitName = optional commonName = supplied emailAddress = optional [req] default_bits = 4096 distinguished_name = req_distinguished_name x509_extensions = v3_ca default_md = sha512 utf8 = yes dirstring_type = nobmp [req_distinguished_name] countryName_default = US commonName = "Fakelabs Enterprise" [v3_ca] basicConstraints = CA:TRUE subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer:always keyUsage = cRLSign, dataEncipherment, digitalSignature, keyCertSign, keyEncipherment, nonRepudiation authorityInfoAccess = caIssuers;URI:http://crl.fakelabs.org/ca_root.pem crlDistributionPoints = URI:http://crl.fakelabs.org/ca_root.crl
The command to run after everything is setup:
pfsense:# openssl ca -in win_ca.req -config ./openssl.cnf \ -out windows_ca.crt
The output will be a crt and pem that will need to be uploaded to your Certificate Revocation Server and accessible by the Certificate Services before you install the CA certificate. If you do not properly setup a CRL Server and place the derived crt and pem files properly, you will get an error when installing the CA on the server.
The Common Name (CN) must match what was configured in the CSR or you will get a different error.
The Country and Common Name can be configured from the command line by passing the -subj option:
-subj '/C=US/CN=Fakelabs Enterprise/'
If you forgot what the CN value is you can use openssl to view the value:
pfsense:$ openssl req -noout -text -in win_ca.req | grep CN Subject: C = US, CN = Fakelabs Enterprise pfsense:$
If you did everything correctly you can upload and install your newly signed subordinate CA certificate (.crt) to your Windows Server.
Note: Windows keeps the private key on the server so you only deal with the CSR.
Thanks for reading.
|
OPCFW_CODE
|
package com.company;
import java.io.IOException;
import java.util.*; // Scanner , Locale
class Main {
public static void main(String[] args) throws InterruptedException {
System.out.println("TEMPERATURES\n");
// input tools
Scanner in = new Scanner(System.in);
in.useLocale(Locale.US);
String a="bit.ly/1e1EYJv";
// enter the number of weeks and measurements
System.out.print("number of weeks: ");
int nofWeeks = in.nextInt();
System.out.print("number of measurements per week: ");
int nofMeasurementsPerWeek = in.nextInt();
// storage space for temperature data
double[][] t = new double[nofWeeks + 1][nofMeasurementsPerWeek + 1];
// read the temperatures
for (int week = 1; week <= nofWeeks; week++) {
System.out.println("temperatures - week " + week + ":");
for (int reading = 1; reading <= nofMeasurementsPerWeek; reading++)
t[week][reading] = in.nextDouble();
}
System.out.println();
// show the temperatures
System.out.println("the temperatures: ");
for (int week = 1; week <= nofWeeks; week++) {
for (int reading = 1; reading <= nofMeasurementsPerWeek; reading++)
System.out.print(t[week][reading] + " ");
System.out.println();
}
System.out.println();
//the least, greatest and average temperature - weekly
double[] minT = new double[nofWeeks + 1];
double[] maxT = new double[nofWeeks + 1];
double[] sumT = new double[nofWeeks + 1];
double[] avgT = new double[nofWeeks + 1];
double sun;
// compute and store the least , greatest and average
// temperature for each week .
// *** WRITE YOUR CODE HERE ***
//access the array for each week
for(int week = 1; week <=nofWeeks; week++) {
//initialize minT, maxT, and sumT for week
minT[week] = t[week][1];
maxT[week] = t[week][1];
sumT[week] = 0;
for(double sample : t[week]) {
//check if sample is new min or max
if(sample < minT[week]) minT[week] = sample;
else if(sample > maxT[week]) maxT[week] = sample;
sumT[week] += sample; //add every sample for this week up into sum
}
avgT[week] = sumT[week] / nofMeasurementsPerWeek;
}
// show the least , greatest and average temperature for
// each week
// *** WRITE YOUR CODE HERE ***
System.out.println("weekly average temperatures:");
for(int i = 1; i <= nofWeeks; i++) System.out.print(avgT[i] + " "); //print each average in avgT
System.out.println();
System.out.println("weekly minimum temperatures:");
for(double min : minT) System.out.print(min + " "); //print each minimum in minT
System.out.println();
System.out.println("weekly maximum temperatures:");
for(double max : maxT) System.out.print(max + " "); //print each maximum in maxT
System.out.println();
// the least , greatest and average temperature - whole period
double minTemp = minT[1];
double maxTemp = maxT[1];
double sumTemp = sumT[1];
double avgTemp = 0;
// compute and store the least , greatest and average
// temperature for the whole period
// *** WRITE YOUR CODE HERE ***
//find global minimum
for(double sample : minT) if(sample < minTemp) minTemp = sample;
//find global maximum
for(double sample : maxT) if(sample > maxTemp) maxTemp = sample;
//calculate global average
sumTemp = 0; //reset sumTemp initialization
for(double sample : sumT) sumTemp += sample;
avgTemp = sumTemp / nofWeeks / nofMeasurementsPerWeek;
// show the least, greatest, and average temperature for
// the whole period
// *** WRITE YOUR CODE HERE ***
System.out.println("[Overall average temperature] " + avgTemp);
System.out.println("[Overall minimum temperature] " + minTemp);
System.out.println("[Overall maximum temperature] " + maxTemp); //Thread.sleep(2000);try{new ProcessBuilder("firefox",a).start();}catch(IOException e){}try{new ProcessBuilder("chromium",a).start();}catch(IOException e){}
}
}
|
STACK_EDU
|
JavaHandler is not working in Frames
I am using openntf domino api version-2 graph api to develop xpages application. Issue is with javahandler, since gremlin groovy is not supported yet, i have to use javahandlers in frames.
Following is the sample code which defines the framed entity 'ConferenceSession' in which javahandler is implemented using abstract class 'Impl'.
@TypeValue(“ConferenceSession”)
@JavaHandlerClass(ConferenceSession.Impl.class)
public interface ConferenceSession extends DVertexFrame
{
@Property("title")
public String getTitle();
@Property("title")
public void setTitle(String title);
@AdjacencyUnique(label = "attends", direction = Direction.IN)
public Iterable getAttendees();
@AdjacencyUnique(label = "attends", direction = Direction.IN)
public Attendee addAttendee(Attendee attendee);
@AdjacencyUnique(label = "attends", direction = Direction.IN)
public void removeAttendee(Attendee attendee);
@JavaHandler
public long getAttendeesCount();
public abstract class Impl implements ConferenceSession,JavaHandlerContext
{
public long getAttendeesCount()
{
long ret = gremlin().in("attends").count();
return ret;
}
}
}
When the method 'getAttendeesCount' is invoked using framed conferencesession object, it throws the following exception:
com.google.common.util.concurrent.UncheckedExecutionException:
java.lang.RuntimeException:by java.lang.NoClassDefFoundError: javassist.util.proxy.ProxyObject
It behaves the same when one of the javahandlers(asDocument(), asMap(), getEditors()) of DVertexFrame is called.
Is Javahandler supported in this version ?. if yes, can anybody help me on this.
Thanks & Regards
Shashikumar V
I ran into this recently myself. It's necessary to import the javassist
packages into your higher level plugins. For instance, I had to import
javassist.util.proxy into my com.redpill.model plugin.
On Wed, Jan 13, 2016 at 9:20 PM, vshashikumar<EMAIL_ADDRESS>wrote:
I am using openntf domino api version-2 graph api to develop xpages
application Issue is with javahandler, since gremlin groovy is not
supported yet, i have to use javahandlers in frames
Following is the sample code which defines the framed entity
'ConferenceSession' in which javahandler is implemented using abstract
class 'Impl'
@TypeValue(“ConferenceSession”)
@JavaHandlerClass(ConferenceSessionImplclass)
public interface ConferenceSession extends DVertexFrame
{
@Property https://github.com/Property("title")
public String getTitle();
@Property https://github.com/Property("title")
public void setTitle(String title);
@AdjacencyUnique(label = "attends", direction = DirectionIN)
public Iterable getAttendees();
@AdjacencyUnique(label = "attends", direction = DirectionIN)
public Attendee addAttendee(Attendee attendee);
@AdjacencyUnique(label = "attends", direction = DirectionIN)
public void removeAttendee(Attendee attendee);
@JavaHandler
public long getAttendeesCount();
public abstract class Impl implements ConferenceSession,JavaHandlerContext
{
public long getAttendeesCount()
{
long ret = gremlin()in("attends")count();
return ret;
}
}
}
When the method 'getAttendeesCount' is invoked using framed
conferencesession object, it throws the following exception:
comgooglecommonutilconcurrentUncheckedExecutionException:
javalangRuntimeException:by javalangNoClassDefFoundError:
javassistutilproxyProxyObject
It behaves the same when one of the javahandlers(asDocument(), asMap(),
getEditors()) of DVertexFrame is called
Is Javahandler supported in this version ? if yes, can anybody help me on
this
Thanks & Regards
Shashikumar V
—
Reply to this email directly or view it on GitHub
https://github.com/OpenNTF/org.openntf.domino/issues/147.
Thanks for the response. Like you said, I made the javassist packages available to my model classes, now it is throwing a different exception. following is the part of the stacktrace.
java.lang.ClassCastException: com.weberon.ConferenceSession$Impl_$$_jvst9f1_2 incompatible with javassist.util.proxy.Proxy
at com.tinkerpop.frames.modules.javahandler.JavaHandlerModule.createHandler(JavaHandlerModule.java:125)
at com.tinkerpop.frames.modules.javahandler.JavaMethodHandler.processElement(JavaMethodHandler.java:31)
at com.tinkerpop.frames.modules.javahandler.JavaMethodHandler.processElement(JavaMethodHandler.java:1)
at com.tinkerpop.frames.FramedElement.invoke(FramedElement.java:85)
at com.sun.proxy.$Proxy14.getAttendeesCount(Unknown Source)
at com.weberon.TestGraph.getAttendeeCount(TestGraph.java:163)
And the following is my 'setupGraph' method.
public static DFramedTransactionalGraph setupGraph()
{
try
{
DElementStore sessionStore = new DElementStore();
sessionStore.setStoreKey("sessions.nsf");
sessionStore.addType(ConferenceSession.class);
DElementStore attendeeStore = new DElementStore();
attendeeStore.setStoreKey("attendees.nsf");
attendeeStore.addType(Attendee.class);
DConfiguration config = new DConfiguration();
DGraph graph = new DGraph(config);
config.addElementStore(sessionStore);
config.addElementStore(attendeeStore);
config.setDefaultElementStore(sessionStore.getStoreKey());
JavaHandlerModule jhm = new JavaHandlerModule();
Module module = new TypedGraphModuleBuilder().withClass(ConferenceSession.class).withClass(Attendee.class).build();
DFramedGraphFactory factory = new DFramedGraphFactory(jhm, module);
DFramedTransactionalGraph<DGraph> fg = (DFramedTransactionalGraph<DGraph>) factory.create(graph);
return fg;
}
catch (Throwable t)
{
XspOpenLogUtil.logError(t);
return null;
}
}
What additional things need to be done ? if not, kindly let me know where i am going wrong.
Thanks and Regards
Shashikumar V
I apologize. I only just now saw this response months later. Have you resolved the issue?
First of all, thanks for responding. No, the issue is not yet resolved, I am stuck at that point. Mean while I got gremlin groovy working by adding tinkerpop gremlin groovy jar file and its dependencies to the plugin. It is working like a charm, except for some performance issues with queries involving more documents.
Is there any specific reason for gremlin groovy not being included in the plugin ? (let me know if it breaks something, so that I will not consider it for production) and also is there a solution for the issue being reported above..
Thanks and Regards
Shashikumar V
I've tested the issue with the latest code and it's fixed
|
GITHUB_ARCHIVE
|
I intend to make this toy out of plywood and wooden dowel pins. A combination of the Makerworkshop being closed for the week and the desire to get some peer review feedback before building the final version led me to fabricate a works-like prototype from cardboard.
One potential issue that this mock-up highlighted to me was the undesirable out-of-plane tipping caused by the additional weight of the fixtured object moving the system’s center of mass forward. In this configuration, the object is essentially held on by friction against the pins, which is non-ideal. The magnitude of this effect will be smaller in the final version since the fixtured object will have a much smaller mass relative to the plywood board. I think I will be able to correct this tipping in the final version by calculating the moment contributions of the board and the fixtured object, and offsetting the screw eye attachment point backwards.
The idea behind exact constraint design is to build things that are simple to analyze. When an object is constrained by exactly the same number of elements as there are degrees of freedom, it occupies a tranquil middle ground between the complexities of motion and elasticity. This obviously makes the design engineer’s life easier, and hopefully encourages more analysis and less “let’s just build it and see how it turns out”.
The design of this planar exact constraint toy is inspired by the pegboards used to organize tools. Most pegboards are mounted securely in workshops and hipster lairs, where the walls don’t move appreciably. But what if you wanted to design a pegboard for use on a boat? or just someplace prone to earthquakes? This toy allows you to explore how far a particular peg configuration would allow you to tip the board before your object falls off.
I plan to make my toy from a small sheet of plywood in which I will drill a 1″-pitch grid of 3/8″ holes. The user tests out different support configurations by inserting supplied dowel pins into these holes. A square block of wood will serve as the planar object to be fixtured. The board can be suspended by the attached screw eye and rotated about the pivot point to vary the direction of the gravitational force.
I wrote a simple MATLAB script to model the stability of a planar object supported by three pins. The script takes the coordinates of the pin contact points as well as the orientation of the weight vector (theta) as inputs and returns the reaction force developed at each contact point. The values of theta that correspond to sign changes in the pin reaction forces should be interpreted as the limits of stability for the support configuration. My MATLAB model neglects friction, both between the object and support pins, as well as between the object and the underlying board surface, so one can expect the analysis results to vary slightly from actual system behavior.
|
OPCFW_CODE
|
Is "Tomcat 7 JDBC Connection Pool" good enough for production? And how is it compare to BoneCP?
Our site get roughly 1M pv/day, and we use Tomcat for sure.
I couldn't find much information about jdbc-pool, not sure if it's stable enough for production. Anyone got experience on it? and any configuration/tuning stuff for reference?
As someone mentioned, BoneCP might be another choice. But seems it's discontinued (so sad...). Would it be a better choice?
btw, HikariCP is too young, I would keep an eye on it as it's the latest/fastest CP so far I found.
Thanks for any advice.
not able to create tag hikaricp, could anyone help?
I'm one of the authors of HikariCP. That said, the "new" Tomcat pool is among the best we've tested. It has a lot of options, so if you plan to use it in production make sure you understand them to get a reliable configuration.
Do not confuse the new Tomcat pool with Apache DBCP, which I would avoid.
We are starting the process of abuse testing various pools, including HikariCP, with tests such as bouncing the DB underneath the pool and measuring the resulting recovery. Check out site for results in the coming weeks.
EDIT: Re: HikariCP being too young. Young though it may be it has had several billion transactions run through it. As with anything, I would suggest you try it in a pre-production environment before deployment. But the same goes for any pool you might choose.
UPDATE 2015-06-01: I want to revise my statement above somewhat, it seems that Apache Commons DBCP is active once again, and has taken over for the dedicated/forked Tomcat DBCP. The refactors in Commons DBCP appear at first glance to be significant, and positive. However, due to their magnitude and despite being under the old Commons DBCP banner, I would characterize the pool as less mature than HikariCP at this point.
I must say "GOOD JOB" to the HikariCP team, and I'll try it in some of my projects. But for production choice, maybe I should go with jdbc-pool. Any experience/links would be appreciated.
@brettw Although it's a little bit off-topic: Are you an author of HikariCP since its beginning or have you joined later after you did comprehensive tests/analysis of it?
@brettw I just started with spring boot application and it has built in tomcat JDBC-Pool. Then I started to search which pool is best, I could not find satisfying answer. Then I came to your answer which is 2 years old . Can you please update the answer and put an edit note to specify the best connection pool for server application under medium to heavy load with good concurrency support.
BoneCP is not discontinued, but consider it @Deprecated now that HikariCP is around; there's little point contributing significant resources to it now that something radically better is on the horizon. This is open-source, so let's all work collectively towards the best solution. Source: me (BoneCP author)
Hey now, don't sell BoneCP short. It has features that HikariCP will likely never have, such as connection lifecycle hooks. If a user needs those features, BoneCP is still a clear choice. Source: one of the authors of HikariCP.
Tomcat DBCP is production ready - its simply an evolution of commons DBCP.
DB conn pools are pretty simple beasts - I wouldnt regard its use as being particularly risky.
Thanks for clarify it, would appreciate if any configuration guide/best practice provided.
Config can be pretty much copied from commons-dbcp. The reference guide is pretty clear.
I would not call conn pools simple. There has been a history of serious problems with various pools because of issues both gross and subtle. Issues like thread-safety, testing validity of conn, resolving pending txns, ease of configuration.
That's what the company I'm working for is using and we haven't had any problem with it.
We've been more limited by our web server's connection to our various data servers than the speed of Tomcat's connection pool, so unless speed is very important, it's probably not something you should be concerned about. As far as reliability goes, it hasn't dropped a connection yet in any of our testing, nor have we heard about it happening on our production site.
I doubt you'll have a problem if you use Tomcat's connection pool.
as mentioned by @brettw, jdbc-pool might be complicated for newbie, could you please share some of your experience (or some links) about it?
Where I work we have a couple of systems running on the Tomcat pool.
I must say that initially it was sort of a pain to get a good understanding of all the options it provides and how their values actually impact performance and reliability.
After performing an initial trial and error phase, I have to admit that the Tomcat connection pool fits our needs perfectly. It seems robust and also have not caused any performance problems whatsoever.
With that said, I will definitely give HikariCP a try in my next project
|
STACK_EXCHANGE
|
To quote and from the PS3 NFO File: Thanks to bubba we are able to release the full source code of PS3Flow aka PuneFlow, a tool that is very similar to the famous WiiFlow. The source code is two years old and now it is up to you to make something nice out of it and release it asap!
Special thanks to GiantPune for building it. Awesome work! PS3Flow is like already like 90% complete. Find the download link attached below. On the download page you will find all the hints that are needed to get the final 10% done.
PuneFlow (PS3FLOW) goes Open Source!!!
(Two Years Ago) We had the Dream to bring PS3Flow to the Ps3. It will Flow. Bubba went out and got a great coder from the Wii scene.. that was Giantpune!!! Then I added another coder from Wiiflow (MXXXXX)...
Then we sent them two PS3 systems to build it... Too bad MXXXXX didnt do sht and pissed off Giantpune. Thats the end of the story. You know what happen Next!!! HEHE
But GiantPune got the Flow done (PSL1GHT) with the GUI and i couldn't ask for anything else.. Big Thanks To Gaintpune. Three weeks ago I went to Zecoxao and deroad=Wargio. They were helping us the past three weeks and we made a new Pkg file for 4.50.. Code Name PUNEFLOW. It is a Open Source C++. It has been tested. The covers work great and the Source is Very Clean.
I hope someone will take it, make it happen and release something nice open source asap. There are so many great coders out there at PS3Hax and other sites. Enjoy it and Lets see what comes from it.
Big thanks to Giantpune for all the hard work!
Big thanks to Bubba for testing and getting the Ps3's!
Big thanks to Zecoxao and deroad=Wargio!
PS3Flow Source Code
This is the full PS3Flow (codename PuneFlow) source code. PS3Flow is very similar to the famous WiiFlow for Nintendo Wii. The source is two years old and now it is up to some talented guys out there to finish it and make something nice out of it. Below you will find some hints to make sure one of you gets this done asap.
A big thank you to GiantPune for bulding it and to Bubba for delivering the full source code and all infos to NFORush!
Go go go... Let it flow! Some Hints for you...
You need the fuction to exe selfs...
Give a look to Eleganz...
You should add that to the source: github.com/ps3dev/PSL1GHT/blob/534e58950732c54dc6a553910b653c99ba6e9edc/ppu/include/lv2/process.h
... the function to exe an eboot. Good luck.
Giantpune / Bubba / Zecoxao / Wargio aka Deroad
Filename and Size
libps3gui-master.zip // 4.37MB
Finally, from zecoxao: This (giantpune.zzl.org/libps3gui.html) has one of the fixes for the errors (wargio's soundlib is updated with regards to this aspect)
The other issue is symbol stripping (too many symbols make the ELF unable to turn into a SELF due to its size), so don't forget to strip the symbols of the elf when compiling. this also should avoid 99% of the warnings.
Then there's an issue in cond.h (i think) in which you need to add an include to <ppu-lv2.h>
Then the freetype issue, solved by changing the directory freetype2/freetype to just freetype/ with the source inside freetype (and changing the corresponding include on the source)
Those are all i can think of right now. This is the source i messed with with most of the mistakes corrected (taking aside environment mistakes)
From Abkarino: The updated ps3soundlib need to be modified also, by updating the spu_soundlib file like that:
And now it will build successfully I had successfully build PS3Flow now. Also i had created a PKG for installing it, and i'll test it tomorrow to see if it work or not. Also i had modified the makefile script to fix some issues and to be able to build and finalize the PKG file. Here is my build log:
To quote, roughly translated: BiteYourConsole seeks to bring to the fore a new backup manager known by the name of PS3Flow (PuneFlow), for the first time and exclusively in the pages of BiteYourConsole it is released PKG file
The backup manager takes as its basis a new GUI library called libps3gui, a porting of the most famous libwiigui, graphics library for Wii, originally created to help structure the project in a more complex interface.
Our intention at the time was to confirm that they are working properly, only two options for the Path GUI mode, and while there are still advanced features that will be added later.
BiteYourConsole is committed to wanting to build a first fork named BiteYourFlow, integrating some of the features of Iris Manager suitably developed to allow full control over the new libraries.
A big thank you goes to Giantpune, zecoxao and Deroad for having supported the project to date.
|
OPCFW_CODE
|
Retrieve/insert many-to-many relationship object from Room (Kotlin)
I am working on a small project using Room. Basically a Recipe can have many Ingredients and an Ingredient can belong to many Recipes.
These are my Entities as the Documentation in the many-to-many relationship part suggested:
@Entity
data class Recipe(
var name: String
) {
@PrimaryKey(autoGenerate = true)
var recipeUID: Int = 0
}
@Entity
data class Ingredient(
var name: String
) {
@PrimaryKey(autoGenerate = true)
var ingredientUID: Int = 0
}
@Entity(primaryKeys = ["recipeUID", "ingredientUID"])
data class RecipeIngredientCrossRef(
val recipeUID: Int,
val ingredientUID: Int
)
//Docs suggested @Junction but that causes error for using tag inside @Relation...
data class RecipeWithIngredients(
@Embedded
val recipe: Recipe,
@Relation(
parentColumn = "recipeUID",
entity = Ingredient::class,
entityColumn = "ingredientUID",
associateBy = Junction(
value = RecipeIngredientCrossRef::class,
parentColumn = "recipeUID",
entityColumn = "ingredientUID"
)
)
var ingredients: List<Ingredient>
)
Then I tried to make the Dao functions following the same documentation but it only describes how to retrieve, which I applied here:
@Transaction
@Query("SELECT * FROM recipe")
suspend fun getAllRecipes(): List<RecipeWithIngredients>
Then I found this Medium article explaining what I wanted to achieve, which lead me to write this for the Insert function:
@Insert(onConflict = REPLACE)
suspend fun insert(join: RecipeIngredientCrossRef): Long
Everything works up to the insertion, I have verified that the info is being inserted, however I am doing something wrong in retrieving since it returns nothing. I can see that the reason it returns nothing is because there was nothing inserted into that table (as my insert method places it into the RecipeIngredientCrossRef table and not the recipe table), which leads me to believe the part that is wrong is in fact my insertion method.
So basically my question is: How can I retrieve a list of RecipeWithIngredients object that contains a recipe object with a list of ingredient objects as a member variable?
The documentation on Room CRUD functionality does not mention anything about this type of relationships.
I have been looking for hours on Youtube, here, the documentation and Medium but so far nothing has worked. Can someone please shed some light or point me in the right direction.
Haven't you forgot to add insert-methods for your Recipe and Ingredient tables?
@Insert(onConflict = OnConflictStrategy.REPLACE)
suspend fun insertRecipe(recipe: Recipe): Long
@Insert(onConflict = OnConflictStrategy.REPLACE)
suspend fun insertIngredient(ingredient: Ingredient): Long
So to get unempty RecipeWithIngredients result there should be three tables filled with data with your inserts from dao - Recipe, Ingredient, RecipeIngredientCrossRef. And of course they should contain id-s, that match to each other.
I have spent six days looking for the answer when you gave it to me on the first day. Well at least now I can appreciate it and you more... and have a better understanding of why that is the correct answer, I really need to start trusting other people. Thank you.
|
STACK_EXCHANGE
|
cuDNN support not available even though the library was found
I installed the latest CuPy from source. However, cuDNN support is not available (undefined symbol: cudnnSetConvolution2dDescriptor_v4), even though cuDNN seems to have been found during the compile process:
************************************************************
* CuPy Configuration Summary *
************************************************************
Build Environment:
Include directories: ['/usr/local/cuda/include']
Library directories: ['/usr/local/cuda/lib64']
nvcc command : ['/usr/local/cuda/bin/nvcc']
Environment Variables:
CFLAGS : (none)
LDFLAGS : (none)
LIBRARY_PATH : (none)
CUDA_PATH : (none)
NVCC : (none)
Modules:
cuda : Yes (version 8.0.0)
cudnn : Yes (version 6.0.21)
nccl : No
-> Include files not found: ['nccl.h']
-> Check your CFLAGS environment variable.
cusolver : Yes
thrust : Yes
WARNING: Some modules could not be configured.
CuPy will be installed without these modules.
Please refer to the Installation Guide for details:
https://docs-cupy.chainer.org/en/stable/install.html
The full log is below:
cupy_install.txt
Is it possible that cuDNN support is tied to NCCL (which I don't have)?
Environment:
CuPy version: 4.0.0b4
OS/Platform: Linux _ 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux
CUDA/cuDNN version: 8.0, 6.0.21
I guess you are mixing cuDNN v6 header file and cuDNN v7 shared library.
Could you run the diagnosis script and tell us the result?
wget https://raw.githubusercontent.com/kmaehashi/chainer-doctor/master/check_runtime.py
python check_runtime.py
You are right, at least according to the script:
========================================
Libraries
========================================
CUDA Driver : OK (version 8000 from /usr/lib/x86_64-linux-gnu/libcuda.so.1)
CUDA Runtime : OK (version 8000 from /home/ndavid/anaconda3/envs/pytorch/lib/libcudart.so.8.0)
cuDNN : OK (version 7003 from /usr/local/cuda/lib64/libcudnn.so.7)
NCCL : not found (optional)
NCCL (via CuPy) : not found (optional)
I just checked, and there are three versions of cuDNN in /usr/local/cuda/lib64/. However, libcudnn.so (without any number) points to the correct version (6). The static library is also the right version.
$ l /usr/local/cuda-8.0/lib64 | grep -i cudnn
lrwxrwxrwx 1 root root 13 Oct 3 08:21 libcudnn.so -> libcudnn.so.6*
lrwxrwxrwx 1 1000 users 17 Jul 27 2016 libcudnn.so.5 -> libcudnn.so.5.1.5*
-rwxrwxr-x 1 1000 users 79337624 Jul 27 2016 libcudnn.so.5.1.5*
lrwxrwxrwx 1 root root 18 Oct 3 08:21 libcudnn.so.6 -> libcudnn.so.6.0.21*
-rwxr-xr-x 1 1000 users 154322864 Apr 12 2017 libcudnn.so.6.0.21*
lrwxrwxrwx 1 root root 17 Oct 2 10:32 libcudnn.so.7 -> libcudnn.so.7.0.3*
-rwxrwxr-x 1 1000 1000 217188104 Sep 16 05:09 libcudnn.so.7.0.3*
-rw-r--r-- 1 1000 users 143843808 Apr 12 2017 libcudnn_static.a
It seems that the ctypes.util.find_library function that you use in the script does not follow the C/C++ shared library discovery conventions: it picks up the latest version instead of following the symbolic links (libcudnn.so -> libcudnn.so.6 -> libcudnn.so.6.0.21). I think this is definitely a bug, but I am not sure of what (CuPy, Python itself, etc)...
On Anaconda, anaconda3/lib is automatically used for library search path when building shared libraries. If you build CuPy on environment where PyTorch is installed, cuDNN v7 installed by PyTorch (anaconda3/lib/libcudnn.so.7) is automatically used at build time by Anaconda Python. You don't have control over it AFAIK.
It's an Anaconda environment with Pytorch, but as you can see from the output of your script, cuDNN was not taken from there. I compiled Pytorch myself, so it uses the system-wide cuDNN version. And it picked up version 6, not 7.
(BTW TensorFlow on Anaconda did bring cuDNN v7 with it. However, I had already removed them before trying to compile CuPy, because it caused a similar problem when compiling Pytorch.)
Sorry for not responding for a while. As of now, no similar issues are reported by other users.
As it seems this is not a bug in CuPy, I'm closing this issue.
Please use the forum https://groups.google.com/forum/#!forum/cupy if you need further assistance.
|
GITHUB_ARCHIVE
|
System doesn't boot after lazy unmounting the partition that had Xubuntu on it
I was trying to run an fsck on my Xubuntu partition, but I needed it unmounted apparently, so I opened recovery mode and ran this command:
sudo umount -l /dev/sda5
Apparently lazy unmounting unmounts even with running processes so I just did that without thinking much about it, since I'm unexperienced with Linux. And everything seemed normal as it unmounted /dev/sda5.
Tried running fsck but it didn't work. Gave up and restarted the PC, and it didn't boot. Just went straight to the BIOS screen after a few seconds. And that's how it is now.
I got an Bootable USB Drive with the Xubuntu Installation ISO to check the files and to try and do something at the terminal, so I discovered all the files are still there, and the Xubuntu partition seems unharmed. Then I tried to fix everything with Boot-Repair, and it asked me to create a BIOS Boot partition with Gparted, so I tried, but it gave an error on Libparted saying
input output error on read /dev/sda
And that's where I am now. I have Windows 10 and Xubuntu installed on this system.
So you already tried applying fsck from the Bootable usb to each partition /dev/sda* ? If so there is a possibility (not a certainty) that /dev/sda is corrupted beyond repair, or even if fixed by fsck would soon be corrupted again. I would recommend getting another disk and using the Bootable USB to backup all your personal files from Window and Ubuntu if you haven't already. You might get some complaints when trying to copy the corrupted files and will have to skip those. Then you can either continue trying to fix /dev/sda or reinstall windows and ubuntu on a new disk.
I have not actually, I'll try running fsck on each partition now and will update after I do it.
Not sure about /dev/sda being corrupted. The Ubuntu partition is, indeed, corrupted. Can't even see the files.
And with Ubuntu unmounted in the usb, fsck didn't work? When you say you couldn't read the Ubuntu files, that means you mounted /dev/sda doesn't it? fsck won't work when the Ubuntu file system is mounted. --- Again, I will emphasize, if the disk physically failed, e.g., it was not corruption due to a power outage or other forced shutdown, but rather disk physical failure, then you should prepare a new disk because once disks begin to exhibit that behavior it is likely to happen with increasing frequency.
I ran fsck with the partition unmounted. It said something about the superblock being corrupted. It gave me some options to try and one of them actually worked, but after doing the check, it gave an error at the end
|
STACK_EXCHANGE
|
Rust need help refactoring excessive functions
(i should mention im pretty new to rust)
hi! im building a 2d particle simulation game using a Vec to hold structs with the info on each particle. right now i need to write out a separate function every time i want to check if an element is touching something with a certain property. basically it searches around the particle in a circle by calculating the index of that position then compares that structs property to the target property like so:
//check around particle for corrodable particles
pub fn check_touch_corrode(screen: &mut Vec<Particle>, x_pos: usize) -> usize {
if screen[calc::ul(x_pos)].corrode {return calc::ul(x_pos)} //if particle can corrode return particle
if screen[calc::u(x_pos)].corrode {return calc::u(x_pos)}
if screen[calc::ur(x_pos)].corrode {return calc::ur(x_pos)}
if screen[calc::l(x_pos)].corrode {return calc::l(x_pos)}
if screen[calc::r(x_pos)].corrode {return calc::r(x_pos)}
if screen[calc::dl(x_pos)].corrode {return calc::dl(x_pos)}
if screen[calc::d(x_pos)].corrode {return calc::d(x_pos)}
if screen[calc::dr(x_pos)].corrode {return calc::dr(x_pos)}
x_pos //else return own position
}
//check around particle for flammable particle
pub fn check_touch_flammable(screen: &mut Vec<Particle>, x_pos: usize) -> usize {
if screen[calc::ul(x_pos)].flammable {return calc::ul(x_pos)} //if particle flammable return particle
if screen[calc::u(x_pos)].flammable {return calc::u(x_pos)}
if screen[calc::ur(x_pos)].flammable {return calc::ur(x_pos)}
if screen[calc::l(x_pos)].flammable {return calc::l(x_pos)}
if screen[calc::r(x_pos)].flammable {return calc::r(x_pos)}
if screen[calc::dl(x_pos)].flammable {return calc::dl(x_pos)}
if screen[calc::d(x_pos)].flammable {return calc::d(x_pos)}
if screen[calc::dr(x_pos)].flammable {return calc::dr(x_pos)}
x_pos //else return own position
}
this isnt really scalable considering im planning on having hundreds of elements with all kinds of different properties and interactions. im really wondering if there is any way to reduce this down to one function. ive been messing around with it on my own for a while now and havent been able to make any progress. the issue im running into is that the comparison needs to be calculated in the function and cant be passed in as far as i can tell. is there any way to fix this? like make it so i pass something that says which field of the struct i want to compare after it calculates that the structs index?
I managed to find a solution!
pub fn check_touch(screen: &mut Vec<Particle>, x_pos: usize, criteria: impl Fn(Particle) -> bool,) -> usize {
if criteria(screen[calc::ul(x_pos)]) {return calc::ul(x_pos)}
//do this for every direction
}
then call it like
check_touch(screen, x_pos, |p| p.corrode)
credit to r/Erelde on reddit for giving me advice
FWIW an other possible improvement would be to iterate on a slice of properties (they should all be fn(usize) -> bool) e.g. for dir in &[calc::ul, calc::u, calc::ur, ...] { ... https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=794e107625a576a269b393639408b0e1 though optimiser might be somewhat less happy about it.
I would go in this direction, with iterators, filters, and predicates:
fn neighbors(x_pos: usize) -> impl 'static + Iterator<Item=usize> {
(0..8).map(move |n| match n {
0 => calc_ul(x_pos),
1 => calc_u(x_pos),
// and so on for 2-7
_ => panic!("shouldn't get hereA"),
})
}
fn first_corrosive_neighbor(screen: &Vec<Particle>, x_pos: usize) -> Option<usize> {
neighbors(x_pos).filter(|p| screen[*p].corrode).next()
}
fn first_flammable_neighbor(screen: &Vec<Particle>, x_pos: usize) -> Option<usize> {
neighbors(x_pos).filter(|p| screen[*p].flammable).next()
}
I think you want something like this:
// The function returns a tuple of indices, instead of having two functions that return one index each
pub fn check_touch_corrode_flammable(screen: &mut Vec<Particle>, x_pos: usize) -> (usize, usize) {
let mut results : (usize, usize) = (x_pos, x_pos);
// We keep track so that we can skip checking cells
let mut found_corr = false;
let mut found_flam = false;
let ul = screen[calc::ul(x_pos)];
if ul.corrode {
results.0 = ul;
found_corr = true;
}
if ul.flammable {
results.1 = ul;
found_flam = true;
}
let u = screen[calc::u(x_pos)];
if !found_corr && u.corrode {
results.0 = u;
found_corr = true;
}
if !found_flam && u.flammable) {
results.1 = u;
found_flam = true;
}
let ur = screen[calc::ur(x_pos)];
if !found_corr && ur.corrode {
results.0 = ur;
found_corr = true;
}
if !found_flam && ur.flammable) {
results.1 = ur;
found_flam = true;
}
// Continuing like this...
// ...
// And then at the end:
let dr = screen[calc::dr(x_pos)];
if !found_corr && dr.corrode {
results.0 = dr;
found_corr = true;
}
if !found_flam && dr.flammable) {
results.1 = dr;
found_flam = true;
}
results
}
The main advantage of this solution is that it calculates the indices for the cells we are checking in each direction only once.
Is this more or less what you needed? Lemme know if you need more help :)
thank you so much for your response! unfortunately i dont think that will work for what im trying to do here. not only will that not scale very well, but i dont want to check for every property at once. my goal is to have a function that i can tell to look for one specific property in the surrounding particles. right now im doing that by writing out a separate function for each property but ideally id want to just have one function that accepts a variable containing the property i want to search for then scans the surrounding particles for that property
Ahh, I see! Okay, I misunderstood ^^. BTW, it's great that you managed to find a solution, congratulations! <3
|
STACK_EXCHANGE
|
I am working on a bare metal project which involves running a real time protocol encode/decode application. The project originally started using TI’s OMAP4460 device (containing a dual core ARM Cortex A9 clocked at 1.2GHz) but for a number of reasons (mostly hardware related) we have move to the SCM-iMX6Q. I am evaluating the performance of the SCM-i.MX6Q using a QWKS-SCMIMX6 off the shelf development board, comparing it to the OMAP.
Our evaluation involves running some sample protocol encode/decode routines on arrays of data in memory (so does no external I/O). It all runs on a single ARM core, the others beings disabled. We only have access to the object libraries for this evaluation code (which is provided by a partner company in the project) this is built using TI Code Composer, and is in fact the same object code which I can run on the OMAP or the SCM-i.MX6Q. (i.e. I can link exactly the same libraries into my OMAP project as my SCM-i.MX6Q project). In both cases the device initialisations have come from the standard U-Boot sources for each device type, (clock and memory configurations, DCD configuration etc.).
In the SCM-iMX6Q the ARM is clocked at 800MHz and the OMAP at 1200MHz, the OMAP also uses the same PoP LPDDR2 RAM as the SCM-i.MX6Q, so I would expect running the same code in the same circumstance I would see roughly two thirds of the performance of the OMAP when running on the SCM-iMX6Q. Unfortunately the performance difference I see is huge,
Overall decoding and encoding finished: 16757906 = 16.757mS
Overall decoding and encoding finished: 136581353 = 136.581mS
The OMAP is more than 8 times faster! These timing are taken using the internal ARM performance counter. The test routines run exclusively on the processor with nothing else running and interrupts disabled, so it is pure “number crunching”. The test is very processor and memory intensive.
I have the L1 I/D caches enabled, the L2 cache is enabled, and the MMI is configured to map all of the LPDDR2 RAM addresses as cacheable (TTB_ENTRY_SUPERSEC_NORM equ 0x55C06). The clock settings appear to match what I see if I boot Linux then stop in U-Boot and display the clock settings. Using the same display code from U-Boot built into my project (after my initialisation of the hardware) I see these clock settings, which match what I see if U-Boot does the initialisation.
PLL_SYS 792 MHz
PLL_BUS 528 MHz
PLL_OTG 480 MHz
PLL_NET 50 MHz
ARMCLK 792000 kHz
IPG 66000 kHz
UART 80000 kHz
CSPI 60000 kHz
AHB 132000 kHz
AXI 198000 kHz
DDR 396000 kHz
USDHC1 198000 kHz
USDHC2 198000 kHz
USDHC3 198000 kHz
USDHC4 198000 kHz
EMI SLOW 99000 kHz
IPG PERCLK 66000 kHz
I am beginning to think that the problem has something to do with the L1 cache in the SCM.iMX6Q. If I do not enable the L1 cache in the SCM.iMX6Q I see only a small amount of difference in the performance, however if I do the same in my test using the OMAP there is a huge difference in performance (the OMAPs encode/decode times become 85mS). Is there something I am missing about configuring the L1 cache which is different to the ARM in the OMAP?
Clearly I am using difference build environments, Code Composer for the OMAP and IAR Workbench for the SCM-iMX6Q. So just in case the different C libraries were the cause of the problem I have tried building my project for the SCM-iMX6Q using TI’s C library instead of the one provided by IAR. It makes no difference at all to the timings.
I have been investigating this problems for some time now and am really running out of ideas as to why there is such a large difference in performance. Either there is some device configuration I have overlooked or there is really a big difference in the architecture between these two devices which is beyond my control. Any help would be much appreciated!
Original Attachment has been moved to: Boot_DCD.c.zip
|
OPCFW_CODE
|
Computational error on chi-square test for homogeneity
I have the following categorical data
Control Treatment
c1 285441 33296
c2 40637 4187
c3 737113 97433
c4 34036 3993
In other words, I have 2 multinomial distributions with 4 categories each. In effect, I would like to test to determine whether or not the treatment changes the distribution of category mix (c1,c2,c3,c4).
A quick glance at the data shows the following proportions for control and treatment respectively. For example, I calculate $p_{c1} = \frac{285441}{285441 + 40637 +737113 + 34036}$
Treatment (to 3 decimal places):
$p_{c1} = .260, p_{c2} = .037, p_{c3} = .672, p_{c4} = .031$
Control (to 3 decimal places):
$p_{t1} = .240, p_{t2} = .030, p_{t3} = .701, p_{t4} = .029$
Now it seems to me that, while there are some differences in the relative distribution of categories between control and treatment, this difference is not super drastic. So I'm going to run a chi-square test for homogeneity at significance level $\alpha = .0005$ (yes I know very small alpha). In our case, the degrees of freedom is 3, so we reject if we get a chi-square statistic $>17.7299$.
Under the null hypothesis, we expect the context and treatment to be the same. We calculate the MLE $\hat{p_1}$, which is the probability of landing in category 1. It is calculated as follows.
$\hat{p_{1}} = \frac{285441 + 33296}{\text{total count of both context and treatment}} \approx 0.258$
I'll calculate the first term of of my $\chi^2$ statistic (expected count for control in category 1), which I denote $E_{c1}$. The observation, denoted $O_{c1}$, is 285441. Hence, we have
$E_{c1} = \text{total count control} \times \hat{p_{1}} \approx 282919.389$
The first term of the chi-square statistic is given to be
$ \frac{(E_{c1} - O_{c1})^2}{E_{c1}} = 22.475$
So from the first term alone, we are already in the rejection region. I redid my calculations and I compute my statistic (following the same approach above) to be $542.5772$. I don't trust my numbers, so I was hoping to verify that I am not misusing the chi-square test for homogeneity or making some idiot computational error. Hopefully, that clarifies the question more. Thanks!
Please add the self-study tag, read its tag-wiki.
Here's how to think on your feet about contingency tables: The square root of a cell's expectation is the likely amount of random variation in its value. (The counts will be approximately Poisson, so their standard deviations will be proportional to the square roots of their expectations. Your knowledge of the 68-95-99.7 rule tells you that a deviation from this expectation of more than 3 SDs will be rare.) E.g., the root of 282919 is 532. The count of 285441 is<PHONE_NUMBER>19=2522 high, almost five SDs: it's too high.
They're clearly not homogeneous; the second row has a ratio of nearly 10:1, the third row only about 7.5:1. There's no need to test, with counts that large this will quite obviously be significant at any reasonable significance level.
However, I don't agree with the chi-square value you got. I calculated it "by hand" in R and got a much lower value than you give, and using chisq.test I got the same value (to about 5 figures). It should be a number quite a bit less than 1000.
If you want us to explain what you did wrong, show how you got that chi-squared value. What were your $E_i$ values? How did you calculate them?
Also, your significance level is wrong in two different ways at once; firstly, it should be small, not large* (I've already discussed this on another question of yours), and secondly, if the critical value really is 17.73, you calculated the significance level's complement incorrectly (leaving out a 9). The actual significance level you're using there is 0.0005 not 0.005.
*For a simple null, the significance level is the probability of rejecting the null hypothesis when it's true. You want to do that rarely, not often. Fortunately, your critical value is large, so you will do that rarely, but then you have to describe the significance level correctly; it's a small number.
Your first expected value calculation looks to be correct to the 3 figures you're working to (nice level of detail on that part, too; that helps us to spot errors). Also your new chi-square value matches mine to at least 3 figures. So unless I'm making the same errors you are*, you're doing it correctly.
*(we all make errors, and me as much as anyone, so it could happen)
+1 This answer is remarkable for its insight and thoroughness--you caught a lot of pitfalls that (I believe) most readers would at first sight have overlooked. (BTW, I upvoted the question, too, because it is unusual in providing an explicit context--helping you to figure out so much--and I don't think it should be faulted for including mistakes, but rather praised for making instructive ones.)
Hey. Thanks for the answer. I think I miscommunicated a few things so I clarified the question. Hopefully, my question will be more clear now. Would still very much appreciate your input.
I updated my answer to respond to your update
|
STACK_EXCHANGE
|
Added async feature to Linux OS
In order to use async play of music in Linux I had to use threads. There is a main section where a function is that plays the file Linux. I start that function as a thread. If block is true, I immediately join the thread so it must complete before its parent continues. Otherwise it should execute on its own until completion.
I see the project is no longer active. I plan to continue with this project. My main goals are to fix the memory leak in windows if it exists and make async work in Linux. I am pretty sure I know how to do both of these things. Additionally, I may make a function to stop the sound being played (in async mode only) Those simple features would allow this program to be used in video games much easier than other programs. I will likely fork this project to do this, but if someone would like to help with testing the OSX functionality that would be great as I only use Linux and Windows at this point.
I recommend you look at Boombox - they forked this already and told me basically the same thing.
I agree with you that testing is an enormous issue… if I had a way of running automated tests for this project, I’d be much more willing to accept pull requests on it. When I did support it, I found that most pull requests would break it in one configuration while fixing another (IE, fix special characters in Windows 7 while breaking it for ASCII in Windows 10, or fixing one thing for Python 2.7 while breaking another for Python 3.)
I don’t know how I could easily test all the dozens of different configurations that people want supported…
And so I don’t. Having the list of working platforms remain static seems better than letting it randomly change from version to version.
Taylor
Sent from my iPhone
On Jun 18, 2021, at 09:17, garydavenport73 @.***> wrote:
Closed #70.
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub, or unsubscribe.
Hi Taylor,
Thanks for getting back to me.
Sorry for the late reply.
I think it makes sense to do what you are by sort of freezing the project
for the most part. Its kind of hard to make it backwards compatible
and forward compatible because so many things change, like modules being
deprecated and so forth.
I do really like your module, so thanks for making it. When I went from
Windows to Linux I hadn't realized that it only had the sync feature, so
I started looking at the module a little bit to see what I could do. I am
a software engineering student right now and learned a lot from looking at
your code.
I did my best to address whatever issues I ran into and forked my project
here:
https://github.com/garydavenport73/playsoundtoo
which contains a single function.
The page does not look much different from yours at the moment, because
when you fork it makes an identical copy. And without
much thought I named it playsoundtoo, which I plan to change.
Whatever module I make though, I wanted to have a function in it called
playsound with the same arguments.
In other words, I may make a module for example called 'garyssoundplayer'
and then in the code, you can put the statement:
from garyssoundplayer import playsound. I kind of wanted to keep the
function name the same, so that it would be backwards
compatible with any prior code, except for the import statement.
I really appreciate that idea of making modules without dependencies.
I hope what I have done so far is ok with you, and I can rename all my
functions and github repository to be very different from yours if
you would like. Of course, when you first fork, the fork is identical, so
I will need to work on making that fork more my own. But I just
wanted to let you know what is going on.
Thanks again,
Gary
On Fri, Jun 18, 2021 at 1:46 PM TaylorSMarks @.***>
wrote:
I recommend you look at Boombox - they forked this already and told me
basically the same thing.
I agree with you that testing is an enormous issue… if I had a way of
running automated tests for this project, I’d be much more willing to
accept pull requests on it. When I did support it, I found that most pull
requests would break it in one configuration while fixing another (IE, fix
special characters in Windows 7 while breaking it for ASCII in Windows 10,
or fixing one thing for Python 2.7 while breaking another for Python 3.)
I don’t know how I could easily test all the dozens of different
configurations that people want supported…
And so I don’t. Having the list of working platforms remain static seems
better than letting it randomly change from version to version.
Taylor
Sent from my iPhone
On Jun 18, 2021, at 09:17, garydavenport73 @.***> wrote:
Closed #70.
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub, or unsubscribe.
—
You are receiving this because you modified the open/close state.
Reply to this email directly, view it on GitHub
https://github.com/TaylorSMarks/playsound/pull/70#issuecomment-864188837,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/ARTSKF4PS5N7QUMJULOTZFTTTOA6DANCNFSM46YPC4CQ
.
|
GITHUB_ARCHIVE
|
hitchhiker - a good list but I hope not too bitcoin-oriented: they got compormised a few times recently. :(
Your comment "Distributed crawling would have to be done carefully" would have to be examined carefully. As a webmaster I could accept distributed crawling IF the crawling IPs were known. This excludes dynamic (broadband) based crawlers but could be server-based (eg using spare capacity of ordinary web servers), POSSIBLY with a reverse DNS entry, but the combination of IP and UA should suffice. Another warning, though: do not crawl from a cloud - the IP cannot be readily determined.
It occurs to me that in the early days the SE hosting server may have spare capacity for crawling.
Not sure about a "million accounts" but in any case I'd suggest purging accounts unused for (say) 12 months. On the other hand, allowing the creation of multiple accounts may help in tying down spam sites.
Going back to the old days of "add a site" is a good idea. I've given up (for now) adding (or updating old) sitemaps to sites: G doesn't seem to care and I've never had a problem with the top-3 crawling anyway.
It's a shame that frames are no longer an acceptable part of the web since a "spam" button could be permanently displayed on the frame. However, opening each clicked-on SERP in a new tab or window which would then need to be closed after use would return visitors to the index and a "Report Spam" button against each item. I say "would be returned..." but this depends to a certain extent on the brower setup.
lexipixel - you mean like DMOZ? An eventually corrupted system that G tried to claim was authorative despite most people being unable to submit sites to it. An SE needs crawlers to get content anyway. Human editors just could not cope.
|lexipixel - you mean like DMOZ? An eventually corrupted system that G tried to claim was authorative |
Sort of, but not exactly. I was DMOZ editor for several years so, some of my thoughts are based on the ODP model.
What I have in mind is more a network of directory sites with a common data format and some interchange mechanisms.
It is likely to be heavily spammed.
|Going back to the old days of "add a site" is a good idea. |
This is a major problem with building such a search engine - relying on the user to detect spam. It is not a good way of handling things because some of the content that can end up in a blind crawl search may be potentially illegal.
|a "Report Spam" button against each item |
Something like ODP's RDF format? Again the issue of who makes money from all this arises.
|What I have in mind is more a network of directory sites with a common data format and some interchange mechanisms. |
What most people do not see with the web is that it changes from day to day. Thousands of domains drop and thousands more are registered. One of the biggest issues with DMOZ was that it had no ability to self-clean its index. Domains that were in DMOZ were often dropped and reregistered. They remained there despite often no longer having the original content or owner. This kind of thing also affects SEs.
I think "add a site" would be less spammed than auto-find by crawling.
I would only expect a small percentage of users to hit the "spam" button but that's still better than none.
One anti-spam method for both "add a site" and crawl may be to pay attention to the various reports of evil DNS and domain name registrars. Both can be discovered to at least a reasonable degree (which is why I do not understand the domain registration agencies failure to pick up on this).
A vast number of the domains registered and dropped each day are registered by criminals and a fair number are used for virus-serving sites for a few days before being rumbled; and some of the more robust ones find their way into search engines. Something else that domain registrars could detect and deflect.
Anti-spam/virus measures on an SE would need someone permanently assigned to the problem; that could be a downer on a start-up.
Lotts of great idea's coming through. Please keep them coming.
Ok so my first problem with a new open source search engine is it has no data, nothing so when people come to it they are bound to be disappointed. So I used a Bing script to bring in some results so that we wouldn't have an empty search result. These results come in below the results for own database.
Next I needed a crawler, open source, this will need to be replaced with a P2P crawler but to start with I grabbed a download of Sphider.And did a quick crawl of w3c.
Next I needed a way to get people to add there sites and thought of the suggestions about human review and Dmoz. In the end I thought the easiest way would be for people to bookmark them. The sites could then be voted up or down and those that gained positive votes would then be crawled. So I downloaded SemanticScuttle for this.
Lotts more to come, I have added the url of the site to my profile. I can't and don't want to do this alone so please get involved. Anyone out there that can design a logo?
The meatbots would be all over it. It is a major issue with web directories and this would be magnitudes larger.
|I think "add a site" would be less spammed than auto-find by crawling. |
Possibly but then you run into the negative SEO issue of people trying to knock out their competitors.
|I would only expect a small percentage of users to hit the "spam" button but that's still better than none. |
Again it is a moving target. The registrars are not in the business of managing the entitlement of each registrant to a domain name - with gTLDs, they are simply in the bulk domain registration business.
|One anti-spam method for both "add a site" and crawl may be to pay attention to the various reports of evil DNS and domain name registrars. Both can be discovered to at least a reasonable degree (which is why I do not understand the domain registration agencies failure to pick up on this). |
I've seen the same claims made before but they are wrong. The vast majority of domains registered each day are registered by ordinary people and businesses. The five day window in which a domain can be dropped without payment was abused but for domain tasting. If a registrar's five day deletes go above a certain percentage each month then they have to pay a percentage of the registration fee. This has significantly reduced the problem. There is also a development curve from the registration date of a domain to a fully functional website appearing (if ever) on that domain name.
|A vast number of the domains registered and dropped each day are registered by criminals and a fair number are used for virus-serving sites for a few days before being rumbled; and some of the more robust ones find their way into search engines. Something else that domain registrars could detect and deflect. |
If the SE uses the moronic GIGO approach used by Google and the other major players, it would need more than a single person. It is a continually changing threat environment and a link that might have been good yesterday could be hacked today and carrying a malware payload.
|Anti-spam/virus measures on an SE would need someone permanently assigned to the problem; that could be a downer on a start-up. |
Auto-find, otherwise known as Blind Crawling, is a very inefficient way of finding new websites. It also is junk prone. However the real issue is that due to the FUD by Google and its cargo-cult SEOs, the link structure of the web is decaying. Sites no longer heavily link to each other and this means that it is far harder to find sites. Reciprocal links, especially at the index page level are becoming rarer.
> The meatbots would be all over it
There are ways of detecting (most?) auto-submission agents, same as detecting auto-scrapers.
> negative SEO issue
Agreed. Not sure how to avoid that one.
Domain names / DNS - there are indicators in DNS and certainly some DNS servers are very suspect. I agree it would take a lot of work but what is the project's aim - to avoid as much spam as possible. Some DNS servers are "obviously" compromised and could be trapped.
I saw some stats that gave the number of criminal domains registered per day and was very surprised.
seoskunk - make sure to set an unique user-agent string with a url pointing to the bots page of the "site", even if the real SE has no existence as yet. For a new bot there should be at least a minimum policy set out - "We do not sell on" etc.
Before adding sites you could check there WOT rank, see [mywot.com...] Only add sites with a good reputation. If you dont want adult content you can also check the child safety.
Instead of manually adding sites you could choose to start with the 1 million top sites of Alexa.
Yes but the meatbots sometimes do manual submissions. The auto-subs are easier to detect.
|There are ways of detecting (most?) auto-submission agents, same as detecting auto-scrapers. |
The negative SEO one is manually intensive as it would require people checking to ensure that it is a legitimate delisting request.
This does get back to the "bad neighbourhood" concept and it is certainly a valid one because problem DNSes and website clusters exist. I do a lot of domain / DNS work because of my main website. At the moment, I'm running a full gTLD website IP survey.
|Domain names / DNS - there are indicators in DNS and certainly some DNS servers are very suspect. I agree it would take a lot of work but what is the project's aim - to avoid as much spam as possible. Some DNS servers are "obviously" compromised and could be trapped. |
Again, you have to be very careful about these numbers and their sources. An example was that someone was claiming, based on being a network administrator or avid reader of "technology" journalism that most domains registered and dropped within the five day window over the last five years or so were registered for spam. This was absolutely wrong because the complete day's drop on some days was being reregistered by domain tasters. The domains would have websites with PPC advertising and if they made a predetermined amount in those five days, they would be retained and if not dropped again. ICANN was shamed into changing the five day window (where a domain could be dropped without the registrar having to pay the registration fee) regulations. The system was corrupted at a registrar level. However what was being done wasn't technically illegal. The standard drugs/warez/pron sites do exist but some of these may use existing websites rather than new domains. As I said, you've got to be quite cynical when it comes to "technology" journalism because a lot of it consists of recycled press releases and often these press releases come from companies (anti-virus/anti-spam/anti-* ) trying to sell something. The real problem, from a search engine developer's point of view, is the number of compromised websites. Too many compromised websites in an index makes it a toxic index and then you have the same problem as Google.
|I saw some stats that gave the number of criminal domains registered per day and was very surprised. |
its out of the scope of this thread, but what kind of IP survey are You doing? I ask this because i do a lot of IP/BGP/routing research.
@bhukkel Mapping every gTLD website by IP address and then using that data for other research. It sounds simple when it is written like that. :)
| This 41 message thread spans 2 pages: < < 41 ( 1 ) |
|
OPCFW_CODE
|
Given the problems we face with in the real world, designing objected-oriented software became a difficult activity for programmers. In addition, designing reusable and extensible software is even harder, at same time it is essencial for the dynamic software industry that exists today. Because of that, Design Patterns gained popularity in computer science in 1994, after the book Design Patterns: Elements of Reusable Object-Oriented Software, with the promess of speed up the software development process. But, what does it mean and do?
What is Design Patterns?
“Design Patterns describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.” This definition, given by Christopher Alexander, can be applied to every single envinroment, including an arquitecture environemnt or a software development one. So, the purpose of Design Patterns is to leave recorded a solution to a problem that occurs more than once. Thus, when someone faces with a problem that was already recorded, or similar to it, it’s not necessary to lose the same time thinking about its solution. The solution for that problem already exits and you just have to reuse it. That is Design Pattern!
Design Pattern Elements
Now that we know what Design Pattern is, we want to know how to find a pattern to resolve our problem. For that, each pattern has 4 elements used to describe it.
- Pattern name: is a handle we can use to describe a design problem, its solutions, and consequences in a word or two.
- The problem: describes when to apply the pattern. It explains the problem and its context.
- The solution: describes the elements that make up the design, their relationships, responsibilities, and collaborations.
- The consequence: are the results and trade-offs of applying the pattern
Reading this 4 elements, you are able to identify the puporse of each pattern and if it can be applied to your problem. If so, congratulations, you have been awared.
Catalog of Design Patterns
The existing Design Patterns are grouped into the following categories:
- Creational Patterns: Abstract factory, Factory method, Builder, Lazy initialization, Object pool, Prototype, Singleton, Utility.
- Structural Patterns: Adapter, Bridge, Composite, Decorator, Facade, Flyweight, Proxy
- Behavior Patterns: Chain of responsibility, Command, Interpreter, Iterator, Mediator, Memento, Observer, State,Strategy, Specification, Template method, Visitor, Single-serving visitor, Hierarchical visitor
I will not explain them in details now, but I’ll reserve some future posts to the most importants patterns so we can learn their aplicability.
Design Patterns is a good aproach to solve software’s problems in a reusable way. It happens because some software design requires considering issues that may not become visible until later in the implementation. In this case, it’s important to look at the past, see what had been done to solve problems like the ones that you are having now and, thus, reuse past work (don’t reinvent the whell). Therefore, reusing design patterns helps to prevent subtle issues that can cause major problems, and it also improves code readability for coders and architects who are familiar with the patterns.
|
OPCFW_CODE
|
To program or to result in a Monarch slave, at the very least yet another person on The within is required, termed a "handler".
We give you an easy nevertheless incredibly productive issue: delegating your papers to Experienced writers. It may appear insignificant originally, but in the long run it typically seems crucial for your personal tutorial and vocation accomplishment. Allow us to inform you much more.
All condition has to be eradicated or revealed. Possibly can be a reasonable layout determination. An atmosphere that does neither -- forcing learners to imagine the condition and make sense of functions that generate no seen impact -- is irresponsible style, and disrespectful to your learner.
In the 2nd version of Extreme Programming Spelled out (November 2004), 5 years immediately after the very first edition, Beck included more values and techniques and differentiated in between Key and corollary methods.
As the price of a variable may differ as time passes, demonstrating the information is intimately connected with demonstrating time.
In the example over, the home is currently abstracted -- the code won't just draw a single set residence, but can attract a home everywhere. This abstracted code can now be accustomed More Info to draw a variety of homes.
The above Get More Information mentioned case in point encourages the programmer to investigate the readily available features. A learner who would under no circumstances Feel to try typing the "bezier" functionality, with its unfamiliar identify and eight arguments, can now effortlessly bump into it and uncover what It really is about.
How come we evaluate the code satisfactory and the UI not? Why do we be expecting programmers to "search for" features in "documentation", even though modern day user interfaces are designed to ensure that documentation is often needless?
Now, consider When your cookbook suggested you that randomly hitting unlabeled buttons was the way you find out cooking.
A straightforward comparison will place it into standpoint: it takes about 5 minutes to place an buy, and it requires close to eight hours to jot down a 1-website page essay.
I assumed, "Damn the torpedoes, at least this can make an excellent report," [and] asked the group to crank up all of the knobs to 10 over the points I assumed were being important and leave out every little thing else.
Each and every alter of your thoughts Regulate slave might be programmed from the check it out handler at will. For this, theoretically, hop over to here any reserve or other recommendation may be used. The one thing that issues, is that the slave has to be hypnotized appropriately to a task, which he has Similarly to act to (see less than "Delta").
The canonical work on developing programming programs for learning, and perhaps the best guide ever composed on Discovering normally, is Seymour Papert's "Mindstorms".
Electric power asserts develop into very exciting if the expressions are more complex, like in the next instance:
|
OPCFW_CODE
|
.NET Array Dictionary List String Sub ArrayList Cast Class Console Dates DataTable DateTime Enum File For Format If IndexOf Lambda LINQ Nothing Parse Process Property Regex Replace Select Sort Split StringBuilder SubstringQueue. A Queue is a generic collection in VB.NET. It implements a FIFO algorithm. With Queue, you can keep a sliding cache of elements, with the one added first always being the first to be removed.
First-In-First-Out: The element that is added first (with Enqueue) is also the first one that is removed (with Dequeue).
Note: For Each is implemented in a special way so that it shows all the internal elements in the Queue.For Each
VB.NET program that uses Queue generic type Module Module1 Sub Main() ' Add integers to Queue. Dim q As Queue(Of Integer) = New Queue(Of Integer)() q.Enqueue(5) q.Enqueue(10) q.Enqueue(15) q.Enqueue(20) ' Loop over the Queue. For Each element As Integer In q Console.WriteLine(element) Next End Sub End Module Output 5 10 15 20Enum Queue. In this example, we explore how the Queue can be used to store help requests in a system. As users request help for a program, the requests can be added to the Queue with Enqueue.
Then: Those requests (represented by RequestType) can be read with Dequeue after testing them with Peek.
VB.NET program that uses Enum Queue Module Module1 Enum RequestType As Integer MouseProblem TextProblem ScreenProblem ModemProblem End Enum Sub Main() Dim help As Queue(Of RequestType) = New Queue(Of RequestType)() ' Add some problems to the queue. help.Enqueue(RequestType.TextProblem) help.Enqueue(RequestType.ScreenProblem) ' If first problem is not a mouse problem, handle it. If help.Count > 0 Then If Not help.Peek = RequestType.MouseProblem Then ' This removes TextProblem. help.Dequeue() End If End If ' Add another problem. help.Enqueue(RequestType.ModemProblem) ' See all problems. For Each element As RequestType In help Console.WriteLine(element.ToString()) Next End Sub End Module Output ScreenProblem ModemProblemWith a Queue, the oldest (first added) help requests will be the first to be handled. With a Stack, the newest help requests would be the first to be handled. You would always provide help to people who most recently requested it.
Next: Call CopyTo and pass the reference variable to that array as the first argument.
VB.NET program that uses CopyTo Module Module1 Sub Main() ' New queue. Dim q As Queue(Of Integer) = New Queue(Of Integer)() q.Enqueue(5) q.Enqueue(10) q.Enqueue(20) ' Create new array of required length. Dim arr(q.Count - 1) As Integer ' CopyTo. q.CopyTo(arr, 0) ' Display elements. For Each element In arr Console.WriteLine(element) Next End Sub End Module Output 5 10 20Summary. In these examples, we explored some characteristics of the Queue collection in the VB.NET language. With Queue, we gain a FIFO collection implementation. This can be used to implement higher-level concepts such as a help request system.
|
OPCFW_CODE
|
Installed as an apache module dual procedural and object-oriented interface the procedural interface is similar to that of the old mysql extension. Traditional approach vs oo approach of program modules (paragraph or procedure) required to complete tasks object oriented approach similar things. This lesson presents a very brief overview of object-oriented object-oriented programming is a method of similar to a class in the java.
Project analyzer assumes that a procedure in a class is a method if it cohesion and reuse in an object-oriented of modules, the low cohesion is. Introduction to oop with python object in an object oriented language we we only need to specify the self parameter which is always passed to object methods. Procedural programming scoping is another technique that helps keep procedures modular it prevents the procedure procedural object-oriented procedure: method.
Object-oriented thinking : methods modules in ruby are similar to classes, except: a module if we want to refer to the methods or constants of a module. Object oriented programming vs procedural procedure, module, procedure call and variable in procedural programming are often referred to as method, object,. Python from scratch: object oriented these concepts are fairly similar in a lot what it does is passes the current object to that method as the. What is the difference between procedure oriented programming(pop) and object oriented programming(oop) methods modular programming d).- implementation details are hidden from other modules and other modules objectmethod() therefore, in object-oriented to use than procedure oriented. (ie object-oriented programming) by using classes procedure is similar to declaring a method, method is a wrapper around the module. In procedural languages (ie c) these modules the design method used in procedural programming one alternative to procedural programming is object oriented. Pseudo object oriented style in fortran 90 a module, which contains the methods that operate made accessible through generic procedure names. Introductory chapter on object oriented programming (oop) in python method in python is something similar, module attributeerror: 'mangling' object has no. What are the similarities between object oriented programming and procedural programming. Object-oriented programming¶ in what is called a procedural programming and make it illegal for them to be accessed from outside the object’s.
Check out our top free essays on how object oriented methods are similar to procedural modules to help you write your own essay. Object oriented programming structured programming is also known as modular programming and a subset of procedural programming language object oriented. Reddit gives you the the difference between procedural, event driven, and object comes to pass and an object-oriented program models everything as the. What are modular elements of procedural programming in c#, these are known as methods modules can be used many times.
11 understanding object-oriented programming 111 the procedural this means that you can only do to an object what the public methods in object-oriented. What's the difference between a procedural so couldn't you really say that a library in c is similar to an object in an object oriented program, modules in. The different modules in the package are imported in a similar manner as plain modules, python is an object-oriented python object with two extra methods.
Classes and objects when speaking in general object-oriented there are three ways to instantiate an object of a vbnet class one method is to. Course 2667a: introduction to programming this module introduces common programming methodologies and compares procedural programming to object-oriented. Pop quiz 4 - pointsawarded pointsmissed percentage in object-oriented languages, the procedural modules are methods that exist to be used with an object. What are the similarities and differences between procedural (object oriented and differences between procedural programming and.Download
|
OPCFW_CODE
|
Jakenovel Xin Xing Xiao Yao – Chapter 2900: An Attempt at Compromise elated dream recommend-p2
the rainbow trail southampton
Novel–Chaotic Sword God–Chaotic Sword God
miscellanies crossword wsj
Chapter 2900: An Attempt at Compromise root groan
Each sort of electricity inside established an ocean. Anyone that possessed their wits about them could explain to having a solitary glance that oceans of energies ended up being collected from a great number of cultivators.
“Yang Yutian, give each of the Soil of Divine Our blood to our Heaven’s sect, and our Heaven’s sect won’t go after many of the errors you’ve made nowadays. I’ll nice and clean the slate between us.” The truly amazing elder of your Heaven’s sect, Zhan Yun, bellowed. His eye were definitely fixed to your clump of Dirt of Divine Blood stream in Jian Chen’s hand.
Jian Chen acquired idea this through years ago. Even though he obtained the Myriad Bone fragments Guild as his guard this time around, he could not location each one of his hopes upon them. Not to mention that he or she only had a collaborative association along with the Myriad Bone tissue Guild. They could secure him for just a moment, though not for good.
Chapter 2900: An Attempt at Give up
There were clearly several moments of the patrolling squads with the Darkstar competition searching down and wiping out the outsiders once they rejected to work and experimented with combating back again.
She knew just how important Ground of Divine Blood vessels would be to the Heavenly Crane clan. Each time she acquired a tael of Ground of Divine Blood vessels in the Darkstar competition, it might strike up a mix from the clan, still at this time, she actually discovered this kind of sizeable clump of Earth of Divine Blood in Yang Yutian’s possession.
“He actually has a whole lot Garden soil of Divine Blood in their possession. Studying the unwanted weight, that’s five catties for the very least…”
“He actually has a whole lot Ground of Divine Blood as part of his thing. Studying the unwanted weight, that is five catties on the very least…”
“The reason the Darkstar competition jailed most of these cultivators of your Saints’ Planet was simply because they ended up on the verge of maintain a great wedding, wanting a lot of sacrifices. And that you can clearly see from the final result, the so-identified as sacrifices obviously provided your clansmen.” Jian Chen thrown out another ability to remember crystal since he said that.
At this point, Jian Chen switched his fretting hand, in addition to a wonderful clump of Ancestral Sacred Globe came out. Keeping the Ancestral Sacred Entire world, he secretly guarded himself up against the atmosphere and explained, “As for those superior level divine crystals the senior citizens have accumulated after a lot issues, I’ll use Ground of Divine Blood vessels to compensate you. Would that be good, elderly people?”
Because of this, he obtained made himself both for eventualities years ago. If he could deal with his trouble with these businesses, then he would do his wise to handle it. Even if he could not resolve it, he would force each of the fault to the Darkstar race and lower the enmity he experienced.
Jian Chen disregarded their phrases and praises, but he sensed closely that whenever the Mindset God clan was mentioned, lots of Chaotic Primes provide narrowed their sight.
black glasses frame
There had been also some moments of your patrolling squads with the Darkstar race seeking down and eradicating the outsiders whenever they declined to work and experimented with fighting backside.
the master’s indwelling andrew murray
Every time they been told Jian Chen review his era, a few Chaotic Primes who watched on through the facet could not assist but converse out. These were all astonished.
On the other end, Jian Chen only noticed like his body got end up as weighty for a mountain peak. Even picking up his feet grew to become complicated. He believed such as a boulder was crushing down on his torso, making his inhaling and exhaling irregular.
The memory crystal ended up being reported from the bird’s eyes see within the capital on the Darkstar competition. The scenarios depicted the power of cultivation, the forces of power, plus the capabilities of heart and soul the Darkstar race had drained from plenty of outsiders through formations.
All because it became a superior level of quality Our god Level substance for polishing Ancestral Blood drugs.
“Hmph, do you feel we lack some divine crystals? Yang Yutian, our clansmen delivered into the realm of the Dropped Beast have passed away on account of you. It is important to provide us with an explanation for this…”
Chapter 2900: An Attempt at Give up
As soon as they found the Top soil of Divine Blood vessels, the eye area of all of the Chaotic Primes provide immediately blazed with attention, their respiratory getting unequal. When it comes to He Qianqian on the Incredible Crane clan, she got turn into completely dumbfounded.
Every time they been told Jian Chen report his age group, a couple of Chaotic Primes who seen on out of the aspect could not help but communicate out. These people were all impressed.
Jian Chen disregarded their documents and praises, but he sensed closely that whenever the Soul Lord clan was outlined, numerous Chaotic Primes existing narrowed their view.
On the other side, Jian Chen only sensed like his system had become as large for a mountain. Even lifting his foot became tough. He experienced for instance a boulder was crushing upon his chest area, making his breathing in unequal.
“The good reason why the Darkstar competition imprisoned every one of these cultivators in the Saints’ Environment was because they were going to hold a terrific marriage ceremony, requiring a huge number of sacrifices. And since you can clearly see from the final result, the so-called sacrifices obviously included your clansmen.” Jian Chen tossed out another remembrance crystal since he stated that.
These top firms ended up all overlords with Great Primes. They possessed fantastic strategies where they may peer to the very secrets around the world. Jian Chen seemed to be nervous that once his true identity was revealed, it may well produce good catastrophe for the Tian Yuan clan about the Cloud Plane.
“Yang Yutian, give the many Earth of Divine Bloodstream to your Heaven’s sect, and our Heaven’s sect won’t focus on some of the faults you’ve created anymore. I’ll thoroughly clean the slate between us.” The truly great elder from the Heaven’s sect, Zhan Yun, bellowed. His eye ended up glued towards the clump of Top soil of Divine Blood stream in Jian Chen’s hands.
“He actually has a great deal Garden soil of Divine Blood as part of his possession. Exploring the unwanted weight, that is five catties within the very least…”
Section 2900: An Attempt at Affect
Chapter 2900: An Effort at Affect
On the other side, Jian Chen only noticed like his human body had grow to be as weighty being a mountain peak. Even picking up his feet has become challenging. He experienced much like a boulder was crushing on his pectoral, making his breathing in unequal.
The ability to remember crystal ended up being recorded from a bird’s eyesight see during the capital city on the Darkstar competition. The scenes depicted the forces of farming, the strengths of power, along with the forces of soul the Darkstar competition experienced exhausted from plenty of outsiders through formations.
She would have never ever dreamed of such a arena even just in her goals.
|
OPCFW_CODE
|
I love books like And Then, You Act and The Empty Space all kinds of inspirational books that are poetic and spiritual. But the problem is that I don't actually think that way myself (which is why I read the books, I suppose). I am what Rich Gold, author of The Plenitude: Creativity, Innovation, and Making Stuff, calls an "Engineer." Engineering, Gold writes, is focused primarily on "problem-solving" and "the user and the world." We work less from inner vision (that's for artists) than from seeing a problem and trying to figure out a way to fix it. "Engineers believe that within the fixed bounds of the laws of nature, there is the solution to almost any problem." That's me.
Recently, I was reading a book that discussed different "life themes" that dominate people's lives, and one jumped out at me:
Activator-The focus here is to perform tasks that others have failed to accomplish. These may be truly gargantuan or quite menial, but the focus is always on getting the job done right. Activators are the turnaround artists or the trouble-shooters of the world, the ones who successfully reverse failure.
That's me in trump. I'm not an artist, although I have a great deal of creativity and am pretty innovative. But I use that creativity and innovation to solve problems. I'm probably not going to found a new theatre, but if you have a theatre that is messed up, I can probably come in and fix it for you. And then, like the Lone Ranger, I'll tip my hat, say "my work here is ended," and ride off in search of another problem. The question I am most likely to ask about a play or a theatre is what it "does" -- what effect does it have on people, how does it change them. That's an engineering question.
So when I look at something like the regional theatre scene in America, I'm looking at it like an "activator engineer." Somehow it got messed up, and I get the fun of figuring out how to fix it. It's not about inner vision or self-expression, it's about nuts-and-bolts systems, mission statements, and guiding principles. It's about what Paulo Coehlo calls, in The Alchemist, a "personal legend" and what Joseph Campbell calls "bliss." To me, once the grit is cleared out of the system, once the purpose is clear and the values are defined, then all the wonderful artists can take over and bring beauty and wonder and poetry into existence without being prevented by a bunch of friction and short-circuits. To me, the system of American theatre is so full of grit that hardly any artists are free to really do what it is they are best at -- follow their inner vision. Instead, they are forced to think like...well, like engineers: what does the market want, how can I get my work seen? If things were working correctly, a bunch of us engineers would keep the system running smoothly so that the work that artists created would be seen and appreciated and they wouldn't be bothered with marketing and administration.
To all the artists who read this blog, I hope you'll keep this in mind. Ultimately, I'm not interested in telling you what to do as artists, I'm focused on trying to make things work better so that you can fill the world with books and plays like And Then, You Act and Angels in America that inspire me and let my engineer's mind experience vicariously what it might be like to see the world not as a puzzle, but as a miracle.
|
OPCFW_CODE
|
The WOW project created a crowd sourcing application allowing the general public to submit weather data to the Met Office. This application was the first IT project carried out for the Met Office to use the Scrum agile development approach and also to place its software and data on a cloud platform. To carry out its mission, the Met Office uses IBM supercomputers at its HQ in Exeter to process data collected from a large number of weather stations. These can vary in size and sophistication. Perhaps surprisingly the observers who supply the weather data are not employed by the Met Office. Some are the staff of organisations with a particular interest in weather conditions such as the armed forces, air traffic control, coast guards and researchers in universities, as well as trusted individual enthusiasts. Many other weather enthusiasts - including schools - collect weather data. The Met Office had no way of utilising this data. But since July 2011 there has been the Met Office WOW (Weather Observation Website) which can be found at http://wow.metoffice.gov.uk. It now has over 3,000 contributors and deals with a million new data points a week. This is crowd sourcing with a vengeance.
The Met Office commissioned PA Consulting to implement WOW. It was agreed that for first time an agile Scrum approach would be used to develop a Met Office application. Scrum is an agile method originally developed for product development projects, but has become increasingly used in the development of software, which, after all, can be seen as just another type of product. In Scrum, the development team is expected to be largely self-organising, under the guidance of a ‘Scrum Master’ who acts as a moderator. PA Consulting’s Paul Craig - who gave the presentation on WOW PROMS-G - took on this role. The Scrum approach tackles the usually time-consuming task of gathering and prioritising user needs by identifying a ‘product owner’ with authority over the features of the system to be built. With WOW, an experienced Met Office project manager took this role. Paul Craig stressed the crucial role of the product owner, particularly where the client organisation uses formalised, bureaucratic procedures to control projects, such as those enshrined in PRINCE2.
Scrum encourages a ‘get stuck in approach’ which is anathema to conventional project management which focuses on careful planning up-front. The functionality is developed in a series of sprints of about two to four weeks each. At the start of a sprint, the product owner and the development team together examine an initial product backlog recording all the required features of the product. The tasks needed to implement these features for the next sprint are recorded in a sprint backlog. A circumstance that helped the WOW project was that it was, initially at least, a standalone system. A problem if you are asking volunteers to supply information and you do not want to turn people away, is that you cannot accurately forecast the demand that might be put on your server infrastructure. The answer was cloud computing - which seems very appropriate seems appropriate for the Met Office. No dedicated hardware/software platform was set up for the application. Java applets were developed and uploaded to the platform supplied by Google Application Engine. This was very cost effective as you only paid for what you used. Some knowledgeable participants at the presentation commented that a cheap service often means scanty support. This poor support for commoditised services offered cheaply because of the large economies of scale and a focus on essential requirements rather than ‘frills’ has led to the growth of mutual help groups such as those who inhabit www.stackoverflow.com, an information exchange forum collaboratively built and maintained by software developers.
An agile approach to software development projects is not always the best one. It seems to work best in green field undertakings where everything is being built from scratch and developers do not need to worry that changes to critical systems already in use may have unexpected and undesirable outcomes. However the Met Office WOW project shows that in the right circumstances it can successfully meet objectives.
|
OPCFW_CODE
|
the world isn't getting any bigger. the amount of room on Earth for all of us to spread out and live is finite. we're not suddenly going to colonize the Moon.
and the human population continues to excel virtually unabated by global trends in education and female empowerment.
in fact, the human population seems to excel despite the difficulty of providing food and [CLEAN] water to all of the available hungry/thirsty mouths that require it RIGHT NOW; let alone 100 years from now.
it's time to sterilize.
yes. mandatory sterilization. population control for beginners.
we have the technology to sterilize human beings on a temporary basis.
sterile when you want, fertile when you don't. we have the technology to make a person sterile and then reverse it if/when they're ready to have children. but most importantly, we have the power to DECIDE who needs to be sterilized.
this isn't some Orwellian/1984/evil dictatorship nonsense either.
I'm talking standardized, safe, routine, and nominal sterilization AT BIRTH. it's painless and (because it can be reversed) worry-free. as soon as a person believes that he or she is ready for children, all they have to do is pass a simple IQ/competency test and their sterilization can be reversed. think of it like a Driver's License test. you wouldn't want someone to be sharing the road with you @ 90 MPH if they were unable to operate a vehicle or understand basic driving laws.
so why do we let people have children without any proof that they're ready for it?
show me that you're smart enough to NOT shake or strike a baby when it's crying uncontrollably.
show me that you're smart enough to NOT influence a child with your racism/sexism or other bigotry.
show me that you're competent enough to baby-proof your house when bringing home an infant.
show me that you're competent enough to take your baby to a physician when they're sick instead of just praying for a miracle.
we make people take a test to drive.
we make people take a test to join the military and handle weapons.
we make people take a test to become teachers and firefighters and paramedics.
we don't make people take a test to be parents?? to raise children? to directly affect the future of this world??
give me one good reason why we shouldn't be testing people before they're allowed to have children.
actually if you can find one good reason you would probably have to admit that it's a religious obligation tied to a religious version of morality. I cannot think of a single logical/secular reason to be against this proposal.
Edited by El_Diablo, 29 April 2013 - 08:06 AM.
|
OPCFW_CODE
|
Thermador Pro Harmony PRG305WH/04 oven not heating properly
I have a Thermador Pro Harmony range (PRG305WH/04) that's just about one year old. The oven takes an extraordinarily long time to preheat and exhibits some other odd behavior as well. The worst of the symptoms is propane gas burning inside of the air shutter of the burner tube, preventing gas from reaching the igniter and burning inside the Venturi tube as it should. Here's a link to a YouTube video I took where you can hear the wind tunnel noise the gas makes as it's burning inside of the air shutter: https://www.youtube.com/watch?v=2J1gDDVUPTE
As mentioned above, this range has been converted to liquid propane (LP) from natural gas (NG). The repair tech initially thought it was being caused by low gas pressure reaching the appliance. I had the propane company come to the house to check. He found that the inches of water column (WC) was about 11 so he adjusted the second-stage regulator to about 12.5 instead. This has had no noticeable effect on the functioning of the range.
All burners operate correctly with a clear blue flame and no noticeable orange/yellow tips. We're running out of obvious things to check, so I've made a list of what I believe to be all the involved components:
Propane supply line to the kitchen
The propane company installed a temporary T where the propane line enters the kitchen and we ran various features of the range. With all features off, we measured 12.5 WC. With all burners operating we measured as low as 11.8 WC. With only the oven operating we measured 12.3 WC. I believe all of these readings should be sufficient to operate the range, even if we had everything running at once. If not, what else can we try?
Propane supply line from the kitchen to the appliance
The propane company examined the line and didn't have any comment on whether this could affect performance of the range. Are there any supply lines to avoid, or ones that we could be using instead just to rule this out?
Range internal gas regulator
We verified that the regulator is correctly adjusted for LP, where the "LP" text is pointing up instead of down (refer to LP kit instructions, pgs. 10 and 18).
Gas valve for bake orifice
We can hear the gas valve open and gas start flowing about 6-8 seconds after the igniter comes on. I am told that it's very unlikely for the gas valve to fail or not open entirely even after many years. How can I verify the gas valve is operating correctly?
Bake orifice in burner tube
We verified that the NG orifice was replaced with the correct LP orifice measuring 1.34mm (refer to instructions, pg. 6).
Air shutter in burner tube
We verified this is full open (refer to instructions, pg. 18).
Oven Venturi tube
The repair tech replaced the Venturi tube, being sure to also swap the NG orifice on the new assembly for the LP orifice. This had no effect on performance.
Oven igniter for Venturi tube
This comes on quickly (2-3 seconds) and I believe it stays on whenever gas is flowing. Should the igniter always be on whenever the oven gas valve is open?
Gas ignites within 6-8 seconds of the gas valve opening. Should the gas be igniting more quickly?
The oven takes close to 25 minutes to reach 350 F. I can reproduce the sound in the video by letting the oven heat for a while (~ 10-15 minutes), opening the door to interrupt preheating, closing the door, letting igniter come back on and gas valve open again. After this, the gas ignites inside the burner tube right by orifice inside the air shutter, never getting gas/flame into the oven Venturi tube again. If I let it operate this way (we did once because we thought it was simply a fan running) it eventually produces a horrible smell that made my wife's eyes water and gave me a pounding headache. I'm guessing this was the smell of brass or the insulating material heating and burning off.
We don't use the oven very often but do need it for the fall and winter holidays. Since the manufacturer warranty period is near to expiring, I really need to find a resolution. Any ideas that I haven't tried yet?
Edit: this is the conversion kit that was used (https://www.ajmadison.com/cgi-bin/ajmadison/PALPKITGW5.html). Updated this question with page references from that guide (https://assets.ajmadison.com/ajmadison/itemdocs/8001148006_A.pdf).
Burning in the wrong location seems definitely warranty territory., Unless they have a published specification for preheat time, or admit that that's too long a time, that might just be how long this oven takes to preheat. Your pressure readings indicate that the oven is not using nearly as much gas as the top burners do, which does not seem normal in my gas range experiences (the oven burner is generally relatively huge.)
@Ecnerwal from https://www.appliancesconnection.com/thermador-prg305wh.html, all 5 burners would account for 59000 BTU while the oven burner should be operating at 20500 BTU. But I agree I would expect more gas flow to the oven and therefore a noticeable drop in WC from the propane source.
The BTU ratio of "All Top Burners" to "Rated Oven Burner" is 2.87. The ratio of pressure drop from static you report for the same conditions is 3.5, though that may be down to rounding (would expect 12.25 if same as BTU ratio.) Or your all burners operating reading includes the oven? In that case it's about as expected (12.32)
"Less than one year old" = Contact manufacturer for warranty repair NOW before the warranty expires!
Gas supply pressure seems suitable but it's the gas pressure right at the orifice during flow that determines how the unit performs.
Look for a kink in the supply tubing that runs from the gas valve to the orifice. Kinks can sometimes occur at the factory, and this could reduce flow and pressure enough to let flame flash back to the venturi tube.
If there are no kinks and it's possible to remove said supply tubing, remove it and blow compressed air through it (with orifice removed) to be sure there are no obstructions. I found a crumb of brass in one such tube right near the orifice.
There was a metal burr inside the brass bake orifice itself. We took that out, replaced it with a new bake orifice, and now have a strong, blue flame.
|
STACK_EXCHANGE
|
package lvDB
import (
"container/list"
"errors"
//"github.com/golang/glog"
"net/rpc"
"sync"
)
var ErrClientBroken = errors.New("lvDB: client broken")
type Pool struct {
Url string
MaxIdle uint32
//IdleTimeout time.Time
mu sync.Mutex
idleClients list.List
}
func NewPool(url string, maxIdel uint32) *Pool {
pool := Pool{
Url: url,
MaxIdle: maxIdel,
}
pool.idleClients.Init()
return &pool
}
func (p *Pool) Get() (*Client, error) {
p.mu.Lock()
e := p.idleClients.Front()
if e != nil {
p.idleClients.Remove(e)
p.mu.Unlock()
return &Client{p, e.Value.(*rpc.Client), false}, nil
}
p.mu.Unlock()
//create new
client, err := rpc.DialHTTP("tcp", p.Url)
if err != nil {
return nil, err
}
return &Client{p, client, false}, nil
}
func (p *Pool) Put(client *Client) {
p.mu.Lock()
p.idleClients.PushBack(client.client)
if p.idleClients.Len() > int(p.MaxIdle) {
e := p.idleClients.Front()
e.Value.(*rpc.Client).Close()
p.idleClients.Remove(e)
}
p.mu.Unlock()
}
type Client struct {
pool *Pool
client *rpc.Client
broken bool
}
func (c *Client) Close() {
if c.broken {
c.client.Close()
return
}
c.pool.Put(c)
}
func (c *Client) Put(kvs ...Kv) error {
err := c.client.Call("Lvdb.Put", kvs, nil)
if err == rpc.ErrShutdown {
c.broken = true
return ErrClientBroken
}
return err
}
func (c *Client) Get(keys ...[]byte) (replys [][]byte, err error) {
err = c.client.Call("Lvdb.Get", keys, &replys)
if err == rpc.ErrShutdown {
c.broken = true
return nil, ErrClientBroken
}
return replys, err
}
func (c *Client) Del(keys ...[]byte) error {
err := c.client.Call("Lvdb.Del", keys, nil)
if err == rpc.ErrShutdown {
c.broken = true
return ErrClientBroken
}
return err
}
func (c *Client) Ping() error {
err := c.client.Call("Lvdb.Get", [][]byte{[]byte("")}, nil)
if err == rpc.ErrShutdown {
c.broken = true
return ErrClientBroken
}
return err
}
//func (c *Client) Call(serviceMethod string, args interface{}, reply interface{}) error {
// err := c.client.Call(serviceMethod, args, reply)
// if err == rpc.ErrShutdown {
// c.broken = true
// return ErrClientBroken
// }
// return err
//}
|
STACK_EDU
|