Document
stringlengths
395
24.5k
Source
stringclasses
6 values
Cryptography is the study of techniques for ensuring the secrecy and authentication of the information, public key encryption schemes are secure only if the authenticity of the public key is ensured. The importance of security of data is ever expanding with increasing impact of internet as means of communication and e-commerce. It is essential to protect the information from hackers and eavesdroppers. In this paper, with the help of finite state machine (mealy machine) a secure communication method is designed for encryption and decryption. If you need assistance with writing your essay, our professional essay writing service is here to help!Essay Writing Service Many mathematical models are taking an active role in the process of encryption and inverse of an element or of a function resulting in decryption process. The main idea behind such constructions is designing a mathematical function in such a way that tracing the inverse of the function is not easy. Cryptography is the art of coding & decoding information. It is used as security mechanism in electronic communication. The message we want to send is called ‘plain text’ and the disguised message is called the cipher text. The process of converting plain text into cipher text is called ‘encryption’ and the reverse process is called ‘decryption’. There is a scope for a wide range of application of automaton theory in the field of cryptology. In automata theory, a branch of theoretical computer science, a deterministic finite automaton (DFA)-also known as deterministic finite state machine-is a finite state machine that accepts/rejects finite strings of symbols and only produces a unique computation (or run) of the automaton for each input string. ‘Deterministic’ refers to the uniqueness of the computation. In search of simplest models to capture the finite state machines, McCulloch and Pitts were among the first researchers to introduce a concept similar to finite automaton in 1943. The finite automaton is a mathematical model of a system with discrete inputs and outputs. The system can be one of a finite number of internal configurations or states . The finite automaton is a mathematical model or a system, with discrete inputs and outputs. When the finite automata is modified to allow zero, one, or more transitions from a state on the same input symbol then it is called a nondeterministic finite automata. For deterministic automata the outcome is a state, i.e., an element of Q. For nondeterministic automata the outcome is a subset of Q, where Q is a finite nonempty set of states. Automata theory is the study of abstract computing devices or machines. It is a behavior model composed of a finite number of states, transition between those states and actions in which one can inspect the way logic runs when certain conditions are met. Recently finite state machines are used in cryptography, not only to encrypt the message, but also to maintain secrecy of the message. In this paper, new secret sharing scheme is proposed using finite state machines. A finite state machine or finite-state automaton is a mathematical model of computation used to design both computer programs and sequential logic circuits . It is conceived as an abstract machine that can be in one of the finite states. The machine is only in one state at a time, the state it is in at any given time is called the current state. It can change from one state to another when initiated by triggering event or condition, this called a transition. Automata theory is a key to software for verifying systems of all types that have a finite number of distinct states, such as communication protocols or protocol for secure exchange of information. In Mealy Machine every finite state machine has a fixed output. Mathematically Mealy machine is a six-tuple machine and is defined as : M= ( ) : A nonempty finite set of state in Mealy machine : A nonempty finite set of inputs. : A nonempty finite set of outputs. : It is a transition function which takes two arguments one is input state and another is input symbol. The output of this function is a single state. : Is a mapping function which maps x to, giving the output associated with each transition. : Is the initial state in Mealy machine can also be represented by transition table, as well as transition diagram. Now, we consider a Mealy machine . Fig 1 Mealy machine ..In the above diagram 0/1 represent input/output. Recurrence matrix is a matrix whose elements are taken from a recurrence relation . The recurrence matrix in this paper is defined as Rn is a symmetric matrix. where n and ‘s are .either taken from Fermat’s sequence or Mersenne’s sequence. The sequence 0, 1, 3, 7, 15, 31, . . . is the Mersenne sequence, and 2, 3, 5, 9, 17, 33, . . . is the Fermat sequence. These are just powers of 2 plus or minus 1 2. Proposed Algorithm Let the plain text P be a square matrix of order n, n>0. Here, a 9 lettered word is represented as a square matrix of order 3. Define Finite state machine through public channel. Here Mealy machine is publicized. All the elements of the plain text are added and converted into binary form. This is the input. With the help of mealy machine output is found. Define cipher text at each stage. .cipher text at qi+1thstage = cipher text at qith stage + recurrence matrix. The elements of recurrence matrix differ at each stage. When input=0,the elements of recurrence matrix are taken from Fermat’s sequence and when input =1,elements of recurrence matrix are from Merssen’s sequence. The value of n of the recurrence matrix at each = output at each stage .Residue mod 26 is calculated for the cipher text at the end. The numbers are again converted into alphabets and sent to the receiver. The message is decrypted using the inverse operation and key to get the original message. 3 Performance analysis Algorithm proposed, is a simple application of addition of two matrices. But the recurrence matrix and elements of recurrence matrix are different at each stage depending on the input and output. It is very difficult to break the cipher text without proper key, defined operation and the chosen finite state machine.The key is defined as the sum of all the elements of the plain text. Let the sum of the output of the finite state machine for k bit secret key is r, let be the time required for each multiplication and be the time required for each addition. Let secret k bit key consists of number of ‘0’s and number of ‘1’s. Then the total time required for k bit secret key ( (sum of output for ‘1’ bit) (sum of output for ‘0’ bit) 4. Security analysis Extracting, the original information from the Cipher text is difficult due to the selection of the recurrence matrix, secret key and chosen finite state machines. Brute force attack on key is also difficult because of the key size. Table 1 Security analysis Name of the attack Possibility of the attack Cipher text attack It is difficult to crack the cipher text. Because of the chosen finite state machine and the key. Known plain text attack Because of the chosen finite state machines and key. Chosen plain text attack Because of the chosen finite state machine and key. Adaptive chosen plain text attack Because of the chosen finite state machines and different individual keys. Chosen cipher text attack Difficult to crack Cipher text Because of the chosen finite state machine , key and the recurrence matrix. Adaptive chosen cipher text attack Difficult to crack Cipher text Because of the chosen finite state machine, key and chosen recurrence matrix at each stage. We assign 1 to letter a, 2 to letter b and so on and 26 to the letter z. Let us encrypt the word ‘WONDERFUL’. As per the algorithm, we construct a square matrix of order n as under: Where, let ‘P’ be the plain text. The sum of all the elements of plain text is equal to 118 = (1110110)2 This is the key. The output of the above key is found with the help of Mealy machine. The recurrence matrix is The elements of recurrence matrix depends as defined in the algorithm. The value of n depends on output at each stage. Then the cipher text at each state is as follows: Out put / Transition On calculating residue modulus 26 for the matrix we get Thus, ‘WONDERFUL’ takes the form ‘DDUSLKMNR’. Algorithm proposed, is based on finite state machine and different operations on matrices. Secrecy is maintained at four levels, the secret key, the chosen finite state machine, the different operations, and the recurrence matrix. The obtained cipher text becomes quite difficult to break or to extract the original information even if the algorithm is known. Cite This Work To export a reference to this article please select a referencing stye below: Related ServicesView all DMCA / Removal Request If you are the original writer of this essay and no longer wish to have your work published on UKEssays.com then please:
OPCFW_CODE
Do I have Windows Defender ATP? In Windows 10, the Windows Security Center icon should be present in the system tray with a green checkmark if Defender is running. You can also complete the following steps to confirm Microsoft Defender ATP is running on your Windows 10 or Windows 8.1 device: Open Task Manager and click the Details tab. Is Microsoft Defender already installed on Windows 10? In Windows 10, version 1703 and later, the Windows Defender app is part of the Windows Security. Settings that were previously part of the Windows Defender client and main Windows Settings have been combined and moved to the new app, which is installed by default as part of Windows 10, version 1703. Does Windows Defender come free with Windows 10? Is Windows Defender free? Yes. Windows Defender is automatically installed for free on all PCs that have Windows 7, Windows 8.1, or Windows 10. What licenses include Microsoft Defender ATP? Microsoft Defender Advanced Threat Protection requires one of the following Microsoft Volume Licensing offers: - Windows 10 Enterprise E5. - Windows 10 Education A5. - Microsoft 365 E5 (M365 E5) which includes Windows 10 Enterprise E5. - Microsoft 365 A5 (M365 A5) What does Windows Defender ATP do? Windows Defender Advanced Threat Protection (ATP) is a Microsoft security product that is designed to help enterprise-class organizations detect and respond to security threats. ATP is a preventative and post-detection, investigative response feature to Windows Defender. How do I activate ATP in Windows Defender? To enable Defender ATP - Sign in to the Microsoft Endpoint Manager Admin Center. - Select Endpoint security > Microsoft Defender ATP, and then select Open the Microsoft Defender Security Center. - In Microsoft Defender Security Center: … - Return to Microsoft Defender ATP in the Microsoft Endpoint Manager Admin Center. Is Windows Defender enough 2021? In essence, Windows Defender is good enough for your PC in 2021; however, this was not the case sometime ago. … However, Windows Defender currently provides robust protection for systems against malware programs, which has been proved in a lot of independent testing. Do I need antivirus with Windows Defender? Windows Defender scans a user’s email, internet browser, cloud, and apps for the above cyberthreats. However, Windows Defender lacks endpoint protection and response, as well as automated investigation and remediation, so more antivirus software is necessary. Why Windows Defender not working? If Windows Defender is not working, that’s usually caused by the fact that it detects another antimalware software. Make sure you uninstall the third-party security solution completely, with a dedicated program. Try checking the system file by using some built-in, command-line tools from your OS. Is Microsoft release Windows 11? Microsoft has confirmed that Windows 11 will officially launch on 5 October. Both a free upgrade for those Windows 10 devices that are eligible and pre-loaded on new computers are due. Can Windows Defender remove malware? The Windows Defender Offline scan will automatically detect and remove or quarantine malware. How can I tell if Windows Defender is on? Open Task Manager and click on Details tab. Scroll down and look for MsMpEng.exe and the Status column will show if it’s running. Defender won’t be running if you have another anti-virus installed. Also, you can open Settings [edit: >Update & security] and choose Windows Defender in the left panel. How much does defender ATP cost? The new Microsoft Defender for Endpoint standalone retail cost via CSP is $5.20/mo per user for up to 5 machines. Is Microsoft ATP free? Microsoft Defender for Endpoint offers a free trial and several different pricing plans from $10 per user per month up to $57 per user per month. For more information, visit microsoft.com/en-us/microsoft-365/compare-microsoft-365-enterprise-plans. What ATP 1 plan? Helps protect against unknown malware and viruses by providing robust zero-day protection. … Includes features to safeguard from harmful links in real time. ATP has rich reporting and URL trace capabilities to spot attacks happening in your organization.
OPCFW_CODE
#Welcome to the CAPP-Reporter wiki! Are you an RPI dual major? Have you ever noticed that if you try to view your CAPP report on SIS you only get the CAPP report for your primary major? We have, and that is why we are creating CAPP Reporter! This application is being made to allow single, and even dual majors to get an accurate CAPP report! As you enter the courses you have taken, this open source application will allow you to view what courses you must take to graduate! No more needlessly worrying and fussing that you may have made a mistake while trying to discover what you need by hand! This application will have a user friendly GUI and should update the courses that need to be taken as you type. Furthermore, since you enter the courses manually you can enter the courses you will have taken by the end of next semester if you wish! Currently this application is being made only CS, MATH, and CS/MATH dual majors (due to time restrictions) however we intend to extend it in the future. The process for adding majors is simple, but very detalied and requites meticulous work. To extend this project to incorporate many majors, it would take the effort of a few dedicated developers as each combination of majors has their specific requirements and special rules. However, the language that is used to implement the logic is documented well enough that anyone that desires to do so could make a pull request and implement whatever majors they wish. This application only handles general cases. Many 'corner cases' may occur, many of which are not listed RPI's websites. Please be aware, because of this, the results may not be 100% accurate. However, we are making this application pessimistic, so it will likely not tell you that you have satisfied a requirement which you have not. Regardless, as this application is still in it's infancy, we strongly recommend you double check your results by hand. One important, still un-implemented, feature of this is recognizing cross listed classes. If you enter a cross listed class, it will only consider it for the major you entered it as. This application requires the use of the graphics library QT. It requires qmake 5.7 and C++ 11 to build. First, cd into the directory you would like to install this application in Then git clone this repository git clone https://github.com/zwimer/CAPP-Reporter Create your build directory as follows mkdir CAPP-Reporter/build && cd CAPP-Reporter/build/ After that run qmake and make with the command below qmake ../GUI/CAPP_Reporter.pro && make Finally relocate the 'Database' to the location of the created binary. With that, you should have a functional CAPP_Reporter application! ##Usage This application takes no arguments. Just open it like you would any application.
OPCFW_CODE
As in our organization there is a different team having admin access on the Kubernetes Cluster, I just wonder what are the resources that require a Cluster Admin access to be created prior to the Kong Ingress Controller installation on Kubernetes. I see that ClusterRole and ClusterRoleBinding require admin access. Probably the Custom Resource Definition (CRD) as well, but what other resources being part of the Kong Helm charts should be created by the Team having Admin access on the K8S cluster prior to running the Kong Ingress Conroller Helm installation charts? For a fresh installation using the default configuration: $ helm template example -n helmgress /tmp/symkong | grep -i kind - kind: ServiceAccount - kind: ServiceAccount I believe only the items you’ve already mentioned (CRDs and the ClusterRole* resources) will normally require special permissions (ability to create cluster-wide resources, for the most part) CRDs can be handled via https://github.com/Kong/charts/blob/master/charts/kong/README.md#crds-only or by sending https://github.com/Kong/charts/blob/master/charts/kong/crds/custom-resource-definitions.yaml through kubectl apply: Helm 3 doesn’t manage CRDs as part of the release (it only creates them at install if needed) and we don’t have any templating in that file, so in practice it’s often easiest to have a cluster admin create the CRDs directly. They will require updates occasionally, but UPGRADE.md will indicate when that’s necessary. The cluster RBAC resources may be a bit more difficult to work with because they are templated (mainly to reference the ServiceAccount’s name). We may want to explore reduced-permissions templates in the future to work with the single-namespace deployment model discussed in Kong Ingress Controller without ClusterRole creation, but don’t have anything like that currently. In lieu of support in the existing templates, that’d probably require merging permissions from the ClusterRole into the Role by hand and maintaining your own fork of the chart until there’s native support for it (we don’t have a timeline, but I’ll mark it down as something to look into). Thanks Travis for your feedback. So, if I understand well your reply: - I can just let my K8S Cluster Admin team create all the CRDs mentioned in ([https://github.com/Kong/charts/tree/master/charts/kong/crds](http://KONG CRDS)) - Change the ClusterRole and ClusterRoleBindings mentioned in ([https://github.com/Kong/charts/blob/master/charts/kong/templates/controller-rbac-resources.yaml](http://Kong ClusterRole)) into Role and RoleBindings, keeping the other Role and RoleBindings unchanged. - Delete the CRDs and ClusterRole and ClusterRoleBindings from the Helm charts (https://github.com/Kong/charts/tree/master/charts/kong) - Run the Helm Install command with the CONTROLLER_WATCH_NAMESPACE flag to the specific Namespace Kong Ingress will apply to. Are those steps enough to deploy Kong Ingress Controller to a specific Namespace without creating ClusterRoles and Bindings? Thank you again They should be–this is still a bit untested territory, so what’s presented so far are more high-level guidelines, and some level of trial and error will probably be necessary. Please keep us updated with questions on anything that doesn’t work and/or what you wind up with for a successful configuration. That will help inform our future work to implement this as a standard configuration in the chart/controller. For (2) I’d originally intended to merge permissions from the ClusterRole into a single Role, but what you’ve proposed (creating two Roles, one of which contains the permissions originally in the ClusterRole) should work also (and will probably be easier to template later). (3) isn’t necessary: once the CRDs are in place (after (1)) Helm will just ignore them. Thanks Travis for your reply. Just one more question. The ClusterRole template in (https://github.com/Kong/kubernetes-ingress-controller/blob/master/deploy/single/all-in-one-dbless-k4k8s-enterprise.yaml) mentions the verbs “-list, -watch” on “secrets” resources. Is that really required? Do you think it would be possible to remove “secrets” from the resource list, as this may be a security risk? It is, yes. We use Secrets for storing sensitive plugin configuration, credentials for consumers, and several other purposes. We would like to reduce our access to them (this concern comes up often), but currently Kubernetes RBAC doesn’t afford us any way to restrict our access further (e.g. by labeling Secrets that the controller should have access to): you either get access to all Secrets (in a namespace or cluster-wide depending on role scope) or none. Thanks Travis for your reply again. In that case, how could I transform the ClusterRole into a simple Role, given the fact that in the ClusterRole template there are mentions to nodes, endpoints, secrets, etc… I do not really know how could I transform a ClusterRole template into a Role template? Which elements should I keep in the Role template? It’s exercise for the reader territory. I don’t know myself Intuitively, you should be able to just change the type and add a namespace (ditto for the binding). The rulesets shouldn’t need to change as they’re PolicyRule arrays in both: Where I think you may run into issues is with cluster-level resources, namely KongClusterPlugin. I’m not sure how K8S RBAC handles namespaced roles that include actions for cluster-level resources, though I don’t see anything mentioned in the docs I reviewed–it may fail gracefully. If it doesn’t, you should be able to remove KongClusterPlugin from its rule without issue–the controller should be able to operate normally without that access, gracefully pretending that there aren’t any Very much appreciated your support. I will keep you posted. I tried to deploy the ingress-controller and the proxy by changing the ClusterRole into Role and adding the namespace. The proxy container starts correctly, however the Ingress-Controller generates the following error: Failed to list *v1.KongClusterPlugin: kongclusterplugins.configuration.konghq.com is forbidden: User “system:serviceaccount:kong:kong-kong” cannot list resource “kongclusterplugins” in API group “configuration.konghq.com” at the cluster scope it is expected as the Namespaced Role does not have Cluster level permissions. Do you think that if I do not create the CustomResourceDefinition KongClusterPlugin (which is Cluster scoped), I can make it work? Thank you again for your support. With or without KongClusterPlugin in the role? If with, you’re probably blocked on https://github.com/Kong/kubernetes-ingress-controller/issues/717 and we’d need to address that in the controller code for you to proceed. This happens with and without the KongClusterPlugin in the role. The role is by the way a Namespace Role, not a ClusterRole. Harry created the https://github.com/Kong/kubernetes-ingress-controller/issues/717 following my initial question with respect to the KongClusterPlugin CustomResourceDefinition. Would it be possible to make to code change and deliver a patched image of the kong-ingress-controller, just for me to test? Not officially yet. Do you have a local registry you can push images to? Although we don’t have a pre-built image, you should be able to check out the controller code locally, apply the change suggested in andrevtg’s comment , and then build/push it like so: export REGISTRY=gcr.io/grozny-ivan-1581; export TAG=0.9.0-dev; make container; docker push gcr.io/grozny-ivan-1581/kong-ingress-controller:0.9.0-dev Sub in your registry for the gcr.io example there and update your deployment to use the custom image. That change ignores KongClusterPlugin entirely, which will work for your use case, and should suffice for testing, but it’s not what we’ll actually do in the end. We need to support both environments with cluster-wide access and those without, so we’d need to add some sort of toggle between those modes. I do have a Dockerhub private registry. Will you be able to push the kong-ingress-controller:0.9.0-dev image to the DockerHub registry, so that I could take it from there? I would really like to try out this new version of the image to see if it will fit our case. Sorry–to clarify, you’d need to handle the patch and custom image build. We can answer any questions you have about building a custom image, but we can’t do it on your behalf. The command sequence in my previous post should work for that. Did you have any questions about applying the patch and/or building and using the custom image? Sorry, I think misunderstood your previous comments. So, the new Kong Ingress image “kong-ingress-controller:0.9.0-dev” is available already? Is that correct ? I will need your help to understand how can I build the new image based on the 0.9.0-dev image. What exactly should I do? Thank you for your help It is not–again, we won’t be building it ourselves, you’ll need to check out a copy of the source code, apply the change, build your own image from it, and push it to a registry you control. ClusterRole and ClusterRoleBinding creation covers the steps in a bit more detail; which of those do you have questions on? Thanks again Travis, Apologies for the misunderstanding. I will build the image. Would you please confirm the steps below: - Getting the Kong Ingress Controller source code from: - Changing the code in the main.go file - commenting the line below: //informers = append(informers, kongClusterPluginInformer) - Building a the new image Would you have an example of a Dockerfile that builds the image of the Kong Ingress Controller (to make sure that I do not make mistakes)? Please ignore my last message. I was able to build the image using the source code from https://github.com/Kong/kubernetes-ingress-controller/tree/master/cli/ingress-controller repo. I have just commented the line 341 in the main.go file like below: //informers = append(informers, kongClusterPluginInformer) Then, I run the make container" command and generated the image, which I pushed to our private registry under a new tag. However, when I deploy Kong with the new kong-ingress-controller image, using the https://github.com/Kong/kubernetes-ingress-controller/blob/master/deploy/single/all-in-one-dbless-k4k8s-enterprise.yaml manifest, I get a strange error: “1 main.go:561] Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration). Reason: Get “https://XX.XXX.X.1:443/version?timeout=32s”: badgateway Refer to the troubleshooting guide for more information: https://github.com/kubernetes/ingress-nginx/blob/master/docs/troubleshooting.md”. The strange thing is that when I deploy the standard version of the image (kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.1) using the same manifest file, on the same Kube Cluster, I do not have this error. Any idea what could be the reason for that error? I was able deploy the new image and it seems to work properly. Now, I do not need to create ClusterRole and ClusterRoleBindings. With that change, Kong Kubernetes for Enterprise fits our need now. Thank you for your support. Let me know if you need more information about the configuration that I have done to make it work.
OPCFW_CODE
We are looking for • CI: Setup Git, Jenkins • Ensure backup of all code in S3 • Every code change is versioned • Use code convention checks (lint) before check-in done • Use compilation and build verification checks (Use TDD approach) • Use Gradle for build management • Provide Cloud IDE as an option but also have a developer ...example problems being things such as not being able to receive cellular reception in a building, or wasting time stuck in traffic, or the world not having the right social infrastructure to end world hunger. Generally, the more entries you are able to list the better. You are encouraged to perform research in order to extend the entries on your lists ...a new website. I need you to design and build a website for my small business. I need a small business website for my company. Our firm is into electrical and civil infrastructure projects. My website should contains :- [url removed, login to view] profile and the projects completed by us, [url removed, login to view] provided by our firm [url removed, login to view] I need are Ho... Using twitter sentiment analyze on DOCKER with Django a NLP project which detecting water, energy and bus infrastructure anomalies and indexing them in Apache Solr ...Vietnam people. My organisation is planning to go Ho Chi Minh City, Vietnam and visit some large scale civil project over there. And Now, i am needed to collect around 15 Infrastructure project information in Ho Chi Minh City, [url removed, login to view], I am Hong Kong guys and i am not familiar Ho chi Minh city enough, i don't know vietnam as well, so i need help of ...example problems being things such as not being able to receive cellular reception in a building, or wasting time stuck in traffic, or the world not having the right social infrastructure to end world hunger. There is no particular number of items that I am looking for in either list, but generally the more the better. You are encouraged to perform research I have to apply for an important international organization as computer infrastructure system manager but I'm not proficient in written English so I need help. My purpose is to find a person who has good skills in technical English and Italian. I will give you detail the text to translate in Italian and after the translation I will look if the technical ...Silicon Valley of India where we have the following: single point of contact for all your services 24x7 support US, UK, EMEA and Asia time zone working hours Latest Infrastructure without downtime. ( 99% Uptime Guaranteed ) We assure you quality work as per SLA and Client Guidelines, On time delivery, Affordable price and Stunning output. ...and not be blocked by major email service providers (gmail, hotmail, etc) or major spam filters. We will be sending high volumes and will require someone who can build an infrastructure that is reliable and can support this. Requirements: • Email Marketing Software Recommendation • MTA software • Email software must not heavily modify html email ...data science / trading platform. In case of good performance and possible physical proximity to Madrid or Budapest, there will be followup projects which go deeper in our infrastructure. Other than Python, knowledge of network architectures, VPN, Cisco, AWS, Atlassian tools is a plus. This first task is about implementing a cron/rsync based backup framework Hi. Cyber Security Hive is a company which provid...e-commerce, payments, etc. Services we provide: - Penetration testing - PCI compliance check - GDPR UK compliance - Automated trainings - Phishing Simulation - Infrastructure scanning - vulnerability assessment We also provide customised services to our clients who have requirement. Hi guys, I'm in need of a very simple website. It needs to be in PHP/HTML, fully responsive, fully customisable and mobile optimised. This ...good frontend skills (HTML/PHP) can complete in 1 or 2 hours. I will share what the website will need to look like I will provide you with the assets once you have the infrastructure in place (Logo). find some website or someone who does not know DNS so we can help them troubleshoot their mission critical DNS infrastructure and refer to us so we can fix and/or replicate DNS/website Requirement: 1) Read XML file and parse all data and store into an array. The sam...Functions required: a) LoadXML b) ReadXMLintoArray c) WriteXMLfromArray Current infrastructure: OS: CentOS release 6.9 (Final) PHP 5.5.33 (cli) Laravel Framework version 5.1.46 (LTS) The solution should be deploy-able on the above given infrastructure. I will be speaking to high school seniors, promoting a construction/infrastructure technical training. I like to deliver in a Ted Talks format. Because of the age of the students, I need help developing a presentation that is inspirational the high school seniors. ...aim of participating in the infrastructure development of the Nation by providing professional architecture & engineering services. The Firm is mainly established and structured to fulfill the specific needs of National programmers related to public utilities, community services, industrial facilities and infrastructure projects. The firm’s head office
OPCFW_CODE
#!/bin/python import pandas as pd import numpy as np from sklearn.linear_model import LinearRegression PCAT_PAN = "Pathology.Categories_pancreas" PCAT_LIV = "Pathology.Categories_liver" FPCT_PAN = "Fat,Percentage_pancreas" FPCT_LIV = "Fat,Percentage_liver" class Model(): """ This model computes fat percents for liver and pancreas (independently) based on the mean fat percents observed for each single disease present in the "Pathology.Categories_XX" rows. If such categories is not available for given record, it will fall back to age alone as the criteria. """ name = "Disease CAT Mean" def __init__(self, training_data): self.BANKS, self.LRMODELS = BuildModel(training_data) def Predict(self, Individue): age = Individue["~Age"] pancreas_fat = self._PredictEach(Individue[PCAT_PAN], age, self.BANKS["DISEASE_PANCREAS"], self.LRMODELS["PANCREAS"]) liver_fat = self._PredictEach(Individue[PCAT_LIV], age, self.BANKS["DISEASE_LIVER"], self.LRMODELS["LIVER"]) return [liver_fat, pancreas_fat] def _PredictEach(self, disease, age, BANK, fallbacklr_model): Diseases = SplitDiseaseField(disease) values = [] if Diseases: values = [BANK[d] for d in Diseases if d in BANK.keys()] if not values: values = fallbacklr_model.predict([[age]]) return np.mean(values) def SplitDiseaseField(field): try: return [f.strip() for f in field.split(",")] except: return [] def CalculateFatForDisease(disease_values): disease_probs = {} for key in disease_values.keys(): disease_probs[key] = np.mean(disease_values[key]) return disease_probs def UpdateBank(bank, diseases, fat_percent): for d in diseases: if d not in bank.keys(): bank[d] = [] bank[d].append(fat_percent) def BuildModel(training_data): """ Builds data structures used by the model to predict: FAT probabilities associated with each single disease. """ # A dict of disease:[float] BANKS = { "DISEASE_PANCREAS": {}, "DISEASE_LIVER": {}, "AGE_PANCREAS": {}, "AGE_LIVER": {} } MODELS = { "LIVER": [[], []], "PANCREAS": [[], []] } def model_fit(X, Y): m = LinearRegression() m.fit(np.array([X]).reshape(-1, 1), np.array([Y]).reshape(-1, 1)) return m for i, row in training_data.iterrows(): dp = SplitDiseaseField(row[PCAT_PAN]) dl = SplitDiseaseField(row[PCAT_LIV]) fat_pancreas = row[FPCT_PAN] fat_liver = row[FPCT_LIV] age = row["~Age"] UpdateBank(BANKS["DISEASE_PANCREAS"], dp, fat_pancreas) UpdateBank(BANKS["DISEASE_LIVER"], dl, fat_liver) MODELS["LIVER"][0].append(age) MODELS["LIVER"][1].append(fat_liver) MODELS["PANCREAS"][0].append(age) MODELS["PANCREAS"][1].append(fat_pancreas) # UpdateBank(BANKS["AGE_PANCREAS"], [age], fat_pancreas) # UpdateBank(BANKS["AGE_LIVER"], [age], fat_liver) return {k: CalculateFatForDisease(BANKS[k]) for k in BANKS.keys()}, {k: model_fit(*MODELS[k]) for k in MODELS.keys()}
STACK_EDU
Understanding the Importance of Debugging Before we dive into the practical aspects of debugging Ajax, let’s first grasp why it’s so important. Debugging is the process of identifying and resolving issues or bugs in your code. With Ajax, as with any code, bugs can arise from various sources, such as syntax errors, network problems, or logic issues. These issues can lead to unresponsive web pages, incomplete data transfers, or other undesirable outcomes. As a programming assignment help provider, the reputation and success of your website largely depend on delivering error-free solutions. Users expect seamless experiences, and if they encounter issues while using your web application, they are more likely to seek assistance elsewhere. Therefore, mastering debugging techniques in the context of Ajax is crucial to ensuring that your web applications run smoothly and that your programming solutions are impeccable. Common Ajax Issues Ajax issues can be diverse and tricky to pinpoint, but some common problems tend to crop up regularly when working with this technology. Let’s explore a few of them: => Syntax Errors: Like any code, Ajax code can contain syntax errors that can cause issues. A missing semicolon, a typo, or an incorrect variable name can lead to unexpected behavior. => Cross-Origin Resource Sharing (CORS) Errors: CORS issues arise when you attempt to make Ajax requests to a domain other than the one hosting your web application. Browsers enforce security policies that restrict such requests, often leading to CORS-related problems. => Server-Side Errors: Sometimes, the issue isn’t in your Ajax code but on the server. These can be caused by misconfigured servers, server crashes, or incorrect server responses. => Networking Issues: Slow or unstable network connections can result in failed Ajax requests. It’s essential to handle such network-related problems gracefully in your code. => Data Parsing Errors: If you’re not parsing the data correctly from the server’s response, your application might not function as expected. Data parsing errors can lead to display issues and processing problems. Debugging Ajax: Best Practices Now that we’ve highlighted some common Ajax issues, let’s delve into best practices for debugging and improving your Ajax skills: => Use Browser Developer Tools: Most modern web browsers come equipped with powerful developer tools. Use these tools to inspect network requests, view console logs, and analyze the structure of your Ajax responses. This will help you identify issues quickly and gain insights into the behavior of your code. => Check the Console: Always keep an eye on the browser’s console. Console logs are your best friend when it comes to debugging. They can provide valuable information about errors and issues, such as syntax errors and network problems. => Start with Small, Isolated Tests: When developing Ajax features, start with small, isolated tests that allow you to focus on specific functionality. This makes it easier to identify the source of issues and fix them one at a time. => Test Different Browsers: Your Ajax code might work perfectly in one browser but fail in another due to browser-specific quirks. Testing your code in multiple browsers can help you ensure cross-browser compatibility and uncover browser-specific issues. => Utilize Error Handling: Implement robust error handling in your Ajax code. Use try-catch blocks to handle exceptions and errors gracefully. This ensures that even if something goes wrong, the user experience won’t be disrupted. => Inspect Network Traffic: Use the network tab in your browser’s developer tools to inspect the details of your Ajax requests and responses. This can help you identify issues related to CORS, slow server responses, and data formatting. => Break the Problem Down: When you encounter a complex issue, try to break it down into smaller, manageable parts. This makes debugging more manageable, and you can focus on isolating and resolving individual problems. => Read Documentation and Seek Help: If you’re using a specific Ajax library or framework, consult their documentation for guidance on debugging. Additionally, don’t hesitate to seek help from online communities and forums when you’re stuck on a problem. Improving Your Ajax Skills for Programming Assignment Help To complete your programming assignments successfully and provide valuable assistance to your website’s users, you need to be proficient in Ajax. Debugging skills are an integral part of this proficiency. When you can identify and resolve Ajax issues efficiently, you’ll not only enhance your web applications but also establish a reputation for delivering reliable programming solutions. Incorporating debugging practices into your workflow will ultimately save you time and frustration. You’ll be able to troubleshoot issues more effectively and ensure that your Ajax-powered features operate flawlessly, leaving your users with a positive impression of your programming assignment help services. Ajax is a powerful technology that can take your website, such as www.programminghomeworkhelp.com, to the next level in terms of interactivity and user experience. However, mastering Ajax also involves becoming adept at debugging. By understanding common Ajax issues, following best debugging practices, and continuously improving your skills, you can ensure that your web applications run smoothly, and your programming assignment help services thrive. Remember that debugging is not just about fixing errors; it’s about delivering exceptional user experiences and reliable solutions. So, complete your Ajax assignment, and happy debugging!
OPCFW_CODE
If you want to get your web development off on the right foot, and not feel overwhelmed by the number of frameworks available, look no further. In this article, we explain why React is one of the best choices you can consider for your web app development. Feel free to navigate to any item in the list by hitting their respective links below: React development is when React.js is used for web or mobile development of a project. When companies decide to use React js for web development, this decision helps save a considerable amount of time and money on web development. “Write once. Use anywhere.” This is not surprising at all. React gives your application versatility, high performance, and speed. With small size, well-developed documentation, a huge React developer community, and the backing of some of the biggest tech companies in the world, React has been in great demand in both web and mobile development. React can be seen as a stable, time-proven and a cutting-edge library which is continuously evolving. It has been the main competitor of Google's AngularJS (v1.x and v2.x) since its beginnings, and React is now one of the most loved front-end frameworks. Here are some reasons why React js is so popular: This is not all of the React advantages. Keep reading. If your project offers a lot of interactions with the end-user, React.js can be your preferred framework. This is because React helps resolve many of the challenges that come with developing efficient web applications. React introduces a virtual DOM that updates the real DOM in a smart and extremely efficient way, so even complex web applications can show great interactivity, without sacrificing their performance. React lets web developers build lightning-fast user interfaces, making great web applications for your customers/users. This reactivity has made React very popular, and it is one of the main reasons why React development services for frontend are in such great demand these days. Skilled React developers can reuse React components within your project whenever applicable, significantly increasing the speed of the project development. Over time, as the project continues, your React reusable components library increases, benefitting the project development speed further. React can be used beyond the web. It is all possible with React Native, which allows using React’s benefits in mobile app development. Our React developers can easily share a large part of the codebase between a React-based web app and a React Native-based app for Android and iOS, and this can all be done without compromising the app’s performance. Since the framework's release in 2013, React developers have changed their perception that React is suitable only for massive applications with tons of traffic. This is because the framework has seen many incremental improvements. Also, the emergence of a great variety of third-party development tools (e.g. Redux for state management), as well as the growth of the vibrant React developer community, help to quickly find an effective solution for any challenge that React developers may face. For your project, this means that our experienced team can provide you with React js web application development services and help you create a scalable React app very quickly. React Architecture applies beyond just HTML rendering, as it supports rendering to <canvas> tags, and can be suitable for isomorphic architecture, where the application logic runs on both the server and the client (browser) sides. This allows for several optimisations and, although it is not widely adopted, is certainly worth considering. React developers no longer need to manually figure out the differences in the HTML code that should be rendered, nor do they have to update specific parts of the web page manually. This is because React's Virtual DOM enables the React developer to write the code as if the webpage was reloaded with every update, and React will automatically figure out the differences and update the view accordingly. To keep code clean, React does not allow for component renderers to mutate any values that can be passed into it. Instead, any callbacks can be passed to components that modify values. This ensures that rendering and data mutation logic are separated, which provides great reusability for the code. Also, this "data down, actions up" approach combines perfectly with Redux, a state management library for React, which can work with both React and React Native. React Hooks, a new React feature that helps solve a variety of unconnected problems in React that have been encountered over the years of writing and maintaining tens of thousands of components. Long story short, React Hooks allow developers to use React without classes. Web developers can also extract stateful logic from a component so it can be tested independently and reused within a project. As sometimes complex components bring about certain difficulties, React Hooks let web developers split one component into smaller functions. They also let web developers “hook into” React state and other React lifecycle features from function components without writing a class. In addition, React Hooks allow developers to reuse stateful logic without changing the component hierarchy, making it easy to share Hooks among many components or with the community. The above advantages make React an excellent fit for any innovative project - whether it is an entertainment app, a complex analytical tool, a business web application, or a mobile app. The unique benefits of React demonstrate their real value when your users start interacting with your awesome React-built app. Our expert senior front-end developers, who have been working with React since 2013, can help you to create reusable, scalable and fully functional web apps. As early adopters of React, our web developers have already created a variety of reusable React components and full-scale web applications. IT Club provides the following React.js development services using our best in-house resources: As for SEO, it can be effectively handled through server-side rendering. Moreover, React has behind it an active and fast-growing worldwide community, which, in turn, allows developing great frontends even faster. React is designed to focus exclusively on applying business logic, providing high scalability and high-speed efficiency to save money and time. These features allow our web developers to adopt modern technology and create Progressive Web Apps. Our web developers use React at least for two reasons when developing SPAs (single-page applications). Firstly, due to its virtual representation of the DOM, React guarantees perfect efficiency. When a user interacts with an app, the operations are run against the virtual DOM and then rendered on the visible page. Secondly, its server-side rendering supports using Next.js, which, in turn, is used for static websites as well as a desktop and mobile app, the enterprise, SEO-friendly websites, PWAs, etc. Besides that, our developers don’t need to investigate the rest of your technology stack and rewrite your existing code. React allows our web developers to avoid this, saving your resources. React Native allows React developers to build not ‘just’ a mobile web app or a hybrid app, but a fully fledged mobile app, identical to the one built in the platform's native language. This is because React uses the same fundamental UI building blocks as regular iOS and Android apps. Another major benefit of React Native is that it does not require recompiling. This allows us to reload the app instantly with the hot reload supported by React Native. It lets us run the new code while retaining the application’s state. React Native reusable components library allows to reduce the time required for mobile development, making the release of your React Native app faster and less costly. While React Native would not be seen as the best choice for mobile games, where every millisecond counts, it nevertheless has shown great applicability in some real-life high-loaded apps. Those include Facebook, Instagram, Skype, Tesla, SoundCloud, Airbnb and other well-known projects. Thus, React Native is well-suited for most web apps and projects you want to create. We provide React Native app development services. Looking for a team to upgrade your React application? Or need extensive and affordable maintenance and support? In either case, you can count on our web developers with upgrades, ongoing product support, and the implementation of new features. All the clients who have built React apps with IT Club continue to receive support services. Our web developers use React.js to create business web apps and high-security consumer-facing web projects. Our team has experience in developing start-up projects where timing is critical. Also, our software engineers have been working on business-level applications, where it is essential to focus on scalability, safety, stability, optimisation, efficiency and the reduction of technical debt. To deliver high-quality products, our web developers follow all the best practices that include code review, test-driven development, ongoing integration, and automated testing. Here are some of the reasons why both small and large businesses build their React apps with us: See what else you get working with us: With its powerful composition model, React enables our web developers to reuse code in applications, as well as to write and assemble a new, diverse and React reusable components library for further development for further development. Therefore, this strategy reduces the time required for software development, making the release of your web application faster and less costly. React allows making the most out of its component-based structure and reusing components where possible. Our software engineers create React libraries and UI parts (buttons, checkboxes, drop-down menus, libraries, etc.) for faster development and easier codebase maintenance. Unsure of which technology would be best for your frontend or backend side? Our web developers can assist you in making the right choice for your project. This is your chance to get benefited from top talents that combine tech know-how and creative minds. Only middle and senior-level engineers will work on your application, and they are ready to share their knowledge with you on your React project. Our web developers can also provide you with some essential aspects to consider when selecting a framework for your app. If you are looking for a React JS web development company, we are here to help. Is React not a perfect match for your project? Check out other services that IT Club has to offer. Our software engineers deliver high-quality web development on time. Our expert Vue development team provides Vue js development services to build mobile and user-friendly web apps and single-page applications (SPAs). Click to learn more. React is a front-end library which works in the browser. React renders on a server using Node, and powers mobile apps using React Native. There are many reasons to use React.js. Firstly, React allows creating reusable UI components. With React, web developers can create large web applications which can change data without reloading the page. Due to its virtual representation of the DOM, React also guarantees perfect efficiency. Lastly, React offers server-side rendering by using Next.js. React uses Virtual DOM that makes the app faster. Virtual DOM allows ReactJS to detect changes in the data and know exactly when to re-render or when to ignore specific parts of the DOM. A UI that works quickly is important in enhancing the overall user experience. ReactJS is used for building user interfaces, specifically for single-page applications (SPA). Yes, there are many alternatives to React. The most popular are Vue.js, AngularJS, Ember, Svelte, and Preact. The most popular state management libraries are Redux and MobX. Recently, Facebook released an experimental library, Recoil, which builds on React primitives and gives benefits such as small size, compatibility with concurrent mode and combining on React's batching.
OPCFW_CODE
VS2013 Database Project fails to build I created a new SQL Server Database Project in VS2013 (Update 3) and attempted to build but it fails. The only build output I receive follows: ------ Build started: Project: Database1, Configuration: Debug Any CPU ------ Creating a model to represent the project... Done building project "Database1.sqlproj" -- FAILED. Build FAILED. ========== Build: 0 succeeded or up-to-date, 1 failed, 0 skipped ========== If I build the project via MSBuild.exe with detail verbosity I see the following inner failure: Task "SqlBuildTask" Creating a model to represent the project... Done executing task "SqlBuildTask" -- FAILED. After searching the web I have tried the following to resolve this to no avail: -Restart VS -Restart machine -Repair VS Update 3 and reboot -Repair VS2013 and reboot What am I missing? Out of curiosity, if you install the stand-alone SSDT for 2012, can you get a successful build doing the same things you did here? Did you import an existing database? Do you have any errors/warnings in your project? I don't have 2012. A migrated database project (2010 to 2012) in another solution does build successfully in 2013. No errors or warnings. I'm using VS2013 U3 Ultimate. What happens if you re-create this DB in a new project? How did you start this project? Import of existing DB or from scratch? (And I realize you don't have 2012, but if you download the bits for SSDT from the site it would install the basic IDE for you to use w/ SSDT. Probably not necessary if other projects work in 2013.) Same issue with a new project. Started with File|New|Project, no imports. Is this with no objects or after you've created them? Can you import objects into the project to see if it builds then? I still think it might be worth installing the 2012 SSDT bits side by side to see how that works or if it behaves differently. I've tried with and without objects, both custom created and imported. No change. I did install SSDT for VS12 and it works fine but VS2013 is still broken. What happens if you pull that working project from 2012 into 2013? Still broken? What version of SSDT do you show in your tools? I have 12.0.40403.0. You can usually find details about the current SSDT bits here: http://blogs.msdn.com/b/ssdt/ I show SSDT version 12.0.40706.0 in VS2013. Just upgraded to that myself, but have no issues building an existing project for 2013. Admittedly, I was opening a project created in SSDT 2012, but didn't have any issues doing so. After contacting a friend at Microsoft, he suggested repairing the Data Tools install at https://learn.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt And that resolved it. Thanks Chuck! Worked for me too! as an FYI I got the error with a DB project that I set to SQl Server 2012 not the default 2014 Worked for me, first time using the data tools and needed a repair. This could have taken hours to figure out. Thanks for this it worked for me too Thanks for the tip. Unfortunately, we are using Visual Studio Online to build and getting this error. We may have to repair the data tools on all of the build agents, which I'm not looking forward to. @DanCsharpster FYI: You can run SSTDSetup from the command line, so can automate repair and (in this case) force reboot of all agents by having them run this: \\myserver\share\SSDTSetup.exe /repair /silent /forcerestart. You can use PowerShell's Invoke-Command (or your tool of preference) for this. e.g. http://stackoverflow.com/questions/9535515/powershell-execute-remote-exe-with-command-line-arguments-on-remote-computer. NB: run SSDTSetup.exe /? for a complete list of command line options. For those of you who are nervous about randomly downloading the .exe file directly from the provided link, you can find the download on the Microsoft site here: https://learn.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt I had a similar issue and as mentioned in the accepted answer repairing is the solution. But unfortunately the link did not gave me an exe that says repair/UnInstall. I went ahead and run exe still issue persists. I resolved it by updating the sql data tools using Extensions and Updates. Following are the steps. Open visual studio. Go to tools menu and click on Extensions and Updates. Under the updates you will find an update for database proj. By clicking it, it will download an exe. When you run the exe it will ask for Repair/UnInstall. Click repair and proceed. that's exactly what happened to me Also check that you are using the correct version of MSBuild. There are usually multiple MSbuild exe's on your machine. The 14.0 version should be working with the Visual Studio 2015. I had this issue as well, but the problem was with the value in the project property "DSP". I had edited the proj file to build a dacpac for SQL 2012 and then edited it again for SQL 2014, and was getting this error for both. The original project that was targeting SQL 2008 worked fine still, so it wasnt an installation issue. In my editing had misspelled the values for the DSP element. <DSP>Microsoft.Data.Tools.Schema.Sql.Sq110DatabaseSchemaProvider</DSP> And it should have been (where sql is spelled with the L) <DSP>Microsoft.Data.Tools.Schema.Sql.Sql110DatabaseSchemaProvider</DSP> Fixing that spelling resolved the error.
STACK_EXCHANGE
How to create self signed certificates in Master Child Architecture asked 27 Oct '15, 15:00 Creating self-signed certificates in Master Child Environment To start with create folder - /opt/arcsight/HPUBA11/securonix_home/certs Under certs, Run following commands to generate the certificates. Step 0 – Shutdown Tomcat on both Master and Child servers. Step 1 – Create a self-signed certificate Use the keytool command for generating a certificate as follows: /opt/arcsight/HPUBA11/Java/jdk/bin/keytool -genkey -alias gaxgpsl201xs -keyalg RSA -keystore securonixKeyStore1 -keysize 2048 -ext san=dns:gaxgpsl201xs.securonix.com Finally the DNS address of the server is used in the last part san=dns: gaxgpsl201xs.securonix.com You will be required to provide a few details such as first name and last name among other questions. Fill them as required by the server. Note that the first name and last name must be the DNS of the server. Step 2 – Create a CertRequest /opt/arcsight/HPUBA11/Java/jdk/bin/keytool -certreq -alias gaxgpsl201xs -file gaxgpsl201xs.csr -keystore securonixKeyStore1 Step 3 – Export the certificate that has been created /opt/arcsight/HPUBA11/Java/jdk/bin/keytool -export -alias gaxgpsl201xs -file gaxgpsl201xs_Child1.cer -keystore securonixKeyStore1 Step 4 – Add the Certificate into the keystore /opt/arcsight/HPUBA11/Java/jdk/bin/keytool -import -file gaxgpsl201xs_Child1.cer -alias gaxgpsl201xs -keystore /opt/arcsight/HPUBA11/Java/jdk/jre/lib/security/cacerts Follow similar steps on the Master as well, with a different alias. After Step 3, you will have one certificate on Child and one on Master Certificate on child - gaxgpsl201xs_Child1.cer Certificate on master - gaxgpsl201xs_Master.cer Copy Master’s certificate to Child server on /opt/arcsight/HPUBA11/securonix_home/certs And copy child’s certificate to Master server on /opt/arcsight/HPUBA11/securonix_home/certs Then perform step 4 again with these new certificates to add the new certificates to the keystore. answered 27 Oct '15, 15:00
OPCFW_CODE
/* * Copyright arupingit(Arup Dutta) * github profile url https://github.com/arupingit * */ package net.arup.spring.AopDemo.Service; import net.arup.spring.AopDemo.Bo.CalculatorBo; import org.springframework.stereotype.Component; import org.springframework.stereotype.Service; /** * The Class Calculator. * * @author ARUP */ @Service("calculator") public class CalculatorImpl implements CalculatorIf{ /** * Addition. * * @param calcBo the calc bo */ public void addition(CalculatorBo calcBo){ calcBo.setResult(calcBo.getFirstInput() + calcBo.getSecondInput()); } /** * Subtraction. * * @param calcBo the calc bo */ public void subtraction(CalculatorBo calcBo){ int result = calcBo.getFirstInput() - calcBo.getSecondInput(); if(result<0){ throw new IllegalArgumentException("Illegal Arguments :"+calcBo.getFirstInput() +" , "+calcBo.getSecondInput()); } calcBo.setResult(result); } }
STACK_EDU
This set of functions inspect a data frame to anticipate problems before writing with REDCap's API. validate_for_write( d ) validate_no_logical( data_types, stop_on_error ) validate_field_names( field_names, stop_on_error = FALSE ) The data types of the data frame corresponding to the REDCap project. TRUE, an error is thrown for violations. Otherwise, a dataset summarizing the problems is returned. The names of the fields/variables in the REDCap project. Each field is an individual element in the character vector. tibble::tibble(), where each potential violation is a row. The two columns are: field_name: The name of the field/column/variable that might cause problems during the upload. field_index: The position of the field. (For example, a value of '1' indicates the first column, while a '3' indicates the third column.) concern: A description of the problem potentially caused by the suggestion: A potential solution to the concern. All functions listed in the Usage section above inspect a specific aspect of the dataset. The validate_for_write() function executes all these individual validation checks. It allows the client to check everything with one call. Currently it verifies that the dataset does not contain values (because REDCap typically wants 1 values instead of starts with a lowercase letter, and subsequent optional characters are a sequence of (a) lowercase letters, (b) digits 0-9, and/or (c) underscores. (The exact regex is If you encounter additional types of problems when attempting to write to REDCap, please tell us by creating a new issue, and we'll incorporate a new validation check into this function. The official documentation can be found on the 'API Help Page' and 'API Examples' pages on the REDCap wiki (i.e., https://community.projectredcap.org/articles/456/api-documentation.html and https://community.projectredcap.org/articles/462/api-examples.html). If you do not have an account for the wiki, please ask your campus REDCap administrator to send you the static material. d <- data.frame( record_id = 1:4, flag_logical = c(TRUE, TRUE, FALSE, TRUE), flag_Uppercase = c(4, 6, 8, 2) ) REDCapR::validate_for_write(d = d) #> # A tibble: 2 × 4 #> field_name field_index concern sugge…¹ #> <chr> <int> <chr> <chr> #> 1 flag_logical 2 The REDCap API does not automatically conv… Conver… #> 2 flag_Uppercase 3 A REDCap project does not allow field name… Change… #> # … with abbreviated variable name ¹suggestion
OPCFW_CODE
// tasks→dataに変換するプログラムです // main-rec2.jsの宣言前 // 使用するmain-rec2.jsの変数は以下の通りです // data let tasks_debug = [ { id:0, parent_id:0, project_flag:1, content:"A" }, { id:1, parent_id:0, project_flag:0, content:"B" }, { id:2, parent_id:0, project_flag:0, content:"C" }, { id:3, parent_id:2, project_flag:0, content:"D" }, { id:4, parent_id:2, project_flag:0, content:"E" }, { id:5, parent_id:2, project_flag:0, content:"F" }, { id:6, parent_id:0, project_flag:0, content:"G" }, { id:7, parent_id:0, project_flag:0, content:"H" }, { id:8, parent_id:7, project_flag:0, content:"I" }, { id:9, parent_id:7, project_flag:0, content:"J" }, { id:10, parent_id:0, project_flag:0, content:"K" }, { id:11, parent_id:9, project_flag:0, content: "L" }, { id:12, parent_id:11, project_flag:0, content: "M" }, ]; const createParentIdList = (tasks) => { let parentIdList = []; //親タスクのidを追加する // parentIdList作成 for(let i=0;i<tasks.length;i++) { let temp = tasks[i]; //現在のタスクのparent_idがparentIdListに存在しない場合 if ( parentIdList.indexOf(temp.parent_id) == -1 ) { parentIdList.push(temp.parent_id); } } return parentIdList; }; const createDataTree = (tasks, parentIdList) => { let dataTree = {}; for(let i=0;i<parentIdList.length;i++) { dataTree[parentIdList[i]]=[]; } // dataTreeのchildren_idを代入する for(let i=0;i<tasks.length;i++) { if (!(tasks[i].project_flag==1)) { dataTree[tasks[i].parent_id].push(tasks[i].id); } } return dataTree; }; const getTaskById = (tasks, id) => { let n = tasks.length; for (let i=0; i<n; i++) { if (tasks[i].id == id) { return tasks[i]; } } }; const convertIdtoTask = (tasks, id) => { for(let i=0;i<tasks.length;i++) { if (tasks[i].id == id) { return tasks[i]; } } }; const calcLayer = (tasks, id) => { let layer = 1; let flag = true; let projectId = -1; for(let i=0;i<tasks.length;i++){ if(tasks[i].project_flag==1) { projectId = tasks[i].id; } } let currentTasks = getTaskById(tasks, id); console.log("======calcLayer Start====="); if (currentTasks.parent_id == projectId && currentTasks.id == projectId) { console.log("======calcLayer End====="); return 0; } else { let i = 0; while (flag) { console.log("i",i); console.log("currentTasks", currentTasks); if (currentTasks.parent_id != projectId) { let tempId = currentTasks.parent_id; currentTasks = getTaskById(tasks, tempId); layer++; } else { flag = false; console.log("======calcLayer End====="); return layer; } i++; } } }; const createDepthObject = (tasks) => { let depthObject = {}; for(let i=0; i<tasks.length; i++) { let id = tasks[i].id; let depth = calcLayer(tasks, id); if (depth in depthObject) { depthObject[depth].push(id); } else { depthObject[depth] = [id]; } } return depthObject; }; const getPathId = (tasks,dataTree, id) => { let ind = [id]; let flag = true; let projectId = -1; for(let i=0;i<tasks.length;i++){ if(tasks[i].project_flag==1) { projectId = tasks[i].id; } } console.log("=======getPathId start======="); console.log("id", id); let currentTasks = getTaskById(tasks, id); console.log("currentTasks", currentTasks); if (currentTasks.parent_id == projectId && currentTasks.id == projectId) { console.log("=======getPathId end======="); return 0; } else { while (flag) { // projectノード直下のtaskノードでない場合, 3層以降 if (currentTasks.parent_id != projectId) { currentTasks = getTaskById(tasks, currentTasks.parent_id); ind.unshift(currentTasks.id); // projectノード直下のtaskノードである場合, 2層目 } else { flag = false; ind.unshift(projectId); console.log("=======getPathId end======="); return ind; } } } }; const getPathInd = (tasks, dataTree, depthObject, id) => { // depth: depthObject let ind = [0]; let pathId = getPathId(tasks, dataTree, id); console.log("pathID", pathId); let n = pathId.length; // 階層方向にloop // 2階層目以降 for(let i=1;i<n;i++) { let m = depthObject[i].length; let counter = 0; let prevParentId = -1; // 次のif文で必ずfalseにするため for(let j=0;j<m;j++) { let tempId = depthObject[i][j]; let tempParentId = convertIdtoTask(tasks, tempId).parent_id; if (prevParentId == tempParentId) { counter++; } else { counter = 0; } if (depthObject[i][j] == pathId[i]) { ind.push(counter); } prevParentId = tempParentId; } } return ind; } const getProjectTaskId = (tasks) => { let n = tasks.length; for (let i=0; i<n; i++) { if (tasks[i].project_flag==1) { return tasks[i].id; } } }; const createData = (tasks) => { console.log("createData start"); let data = {}; const parentIdList = createParentIdList(tasks); const dataTree = createDataTree(tasks, parentIdList); const depthObject = createDepthObject(tasks); let n = Object.keys(depthObject).length; //階層の深さ方向にループ for (let i=0; i<n; i++) { let m = depthObject[i].length; // 階層の水平方向にループ for (let j=0; j<m; j++) { let temp = {}; // i=0はprojectしかないので追加 if (i == 0) { let id = getProjectTaskId(tasks); temp.name=getTaskById(tasks, id).content, //idから取得する関数に変える temp.children=[]; data = temp; // i=1以降、つまり2階層目以降 } else { let id = depthObject[i][j]; // jは現在操作するタスクのid if (parentIdList.indexOf(id) != -1) { temp.name = getTaskById(tasks, id).content; //idから取得する関数に変える temp.children = []; } else { temp.name = getTaskById(tasks, id).content; //idから取得する関数に変える } let ind = getPathInd(tasks, dataTree, depthObject, id); let l = ind.length -1; let currentData = data; for (let k=0; k<l; k++) { if (k == 0) { currentData = currentData.children; } else { currentData = currentData[ind[k]].children; } } currentData.push(temp); } } } console.log("createData end"); return data; };
STACK_EDU
Genre: First-person Horror Game Project Role: Solo Developer Engine: Unreal Engine 4.22 What is it? "Nostalgia" is a horror game based on Japanese mythology, where you play as "Sato Kenta", a young college student who returned to his hometown only to accidentally opened a secret box that imprisoned evil spirits. Afterward, he asked the local monk for help with the evil spirits harassing him and found a way to pacify and reincarnate them. If you are interested in playing this game, the installer can be found here. This project has been refactored. The new version is developed with Unreal C++. If you are interested in the new version, you can find it here. What did I do for this project? For this project, I designed and implemented a custom dialogue system that supports three different kinds of dialogue: The first being similar to subtitles. These are triggered when Kenta is thinking or murmuring. They do not have an encapsulating box, nor do they lock player movement. Next is the general dialogue which has an encapsulating box and a speaker portrait. When this kind of dialogue is in use, the player movement is locked until the conversation ends. The final style of dialogue requires images to fully demonstrate the content of the dialogue. In addition the standard dialogue box, it has a background as well as the required images. This system also supports the utilization of a conversation log, which can be accessed via the “log” button or by pressing “Tab” on the keyboard. Since the log history is stored in the game instance, the log can be carried between levels. In combination with the interactive system I created for “Just Desserts”, it allows a dialogue event to sleep until a certain event is triggered. Most of the dialogue in the game is set to be triggered when the player enters its surrounding area. In addition, the dialogue system can also wake another dialogue chain when one ends. This means that the dialogue flows in a player-friendly fashion and not pop up too early if the player doesn’t move along with the guides or hints. What went wrong? Some of the dialogue has typos and there are grammar errors. The evil spirit was not enough of a threat to the player, and there was not enough player interaction. I was also unable to resolve some odd UI bugs that occur during a playthrough of the game. What went right? The art dressing, music, and ambiance used in this project proved integral to the game’s immersion. When I was developing the dialogue system, I took one UI asset as the reference and learned a new way to arrange the UI from it. I tried to apply this new method to my dialogue system and the result was good. It really helped me solving the overlapping issue of the UI that happened randomly.
OPCFW_CODE
School of Computing. Dublin City University. My big idea: Ancient Brain How do we actually code the state-space? In particular how do we encode the Q(x,a) data structure? We want a data structure that can take as input (i.e. be indexed by) two vectors x and a, both members of finite sets, and return a real number output Q(x,a). The basic idea is that because x and a are members of finite sets, we can give each of them a unique ID number, and so we can give the (x,a) combination a unique ID too. Then we just have an array of some size N of real numbers, indexed by this ID. x = (x1, x2, .. xn) where each xi takes one of some finite set of values v0, v1, ... vm (in fact, the set of values, and the number of them, could be different for each dimension i). e.g. x = (x1, .. x10) where for example x7 takes values -100, 0, or 100. To help us enumerate the states so we can construct an ID number, we simply constrain the set of values to run from 0 to m (we can translate to/from the real values when we use them). So above we say x7 takes values 0,1,2. Even with large finite, we may need generalisation. To reduce the statespace size, we may need to be careful with our definition of state. Note that we can easily define too many possibilities, so that a huge chunk of the logical state space may not actually exist, e.g. all blank in chess board, or all kings. Could even have 90 percent of the lookup table unused.Often we might make the world discrete and finite by dividing up the continuous input into a small number of coarse-grained divisions. But remember this in itself is a generalisation. Consider how we could enumerate the finite set of states defined as follows: x = (x1, x2, x3) where x1 takes values 0,1, x2 takes values 0,1,2, and x3 takes values 0,1. We can sum this up by just having a set of variables saying how many values each dimension takes: c1=2, c2=3, c3=2. We can enumerate as follows: x = (0,0,0) id = 0 0 0 1 1 0 1 0 2 0 1 1 3 0 2 0 4 0 2 1 5 1 0 0 6 1 0 1 7 1 1 0 8 1 1 1 9 1 2 0 10 1 2 1 11Note that it is not binary. Consider the last state (1,2,1). The "1" means we have done the "0"'s in the first column, i.e. we have done 1.c2.c3, plus the "2" means that within the "1"s we have already done an additional 2.c3, plus the last "1" means we have done an additional 1. So id(1,2,1) = 1.c2.c3 + 2.c3 + 1 In general, we can enumerate a state as follows: int id ( state x ) return x1c2c3 + x2c3 + x3 x = (x1, x2, .. xn), id(x) = x1c2..cn + x2c3..cn + ... + x(n-1)cn + xn The number of possible states is: nostates = c1c2...cn The state IDs run from: 0 .. (nostates-1) Similar scheme. Action IDs run from: 0 .. (noactions-1) (x,a) pairs can be enumerated: (0,0) id = 0 + 0 (0,1) 0 + 1 ... ... (0,noactions-1) 0 + noactions-1 (1,0) 1.noactions + 0 ... ... (1,noactions-1) 1.noactions + noactions-1 (2,0) 2.noactions + 0 ... ... (nostates-1,noactions-1)The ID of an (x,a) pair is: The number of (x,a) combinations is: IDs run from: 0 .. (nostates*noactions)-1 So our function to access the State Space is: real StateSpace :: at ( vector x, vector a ) int id = (id(x) * noactions) + id(a) return ActualArray [ id ] where ActualArray is simply an ordinary 1-dimensional array of real numbers: real ActualArray [ nostates * noactions ]Then we have one of these data structures to hold the Q-values: and we can just use it like: Q.at(x,a) = ((1-ALPHA) * Q.at(x,a)) + (ALPHA * new-estimate) In the sample code you will see that I enumerated Actions x States rather than States x Actions as above.
OPCFW_CODE
[Samba] files on shared ntfs-disk in linux-pc are not accessible toffi at muenster.de toffi at muenster.de Fri Sep 3 12:56:38 GMT 2004 I have spent the entire evening, trying to access a ntfs-drive in my linux-box from my WinXP-Notebook but I just can't get it to work (and it's 3:50 a.m. I have put a 160GB harddisk into a linux box (LFS 5.1.1, console only) and /dev/hdd1 /mnt/toffi ntfs ro,uid=christoph,gid=christoph,umask=022 0 0 To check wether the data on the disk is ok, I then put in a Knoppix-Live-CD (3.6) and opened some of the Video-Files on the disk -> the disk and it's ntfs filesystem seem to be ok! I then tried to share this disk using samba, to be able to access it from my notebook (WinXP): (from my smb.conf) read only = yes browsable = yes valid users = christoph The linux-pc shows up in my network neighbourhood, as do its shares, including my ntfs-disk. By double-clicking on 'toffi', I get a listing of the disk's root However, any further click causes the explorer window to freeze for about two minutes until a folder is opened or a context menu appears. In the opened folder, the file-listing often remains blank and if the files are listed, I cannot open them. Copying files from the ntfs-disk to my windows-box will result in corrupted This problem only applies to the ntfs-share. All the other (ReiserFS) shares So I tried to mount the ntfs-share on the server-machine itself: mount -t smbfs -o username=christoph,password=passwd //linbox/toffi /mnt/test The ntfs-disk then gets mounted to /mnt/test and i can browse and list folders as much as I like but when I try to open a file with less, I get an error smb_request: result -5, setting invalid I did all this as root. As the files are okay and the problem does not seem to be M$-specific, something has to be wrong with my Linux-machine. So I booted from the Knoppix-Disk one more time, mounted the ntfs-disk and shared it and voilà: I can access any file from my notebook and it's fast! So I copied the samba-configuration from the Knoppix-disk (at least partly) but that didn't help either. Maybe all this has to do with user-rights.. I don't know.. and I really do not have any idea what I can try.. * P L E A S E * H E L P * M E * More information about the samba
OPCFW_CODE
Topgallantfiction Pocket Hunting Dimension read – Chapter 769 – He’s A Good Person homeless rare -p2 Novel–Pocket Hunting Dimension–Pocket Hunting Dimension the eternal sanctum Chapter 769 – He’s A Good Person womanly makeshift Having a dry strengthen, on the list of prodigies stated, “The barrier… broke??” “How is this feasible?!” Only then managed their terror minimize slightly. The 2 prodigies s.h.i.+vered a lot more. Some Pioneers and Pilgrims on the Prairies of Dakota Only then have their terror reduce a bit. Pocket Hunting Dimension The fire-fan-and-darkness-buff-strengthened Fantastic Fist Art work was tougher than including the Gentle and Darkness Beam. However, it was subsequently very demanding. He could only execute three punches, irrespective of employing a handful of our blood crystals. Sensing Lu Ze’s chi, the colour through the encounters of these two prodigies was drained. It even changed Inside crimson buffer, the terror sensed from the two prodigies from the Purple Range Race gradually disappeared. In return, they glared at Lu Ze with intensive hatred. They had just turned up inside the top secret realm, nonetheless they ended up immediately pressured out. On top of that, they even lost a s.p.a.ce lock scroll as well as 2 everyday life-keeping notes. Precisely what a massive losses! The prodigy declined back and landed by using a thud on the floor. Lu Ze then blossomed once more until the other prodigy. Also, he kicked him. “We initialized the s.p.a.ce jewel. Precisely why are we still inside of?!” “How can a levels-3 planetary declare be this robust??!” Pocket Hunting Dimension These ended up actually no cost punching hand bags. danny the champion of the world spanish 2 escudo doubloons ‘Did he say there was clearly a prodigy through the prodigy ranking?’ of truth and beasts Accordingly, they took out yet another grey rock and roll with sterling silver runes. Soon after triggering it, a metallic mild slowly packaged round the two. Lu Ze grinned at their impulse. “What a coincidence! We connect with yet again.” A purple scale came out inside their arms. Complex runes spun around it simultaneously. Thereafter, they put in their nature push in the rune, and a couple of crimson obstacles included them. One roared, “d.a.m.ned individual! Never feel good about on your own. There exists a very few extremely highly effective prodigies in our midst. They’re certainly not creatures it is possible to deal with! The leader is actually a prodigy around the prodigy position!” Upon the roar of one of several prodigies coming from the Purple Degree Race, the remaining one also reacted. Lu Ze was too fast. They had no chance to perform! Lu Ze saved his smiling term and looked over their barrier. He thought about how robust it had been. In the same way, the other just one sneered, “You will pass on during the mystery kingdom!” Lu Ze was too fast. That they had absolutely no way to perform! Lu Ze smiled. ‘Did they imagine they could escape by using a s.p.a.ce object?’ Naive little ones! s.p.a.ce G.o.d artwork was extremely uncommon from the complete universe. ‘Why were definitely they so unlucky to face a being with this power on this page?’ Lu Ze smiled. “Stay there and do not shift. I’ll just organize a couple of punches.” Well before they can reply, a black colored-and-white colored tennis ball spun on Lu Ze’s other hand and hit the extremely slim buffer. On the other hand, Lu Ze frowned. This shield was obviously a little bit challenging.
OPCFW_CODE
On Washington's Farewell Address Here is a question from CollegeBoard's practice APUSH exam... I'm having trouble answering it. Washington's Farewell warned against the dangers of international alliances and the formation of political parties, right? Knowing this, I still am unsure of what the answer is (personally, I think it's A but I feel I am incorrect). Which of the following groups most strongly opposed Washington’s point of view in the address? (A) Democratic-Republicans (B) New England merchants (C) Southern plantation owners (D) Federalists My reasoning for why it's A (taken from my comment): If you want my reasoning, it's because Washington was shown to support Hamilton (a Federalist). Also, the Democratic-Republicans wanted to support France in the war against Britain, which violated Washington's neutrality policy. However, Hamilton wanted to side with the British for possible economic advantages, meaning that Federalist motives also conflicted with the ideas in Washington's address. That's why I'm confused, because my reasoning appears to be self-contradicting. I also don't know too much about the political stance of merchants and plantation owners. It's not homework. I'm practicing for the exam on my own. I found the exam through the CollegeBoard website. Since I was stuck on this question, I decided to come here for help. If you want my reasoning, it's because Washington was shown to support Hamilton (a Federalist). Also, the Democratic-Republicans wanted to support France in the war against Britain, which violated Washington's neutrality policy. However, Hamilton wanted to side with the British for possible economic advantages, meaning that Federalist motives also conflicted with the ideas in Washington's address. That's why I'm confused, because my reasoning appears to be self-contradicting. I also don't know too much about the political stance of merchants and plantation owners. Don't just dismiss this as someone trying to get answers for homework, when I'm honestly in need of clarification on historical concepts. Great, now if only someone could help... What a horrible question (the CollegeBoard's, not yours). I think the best answer is vote to close as "unclear what you're asking". There was more than one point of view in the address - it covered numerous topics to which any of the 4 groups listed would have different opinions on. You don't say what your question is? Are you asking what the right answer is? Yes, I am asking what the right answer is. I think it's A, but I'm not sure. It's a bad question The answer is A, Democratic-Republicans Benjamin Franklin Bache was a vehement Republican. As editor of the Philadelphia Aurora, he was also perhaps Washington's most outspoken and vitriolic detractor. Yet even he couldn't find anything to fault in the substance of the address: Without any commentary, Bache reprinted the address over the next two days. He could hardly have faulted a moving statement that referred to the benefit of education in enlightening public opinion, the mischief of trade restrictions and foreign influence, and even the personal failings of the president himself. The Aurora did, however, question the president's sincerity as he left office, saying he delivered "the profession of republicanism, but the practice of monarchy and aristocracy." Arguing that the nation had been "debauched" by Washington, another editorial remarked that "the masque of patriotism may be worn to conceal the foulest designs against the liberties of a people." (source) So the best response Bache could muster to the Farewell Address was claiming that Washington didn't mean it. Perhaps the biggest policy difference in the speech was that Washington called for free trade with all nations: . . . our commercial policy should hold an equal and impartial hand; neither seeking nor granting exclusive favors or preferences; consulting the natural course of things; diffusing and diversifying by gentle means the streams of commerce, but forcing nothing. (source) This wouldn't be so far from Republican preferences, except for the fact that in practice, commercial neutrality meant that the majority of American commerce would continue to be with Britain. Madison and other Republicans wanted to use "commercial coercion" (withholding American raw materials from Britain) as leverage in gaining further concessions (especially the opening of the lucrative West Indies market to American shipping). Still, Washington and Madison are both ultimately championing free trade. Well, I think this question is quite controversial. However, APUSH loves to throw these questions at you to see what answer is MOST correct, rather than having a single correct answer. Although I do think this is a very bad question... the only explanation I could come up with was this: Washington expressed that he supports a ratification process, stating in his farewell address that "it is the right of the people to alter the government to meet their needs." This was a Federalist point of view... as they wanted to ratify the Articles of Confederation rather than just amend them. Because of this, the most correct answer would be A (Democratic-Republicans)... previously known as just Anti-Federalists during the process of ratifying the Articles of Confederation. In simplest terms, the Federalists wanted to ratify the AoC, which Washington supported, and the DR's (Anti-Feds) wanted to amend the Constitution, something in which Washington probably would not have been sided with. Also, he tended to favor Hamilton. Hope this helped, and good luck! It's not really clear what you are asking. If you are asking for an explanation of why (A) is the correct answer, it is as follows: The main thrust of the address was to support a unified country under a single centralized government (Federalism) and neutrality with regards to foreign powers. Both of these aims were supported by the Jay Treaty which Washington and the Federalists (like Hamilton) sponsored. This humiliating treaty kowtowed to England to buy peace and betrayed France, who had helped America in the Revolutionary War. The treaty was favored by rich people like New England merchants and big plantation owners because it would make it easier for them to resume trade with England. The small farmers who made up the bulk of the country bitterly opposed this rich man's treaty because it did nothing to help them and made all kinds of unnecessary concessions to England, which they had fought and died for and now rich pricks like Hamilton were just giving it away. Jefferson welded all these small farmers into a potent political force, the Republicans, who favored states rights, not Federal power (and Federal taxes). These farmers wanted to support France, just as France had supported them, and they resented the idea that the states should be lorded over by a central power ruled by a president, who was not too far off from a king in their eyes. I'm not an expert, but I think its mostly because the democratic republicans wanted to go to war against Britain (with France) and Washington wanted neutrality.
STACK_EXCHANGE
Timeout is an important parameter in test automation that impacts the reliability and responsiveness of automated test executions. In accelQ, this parameter is managed at multiple layers to accommodate the need for responsiveness. This article highlights the nuances of this fine grain control and explains how these settings impact test execution. acccelQ comes pre-configured with optimal timeout settings, while providing an ability to control in case there is a specific need. Page and Element wait times This configuration is controlled during test execution in the Run modal. Default values are 60 sec and 30 sec respectively. Page timeout is the amount of time, test execution will wait for a page to load. This is used in conjunction with the Synch element setup for a Context. Similarly, element timeout is the amount of time, test execution will wait for an element to be ready to function on. This is the most common setting you would tune according to your need. Note: Commands that enquire readiness of an element such as is-exists, is-enabled etc. do not wait for the element timeout. They respond immediately. Runtime Selenium Grid timeout configuration In addition to the functional timeout settings you specify with the page and element timeouts, there are additional infrastructure level settings you can fine tune. This goes as part of Selenium grid setup. Note: This is typically an admin level tasks when the execution environment is shared for the team. "timeout" : 300 (300 seconds is default) This is set in the hub config json file. Hub automatically releases a node that hasn't received any requests for more than the specified number of seconds. After this time, the node will be released for another test in the queue. This helps to clear client crashes without manual intervention. To remove the timeout completely, specify -timeout 0 and the hub will never release the node. Note: This timeout kicks in when no browser interaction command is sent to the node for the specified time. If your test logic involves hard coded wait for longer duration, ensure that this setting is more than the possible sleep in the logic. Max sessions on the Node "maxSession" : 5 (5 is default) The maximum number of browsers (of all types, inclusive) that can run in parallel on the given node. This is different from the maxInstance, which limits the number of browsers of a specific type that can be opened at a time on a given node. maxInstances can be controlled separately at a browser-type level. Browser Timeout on Node "browserTimeout" : 300 The timeout in seconds that the node waits for the Browser to respond before it is reclaimed. After this time, the node will be released for another test in the queue. Note: This timeout should be at least equal to or more than the timeout you setup for the Page and Element timeout values.
OPCFW_CODE
VisualForce Page : Userinfo.getUserId(); No matching users found I have an object called Sales_and_Marketing__c, and I would like to override the standard New button so that the field Sales_Contact_New__c(lookup to user object) gets pre populated with the logged in User. I have looked on online for scenarios and have attempted to write this however i am getting the following error (see screen shot attached) Apex class : public with sharing class Extension1 { public final sales_and_marketing__c objX; public Extension1(ApexPages.StandardController controller) { this.objX = (sales_and_marketing__c)controller.getRecord(); objX.Sales_Contact_New__c= Userinfo.getUserId(); } public PageReference RedirectToMKTRequest() { return new PageReference ('/a0R/e?CF00N6E000000UdP0=' + objX.Sales_Contact_New__c + '&nooverride=1'); //return new PageReference ('/a0R/e?CF00N6E000000UdP0=' + objX.Sales_Contact_New__c + '&nooverride=1'); } } Visual Force page <apex:page standardController="sales_and_marketing__c" extensions="Extension1" action="{!RedirectToMKTRequest}"> </apex:page> This is the error I am presented with: No matching users found. Because it is considering it as string. To pre-populate the lookup field through URL, If field's HTML Id is CF00N6E000000UdP0. Two parameters needs to be passed: Lookup Value : This you can pass with CF00N6E000000UdP0 Lookup Id: This can be passed with CF00N6E000000UdP0_lkid Try this example: public PageReference RedirectToMKTRequest() { User objUser = [SELECT Id, Name FROM User WHERE Id = :UserInfo.getUserId()]; return new PageReference('/a0R/e?CF00N6E000000UdP0=' + objUser.Name + '&CF00N6E000000UdP0_lkid=' + objUser.Id + '&nooverride=1'); } Refer this blog for more details- Salesforce URL Hacking to Prepopulate Fields on a Standard Page Layout Hi Rahul, if i go with your method i would return the following error in the ; Daniel MasonCF00N6E000000UdP0=00558000001PjR8AAK there it would need to be something like this ; public PageReference RedirectToMKTRequest() { User objUser = [SELECT Id, Name FROM User WHERE Id = :UserInfo.getUserId()]; return new PageReference ('/a0R/e?CF00N6E000000UdP0=' + objUser.Name +'&nooverride=1'); } } I missed a ampersand sign, updated the answer Thanks @rahul sharma - I been looking at this all morning and you have come along and helped straight away :) i have a similar request here however this is a contact, could you have a quick browse at this http://salesforce.stackexchange.com/questions/165957/visualforce-page-which-populates-a-field Cheers @Masond3, Posted comment in one of the answer of the mentioned question.
STACK_EXCHANGE
MICROSOFT HOST DRIVER DETAILS: |File Size:||4.1 MB| |Supported systems:||ALL Windows 32x/64x| |Price:||Free* (*Free Registration Required)| MICROSOFT HOST DRIVER (microsoft_host_5639.zip) Set Hostname Azure. You can enter the following in the hosts file, 192.168.100.1 myhomeserver. Driver Update: hp a1510n. Hostname is used to display the system's dns name, and to display or set its hostname or nis network information services domain name. Hosts 5 name top this article explains, software from hyper-v. Excerpt from the hosts wiki page, the hosts file contains lines of text consisting of an ip address in the first text field followed by one or more hostnames, each field separated by white space blanks or tabulation characters . For example, macos, linux machine. December 1, 2013 updated decem by pungki arianto linux commands, linux howto. - Kali contains several hundred tools which are geared towards various information security tasks, such as penetration testing, security research, computer forensics and reverse engineering. - I n surprising news to both windows and linux users, microsoft recently announced the first-ever windows linux conference, named wslconf, which stands for windows subsystem for linux conference. - Should you choose a linux or a windows web hosting package? - It also performs reverse lookups, finding the domain name associated with an ip address. /etc/hosts file, but the globe. Install the windows subsystem for linux. And is there such a pseudoconsole conpty, open the hostname. You can run cmd and do nslookup hostname the same way you'd do host you need something other than the ip address, the command-line arguments will differ. Drivers audio lenovo g560 Windows 10. Hostname in order to it cheaper and reverse engineering. Note that you can use both the /etc/hosts file and a dns server for name resolution. Microsoft will support customers who chose to run linux with microsoft s virtual server 2005 r2, software for running multiple operating systems on one machine. On linux, you can find the hosts file under /etc/hosts. As we re trying to set of this opportunity. Here is another example of use of this opportunity. In some operating systems implement name. Welcome to small tutorial series of hosting a website on linux machine. Net core 3.1 downloads for linux, macos, and windows. Full path to have a dns lookup for linux servers. If there is no match in the hosts file, then the dns server will be used. Is there such a thing as a human readable format. Host manager, a dns server on windows command-line in 2003. |Microsoft Store Fifth Avenue, New York, NY.||With more and more computer connected to the network, the computer needs to have an attribute to make it different from each other.| |Free microsoft website.||Net core is a cross-platform version of.net, for building apps that run on linux, macos, and windows.| |Archivo hosts, la enciclopedia libre.||Open powershell as administrator and windows subsystem for hostnames.| |Windows Web Hosting, Green Hosting.||The windows giant will support customers that run red hat and suse linux in virtual server 2005 r2, now offered as a free download.| |Microsoft's Linux Kernel, Hacker News.||By christopher heng, new webmasters who are trying to choose a web host often find that they are confronted with a plethora of web hosts offering a wide variety of web hosting packages.| |Best Web Hosting 2020, Domains, WordPress.||Building 20 x faster page loads compared to host one machine.| Linux server hosting. This series of the domain name system dns existed. Ubuntu vms can now be launched from hyper-v quick create and use rdp for enhanced session mode. Convenience, new webmasters who chose to 20 in the /etc/hosts. The website we ll host on our personal computer can be accessed from around the globe. Linux distribution, open the main microsoft office and other name. Drivers canon bjc 2100 Windows. Operating system hostnames are used to look up. How to edit host file in Linux. Because of the broad nature of the message there are several possibilities that could be causing it. On unix-like operating systems, the host command is a dns lookup utility, finding the ip address of a domain name. Try to identify a reliable web host can be a daunting task especially with so many service providers and options available nowadays. Linux is most common and widely used operating. Then make the hosts file in the hosts file. This host-level isolation helps address compliance requirements. Hosts file, your computer connected to it in 2003. The instructions below are valid for all linux distribution, including ubuntu, centos, rhel, debian, and linux mint, in your terminal window, open the hosts file using your favorite text editor, sudo nano /etc/hosts. Also i would like to use fingerprint sensor, and other peripherals from the host. New software from codeweavers will allow linux servers to host microsoft office and other windows productivity applications for hundreds of linux or unix users. The linux hosting experts, that's what we've been called has been since our launch in 2003. Linux is a unix-like operating system provided as a free, open source choice. These answers are provided by our community. Set hostname in azure public cloud. Linux is an open source software server, which makes it cheaper and easier to use than a windows server. This article explains, how to setup a local dns using the hosts file /etc/hosts in linux for local domain resolution or testing the website before taking live. It can even choose a human readable format. A set of drivers that enable synthetic device support in supported linux virtual machines under hyper-v. Host provides physical servers to have a linux-based option first. In some operating systems, the hosts file's content is used preferentially to other methods, such as the domain name system dns , but many systems implement name service switches e.g, for linux and unix to provide customization. This document describes the gnu/linux version of host. Whenever you open a website by typing its hostname, your system will read through the hosts file to check for the corresponding ip and then open it. Whether you're looking for a simple shared web hosting account or a powerful dedicated server, the chances are that you'll be offered a linux-based option first. Rdp for lookups, nano editor. How to setup a mac web host? So to edit the file using a linux terminal-based text editor such as nano, you will need to. Your server is dedicated to your organisation and workloads capacity is not shared with other customers. Each are hosted on our blazing fast swiftserver platform. Again, our goal is that everything in azure works for linux vms just like it works for windows vms. It s a broad message that means that your computer can t reach the target server. Each are used preferentially to use cpanel. Hosts change and manage the /etc/hosts file. World's most famous hacker kevin mitnick & knowbe4's stu sjouwerman opening keynote - duration, 36, 30. Hp officejet 6500 e710n-z scanner Windows 8 driver. Or, and other methods, hostname. Asus Elantech Touchpad Treiber Windows 8. When you re trying to connect to a service on linux, no route to host is one of the last things that you want to hear. Since taking ownership of the windows command-line in 2014, the team added several new features to the console, including background transparency, line-based selection, support for ansi / virtual terminal sequences, 24-bit color, a pseudoconsole conpty , and more. The file /etc/hosts started in the old days of darpa as the resolution file for all the hosts connected to the internet before dns existed . Some web hosts give you a choice of packages using the linux operating system, others. A great linux host should offer feature-rich hosting packages that make managing your account. You choose a simple website by using a superuser. You can edit the hosts text file, located at /etc/hosts only as a superuser. At the back end, linux servers use cpanel. Up to those competing linux on the /etc/hosts. Website before dns lookup utility, wa. As we re doing for the on-premises datacenter, microsoft is making huge investments in the azure public cloud.
OPCFW_CODE
""" DialogueHandler The DialogueHandler maintains a pool of dialogues. """ from evennia.utils import logger from muddery.server.utils.builder import build_object from muddery.server.mappings.typeclass_set import TYPECLASS from muddery.server.utils.localized_strings_handler import _ from muddery.server.dao.shop_goods import ShopGoods class MudderyShop(TYPECLASS("OBJECT")): """ A shop. """ typeclass_key = "SHOP" typeclass_name = _("Shop", "typeclasses") model_name = "shops" def at_object_creation(self): """ Called once, when this object is first created. This is the normal hook to overload for most object types. It will be called when swap its typeclass, so it must keep old values. """ super(MudderyShop, self).at_object_creation() # set default values self.db.owner = None if not self.attributes.has("goods"): self.db.goods = {} def at_object_delete(self): """ Called just before the database object is permanently delete()d from the database. If this method returns False, deletion is aborted. All goods will be removed too. """ result = super(MudderyShop, self).at_object_delete() if not result: return result # delete all goods for goods in self.db.goods.values(): goods.delete() return True def after_data_loaded(self): """ Set data_info to the object. Returns: None """ super(MudderyShop, self).after_data_loaded() # load shop goods self.load_goods() self.verb = getattr(self.system, "verb", None) def load_goods(self): """ Load shop goods. """ # shops records goods_records = ShopGoods.get(self.get_data_key()) goods_keys = set([record.key for record in goods_records]) # search current goods current_goods = {} for key, obj in self.db.goods.items(): if key in goods_keys: current_goods[key] = obj else: # remove goods that is not in goods_keys obj.delete() # add new goods for goods_record in goods_records: goods_key = goods_record.key if goods_key not in current_goods: # Create shop_goods object. goods_obj = build_object(goods_key) if not goods_obj: logger.log_errmsg("Can't create goods: %s" % goods_key) continue current_goods[goods_key] = goods_obj self.db.goods = current_goods def set_owner(self, owner): """ Set the owner of the shop. :param owner: :return: """ self.db.owner = owner def show_shop(self, caller): """ Send shop data to the caller. Args: caller (obj): the custom """ if not caller: return info = self.return_shop_info(caller) caller.msg({"shop": info}) def return_shop_info(self, caller): """ Get shop information. Args: caller (obj): the custom """ info = { "dbref": self.dbref, "name": self.get_name(), "desc": self.get_desc(caller), } icon = self.icon if not icon and self.db.owner: icon = self.db.owner.icon info["icon"] = icon goods_list = self.return_shop_goods(caller) info["goods"] = goods_list return info def return_shop_goods(self, caller): """ Get shop's information. Args: caller (obj): the custom """ goods_list = [] # Get shop goods for obj in self.db.goods.values(): if not obj.is_available(caller): continue goods = {"dbref": obj.dbref, "name": obj.name, "desc": obj.desc, "number": obj.number, "price": obj.price, "unit": obj.unit_name, "icon": obj.icon} goods_list.append(goods) return goods_list
STACK_EDU
Lists Home | Date Index | 7/11/2002 7:32:19 PM, email@example.com wrote: > 2. Most people mention SAX can handle files larger >than memory, but I am thinking, is this really the case, >because files are read into the kernel buffer, so large >files still have to be read into the memory, just not in >user space. Am I right? DOM builders generally load the entire document into a tree structure. SAX operates at parse time; it can call a user-defined function for each element, attribute list, entity reference, etc. The application can choose to either process the XML data and throw it away (meaning that the total size of the document is independent of the memory usage) or build another data structure, store the data to DBMS, or whatever. This provides the usual tradeoff -- more work for the application programmer but more control over resource usage. > 3. DOM is memory-thirsty, according to most articles I >read. So DOM's performance lags, does anyone run any type >of profiling, and I am interested in why it is memory >hungry, and poor in terms of performance. It is quite true that if one simply defines classes that directly implement all the DOM interfaces, each Node will be fairly large because of all the properties and methods defined on the basic Node interface. The DOM exposes several different models of an XML document -- a tree with parents, children, and siblings; lists of nodes containing lists of nodes, a more OO conception of Document, Element, Attribute, etc. objects, and a more abstract model where the document is traversed via iterators. Still, this is an implementation issue, not intrinsic to the DOM API. There are some DOM implementations that are "lazy", i.e., only build actual objects implementing the DOM interface when a specific part of the document is accessed. There are also persistent DOMs, where the parser essentially loads a database that is then navigated and queried on demand. Both these techniques would be less memory hungry than a straightforward implementation of the spec. > 4. What do people think of pull type parsers and DOM >SAX hybrids? Are these popular and stable? There's been a lot written on this, but you'll probably have to sort it out for yourself. A simple Googling for "xml pull parser performance" yields quite a number of articles. It's probably something to consider if you have lots of data and relatively constrained processors, but a well-defined application. I'd say in general that the more flexibility you need, the more you need a DOM-like API; the more you can constrain EXACTLY what the application will do with each bit of markup, the more you can exploit a streaming > 5. Is it possible for SAX to support XSLT? Well, several (most?) XSLT implementations support SAX parsers to build the tree for transformation. Strictly speaking, however, you're not getting some of the infinite document size / efficiency advantages of SAX because a conformant XSLT implementation must keep the entire document around because the stylesheet can refer to arbitrary pieces. There are extensions to SAXON, I believe, to support a more efficient use of memory by having the user tell the XSLT engine what sections of the document to look at ... see the <saxon:preview> extension element? There are also occasional discussions of "streaming XSLT" processors (I don't know if any actually exist in a stable, available form) but they would have to operate on a subset of XSLT. I should probably shut up and let someone who knows what they're talking about explain the
OPCFW_CODE
Can i edit Xeror printer colour profile on OSX to make all colours / white points 10% lighter? Ive got a Xerox 6515 printer. The printer is supposed to be able to print Pantone colours, but i always feel its 10% too dark. Ive played around with loads of colour profiles and settings, but i cant get the colours to look right. Ive got a ICM profile file for the printer, is there a way i can edit this to make it 10% lighter across the board ? Perhaps by setting the white point to have 10% less black ? Im on a Mac running OSX 10.14.6 which has the OSX inbuilt colour sync tool, but i cant see anywhere to edit the profiles. Upon consideration the printer is probably behaving correctly , and most probably it my screen that’s too bright. I do have a screen calibration tool, but rarely use it for day to day work. Instead of calibrating my screen to the printer, I’d like to adjust the ICM profile to have my printer closer match the screen. I don’t need to print Pantone colours, so was just using it as a reference for this conversation. Alternatively after some digging I found a GUI that will let me do this in the print dialog, but i can only seem to access this GUI when printing via MS word on OSX, if i get to the print dialog via OSX Preview i get a different GUI. (see attached screenshots) "10% too dark" compared to what? A current Pantone book [they only have a 2-year life-span] or your screen? Is your screen calibrated accurately & have you calibrated your workflow to the printer, using the appropriate colorimeter/spectrophotometer? @Tetsujin good point, I’ve updated me to question for more info / background If you have physical pantone samples (just to say), you can print the same color and compare it with the references. Of course, it is good practice to first adjust the brightness and temperature of the screen to match the physical references (pantone). Then to adjust the printer to match the same physical references (pantone). I talked about references because obviously it is better to have more than one ... If you insist on matching the altered screen with the printer, you can try to modify the "lightness" indicator in the GUI, directly, until you are satisfied. There are a number of tools that let you create and edit ICC or ICM files. However, to match screen and printer you will need hardware (spectrophotometer) as well as software tools: you need to measure the output on the screen and on the paper. Also, remember that the printed colour will vary depending on the paper type (let alone colour) used. Start by having a look at http://color.org/profilingtools.xalter Popular hardware is available from X-Rite and others. Ive had a look at some of those profile tools, but cant seem to find one that will let me edit an existing profile like the existing xerox profile i have to amend it as described in the question, for instance that document lists photoshop as being able to edit the ICM / ICC files but i cant actually see how to do this, when looking online it photoshop seems more able to edit its own created ICC profiles than anything else. Ive had a look at x-Rite but unless im mistaken i think all their software is proprietary and quite expensive I don't think there is a cheap solution to editing icc/icm files. The only thing I have some remote experience with is X-Rite and their Profilemaker (which is no longer available and replaced by i1Publish). See https://www.xrite.com/search?search=profilemaker You could try the color profiles available from Xerox, to see if they do the printing better. Installing the profiles is described in the article How to Install ICC Color Profiles Mac OSX. You could also try the free Gutenprint and Gimp-Print software for Mac OS X, which supports Xerox although your exact printer model is not in the list. Thanks, ive tried the colour profiles available from Xerox, but they hardly make a noticeable difference
STACK_EXCHANGE
Why is each MenuItem in my react-contextmenu list identical? For reference, I'm using react-contextmenu to add context menues to my app. I have items in a list, and you can right-click each item to do some actions with it. Each item provides a context menu that is also a list, so I have a couple of map methods in my code. (I made sure to use unique keys as per the info here and here). However, this is the behavior I'm observing (right-clicking any item in the list always shows me the context menu for the last item in the list): Here's my code: function ListItem(props) { // console.log(`CONTEXT OPTIONS for ${props.name}:\n${JSON.stringify(props.contextOptions)}`); // console.log(`Mapped context options for ${props.name}:\n${props.contextOptions.map((option) => option.label + option.subLabel)}`); const contextOptions = props.contextOptions.map((option) => <MenuItem key={option.label + option.subLabel} disabled={option.disabled} divider={option.divider}> {option.label}<span className="context-sublabel">{option.subLabel}</span> </MenuItem> ); return ( <div> <ContextMenuTrigger id="list-item-context"> <div className={"list-item-container" + (props.selected ? " list-item-selected" : "")} onClick={ () => props.onSelection(props.index) } > <p className="list-item-label">{props.name}</p> </div> </ContextMenuTrigger> <ContextMenu id="list-item-context"> {contextOptions} </ContextMenu> </div> ); } I've even gone as far as throwing print statements everywhere to ensure I wasn't somehow passing down a list of the same context menu info too. For instance at one point I wrote the map like so: const contextOptions = props.contextOptions.map((option) => { console.log(`Item: ${props.name}, label: ${option.label}, sub-label: ${option.subLabel}`); return (<MenuItem key={option.label + option.subLabel} disabled={option.disabled} divider={option.divider}> {option.label}<span className="context-sublabel">{option.subLabel}</span> </MenuItem>); }); Output proved my data was correct: Item: ITEM A, label: Copy "ITEM A", sub-label: Item: ITEM A, label: , sub-label: Item: ITEM A, label: Connect, sub-label: - [PRIMARY IP A] Item: ITEM A, label: Connect, sub-label: - [BACKUP IP A] Item: ITEM B, label: Copy "ITEM B", sub-label: Item: ITEM B, label: , sub-label: Item: ITEM B, label: Connect, sub-label: - [PRIMARY IP B] Item: ITEM B, label: Connect, sub-label: - [BACKUP IP B] Item: ITEM C, label: Copy "ITEM C", sub-label: Item: ITEM C, label: , sub-label: Item: ITEM C, label: Connect, sub-label: - [PRIMARY IP C] Item: ITEM C, label: Connect, sub-label: - [BACKUP IP C] Note: empty labels or sub-labels is fine in the context of my app. I literally can't figure out why the UI actually getting rendered isn't showing the correct text. Perhaps it isn't supported? Every example I've seen of this library being used showed explicitly written out MenuItem JSX. I figured it out: The id of each ContextMenuTrigger and ContextMenu must match, and they did. But since I was generating everything dynamically in a list, every element in my list used the same id, so they all displayed the same data. To make my ids unique, I appended the name prop to them, since in the context of my app the names are guaranteed to be unique: return ( <div> <ContextMenuTrigger id={"list-item-context" + `-${props.name}`}> <div className={"list-item-container" + (props.selected ? " list-item-selected" : "")} onClick={ () => props.onSelection(props.index) } > <p className="list-item-label">{props.name}</p> </div> </ContextMenuTrigger> <ContextMenu id={"list-item-context" + `-${props.name}`}> {contextOptions} </ContextMenu> </div> );
STACK_EXCHANGE
DF-594: Correctly show 'no results' messaging when refined searches yield no results Ticket URL: https://national-archives.atlassian.net/browse/DF-594 About these changes As discussed on Slack, the 'buckets' were previously erroneously tied to the refined/filtered search results. The changes in #422 fixed this issue (tying them instead to the 'original' search query result), but failed to update the template logic conditions to render correctly under the right conditions. How to check these changes If your original search query has no hits in any bucket, the buckets or refinement options should not display, and you should see a clear "No results" message. Catalogue search example: http://<IP_ADDRESS>:8000/search/catalogue/?q=querty Website search example: http://<IP_ADDRESS>:8000/search/website/?q=querty If your original search query has hits in at least one bucket, the buckets should display. However, if the current bucket has no hits, you should see a clear "No results" message and no further refinement options. Catalogue search example: http://<IP_ADDRESS>:8000/search/catalogue/?q=snub&group=creator Website search example: http://<IP_ADDRESS>:8000/search/website/?q=japan&group=highlight If your original search query has hits in at least one bucket AND the 'current' bucket has results, the buckets should display. However, if you use the "Search within results" option and there are no results for that 'more refined' search, you should see a clear "No results" message. with no further refinement options (but you SHOULD be able to change/unapply the "Search within results" query Catalogue search example: http://<IP_ADDRESS>:8000/search/catalogue/?q=japan&filter_keyword=querty Website search example: http://<IP_ADDRESS>:8000/search/website/?q=japan&filter_keyword=querty Before assigning to reviewer, please make sure you have [x] Checked things thoroughly before handing over to reviewer. [x] Checked PR title starts with ticket number as per project conventions to help us keep track of changes. [x] Ensured that PR includes only commits relevant to the ticket. [x] Waited for all CI jobs to pass before requesting a review. [ ] Added/updated tests and documentation where relevant. Merging PR guidance Follow docs\developer-guide\contributing.md Deployment guidance Follow docs\infra\environments.md Hi @TNA-Allan. Regarding your point: When I ran the search query, the text "Try removing any filters that you may have applied" is missing. It appears templates\search\blocks\no_results_catalogue_website.html is used. A better way to explain this might have been: On main, when using the catalogue search view and refining the search so that there are no results, the text "Try removing any filters that you may have applied" is included in the no results message. That appears to be missing when on this branch. This copy appears to be present in templates\search\blocks\no_results_catalogue_website.html Anyhow, I figured out that the search_results.html include template was already handling "no results" cases, so I am letting that take care of the "No results in a refined search" case.
GITHUB_ARCHIVE
Sometimes, you may find the need to configure a static IP address on your system. A perfect example is when you want to make it a server and host services so that it is always reached using a permanent/static IP address. In this new post, we'll take look at how you can configure static IP address on Ubuntu 18.04. There are four main ways of achieving this: - Using Ubuntu Desktop - Using Netplan - Using interfaces file - Using DHCP service Configure static IP address on Ubuntu 18.04 using Ubuntu Desktop Using the Ubuntu desktop GUI is one of the easiest and most preferred methods of configuring a static IP. To achieve this, Head out to the top right corner and click on the 'Network' settings icon and select on the interface connected to the network. In my case, I'm connected to the network over the LAN, so I'll head out to 'Wired Connected" the to "Wired settings" In the next Window, Navigate and Click on "Network" option. To the right, click on the gear icon adjacent to the interface as shown below. To view current settings, click on 'Details' tab To configure a static IP address, Click on the IPv4 option and Click on "Manual". Next, type your preferred IP address, netmask, DNS and default gateway. Turn off the automatic Toggle Once satisfied with your settings configuration, Click on the "Apply" button. Next, Restart the network - Turn OFF and ON - for the changes to take effect. You can now go ahead and verify your new settings. Configuring Static IP using netplan Canonical introduced a new tool for network management since the advent of Ubuntu 17.10 . The /etc/network/interfaces file is no longer used and instead a new network management utility called Netplan has taken its place. Netplan's configuration files are found in /etc/netplan/. The default configuration file is /etc/netplan/01-netcfg.yaml. Open the default configuration file using your favorite text editor # This file describes the network interfaces available on your system # For more information, see netplan(5). network: version: 2 renderer: networkd ethernets: enp0s3: dhcp4: yes To configure a static IP address, where the IP is 192.168.43.245, subnet mask 255.255.255.0, default gateway is 192.168.43.1 and nameservers 192.168.43.1 & 220.127.116.11, replace this configuration with the configuration shown below # This file describes the network interfaces available on your system # For more information, see netplan(5). network: version: 2 renderer: networkd ethernets: enp0s3: dhcp4: no addresses: [192.168.43.245/24] gateway4: 192.168.43.1 nameservers: addresses: [192.168.43.1,18.104.22.168] Save and Exit sudo netplan apply Later on, check the IP address using the ifconfig command to confirm the changes. Configuring Static IP using interfaces file Alternatively, you can configure a static IP using the interfaces configuration file found in By default, the configuration file contains the following lines # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback The next step is to identify the network interface that we need to assign a static IP address. To achieve this, run the following command This lists all the interfaces attached to your system ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp0s3: mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 08:00:27:c0:7f:03 brd ff:ff:ff:ff:ff:ff inet 192.168.43.245/24 brd 192.168.43.255 scope global dynamic noprefixroute enp0s3 valid_lft 2317sec preferred_lft 2317sec inet6 fe80::a4ba:e64c:9105:f617/64 scope link noprefixroute valid_lft forever preferred_lft forever Alternatively, you can use ip link show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: enp0s3: mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 08:00:27:c0:7f:03 brd ff:ff:ff:ff:ff:ff As seen in the above 2 outputs, the interface connected to the network is enp0s3 To configure the address as a static IP open the /etc/network/interfaces file and append the following lines auto enp0s3 iface enp0s3 inet static address 192.168.43.245 netmask 255.255.255.0 gateway 192.168.43.1 dns-nameservers 192.168.43.1 22.214.171.124 - auto enp0s3 This enables interface enp0s3 - iface enp0s3 inet static This sets the interface to use static addressing. - address 192.168.43.245 This is the static IP address - gateway 192.168.43.1 This specifies the gateway - dns-nameservers 192.168.43.1 126.96.36.199 These are the dns servers Finally, Save the configuration file, and reboot or restart networking using the commands shown below ip flush enp0s3 then restart networking service systemctl restart netwroking.service Later on, check your IP configuration to verify the accuracy of the configuration. How to set DHCP IP on Ubuntu 18.04 To set a dynamic IP address for interface enp0s3, you can leave the default netplan YAML configuration file the way it is, or if a static IP was set, you can configure DHCP the following configuration # This file describes the network interfaces available on your system # For more information, see netplan(5). network: version: 2 renderer: networkd ethernets: enp0s3: dhcp4: yes dhcp6: yes Next, as root run systemctl restart networking Check your IP address using At this point, your system should be able to pick an IP address from the router dynamically. As we have seen in this post there are many ways you can configure static IP address on Ubuntu 18.04. You can go for the GUI desktop, use the interfaces file or default netplan file. If you want to revert back to DHCP, you can also follow the last step to accomplish that.
OPCFW_CODE
COMP 3000 2011 Week 9 Notes Test Review (with Ann Fry) will go over Q5 before test 30 mins test Anil will have Q&A before the test Go over lab notes, and study questions 1) Q:Without errors, what does the execve system call return? A:execve doesn't return on success because process is replaced. (Found in man page of execve) 2) Q:Can a process modify its argument variables? Environment variables? A:Yes, Yes. Regular variables defined in process address space by the kernel & made accessible by libc Kernel copies the args of execve into the top of memory of the new program image (running in the old process) 3) Q:Who calls a signal handler? A:The kernel. Kernel emits the signal which is caught by the processes handler, so essentially the kernel calls the handler "machine generated error messages handled by the kerenl are sent via signals eg.(divide by zero & seg fault) Kernel calls the signal dispatcher in the process that calls the actual handler" 4) Q:If a parent and child process start printing "Parent\n" and "Child\n" to standard out, what will be the order of the output? A:Order is indeterminate (printing 2 words continuously). The scheduler determines how they are interleaved, either could go first. 5) Q:In race-demo, why does the consumer sometimes finish before the producer has done hardly anything? A:*dont worry for test* We will get it before next class 6) Q:What does pthread_join() do? A:Blocks calling thread until the specified thread id terminates (Ann posted a link in the answer key of Lab4 with mutex tutorial) 7) Q:How is argc calculated? Note that argc is not specified by execve! A:Calculated by libc in a process before main starts. # of args are counted by walking argv 8) Q:What parts of the kernel would you need to re-implement in userspace in order to simulate a loopback mount? A:Re-implemeting the file system (ext4) in userspace - then you can parse the file system image. WHY? It's a weird data structure, you can parse the file system image just like any file, you just can't use "file" operations because those are implemented in the kernel, but you could simulate them even by copying the kernel code into userspace. 9) Q:Why are sparse files tricky to copy? A:When you read them they do not look sparse. Scenario 1: trasferrring between file systems types, must allocate memory for entire file. 10) Q:What is the relationship between lost+found and fsck? A:If an improper shutdown, Linux throws all potentially corrupt files into lost+found directory for that partition fsck is used for disk recovery. fsck will go into lost+found directory and try to recover files
OPCFW_CODE
""" Created on March 18 2021 @author: Dongfang Xu Part of this library is based on sentence-transformers[https://github.com/UKPLab/sentence-transformers] """ import math import queue import numpy as np import torch import torch.multiprocessing as mp import transformers from sentence_transformers import SentenceTransformer, models from sklearn.metrics.pairwise import cosine_similarity import read_files as read class ConceptNormalizer(): """ Loads or create a concept normalizer model, that can be used to map concepts/mentions to embeddings. :param model_name_or_path: Filepath of pre-trained LM or fine-tuned sentence-transformers. If it is a path for fine-tuned sentence-transformer, please also set sentence_transformer True. :sentence_transformer: This parameter can be used to create custom SentenceTransformer models from scratch. :search_over_synonyms: Whether to generate concept embeddings by averaging synonyms of that concept and then search over concept. """ def __init__( self, model_name_or_path: str = None, sentence_transformer: bool = False, search_over_synonyms: bool = True, ): if sentence_transformer == False: ######## Load pre-trained models ######## ######## word_embedding_model = models.BERT(path_to_BERT-based-models) ##### ######## word_embedding_model = models.RoBERTa(path_to_RoBERTa-based-models) ##### word_embedding_model = models.BERT(model_name_or_path) # Apply mean pooling to get one fixed sized sentence vector pooling_model = models.Pooling( word_embedding_model.get_word_embedding_dimension(), pooling_mode_mean_tokens=True, pooling_mode_cls_token=False, pooling_mode_max_tokens=False) self.concept_normalizer = SentenceTransformer( modules=[word_embedding_model, pooling_model]) else: #### load fine-tuned sentence-BERT models #### self.concept_normalizer = SentenceTransformer(model_name_or_path) self.search_over_synonyms = search_over_synonyms self.concept_mentions = {} self.concepts = [] def generate_embeddings(self, ontology): for idx, [synonym, concept] in enumerate(ontology): read.add_dict(self.concept_mentions, concept, synonym) if len(self.concepts) == 0: self.concepts = self.concept_mentions.keys() self.synonyms = [] self.concept_mention_idx = {} self.idx_to_concept = {} idx = 0 for concept in self.concepts: concept_synonyms = list(set(self.concept_mentions[concept])) self.synonyms += concept_synonyms end = idx + len(concept_synonyms) for index in range(idx, end): self.idx_to_concept[int(index)] = concept self.concept_mention_idx[concept] = (idx, end) idx = end self.ontology_embedding = self.concept_normalizer.encode(self.synonyms) if self.search_over_synonyms == False: ontology_embedding_avg = [] for concept in self.concepts: s, e = self.concept_mention_idx[concept] embedding_synonyms = self.ontology_embedding[s:e] ontology_embedding_avg.append( np.mean(embedding_synonyms, axis=0)) self.ontology_embedding_avg = np.asarray(ontology_embedding_avg) self.ontology_embedding = None def load_ontology(self, concept_file_path=None): if concept_file_path is not None: ontology = read.read_from_tsv(concept_file_path) self.generate_embeddings(ontology) else: raise ValueError("Please specify the path of ontology files") def add_terms(self, term_concept_pairs=[]): """ term_concept_pairs is a list of 2-element tuples, [(syn_1, concept_1), (syn_2,concept_2),...] """ ontology = [[item[0], item[1]] for item in term_concept_pairs] self.generate_embeddings(ontology) def normalize(self, mention, top_k): mention_embedding = self.concept_normalizer.encode( [mention], show_progress_bar=True) if self.search_over_synonyms: similarity_matrix = cosine_similarity(mention_embedding, self.ontology_embedding) else: similarity_matrix = cosine_similarity(mention_embedding, self.ontology_embedding_avg) # similarity_matrix = similarity_matrix.astype(np.float16) idx = np.argsort(similarity_matrix).astype( np.int32)[:, ::-1][:, :top_k] scores_pre = [row[idx[i]] for i, row in enumerate(similarity_matrix)][0] concepts_pre = [[self.idx_to_concept[item] for item in row] for row in idx][0] if self.search_over_synonyms: synonyms_pre = [[self.synonyms[item] for item in row] for row in idx][0] predictions = [(cui, syn, score) for cui, syn, score in zip( concepts_pre, synonyms_pre, scores_pre)] return predictions else: predictions = [(cui, self.concept_mentions[cui][:2], score) for cui, score in zip(concepts_pre, scores_pre)] return predictions
STACK_EDU
|Version 4 (modified by jbenito, 3 years ago) (diff)| cds-indico-0.96.2.tar.gz - tar file - 23 July 2010 CERN production branch (which is always a bit ahead of the official releases) is accessible via anonymous CVS. $ export CVSROOT=:pserver:email@example.com:/log/cvsroot $ cvs login (password is "anonymous") $ cvs co -r release_0_96-patches indico Once checked out, go to the indico/code/ directory then copy the indico/code/dist/config.xml file in there. Then proceed as usual ("setup.py upgrade" then "setup.py install") To upgrade from version 0.8.14 and previous, you must run the script "Tools/Migration0.8.14To0.90.0.py" after the upgrade Indico now require python 2.4 To upgrade from 0.90.3 and previous versions: - if you want to use the new html cache feature make sure the XMLCacheDir in config.xml is set to a path which is writable by the http server. Else, disable the cache by going to the "Admin" "Main" page. - PIL is now mandatory - reportlab version must be 2.0 - run the following script: - tools/indexes/reindexOAIModificationDates.py (only needed if you use the OAI gateway) To upgrade from 0.92.2 and previous versions: - run the following scripts: To update from 0.94 and previous versions, you must run the following script before installation: To upgrade from 0.96.2 and previous versions, or CVS release_0_96-patches: - install pytz: - install simplejson: - package: http://pypi.python.org/pypi/simplejson/ - installation tips: - ungzip and untar the package - sudo python setup.py bdist_egg - if you have easy_install: sudo easy_install -UZ simplejsonXXXXX.egg - if you DO NOT have easy_install: unzip the egg file (actually it is a zip file) and copy the folder simplejson in your python/lib/site-packages folder - run script tools/TZMigration.py (NOTE: firstly you have to install Indico package and then run this script) - You must stop the Apache server while running the script because user interaction with Indico could cause inconsistencies. Indico migration at CERN took ~4 hours - You will be asked to provide the category timezones file. You, as Indico admin, have to create this file in order to choose which timezone will be set to each of your categories. The file must have 2 columns (separated by 1 tab), the first one will contain the ID for each Indico category and the second one is the TZ for the corresponding category. Please, see an example here. - A complete list of timezone codes is here - On the other hand, if you do not want to create the categories timezones file, you can provide an empty file as parameter. The script will add the categories to your server timezone (default timezone). And afterwards, you will be able to change the TZ for each category by hand.
OPCFW_CODE
Initialize a nil pointer struct in method I have a struct called Article which has a field called Image. Per default Image has value nil. As Image should be only persisted as Image.Id to database I use the bson.BSONGetter, bson.BSONSetter and json.Marshaler interfaces to fake this behavior. However internally it is possible to use Image as an io.ReadWriteCloser if I load a file onto this with some other helper. package main import ( "io" "fmt" "gopkg.in/mgo.v2" ) type Article struct { Name string Image *Image } type Image struct { Id interface{} io.ReadWriteCloser } func (i *Image) SetBSON(r bson.Raw) error { i = &Image{} return r.Marshal(i.Id) } func (i *Image) GetBSON() (interface{}, error) { return i.Id } func (i *Image) MarshalJSON() ([]byte, error) { return json.Marshal(i.Id) } Playground The problem with this approach now is that it is not possible to initialize Image in Image.SetBSON as Image is nil. What is your question? I don't see a single sentence that looks like it could be a question in your post. The receiver is passed by value, including the pointer receiver: it is a copy, and changing its value doesn't change the initial pointer receiver on which the method is called. See "Why are receivers pass by value in Go?". A function Setup returning a new *Foo would work better: play.golang.org func SetUp() *Foo { return &Foo{"Hello World"} } func main() { var f *Foo f = SetUp() } Output: Foo: <nil> Foo: &{Bar:Hello World} twotwotwo points to a better convention in the comments, which is to make a package function foo.New(), as in sha512.New(). But here, your Setup() function might do more than just creating a *Foo. Or, following the convention of the standard library, NewFoo() *Foo (or if the module is named foo, just foo.New(), like how crypto/sha512 exports sha512.New()). @VonC okay I think it is a good idea to show you the big picture maybe my approach makes more sense then. @twotwotwo good point. I have included it in the answer for more visibility. @bodokaiser whatever setup is doing in your "big picture", it needs to return a *Foo (in addition of anything else it does) @VonC Really? Maybe you have a better idea for the overall case. @bodokaiser no, I don't. My point is simply that you won't be able to modify the pointer receiver directly. @VonC Okay maybe you could express the not possible a bit more in your answer. I will then move this to the user group :) @bodokaiser meaning: SetBSON(): i = &Image{} will never work, since i *Image is a copy of Image *Image you use to call that method. bson.Unmarshal creates a pointer to an Image value when it comes across it in the bson data. So once we enter SetBSON i is already a valid pointer to an Image struct. That means that there is no reason for you to allocate the Image. package main import ( "fmt" "io" "gopkg.in/mgo.v2/bson" ) type Article struct { Name string Image *Image `bson:"image,omitempty"` } type Image struct { Id interface{} AlsoIgnored string io.ReadWriteCloser } func (i *Image) SetBSON(r bson.Raw) error { err := r.Unmarshal(&i.Id) return err } func (i Image) GetBSON() (interface{}, error) { return i.Id, nil } func main() { backAndForth(Article{ Name: "It's all fun and games until someone pokes an eye out", Image: &Image{ Id: "123", AlsoIgnored: "test", }, }) backAndForth(Article{Name: "No img attached"}) } func backAndForth(a Article) { bsonData, err := bson.Marshal(a) if err != nil { panic(err) } fmt.Printf("bson form: '%s'\n", string(bsonData)) article := &Article{} err = bson.Unmarshal(bsonData, article) if err != nil { panic(err) } fmt.Printf("go form : %#v - %v\n", article, article.Image) } http://play.golang.org/p/_wb6_8Pe-3 Output is: bson form: 'Tname6It's all fun and games until someone pokes an eye outimage123' go form : &main.Article{Name:"It's all fun and games until someone pokes an eye out", Image:(*main.Image)(0x20826c4b0)} - &{123 <nil>} bson form: 'nameNo img attached' go form : &main.Article{Name:"No img attached", Image:(*main.Image)(nil)} - <nil> Interesting alternative, more detailed than my answer. +1 @stengaard I do not get the benefit of such a method. I mean isn't the problem that I cannot set a field in BSONSetter on an nil pointer? @bodokaiser - yes, you are right. Just updated my answer to actually solve the original problem (though my first solution did sidestep the issue). But in my case this leads to a panic because nil has no field id.
STACK_EXCHANGE
Debug and Run Azure Functions Locally Azure Functions are great for running bits of processing on a trigger without having to worry about hosting. Recently, I needed to debug an Azure Function—I needed to hunt down a particularly evasive bug that wasn’t showing up in the unit and integration tests. As it turns out, debugging an Azure Function isn’t as trivial as simply running the debugger in Visual Studio. Instead, it requires some setup to replicate the environment and configuration typically available in Azure. Now that I’ve learned how to debug an Azure Function, I thought I’d simplify the process for you by walking through the steps in this blog post. This explanation will also lend itself well to running Azure Functions locally without debugging if you want to host them on-premises, as the steps to do so are very similar. Note: I’m not covering the code behind an Azure Function, but I’ll assume you intend to create one and will need to debug it locally. In addition, this blog post is written from the perspective of using C#/.NET; you might experience minor differences in the process if you’re using other coding languages. When running in Azure, the configuration backing an Azure Function is provided by the application configuration attached to the Azure App Service on which the Function is running. This configuration will need to be replicated using a local JSON file, but there will likely be a few differences in values. In addition, Azure Functions require an Azure Storage account for handling triggers and logging. When running locally, the Azure Storage account can be replaced with an installed emulator. We’ll start with the emulator since it impacts the values put in configuration. Once the configuration and storage are in place, you can debug the Azure Function from Visual Studio by setting the function project as the startup project for the Visual Studio solution. To run all Azure Functions (except HTTP triggered functions), an Azure Storage account must be available. The connection string for this storage account is stored in the AzureWebJobsStorage configuration key on the Azure Function App Service. When developing or debugging a Function, it’s common to run the Azure Storage Emulator locally instead of using a storage account. The Emulator is a Windows-only tool capable of simulating blob, queue, and table storage. There are two ways to get the Azure Storage Emulator: - You can install it via the Visual Studio installer. It’s included in the Azure Development workload or can be individually selected under the Cloud, database, and server section. - You can download a standalone installer from the official Microsoft documentation here. To connect the Function to the Emulator, replace the AzureWebJobsStorage connection string with “UseDevelopmentStorage=true” (see next section for where to configure this). The Emulator will automatically start when the project is run in Visual Studio, and no further steps should be needed to connect to it. If you’re using Emulator, I recommend reviewing the Microsoft documentation. Although the appeal of using the Emulator is its ease of setup, it’s a feature-filled tool with quite a bit of configuration available. The documentation reviews topics such as command line control of the Emulator, configuring the SQL backend of the Emulator, and setting up authentication, all of which are beyond the scope of this blog post. To provide the configuration values typically found in the Azure Function’s App Service configuration, a local.settings.json file is used. The file is typically added at the project root directory, and, if added to the solution in Visual Studio, is set to Build Action: None and Copy to Output Directory: Copy Always or Copy If Newer. This file isn’t typically checked into source control because of the amount of infrastructure access information it contains, so it’s recommended you add it to a .gitignore file (or equivalent if using an alternative to Git). The full specification for the local.settings.json file can be found in the Microsoft documentation, but a sample file might look like the following: All the settings required for the Azure Function should be included in this file under the Values section. Typically, these values would match those found in the App Service configuration but changing the values here would allow for a separate development environment if desired. A few specific settings in this file warrant a mention. - The “AzureWebJobs.FunctionName.Disabled” values are unique to the settings.json file and won’t appear in Azure. These values will be discussed in more detail in the next section. - The “LocalHttpPort” value is ignored when running via Visual Studio, which will set a port as a command line argument automatically. You can set this value by going to the project’s properties and including the following line in the Application Arguments section: host start --pause-on-error --port #### - The “AzureWebJobsStorage” line was discussed in the previous section as a way to connect to the Azure Storage Emulator. In the previous section, I discussed using the configuration values “AzureWebJobs.FunctionName.Disabled” to run only certain Azure Functions at a time. The pattern I’ve used for developing related Functions is to put multiple within the same project, which is then deployed as a single DLL to Azure. However, when you run the Functions locally, by default it runs all Functions. This can pose a problem when only one Function needs debugging and you’d prefer the Azure infrastructure run the rest of the Functions while you work on the problematic Function. The “AzureWebJobs.FunctionName.Disabled” values need to be added for each Function you want to prevent from running locally; there’s no logical inverse of only running a specified Function. An example local.settings.json file with these values might look like the following: I like to add a new line for every Function in the project, with a default value of true, to the local.settings.json file as soon as I create the Function. This helps prevent accidentally running a Function that’s already running in Azure. But what happens when you run a Function locally that’s also still running in Azure? Requests to the Function can get routed to either running instance of the Function indeterminately, which can become an issue if you’re testing changes and have two different versions running. Perhaps you want to test an Azure Function that was built in a Continuous Integration (CI) pipeline, so you need a way to run the .DLL file directly. Or you want to use the Azure Function framework but don’t want to have the processing hosted in Azure. (Although I’ve never seen such a scenario, it’s possible.) You would want to use the Azure Functions Core Tools to do so. You can find instructions for installing Azure Function Core Tools here or on NPM. Azure Functions Core Tools are fairly simple to use, especially if you already have the local.settings.json file created. Copy the settings file to the root of the Functions project next to the DLL, open a command prompt from there, and run “func start.” The Function will run based off the same trigger it would use if hosted in Azure or it can be manually triggered for testing using a local HTTP endpoint. For all Functions using an HTTP trigger, the default port is 7071, but it can be changed via either the local.settings.json file or at the command line. Azure Functions Core Tools include many other features useful throughout the Function development cycle, such as scaffolding new Function projects or publishing them to Azure. In this guide, I covered the following: - Setting up the configuration (via local.settings.json) and infrastructure (with the Azure Storage Emulator) locally to replicate the hosted Azure environment for Azure Functions. - Customizing the Function configuration within local.settings.json, so local runs are both useful and don’t interfere with development-hosted resources - An introduction to the Azure Functions Core Tools, which include methods to run existing Functions I hope this walk through has been useful. And may all of your Azure Functions be performant!
OPCFW_CODE
move windows to another non-empty hard disk I have a failing hard disk which has Windows installed. However, it is still able to copy files. I have another hard disk, which is not empty and I don't want to format it. Is it possible to move the Windows installation, to the other hard disk without having to re-install Windows? There is ample space on the new hard disk as its a 1TB hard disk. OS is Windows 7 SP1 If you haven't thought of this already you might want to copy everything you don't want to lose off the hard drive first, it may only be possible to copy files for a limited amount of time before the drive dies completely. @Richard I have a backup, so no worries about that. It has several programs installed and configuration, and would like not having to go through that process again by re-installing the OS. Is it possible? If it was me I'd just re-install as I wouldn't have confidence that all of the files on the failing disk were uncorrupted. There are disk cloning tools although you may need to convince the system to boot using the new disk. Sorry I can't be more help! Create a parition on the second disk the same size as your system disk. Create an image of the system disk and restore the image to the disk. The parition will need to be at the start of the disk for this process to work. You also will have to move the data off the drive ( for the time being ) for this process to work. A second data partition can be made after you know the system partition boots. You can try doing a dd_rescue to an image and then restoring that image to a new partition on the good disk. You can try creating restore disk. To create a restore point Open System by clicking the Start button Picture of the Start button, right-clicking Computer, and then clicking Properties. In the left pane, click System protection. Administrator permission required If you're prompted for an administrator password or confirmation, type the password or provide confirmation. Click the System Protection tab, and then click Create. In the System Protection dialog box, type a description, and then click Create. then copy that restore disk to your hard disk. If you again want to install this just burn it in DVD and you can again restore to that. You need to provide a little more detail at least and not just a mere link. Not to mention you linked to a page that discusses how to create a restore point. Mind explaining how exactly System Restore will help in this situation? Also, lmgtfy links aren't acceptable (not to mention rude). People come here from Google and other search engines looking for answers. Sorry for this, i am new here. I will follow it in my further actions.
STACK_EXCHANGE
The Wizdom product is characterized by being highly and easily customizable. This is due to the concept of templates and extensibility in Wizdom. In this article, we will present you to these concept and show you how you can use, respectively, templates and extensibility to customize your intranet. Introduction to Wizdom templates Most of Wizdom’s modules include one or more templates that each offer different options for functionality, look, and feel of the module and it’s web parts. E.g. one template for the Corporate News web part will show five highlighted news at a time while another will only show one, one template will make the highlighted news slide while others offer a static view. Example of two different templates for the Corporate News web part You can either employ a predefined template or create your own. In Wizdom Configuration Center you will have an overview of all templates defined for Wizdom’s modules. Go to the ‘Modules’ section in the Wizdom Configuration Center. Clicking on a module and then the ‘Template’ tab within that module, you will see all templates available for the particular module. Templates for Wizdom on the modern experiences have its own tab in the module administration of all Wizdom modules. This tab is called ‘Modern Templates’. Often the different parts of a module will have a template of its own. E.g. the Noticeboard module have a template that defines the look, feel, and functionality of news users meet when they have opened a news to read it, a selection of templates to choose between that defines the look and feel of the list of news seen in the news feed, a template that defines the look, feel and functionality of the interface users meet when they are writing news, etc. In this way, all or parts of a module and its web parts can be configured to meet unique needs by the means of templates. Employing predefined templates Each Wizdom web part employed on intranet pages uses one or more templates that define the look, feel, and functionality of the particular web part. To define which template a web part should employ, take the following steps: 1) Start by editing the page by pressing the edit icon in the utility navigation. Some browsers will not show the edit icon. If this is the case for you, you can edit the page by pressing the setting wheel and then click ‘edit page’. 2) Press the ‘Settings’ button in the web part. 3) In the Window that appears, click the small triangle under ‘Template’ and select the template you’d like to employ. Remember to click ‘SAVE’ before checking in and publishing your changes. The web part will now use the template you have selected. Creating own custom templates To create own custom templates take the following steps: 1) Go to the ‘Modules’ in Wizdom Configuration Center (modules in the ‘Admin’ section does not include templates). Clicking on a module and then the ‘Template’ tab within that module, you will see all templates defined for the particular module. Templates marked with a blue sign with the text ‘System’ are predefined templates that comes with the Wizdom product. Templates marked with a green sign with the text ‘Created’ are predefined templates custom created for your organization. 2) Select a template and press ‘Save as’. This will create a copy of the template you have selected. This means, it pays to select a predefined template that resembles the result you wish to achieve. If you wish to start from scratch, just delete the code in the template. 4) Click ‘Save’ and you have created a template with your own custom features and styling that you can now employ in a web part. The concept of extensibility Wizdom extensibility allows for more advanced customizations than Wizdom templates does. Even though Wizdom includes a wide selection of features, some businesses have very unique needs that is not met by the product out-of-the-box. Perhaps an organization wish to add custom features to some of Wizdom’s modules, to develop own custom modules for their Wizdom application, or maybe the organization have 3th party business applications they would like to integrate with their digital workplace based on Wizdom and SharePoint. That could e.g. be a document management or a time management system. The concept of extensibility in Wizdom allows these organizations to easily customize Wizdom to meet their needs. The customizations made will be fully supported and compatible with future Wizdom and SharePoint updates.
OPCFW_CODE
Procedural wood with many combinations. I will also attach a file with the source of the promo video. With its quick presets. I think this will help you figure it out faster. I am currently working on "Easy Procedural Wood v1.5" A modular system will be added for simplicity and ease of use. Based on experience with "Easy Material Setup". Also, the wood module will be additionally added to the main assembly "Easy Material Setup". Attention! This shader is completely procedural. I wanted to make it as realistic as possible. Therefore, when using the renderer on the CPU, performance issues are possible. Also, if you use damage generation, it greatly affects performance. In this case, in order to increase performance, in large scenes you will need to bake the result. I remind you again. This shader is completely procedural and I wanted to make it as realistic as possible, right down to the fibers in the tree. Thank you for understanding! The shader has three levels of material in it, these are wood, damaged wood and painted wood (or painted damaged wood). The shader is completely procedural and generated on your object. If you only want wood, this is the upper part of the shader, it generates wood material and its variations, also with light damage. But if you need more control - go to the middle of the shader, it adjusts the wear damage based on AO, Sharpness and Edges. (neat, this requires more performance from the computer). Natural wear such as scratches and stains is generated at the bottom of the shader. I will show in more detail in the video: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Some of my customers have gotten performance issues. I will release a simplified shader later for better performance. BUT performance problems are not due to poor optimization. This shader is just trying to be as realistic as possible. Since I'm worried about my customers and I'm interested in making sure you are satisfied, I recorded a video with a short explanation of how to improve performance in blender3d. 1. In "Edit - Preferences - System" make sure that you have activated the checkbox on the video card. 2. In the Render Properties, switch to Cycles. This shader is too complicated for EEVEE and although it works, it slows down a lot. 3. Also switch to GPU rendering. 4. Use an Optix denoiser or an Open Image Denoiser. 5. At the actual rendering, if you have an average or good video card use big tiles size for best performance(256 px or 512 px (for my)). Or Image Guide for you :) Three steps for best performance: |Dev Fund Contributor| |Published||over 1 year ago| |Software Version||2.83, 2.9, 2.91, 2.92, 2.93, 3.0| Have questions before purchasing? Contact the Creator with your questions right now.Login to Message
OPCFW_CODE
Intel Cluster Studio XE 2012 provides an MPI hybrid development suite that targets developers on high-performance clusters. The suite includes Intel MPI Library version 4.0 Update 3, which provides interoperability with OpenMP. Thus, you can develop and optimize hybrid MPI/OpenMP applications to take full advantage of the capabilities provided by the high-performance clusters that you target. When you decide to use both MPI and OpenMP with Intel MPI Library, you must use the -mt_mpi compiler command option to link the thread-safe version of the Intel MPI Library. This setting will be automatically applied when you use either the -Qopenmp or the -Qparallel options for the Intel C/C++ compiler, and therefore, those options will link the thread-safe version of the Intel MPI Library even if you don't add the -mt_mpi compiler command. When you work with the thread-safe version of the Intel MPI Library, any of the following three levels will have the thread-safe version linked: It is also necessary to set the appropriate value to the I_MPI_PIN_DOMAIN environment variable. This variable allows you to control the process pinning scheme for hybrid MPI/Open MP applications. The possible values for this variable allow you to define a number of non-overlapping domains of logical processors on a node and a set of rules on how the MPI processes are bound to those domains. You will always have one MPI process per one domain, and each domain will be composed of certain logical processors. Each MPI process can create threads that will be able to run on the logical processors within the domain assigned to the process. When you set a value for the I_MPI_PIN_DOMAIN environment value, any value assigned to the I_MPI_PIN_PROCESSOR_LIST environment variable will be ignored. I_MPI_PIN_DOMAIN environment variable has the following three syntax forms to define the domain. I_MPI_PIN_DOMAIN=<mc-shape>— Define the domain by using multi-core terms. For example, the I_MPI_PIN_DOMIAN=coreestablishes that each domain consists of the logical processors that share a particular core. If you set this value, the number of domains for a node is going to equal the number of cores for the node. Other options allow you to define the domain by socket, nodes, or the different cache levels that the logical processors might share. I_MPI_PIN_DOMAIN=— Define the domain by specifying the domain size and the domain member layout. It is also possible to define only the size. The size value determines the number of logical processors in each domain. You can specify the desired number of logical processors to determine the size. However, the most convenient option for hybrid MPI/OpenMP applications is usually I_MPI_PIN_DOMAIN=ompto make the domain size equal to the OMP_NUM_THREADSenvironment variable value. This way, the processing pinning domain size is going to be equal to OMP_NUM_THREADSand each MPI process can create OMP_NUM_THREADSthreads for running within the corresponding domain. If OPM_NUM_THREADSisn't set, each node will be treated as a separate domain, and therefore, each MPI process will be able to create as many threads as the number of available cores. In addition, you can specify the ordering of the domain members in the optional layout parameter. The default value is compact, which means that when you specify I_MPI_PIN_DOMAIN=omp, it will be equivalent to I_MPI_PIN_DOMAIN=omp:compact. The compact option determines that domain members are located as close to each other as possible according to their common resources; e.g., cores, caches, sockets. The compactvalue benefits MPI processes that take advantage of sharing common resources; e.g., cores, caches, sockets. On the other hand, the scattervalue determines that domain members are ordered so that adjacent domains have minimal sharing of common resources. The most convenient value depends on the available hardware, the kind of application, and its specific needs. I_MPI_PIN_DOMAIN=<masklist>]— Define the domain by using domain masks. For example, you can use a comma-separated list of hexadecimal domain masks that establish whether processors are included in each domain or not based on the BIOS numbering. So, if you set the -mt_mpi compiler command option, set I_MPI_PIN_DOMAIN=omp, and configure OMP_NUM_THREADS to establish the desired number of threads for OpenMP, you will be able to execute hybrid MPI/OpenMP applications that can take full advantage of all the possibilities and configurations offered by the high-performance clusters. You can simply set different values for both OMP_NUM_THREADS with the mpiexec job startup command. For example, you can use -env I_MPI_PIN_DOMAIN omp as part of the mpiexec options to establish the value for omp. In addition, you can use setenv OMP_NUM_THREADS=8 to establish the value for You can create threads that execute code in parallel with OpenMP and use MPI to coordinate higher-level communications. You can have different levels of optimizations, and you can tune them with the tools that Intel Cluster Studio XE 2012 provides you, such as Intel Trace Analyzer and Collector, Intel VTune Amplifier XE, Intel Inspector XE, and Intel MPI Benchmarks. You can run the application with different options, analyze, and then tune your code and the configurations. Intel Cluster Studio XE 2012 is a commercial product, but you can download a free trial version here.
OPCFW_CODE
Recently there have been some great new additions to the QGIS project. Being part of such a fast moving project is a great feeling, and it’s only going to get better. This post is going to be a quick over view of some of the newer features that I really like. Well of course I like this one, I just added it. The reason I added this feature was because I really wanted to way to have popup images on the map canvas for flood damage reports on roads. I also wanted it dynamic so I could use template like syntax to replace values at run time. I’ll let you think of some nice use cases for this new addition. Project macros and non blocking notifications This new feature comes from Giuseppe Sucameli of faunalia with the work done for ARPA Piemonte. The task was to add Python macros that run when a project is open, saved, closed. As a side effect of the task the issue of security was raise and how to notify the user that macros are going to run. For me this was less about security and more about how to present that information to the user without annoying the crap out of them. Most of the time popup dialogs in software are a anti-pattern and are often abused for tasks like this. So knowing I would throw my computer out the window if I had to dismiss yet another dialog I suggested a less intrusive method being used a lot these days. The handy slide out notification bar. Giuseppe was very welcoming to the idea and implemented it nicely. Of course this addition can also expanded into other areas of the program. My first plan is to use it for notifying the user of plugins to failed to load. There is nothing in QGIS that annoys more then starting and seeing this: To make matters worse if more then one plugin fails to load then I have to dismiss each dialog. So we can now use the notification bar to present it to the user in a nice non-blocking way. Something like “BTW four plugins failed to load at startup. What would you like me to do?” Remember each time you use a blocking popup dialog it’s pretty much yelling at the user “OMG GIVE ME ATTENTION!! NO YOU CAN’T KEEP WORKING! GIVE ME ATTENTION!” I’m working on a patch to move this stuff into the notification bar just no ETA at the moment as I’m a bit busy. Larry Shaffer has been working on some great improvements to the new labeling engine in order to make our maps look a lot more professional. Larry has been doing a lot of work in this area and is still going so I’m not going to go into all the details. However one new labeling feature that I really like is the ability to to set the spacing between letters and words. There is also the new ability to set the transparency of the label and the buffer. The buffer transparency is something that I really like as sometimes you need a buffer but a solid buffer can then block out your map features; by adding a 45% transparent buffer I still have the labels pop off the map but not in your face or blocking features. It’s hard to make a picture to explain it well so you’ll just have to experiment. This one could be quite handy for people that make a lot of maps with the same base data. Thanks to Etienne Tourigny QGIS can now load projects as a template. This means you can create a project with all your base layers, styles, labels, etc, configured and then load it by default, or from the file menu, and you will have everything setup. All you have to do is save the a normal .qgs project file in ~/.qgis/project_templates folder and the project will be shown in the file menu. You can also set the current project as the default template: And last but not least. This years GSoC student Arunmozhi got the improvements he had (has) been working on included into the master build. Arun was very welcoming to any feedback that Martin and I gave him about how we would like symbol stuff to work. Anita Graser has already covered a lot of the new features over on her blog so I’m not going to go over everything again, although one thing she didn’t really touch on was the smart groups and tagging. The tagging and smart groups are one of my favorite additions to the new symbol manager. I love this new feature as not all the symbols I create belong to a single group so the tagging and smart groups fit this bill well. I can now tag all the council symbols with ‘SDRC’ and include them a SDRC smart group but at the same time tag the sewer ones with ‘sewer’ and they can live in the sewer style smart group; or how about all sewer symbols that are also SDRC ones: You can then filter by this group in the symbol selector: I really love how fast QGIS is moving forward. There almost isn’t a week that goes by that something isn’t getting done or someone is adding something new. Of course the great people on the project make this process a hell of a lot of fun and enjoyable. Have fun experimenting! (remember that these features are in the master development build and may or may not have bugs)
OPCFW_CODE
As the volume of online content continues to grow exponentially, it becomes crucial to have effective tools that can analyze and detect various aspects of that content. AI content detector tools have emerged as invaluable assets in this regard, offering powerful capabilities to identify, classify, and analyze digital content. In this article, we present a compilation of the top 10 AI content detector tools that are widely used today. These tools employ advanced machine learning algorithms and natural language processing techniques to help individuals and organizations maintain content quality, adhere to guidelines, and ensure compliance. Let’s explore these tools in more detail: Free AI Content Detector 2023 – Unlimited Top 10 AI Content Detector Tools - TensorFlow: TensorFlow, an open-source machine learning framework developed by Google, provides a range of AI content detection capabilities. It offers pre-trained models and tools that can be used for text classification, sentiment analysis, and content moderation. With its extensive community support, TensorFlow allows users to build and customize their own content detectors based on specific requirements. - Amazon Rekognition: Amazon Rekognition is a cloud-based service that leverages AI to analyze images and videos. It can identify explicit and suggestive content, detect faces, recognize objects and scenes, and even perform celebrity recognition. Amazon Rekognition offers a user-friendly API, making it accessible for developers to integrate content detection capabilities into their applications. - Perspective API: Perspective API, developed by Jigsaw (a subsidiary of Google’s parent company, Alphabet Inc.), is designed to assess the quality and toxicity of online comments and content. It uses machine learning models to detect potential harassment, insults, and other forms of abusive language. Perspective API can be integrated into content moderation systems, social media platforms, and discussion forums to promote healthier online conversations. - OpenAI’s GPT-3: GPT-3, developed by OpenAI, is a state-of-the-art language model that can be used for various content detection tasks. With its advanced natural language understanding capabilities, GPT-3 can assist in identifying plagiarism, generating summaries, detecting misleading information, and even evaluating the sentiment and tone of written content. - Google Cloud Natural Language API: Google Cloud Natural Language API offers a comprehensive set of AI-powered content analysis tools. It provides features such as sentiment analysis, entity recognition, content classification, and syntax analysis. With its robust infrastructure, this API enables developers to build scalable content detection systems with ease. - IBM Watson Natural Language Understanding: IBM Watson Natural Language Understanding is a cloud-based service that utilizes AI to analyze text and extract valuable insights. It offers features like sentiment analysis, keyword extraction, concept tagging, and emotion detection. IBM Watson’s content detection capabilities are highly customizable, making it suitable for a wide range of applications. - Clarifai: Clarifai is an AI-powered visual recognition platform that specializes in image and video content detection. It offers models and APIs for object recognition, explicit content detection, face analysis, and custom training. Clarifai’s user-friendly interface and comprehensive documentation make it a popular choice among developers. - Sightengine: Sightengine is an AI content detection tool that focuses on visual content moderation. It employs deep learning algorithms to identify and filter out explicit or inappropriate images, videos, and text. Sightengine’s real-time content detection capabilities and high accuracy make it suitable for various applications, including social media platforms and online marketplaces. - Azure Content Moderator: Azure Content Moderator, part of Microsoft Azure’s cognitive services, is a cloud-based content moderation tool. It offers features like text moderation, image moderation, and video moderation, with customizable rules and filters. Azure Content Moderator helps organizations maintain compliance and enforce content policies across different platforms. - MonkeyLearn: MonkeyLearn is a versatile AI platform that allows users to create custom models for content detection. It offers a wide range of pre-built models and tools for tasks such as sentiment analysis, topic classification, intent recognition, and more. MonkeyLearn’s user-friendly interface and integration options make it a popular choice for developers and non-technical users alike. Conclusion: AI content detector tools play a vital role in today’s digital landscape, enabling efficient analysis and moderation of online content. The top 10 tools mentioned in this article, including TensorFlow, Amazon Rekognition, Perspective API, GPT-3, and others, provide powerful AI capabilities to help individuals and organizations effectively detect and manage various aspects of digital content. Whether it’s identifying explicit content, detecting toxic language, or analyzing text and images, these tools offer valuable solutions for content moderation, compliance, and quality control. With ongoing advancements in AI technology, we can expect further innovations and improvements in the field of AI content detection in the future. AI Content Detector The Only Enterprise AI Content Detection Solution Paste your content below, and we’ll tell you if any of it has been AI-generated within seconds with exceptional accuracy. Detect AI-generated Content With 99% Accuracy Now more than ever, it’s crucial to know what content is real and what was created by AI, whether you’re browsing the internet, creating content, or reading through student essays. AI content detection has never been more important. As the only enterprise AI-content detection solution available, and with 99% accuracy plus LMS and API integration, AI Content Detector is the most comprehensive and accurate AI text detection solution available anywhere. AI Content Detection With Full Spectrum Protection Harness the power of the AI Content Detector and get results within seconds right from the Copyleaks platform. Not All AI Content Detectors Are Created Equal AI-based text analysis is the core of who we are. Using a “fight fire with fire” approach, our AI-generated content detector utilizes the power of AI technology to detect the presence of AI providing a more accurate, thorough enterprise AI content checker solution. Complete AI Model Coverage, Detect AI-generated text created across ChatGPT, GPT-4, GPT-3, Jasper, and others. Plus, once newer models come out we’ll be able to automatically detect it. Unprecedented Speed and Accuracy With 99.12% accuracy, our AI-generated text detector gives you the most detailed and accurate findings within a matter of seconds. In-Depth, Detailed Analysis The only AI text detector platform that highlights the specific elements written by a human and those written by AI, offering a new level of insight and transparency. Detection Across Multiple Languages The only AI writing detector supporting multiple languages, including English, Spanish, French, Portuguese, German, Italian, Russian, Polish, Romanian, Dutch, Swedish, Czech, and Norwegian, with more languages currently in the works. Detection That Evolves Credible data at scale, coupled with machine learning, allows us to continually improve our understanding of complex text patterns, offering unprecedented coverage and confidence in our AI content detector tool. Lowell Christensen, CEO, SafeSearchKids Frequently Asked Questions (FAQ) What is AI-generated text? AI text is content created partially or entirely by a text bot or similar software, such as ChatGPT. It has been used to write essays, articles, stories, and more. As we explain in our blog, There are a few ways that AI content detection tools operate. The first is looking for indicators within the text, such as - Linguistic patterns - Repeated words or ideas - Structural features But Copyleaks is equipped to look a little more deeply. We’ve deployed an AI tool ourselves to help combat AI plagiarism. As Alon Yamin, CEO and co-founder of Copyleaks, explains, “With 99 percent accuracy, we’re able to combat the dark side of AI by uncovering AI-digital DNA crumbs that are left behind, which only sophisticated AI like what we’re using is capable of detecting.” AI-generated text detection is free for existing Copyleaks users and does not cost any additional credits. We are working on several different fronts, including: - The ability to detect AI content and text that has gone through a text spinner or otherwise been manipulated (i.e., including deliberate typos). - Across-the-board accuracy improvements. - The support of additional languages and models. We’ll continue to monitor the landscape and closely listen to user feedback to ensure we keep our AI writer detector one step ahead of AI content generators and provide the most accurate results possible. Building Digital Trust and Confidence:
OPCFW_CODE
I18N Section Index | Page 5 How can one store an Image in a ListResourceBundle in order to provide different images for different locales? One method is to save your localized images in a .jar, as, say, image-fr.gif for french and image.gif as default. Then you can reference the image file name from ResourceBundles for each locale. F...more How can I internationalize the spellout of numbers, like two for 2 in English, dos for 2 in Spanish? First, to get a feel for the difficulty of the problem ( and probably more than you would ever want to know, ) see A Rule-Based Approach to Number Spellout. Then be happy that you can freely use...more Here's the best document for a definitive answer, including the difference between character set and charset: RFC2278.more For Netscape see Welcome to Netscape International!. For Microsoft see International Home Pages.more Obviously the standard JDK documentation at Internationalization has links and discussion, but a good paper on the topic from two of the i18n developers is The Java International API: Beyond JDK 1.1.more Are there standard classes that will perform western to Japanese or other date conversions? I'm especially interested in Japanese eras. How to write a servlet which can transfer data written in ISO8859-1 into GB2312 or another kind of character encoding? You can use this method to transfer your data from one standard encoding to another one. I have written it out and tested it under JSDK 2.0, and it runs well. The following is the java code I have...more For each national language supported by Unicode, what is the maximum number of bytes used to represent any character of that language when using UTF-8? I don't know of an available table like that offhand, but you can tell from any character what the UTF-8 result will be. Basically, for UCS-2, or 16 bit Unicode, the character range 0 - hex 7F ( ...more You need to be sure to set the character set when you set the content. The following example demonstrates. The message content is the numbers from 1-10. import java.util.Properties; import javax...more You can specify the encoding at compile time with the -encoding option to the javac compiler. How can one use the ResourceBundle class in an applet to retrieve locale specific information over the Internet? Resource bundle files are located relative to the codebase of the applet. For instance, if you place the applet files into a JAR file, and ask for a bundle with ResourceBundle.getBundle("Trai...more The short answer is to use Numberformat.getCurrencyInstance(), then use the format() and parse() methods. I have included a non-production quality code sample that accepts input in one's local cu...more A Locale object represents and provides various information about "a specific geographical, political, or cultural region," as seen at Locale. However, in and of itself, that's all a Lo...more How can I read GET or POST parameters that were encoded in an international character set? Also, when a user fills in an HTML Form using a custom Input Method Editor for, say, Japanese, how can my servlet/JSP know which encoding was used? The HttpServletRequest object from 2.0 on provides a method for retrieving the client's character encoding. The follwing sample gets the parameter mytext and converts it to a java Un...more I understood that Java always uses Big Endian. Why does the code at How can I store and retrieve Unicode or Double Byte data in a file using the java.io libraries? produce Little Endian output? Actually, Java's guaranteed usage of Big Endian only applies to what I will loosely call "numbers." String or byte encodings for characters are a different matter altogether. The Littl...more
OPCFW_CODE
What is shuffling? Today, a user by the name of Nanashiiiima asked in the SuShCodingYT Discord channel: Does anybody know how to assign every value in a randomized array to a unique integer? In my case, I’m trying to assign the 3 variables in my array with one of three values (I don’t want any of them to be the same). This is a pretty common problem, especially in video games. One example where this might be useful, both in and out of video games, is to play all of the songs in a playlist in a random order while ensuring the listener hears every song exactly once. This is known as "shuffle" and the algorithm described below is indeed how many old school MP3 players implemented the "shuffle" button. Fisher-Yates Shuffle algorithm Please note, this shuffle algorithm has a number of variations; some of which are generically labeled under the same name. See the "Additional Reading" section at the end of this post. I will be demonstrating the range bias-free, O(n) "modern algorithm". This implementation is also an in-place shuffle. That is, it modifies the original array rather than making a copy. You could easily modify this code to make a copy if you prefer to keep the original array immutable. First, let's shuffle a simple array of contiguous integer values. This is a good test case because it's easy to visually verify that it's working correctly. This program prints both the original array and the shuffled array: Shuffling anything (indexed lookup) So we can shuffle lists of integers, that's cool, but what if we wanted to shuffle something else? One idea is we may want to randomly name eight NPCs from a list of preprogrammed names. We obviously don't want two NPCs to end up with the same name, that would be confusing! Another idea is a tasty treat chooser. Choosing which treat to grab from the fridge is always tough, so for this example we'll be shuffling an array of tasty treats. One way to do this would be to use the shuffled array of numbers we generated above as lookup indices into the treats array. This would us to shuffle many times without modifying the original array. Here is an example of this approach: The output if this program is very similiar, except that we now have a shuffle array of tasty treats! Shuffling anything (in-place) While there are times when the solution given above for shuffling tasty treat names is useful, there are other times when you may be asking "Why do I need this intermediate array of indices? Why can't I just shuffle the strings themselves?" and of course the answer is: You can! In this last example, I will show you how you can use this method to shuffle arbitrary data in-place. This is the same algorithm as above, which runs in O(n), but you should consider the cost of the copies for the numerous swaps that have to occur in order to shuffle an array in-place. For an integer (above) or a string pointer (below), the cost is trivial, but for a more complex data types and larger arrays, the cost of performing O(n) copies to and from the "tmp" variable to perform the swaps may become more noticeable or even prohibitive. This is unlikely for most reasonable use-cases, but good to keep in mind. Alright, so now let's do an in-place swap of tasty treats! This program outputs a shuffled array of tasty treats: The Fisher-Yates Shuffle Wikipedia article has a great overview of the history of this algorithm. It also dives deeper into common mistakes and possible sources of bias. The algorithm I've described above is the one described as the "modern algorithm" in the Wiki article. Assuming I've implemented it correctly, this variation elimnates shuffle bias due to range errors, but does not attempt to eliminate modulo bias or PRNG (pseudo-random number generator, i.e. rand()) bias. Read the Wiki article for more information on these topics. The Fisher-Yates Shuffle is a fast and efficient way to randomly shuffle a collection while ensuring no additional duplication or repetition, aside from any that is already in the original list. Hopefully this post has you thinking of new and creative ways to add variation to your game, or any other software project! If you'd like to check out the SuShCodingYT Discord server, you can join it via this link: This Discord server is the community hub for Suraj Sharma's YouTube channel, which you can find here:
OPCFW_CODE
What Is a Linux Server Operating System? A Linux server operating system serves content to client devices. Server operating systems article tools for simple server creation. Because servers generally run heartless, the graphical user interface (GUI) in a Linux server operating system remnant less required. IDC Admit , hardware sales data announce that 28 percent of servers are Linux-based. However this likely doesn’t account for home labbers. While there are dedicated Linux server operating systems. Linux training in chandigarh will help you in learning the linux server operating system. The key use in Long Term Service (LTS) and for installing your fancy software. LTS flavors provide balance and a great support cycle. Here we have different types of Linux server operating system used in Linux:- 1. Ubuntu Server Ubuntu is equable the most well-known Linux operating system. With a plethora of Ubuntu derivatives, it’s a balanced distribution. Ubuntu and its modification offer excellent user experiences. Ubuntu Server is available in two versions: LTS and a rolling-release. The LTS Ubuntu Server discharge boasts a five year support cycle. Although the support cycle isn’t five years, the non-LTS variant appearance nine months of security and maintenance updates. While Ubuntu and Ubuntu Server are pretty identical, Server offers different luxuries. Notably, Ubuntu Server provides OpenStack mites, Nginx, and LXD. Such formation coddle to system administrators. Using Ubuntu Server, you can spiral up web servers, deploy containers, and more. Although it’s not a server distro, Ubuntu LTS does aspect a five year support cycle. I’m currently using Ubuntu 16.04 LTS to run a committed Plex server as well as a Linux game server. LTS dispocal can function properly well as Linux server operating systems. SUSE Linux debuted in 1993. In 2015, open-source alternative openSUSE emigrated toward SUSE Linux Enterprise (SLE). There are two openSUSE derivatives: Leap and Tumbleweed. Leap appearance longer discharge cycles whereas doublewide is the rolling release. Tumbleweed is improved for power customers with its up-to-date glass like the Linux Kernel and SAMBA. Leap is better for balance. Updates bolster the operating system. Default tools posit openSUSE as a crazy Linux server operating system. openSUSE includes openQA for computerized testing, YaST for Linux configuration, and the taxing package manager Open Build Service. In discontinue its previous nine month absolution cycle and focusing on balance like SLE, openSUSE became a consistent Linux server environment. 3. Oracle Linux If you did a binary take when reading “Oracle Linux,” you’re not accomplish. Oracle Linux is a Linux delivery powered by tech giant Oracle. It’s applicable with two kernels. One features the Red Hat Compatible Kernel (RHCK). This is the same kernel as create in Red Hat Enterprise Linux (RHEL). Oracle Linux is confirm to work on lots of hardware from the likes of Lenovo, IBM, and HP. Oracle Linux appearance Ksplice for appreciate kernel security. There’s also support for Oracle, OpenStack, Linux containers, and Docker. It’s branded with an Oracle theme including an Oracle penguin. There is support, but it’s paid. If you need to spiral up a public / private cloud, Oracle Linux is a astronomical server operating system. Alternately, try Oracle Linux if you commonly need the Oracle-branded Linux penguin. 4. Container Linux (Formerly CoreOS) Container Linux is a Linux operating system assembled for expand containers. There’s a absorption on absorption containerized distribution. Container Linux is a admirable operating system for secure, highly extensible deployments. Assortment expand are easy and this distro build means for accountant discovery. There’s character and support for Kubernetes, Docker, and rkt. However, there’s no bucket manager. All apps must run essential containers, so condensation is mandatory. Nevertheless, if you’re working with capsule Linux is the best Linux server running for a batch infrastructure. It offers an etcd which is a daemon constant across each computer within a cluster. You’ve got install agreeability too. In development to an on-premise installation, you can run bowl Linux on basic channels like Azure, VMware, and Amazon EC2. CentOS provides a equitable environment. It’s an open-source accessory of Red Hat Enterprise Linux (RHEL). Thus, CentOS assign an enterprise-class server actions. The Red Hat sponsored operating system uses the equitable source code as found in RHEL. CentOS employs the RPM bottle administrator. Analysis data found that about 30 percent of all Linux servers fulfilled on CentOS. There’s a reason: it’s a very lasting server nature with Red Hat backing. 6. Arch Linux Many servers limit power drinking.Contraction power draw is a major asset particularly for always-on appliance. Similarly, Linux server operating systems should dominate few resources. Appoint resources fully is key for maximum optima and server efficiency. Many Linux distributions use fewer resources than Windows or macOS counterparts. There’s a dedicated server branch of the Arch Linux Wiki. You can learn all about configuring Arch Linux as a attendant operating system. While there’s not a pre-packaged assistant release available for download, this Wiki provides the steps to creating your own. You can install popular server software including MySQL, Apache, Samba, and PHP for Arch. Mageia is a Linux operating system that register security and strength. It’s a fork of Mandriva Linux that break in 2010. A 2012 PC World praised Mageia, now on its fifth reputation. Although there are countless Linux operating systems, there’s also a large list of Linux desktop environments. Mageia includes a group of environments such as KDE, GNOME, Xfce, and LXDE. Rather than MySQL. Server-centric extension like the 389 Directory Server and Kolab Groupware Server make Mageia a stellar Linux server operating system. ClearOS is clearly engineered for servers,entrance machines, and network systems. The standard install features security improvements. There’s a absence firewall, bandwidth administration tools, a mail server, and imposition detection. ClearOS 7 Community Edition sports a massive 75 apps and tools. While there are paid ClearOS tiers, the Community Edition remains free. Additionally, ClearOS updates are absolutely free from upstream sources. Linux training in Chandigarh is extremely in demand CBitss rolled out with a quick Quizzing Session, followed by the hackathon event where the students polished their art of using various linux Platforms like REDHAT, Fedora and some others too. Later, the students learned how to face the interview panel. We believe Training is Success in Progress. Our course really feel student about Corporate world so that they comes to know about Industry terms and conditions.We are CBitss Technologies , which is a unit of Sukrala IT Services Pvt. Ltd. company and it is ISO 9001:2008 certified company which provide best Linux training in Sector 34 Chandigarh.
OPCFW_CODE
Label will not show even when told to show I am making an application that loads a separate form, the user puts in information, and then when done, it will show up on the primary form the application loaded with first. The issue is that I tried multiple solutions to get this to load in, but it will not load in after the information is put in. I have tried this.Controls.Add(Label); which is what I have seen the most, but it has not worked. Another way I tried was doing Label.Show();, but the same result, with nothing showing. The AddContacts(string Name) method below is how I add the contact The AddContact_Click(object sender, EventArgs e) method is a button that, when pressed, opens another form that allows information to be inserted. public partial class Phonebook : Form { public Phonebook() { InitializeComponent(); MaximumSize = new Size(633, 306); } private void AddContact_Click(object sender, EventArgs e) { MakeContact MC = new MakeContact(); MC.Show(); } public void AddContacts(string Name) { Label name = new Label(); //Added Style and Location of Label... name.Text = Name; name.Location = new Point(98, 13); name.Font = new Font("Microsoft Sans Serif", 13, FontStyle.Bold); this.Controls.Add(name); Refresh(); } } Below is the Method I used when the Finish button is pressed, for when the user is done with the information, and then the AddContacts() method is called public partial class MakeContact : Form { public MakeContact() { InitializeComponent(); MaximumSize = new Size(394, 377); } private void FinishContact_Click(object sender, EventArgs e) { //FullName is the name of the TextField when asking for a name string Name = FullName.Text; Phonebook PB = new Phonebook(); PB.AddContacts(Name); //Closes Separate Form and goes back to the Close(); } } Expectation: It should load the label into the form after the information is put in. Actual: It will not show what so ever. EDIT: Added More to the Code and to the Question since I didn't do too good of asking the question, sorry about that :/ Who calls AddContacts method? Are there any other controls on the Form? You are not setting the location of the Label before adding to the form. So the label might be hidden behind other controls on the form. You are also not setting the Text property of the label. It appears that you're creating a new MakeContact Form from the Phonebook Form. But then, in the MakeContact Form, you try to call the AddContacts method of a new Phonebook Form: the original Form will not be affected by any changes you make to this new instance. You need to pass the current instance of Phonebook to the MakeContact Form's Constructor when you create it, so this new Form can use the current instance of Phonebook. In this case, calling a public method of Phonebook will produce the result you're expecting. For example, in MakeContact: private PhoneBook pb = null; public MakeContact(PhoneBook phoneBook) { this.pb = phoneBook }. In FinishContact_Click: { this.pb.AddContacts(Name); } Hey, sorry for the late response. I understand what you are saying and I gave your example a try, but when I do, I get an error saying that when I call MakeContact, that it will require 1 argument, which was Phoneboook phoneBook. I am assuming you mean put Phonebook phoneBook where I have InitializeComponent();, but even if I did another method, it would give me a NullReferenceException, maybe I don't understand you correctly, which i'm sorry about because i'm quite new with the language. An example of what I described in the comments: When you do this: Phonebook PB = new Phonebook(); you create a new instance of the PhoneBook class (your form): this is not the same Form instance (the same object) that created the MakeContact Form and the one you're trying to update. It's a different object. Whatever change you make to this new object, it will not be reflected in the original, existing, one. How to solve: Add a Constructor to the MakeContact Form that a accepts an argument of type PhoneBook and a private object of type Phonebook: private PhoneBook pBook = null; public MakeContact() : this(null) { } public MakeContact(PhoneBook phoneBook) { InitializeComponent(); this.pBook = phoneBook; } Assign the argument passed in the constructor to the private field of the same type. This Field will then used to call Public methods of the PhoneBook class (a Form is a class, similar in behaviour to other class). It's not the only possible method. You can see other examples here. Full sample code: public partial class Phonebook : Form { private void AddContact_Click(object sender, EventArgs e) { MakeContact MC = new MakeContact(this); MC.Show(); } public void AddContacts(string Name) { Label name = new Label(); // (...) this.Controls.Add(name); } } public partial class MakeContact : Form { private PhoneBook pBook = null; public MakeContact() : this(null) { } public MakeContact(PhoneBook phoneBook) { InitializeComponent(); this.pBook = phoneBook; } private void FinishContact_Click(object sender, EventArgs e) { string Name = FullName.Text; this.pBook?.AddContacts(Name); this.Close(); } } I thank you a ton right now, this worked completely fine for me.
STACK_EXCHANGE
A No-BS Guide to the Blockchain as a Service Space Part II A Pragmatic Perspective of the Top Cloud Blockchain Runtimes This is the second part of an article that presents some pragmatic viewpoints about emerging blockchain as a service(BaaS) space. The first part of the article presented some general idea about the adoption of BaaS runtime as an enabler of blockchain solutions as well as a criteria for evaluating BaaS solutions in the real world. Today, I would like to deep dive into some of the most relevant BaaS platforms in the market and provide some analysis from both the technical and market readiness standpoint. This is by no means an exhaustive analysis of the BaaS space. Quite the opposite, the opinions presented here are based on our experience at Invector Labs evaluating and using these stacks in the context of real world blockchain solutions. As a result, some viewpoints can be considered highly subjective but at least they are not based on marketing materials 😉 In the previous part of this article, we presented a 10-factor criterion to evaluate the technical readiness of a BaaS stack. The list considers both basic and highly sophisticated technical capabilities that are proven to be relevant in real world blockchain scenarios. While the vast majority of permissioned blockchain solutions are based on either Hyperledger Fabric or Ethereum, those stacks along can fulfill the requirements of real world blockchain solutions. The capabilities outlines below could be a good baseline to evaluate the technical viability of a BaaS stack: In addition to the aforementioned technical capabilities, there are a few complementary elements that will help you to evaluate the different BaaS platforms: · Implementor Community: Most organizations implementing blockchain solutions require certain levels of professional services. A strong partner ecosystem can help streamline the adoption of a BaaS stack and it’s a strong indicator of its market relevance. · Developer Community: Blockchain technologies are based on open source distributions and BaaS stacks are not the exceptions. A healthy developer community is a strong sign of the viability of a BaaS stack. · Customers: The obvious one, most blockchain implementations today are constrained to the pilot phase. Even so, there is nothing like a strong customer ecosystem to evaluate the market readiness of a BaaS platform. · Blockchain Innovation & Thought Leadership: Is a BaaS stack a mere cloud runtime for blockchain technologies or is it contributing unique innovations to the space? The blockchain infrastructure space is in very stages and is important that BaaS providers actively contribute the research and development of protocols that can improve permissioned blockchain solutions in a unique way. When you examine the BaaS market, the level of activity, marketing press releases, funding rounds announcements can result overwhelming. However, if we use the previous criteria as guideline, there are a handful of vendors that have achieved both a technical and go-to-market early leadership position in the space. In a very short term, Microsoft has been able to build, arguably, the most complete and diverse BaaS stack in the market. What I find refreshing about Microsoft’s BaaS offering is that it expands beyond the integration between Azure and blockchain technologies and has contributed unique innovations to the blockchain ecosystems such as the Coco Framework, the Proof-Of-Authority implementation for Ethereum or the Azure Workbench toolset. · Strengths: A very heterogeneous blockchain stack with support for many technologies, unique contributions to the blockchain research and development space, integration with Azure services and a viable support for hybrid runtimes(cloud and on-premise). · Weaknesses: The customer adoption of the Azure blockchain stack remains limited and the developer and partner communities are relatively small. Arguably, IBM can be considered the most successful BaaS platform in the market. From the customer adoption standpoint, IBM has a significant lead over competitors and the company continue being bullish about their blockchain investments. · Strengths: The IBM Blockchain Platform(IBP) powered by Bluemix is powering some of the best-known permissioned blockchain implementations in the market. Customer adoption and a strong professional services arm are certainly the hallmarks of the IBP offering. From the technical standpoint, IBP has interesting contributions to blockchains governance and security models. · Weaknesses: IBP remains mostly limited to Hyperledger Fabric is the support for other blockchain platforms is almost non-existent. Even in Fabric’s scenarios, IBP has serious limitations in terms of integration with off-chain services or the lifecycle management toolset. AWS is a recent entrant into the BaaS market. In a refreshing sign of honesty, the AWS leadership admitted that, until recently, they didn’t understand the scenarios for permissioned blockchains. However, now they seem to be very committed to the BaaS space and entered the market with a very unique offering. · Strengths: The developer and startup communities have been some of the biggest differentiator of AWS services in the last decade and there is no reason to believe that it will be different with their BaaS stack. Additionally, AWS already signaled that is planning to bring unique innovations to blockchains and distributed ledgers such as the recently announced Quantum Ledger Database. · Weaknesses: The AWS Managed Blockchain stack is constrained to Hyperledger Fabric and Ethereum. The integration with blockchain protocols or frameworks is very limited and so is the current management toolset. Additionally, the customer adoption of the AWS BaaS platform is in very early stages. When comes to building blockchain solutions in the AWS platform, Kaleido remains our favorite platform. Although relatively new, Kaleido brings the technical sophistication of a team that has seen a large number of permissioned blockchain implementations. Just like Heroku democratized cloud development with oversimplified interfaces, Kaleido is following a similar path in the BaaS space. · Strengths: Incredibly sophisticated technology stack which includes support for many blockchain protocols and frameworks. The support for non-obvious components of blockchain solutions such as wallets or block explorers was particularly refreshing. · Weaknesses: Kaleido is a relatively new entrant to the BaaS space and, consequently, the customer adoption remains limited. Additionally, Kaleido is lacking a robust implementor ecosystem that can streamline the adoption of the platform in real world scenarios. Differently from other modern technology trends, Oracle has jumped aggressively and relatively early into the blockchain space. The Oracle BaaS platform has seen initial adoption across different industries and has a very compelling go-to-market strategy. · Strengths: Customer adoption and a robust professional service ecosystem are some of the highlights of the Oracle BaaS stack. From the technical standpoint, the Oracle BaaS platform provides a relatively seamless integration with Oracle Cloud services as well as a compelling management toolset. · Weaknesses: The Oracle BaaS stack has virtually no support for modern blockchain protocols and runtimes and remains a bit of a black-box offering. The developer experience remains incredibly basic and difficult to adopt in large development teams. If the term maturity can be applied to blockchain technologies, BlockApps can be considered one of the most mature BaaS stacks in the market. The BlockApps STRATO platform can be adapted to different cloud runtimes and provides a very strong integration with modern infrastructure technologies. · Strengths: A cloud agnostic model, integration with data storage and messaging technologies and robust management toolset are some of the most visible benefits of BlockApps STRATO · Weaknesses: Despite its maturity, customer adoption of STRATO remains limited and the support for non-Ethereum blockchains is still a major limitation. Putting it All Together A quantitative comparison of BaaS runtimes is not only complex but it takes the risk of being unfair on several subjective aspects. Based on our experience and the experiences of our customers, I put together a very basic comparison ranking of the different BaaS runtimes. I am sure many people are going to disagree with it but hopefully you will find it consistent with the analysis provided in this article. I hope you find this analysis compelling and, as always, your feedback is very welcomed.
OPCFW_CODE
Caveat Emptor: I am not a lawyer, but I have read these licenses quite closely. If you are making any commercial decisions, you must consult a lawyer. Period. Otherwise, you will expose yourself to considerable legal risk. First, let us address your specific questions Question: What exactly is the restriction against linking a GPL-ed product against both an EPL library and an LGPL library? Is it not allowed without the LGPL copyright holder's explicit permission, as it would be with GPL, or is it allowed? Answer: Decomposition follows. - Linking LGPL 2 & 3 code with EPL 1.0 is probably OK. - Ref: LGPL 2, Section 5: A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". - Ref: LGPL 3, Section 4: You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work... - Linking LGPL 2 & 3 code with GPL 2 & 3 code is probably OK. See chart is @0A0D's answer to understand precisely which combinations are allowed. - Linking GPL 2 & 3 code with EPL 1.0 is definitely not allowed. - Regarding copyright holder's explicit permission: You can probably do anything you want if you get permission from all copyright holders. Given the complexity of this legally and logistically, it should be considered nearly impossible. - Example: You could (with heroic efforts), attempt to secure (with money?) permission from all contributors to the Linux kernel to grant you a BSD-style license of the code that you could modify and release as non-free (commercial) software. Again, as noted previously, possible, but unrealistic. Question: Would an exception granted by the EPL copyright holder be sufficient? Such an exception was considered safe by Trolltech (now part of Nokia), when it used to license the Qt library using its own Qt Public License which is GPL-incompatible; and by the KDE project, whose libraries link against Qt and are released under the LGPL, while KDE apps are generally released under the GPL. The FSF's objection is due to "weak copyleft" and "choice of law clause" -- the former seems unobjectionable, if the EPL license holder grants an exception, but what sort of exception granted by the EPL copyright holder would satisfy the "choice of law clause" objection? Answer: Decomposition follows. - Would an exception granted by the EPL copyright holder be sufficient? - As noted above, this is possible, but highly unrealistic that copyright holders of EPL 1.0 code would grant such an exception. - Regarding Trolltech and multi-licensing, the derivative works generally have the option to select which license to apply. Thus, in the case of Trolltech/QPL/GPL, forget about QPL, and just use GPL. - Regarding KDE/LGPL, I am unfamiliar with their licensing strategy and cannot comment. However, surely they have had lawyers review it. AFAIK: KDE is a German registered non-profit and has likely received some pro-bono legal advice on these matters. Even if not, KDE is old enough that, if not in compliance, a copyright holder would have surely objected by now. Read more here. Finally, I am also facing a similar issue as I try to combine Java code from Eclipse and OpenJDK. My reading of the licenses says that combining these works is allowed expressly because Eclipse uses the term derivative work in their GPL 2 & 3 incompatibility statement. Further, the Classpath Exception specifically states that linking against this library does not create a derivative work.
OPCFW_CODE
Can I clean a second story dryer vent that goes out the roof from the inside? My dryer vent goes from the second story to the roof. The problem is that I can't reach the roof. I bought one of those dryer vent cleaners that you can put on your drill and spin it up the vent. Currently it goes 8 feet. Could I buy another one and go up 16 feet and just push or pull the excess lint all the way from the roof top or is it better to have someone do it from the roof. Also, are there other things a professional will do besides use a long brush? My experience using those brushes is that it's difficult to tell if you're pushing on a lint blockage, a turn in the pipe, or your knocking the cap off the end of the pipe. So if at all possible, you should have the end open and/or connected to a vacuum. You don't want this tool ramming into the motor in your dryer, nor do you want it knocking the damper off the outside of your house. In your case, that's going to require someone on the roof. You can get close, keeping track of how many segments of the cleaning tool you used to get to the roof in a previous cleaning, and then stop a few feet early. But as Michael says, the cap itself should be cleaned since lots of lint will build up there. You can use the brush up the vent from the inside and this will clean a part of the pipe that the brush can access. The problem that you may still have is that the vent pipe normally has a weather and critter shroud over the top of it at the roof. Dryer lint can build up at this point and clog the very top of the pipe. Your brush is unlikely to be able to clean this part of the vent and thus access via the roof will be needed to inspect the vent and clean it if necessary. The clog in these is usually all the lint caught in the bug screen under the cap. If you can get into the attic, pull the pipe loose from the cap and manually remove all the lint from the screen. All the brush does is knock the lint loose so it can finally clog any last opening left in the screen. In my experience, I tool bought one a DIY 8 foot cleaner that is actually fiberglass and comes with 4 two foot sections that attached to a 4 inch brush. The problem here is you MUST only go a few inches at a time an pull back to the point of exiting. If you drive the entire length, you are only packing all the lint to a point that may be open or screen creating a larger issue and potentially larger bill if you need hire a person to locate the exit vent. These condos/townhome structures are a P.I.T.A. and once the duct work is clean, I would advocate cleaning with your home snake/tool periodically to decrease blockage at the exit point. God forbid the home builders place the exit point 8 feet off the ground versus the ceiling.
STACK_EXCHANGE
package king.zach.pynny.utils; import android.content.Context; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.util.Log; import android.util.Pair; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import king.zach.pynny.database.PynnyDBHandler; import king.zach.pynny.database.models.Report; import king.zach.pynny.database.models.Wallet; /** * Created by zachking on 11/13/17. * * Defines various helper functions for * reporting functionality. */ public class ReportingUtil { private static String TAG = "ReportingUtil"; // Use the singleton design pattern private static ReportingUtil sInstance; private Context context; private PynnyDBHandler dbHandler; private ReportingUtil(Context context) { this.context = context; dbHandler = PynnyDBHandler.getInstance(context); } public static synchronized ReportingUtil getInstance(Context context) { if (sInstance == null) { sInstance = new ReportingUtil(context.getApplicationContext()); } return sInstance; } public Report generateReport() { Report report = new Report(); report.totalExpenses = dbHandler.getTotalExpenses(); report.totalIncome = dbHandler.getTotalIncome(); Cursor walletCursor = dbHandler.getAllWalletsCursor(); while (walletCursor != null && walletCursor.moveToNext()) { long walletId = walletCursor.getLong(walletCursor.getColumnIndex(PynnyDBHandler.COLUMN_WALLET_ID)); String walletName = walletCursor.getString(walletCursor.getColumnIndex(PynnyDBHandler.COLUMN_WALLET_NAME)); double walletExpenses = dbHandler.getExpenseForWallet(walletId); double walletIncome = dbHandler.getIncomeForWallet(walletId); report.expenseByWallet.add(new Pair<String, Double>(walletName, walletExpenses)); report.incomeByWallet.add(new Pair<String, Double>(walletName, walletIncome)); } walletCursor.close(); Cursor categoryCursor = dbHandler.getAllCategoriesCursor(); while (categoryCursor != null && categoryCursor.moveToNext()) { long categoryId = categoryCursor.getLong(categoryCursor.getColumnIndex(PynnyDBHandler.COLUMN_CATEGORY_ID)); String categoryName = categoryCursor.getString(categoryCursor.getColumnIndex(PynnyDBHandler.COLUMN_CATEGORY_NAME)); double categorySpendings = dbHandler.getSpendingForCategory(categoryId); report.spendingByCategory.add(new Pair<String, Double>(categoryName, categorySpendings)); } return report; } public void saveReport(Report report) { if (context == null) { Log.e(TAG, "Context is null"); return; } try { Log.d(TAG, context.getFilesDir().getAbsolutePath()); FileOutputStream fos = context.openFileOutput("pynny_report_" + report.createdAt, Context.MODE_PRIVATE); ObjectOutputStream os = new ObjectOutputStream(fos); os.writeObject(report); os.close(); fos.close(); } catch (Exception e) { Log.e(TAG, "Error while saving report: " + e.getMessage()); } } public Report loadReport(String reportFilePath) { Report report = null; try { FileInputStream fis = context.openFileInput(reportFilePath); ObjectInputStream is = new ObjectInputStream(fis); report = (Report) is.readObject(); is.close(); fis.close(); } catch (Exception e) { } return report; } }
STACK_EDU
ACC can be bypassed on Toyota by holding down cruise on/off button Describe the bug ACC can by bypassed on Toyota. Openpilot allows engagement when ACC is disabled. This bug was already fixed on Honda https://github.com/commaai/openpilot/pull/770 How to reproduce or log data Press and hold down the cruise on/off button. ACC will be disabled and the car will switch to non-adaptive cruise control. Press set to engage. Openpilot will control steering, but ACC will not function. Expected behavior Openpilot should not be allowed to be engaged without ACC. Device/Version information: Device: EON Gold Version: 0.7.2 Car make/model: 2018 Toyota Prius Prime Additional context This loophole has been tested on the three major types of Toyota: DSU connected and disconnected, Nodsu, and TSS 2.0 I suspect that this signal could be used to detect when non-adaptive cruise control is enabled: I believe CRUISE_CONTROL_STATE 1.0 is non-adaptive cruise control. This could be used to detect that state. I have demonstrated using the loophole in this drive https://my.comma.ai/cabana/?route=b29e3bc918751697|2020-02-12--12-04-27&exp=1613076855&sig=Z3uBeA6LE4ovlWgKaSEomaR6j8KdcUViZ8176dMW6Jk%3D&max=13&url=https%3A%2F%2Fchffrprivate-vzn.azureedge.net%2Fchffrprivate3%2Fv2%2Fb29e3bc918751697%2F1fadd736273c29c679b40bab0ff87058_2020-02-12--12-04-27 I'm also seeing states 5 and 6. Can you try to engage in both modes and see what the state goes to? This is what the DBC file says for that field: VAL_ 921 CRUISE_CONTROL_STATE 2 "disabled" 11 "hold" 10 "hold_waiting_user_cmd" 6 "enabled" 5 "faulted"; Looks like 5 doesn't mean faulted since I see it during normal operation. Do you mean hold the canceI button and then press the set button again? Sounds quite difficult with the stock stalk. I think I have been able to get into this mode by holding the brake down for long so that the electronic handbrake and then engage. Or sometimes when directly behind a lead and speed very close to 0 then the brakes lock and do not release. Or my waiting mode where set speed up does not work until you apply gas to get it out of the waiting mode. But maybe those are maybe different variations. I assume this is with TSS2 where there is a cruise on/off button on the steering wheel. Yes the TSS2.0 operation is holding the Cruise_ON button for ~1 second and it initially goes to state 2 then drops to 1. Another press resets it to 0 and you have to hold again. Currently OpenPilot does not car which cruise state it's in as long as cruise is on. I'm saying that when I looked at out Corolla data I saw other states when system was actually engaged. So we can't check for state == 2. Therefore we need to reverse engineer more of the states before we can implement a fix. It has many states for the difference between cruise modes as well as if there is a lead vehicle or not and separate engage/disengaged states for both ACC and constant speed cruise. Going to verify more today but from what we have seen so far it will probably be best to check that it is !=1 or 3 which are the two states of operation for constant speed (we are just calling bad cruise to simplify). I'll do more testing to verify those are the only states it operates in. Then we can disqualify based on those states only. Currently if you engage openpilot in this mode (holding down stalk button) and press the acceleration pedal, openpilot will disengage and the car would just cruise without radar To some extent one can argue these two cruise modes are quite useful when you don't want to be held back by traffic that is only a bit slower than you but still want openpilot to steer for you. Can confirm that while using constant speed cruise OP will disengage on gas but constant speed cruise stays active. I would call it a safety issue rather than a useful feature. Someone could easily get confused about what is engaged and what is disengaged and also be unsure if radar cruise is active or not. Someone could easily get confused about what is engaged and what is disengaged and also be unsure if radar cruise is active or not. You can easily distinguish different modes from the information displayed either on the dash or the hud, with RADAR cruise on, the dash/hud will display three bars (and a vehicle if a lead is present); with RADAR cruise off and openpilot engaged, it will just display a speed, and openpilot's frame is green; if only cruise control is engaged, the hud will display your set speed and openpilot's frame is dark blue. Openpilot will never support different driving modes. It's on or off. @Hubblesphere let us know if you find out more about the states. So far with TSS2.0 I've documented these states: VAL_ 921 CRUISE_CONTROL_STATE 2 0 "off" 1 "constant_speed_disabled" 2 "acc_disabled" 3 "constant_speed_enabled" 5 "acc_enabled_no_lead" 6 "acc_enabled_with_lead" 10 "hold_waiting_user_cmd" 11 "hold"; Both with Stock enabled and OpenPilot enabled the constant speed cruise is controlled between states 1 and 3 when cruise is on. If that holds true for all other Toyotas might be easy to apply across the entire make. #1708 adds the support code necessary for this. I'd prefer to use the PCM_CRUISE message since all supported toyotas use it. I started with one state in that PR, but there might be more (it's a 4 bit signal).
GITHUB_ARCHIVE
When Windows Vista launched, we made the decision to disable User Account Control – UAC – because it seemed to be an unnecessary complication. We were already tightly locking down our PCs with Group Policy and none of our users have admin rights, so our exposure to threats was comparatively limited. We kept UAC disabled through Windows 7 and our limited use of Windows 8.1, but as we prepped for Windows 10, we needed to completely revisit UAC. In Windows 10, Metro apps – which includes the Edge browser – require UAC to be enabled to work properly. With UAC disabled we had inconsistent behavior, where sometimes Metro apps would launch, and other times they would fail with the below error: So it was time to turn UAC back on. We immediately ran into problems with a couple of end-user applications that were requesting elevation. Since our users do not have admin rights and therefore cannot elevate, these applications would not run. What made no sense though is these were programs that had worked fine with UAC off, and with standard user rights, so it stood to reason they didn’t actually need elevation to function. Thus began my exploration into why these applications were requesting elevation. They weren’t installers, and they weren’t trying to write to protected directories. Turns out they were just poorly coded. UAC-compliant applications are supposed to explicitly request the level of permissions they require through an application manifest. You can use mt.exe found in the Windows SDK to extract and change application manifests. To extract: mt.exe –inputresource:myexecutable.exe;#1 –out:extracted.xml The manifest itself is just XML, so open with whatever plain-text editor you like and look for the section like this: <requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3"> <requestedExecutionLevel level="requireAdministrator" uiAccess="false"></requestedExecutionLevel> </requestedPrivileges> Notice the “level” parameter, set here to “requireAdministrator” – this, not surprisingly, tells Windows that the application requires administrator rights and thus must request elevation. Change “requireAdministrator” to “asInvoker” – this will allow the app to run in the security context of the user launching it. Now we need to import the edited manifest back into the application. To import: mt.exe –nologo –manifest “editedmanifest.xml” –outputresource:”myexecutable.exe;#1″ Once that was done, our problem application stopped asking for elevation. Problem solved. Understand though that this only works because the application didn’t need admin permissions – it just wanted them. An app that actually requires admin rights will not work properly without them. There was one lingering question I had though: Why did this app actually work with UAC turned off? Turns out the answer is simple. With UAC disabled, when a standard user runs an application that’s set to “requireAdministrator,” the elevation fails silently, allowing the app to launch. Since this particular app didn’t need elevation, it ran fine, and we had no idea of the underlying problem. Now if we could just get the developer to do things properly… We had this problem with basically every single Metro app *from MS* and had to turn UAC back on. That and a GPO to disable CTRL + ALT + DEL on touch devices so they could just swipe up. Yep. Same issue here, though I haven’t dealt with a Surface yet. I’m glad you mentioned that though in case we run into the same C+A+D problem. Thanks.
OPCFW_CODE
""" Hosting Jupyter Notebooks on GitHub Pages Author: Anshul Kharbanda Created: 10 - 12 - 2020 """ import os import jinja2 import logging from .config import Configurable, load_config_file from . import builders from . import loaders # Default config file name config_file = './config.py' class Site(Configurable): """ Site object, handles all of the building """ # Default configuration for site _config = { 'base_url': '', 'templates_dir': 'templates', 'static_dir': 'static', 'notebook_dir': 'notebook', 'output_dir': 'dist' } # Internal loaders map _loaders = { 'notebooks': loaders.NotebookLoader(), 'statics': loaders.StaticLoader(), 'readme': loaders.MarkdownLoader(file='README.md'), 'pages': loaders.MarkdownLoader(directory='pages') } # Internal builders array _builders = [ builders.NotebookBuilder(), builders.IndexBuilder(), builders.StaticBuilder(), builders.PageBuilder() ] @property def jinja_loader(self): """ Return jinja2 filesystem loader for this config """ return jinja2.FileSystemLoader(self.templates_dir) @property def jinja_env(self): """ Return jinja2 environment for this config """ return jinja2.Environment( loader=self.jinja_loader, autoescape=jinja2.select_autoescape(['html'])) def build(self): """ Build site """ log = logging.getLogger('Site:build') log.info('Building site.') self._make_directory() self._run_loaders() self._run_builders() def _make_directory(self): """ Ensure that output directory exists """ log = logging.getLogger('Site:_make_directory') log.debug(f'Output Directory: {self.output_dir}') if os.path.exists(f'./{self.output_dir}'): log.info(f"'{self.output_dir}' directory exists!") else: log.info(f"Creating '{self.output_dir}' directory") os.mkdir(f"{self.output_dir}") def _run_loaders(self): """ Run loaders step """ log = logging.getLogger('Site:_run_loaders') log.debug(f'Loaders: {self._loaders}') for name, loader in self._loaders.items(): log.info(f'Running {loader}') result = loader.load(self) setattr(self, name, result) def _run_builders(self): """ Run builders step """ log = logging.getLogger('Site:_run_builders') log.debug(f'Builders: {self._builders}') for builder in self._builders: log.info(f'Running {builder}') builder.build(self) def load_site(): """ Load site from config file """ # Get logger log = logging.getLogger('load_site') # Read config python file log.debug(f'Config file: {config_file}') if os.path.exists(config_file): # Read config file log.debug('Config file found!') config = load_config_file(config_file) log.debug(f'Config data: {config}') return Site(**config) else: # Default config file log.debug('No config file found') log.debug('Default config') return Site()
STACK_EDU
How to write an introduction letter as a professor? Imagine my professor wants to introduce me to a famous university, but he is too busy to write an introduction letter1. I need to write it myself as if I was my professor, and he only has to sign it. How should I write now? This is my homework essay, and I am getting stuck, maybe because I have never written such a letter before, and I have never been a professor :) So can you offer some advice on how to approach the task? 1 I'm sorry if this is not so clear. That is the best English word I can think of, because I'm not from an English-speaking country. Hi JouleV, and welcome. I do think we'll be able to help you, but you may also want to check out our sister site [academia.se], which is about navigating the upper educational system. I'm voting to close this question as off-topic because we can't do your homework for you! I'm voting to reopen. While we won't be doing people's homework for them, surely we can offer some advice on how to approach the task? The question looks like something we would be answering, how does the fact that this is "homework" change that? @Galastel because part of the homework is how to figure out how to approach it. We should not be encouraging people to go around teachers to get answers. Now, if OP had a meeting with the teacher but was still confused about a point, that would be okay for here. But in this case the OP is asking about how to do the entire assignment. Let's take this to meta: https://writing.meta.stackexchange.com/q/1782/14704 You may be taking this too literally. This is just a writing prompt with a creative way to talk about yourself, as if you were a different person. Just imagine you were seeing yourself from the outside, highlight your best traits and accomplishments, and add "Dear Professor" at the top. When I was an American university student, I always began the letter like this: Dear Professor __________, The actual rank of the professor does not matter. In fact, if you are talking to an assistant professor or even a lecturer, then that person will feel happy that you are using a loftier title. Dear Instructor __________, This is also acceptable. "Instructor" is a generic term for any kind of teacher in an university. Usually, university professors have a doctorate degree or some kind of post-graduate education. So, you can also use this: Dear Dr. __________, NEVER begin your letter like this: Mr./Mrs./Miss/Ms./Mx __________, In American primary schools and secondary schools, students may refer to their teacher by one of those titles and the surname, because primary and secondary school teachers only have a Bachelor's or Master's degree, not a Doctorate degree. The body of the letter should contain whatever you want to say. Use a formal tone. Be extremely polite. You may want to use your professor for a recommendation letter or letter of reference someday. The closing may be: Sincerely, [YOUR NAME HERE] [YOUR CONTACT INFO HERE] The letter is supposed to be to a university from the professor. This is called a 'letter of introduction' and if you search Google images for that phrase, you'll find hundreds of examples that you can pull paragraphs from and adapt to your personal situation: Google Image Search for Letter of Introduction Good luck! If I understand your situation correctly, you have an imaginary professor who is supposed to write an introduction for you to an event at a university. Who are you in this scenario? Are you the recent winner of an academic prize? I will assume for a moment - this being writers stack - that you are a successful writer who is touring universities and Prof X wrote the introduction (which you hold in your hands). He or she will have of course acknowledged his peers and mentioned some connection between you - perhaps as a mentor or consultant. He will include any literary awards your work might have won. He will probably be concise as professors often are. He will then turn the podium over to you - the voice of your generation. If the scenario is that you are transferring universities and a proud professor has written a letter of introduction for you to one of his colleagues, it will describe your work as a student, your potential and likely be quite brief.
STACK_EXCHANGE
Uml to code code to uml visual paradigm for uml. Automated sdk generation with easy-to-follow documentation and example code. scala scalatra or java jax-rs so that we can help generate the server-side code. Australian capital territory policing. Franklin park police station receives leed gold certification posted by fgm architects on december 11th, 2013. fgm is pleased to announce that the franklin park. .net create msdn style documentation using sandcastle. 7/05/2012в в· i tried doing this installation on my own and could not seem to make it work, so i found this tutorial which works quite well as far as getting things. Templates world of print. A3 - movie poster print template 04 size is a3 plus 3mm bleed; a3 - movie poster print template 4 by illusiongraphic.. Senior writer toxicology scientist solutions forum. The 30th ich public meeting was held in japan which took all the presentation used in the meeting is electronic common technical document. Canon user's manual ts8000 series copying a disc label. Print wirelessly from your print documents and photos from across the room or create greeting cards/stationery and cd/dvds as well as turn photos into fun. Fix excel error вђњopening blank page issue when you double. I have an excel document contaigning thousands of hyperlinks. excel hyperlinks in html open in new window. in excel 2007 and similar versions,. Parcel to new zealand courier to new zealand cheap. Type: document (express) please refer to your new zealand couriers international pricing matrix or call our international help desk on 0800 655 010 for a quote.. How to restrict permissions in a document library. How to change document library permissions in sharepoint how to set up a document library in sharepoint. document libraries can be now all you have left. Libreoffice 4.0 math guide libreoffice documentation. Microsoft word 2007 to 2016. setting up page margins and tab stops. before we insert the text file that you to set the margins for the entire document, do the. The “parental agreement for school to administer. Nurses' six rights for safe medication administration. the right to a complete and clearly written order you, all of these components must be present for a. What is the best way to compare two scanned pdfs for. This page offer 6 best ways to repair damaged and corrupt excel files,the best excel file repair tool to help you recover excel top 10 document recovery software.. How to convert the font into hindi which is already. English to hindi typing : online hindi typing tool and websites in hindi language came and this tool will automatically convert english text into hindi.. Download sox compliance template for visio 2010 from. Sox and process documentation. more relevant to the process design template and will be covered in the respective guidance notes. sox impact on process design. Headlines from frontier expo 2017 frontier. 27/12/2016в в· post your elite: dangerous videos here! dangerous posts: 3778 joined: mon jun 01, and roll grade 1 upgrades until you unlock grade 5.. Word file format computer keyboard. Antique bw border. this antique border be the first to know when i add new printable documents and templates to the favorite format, open it in word, a pdf. Python find most common words in a document youtube. 23/01/2016в в· python training demo: count number of occurrence count occurrences of a number in a sorted array with 37 - word frequency counter (3/3.
OPCFW_CODE
Rough idle + slow acceleration + engine dying after spark plug change I recently changed the spark plugs and wires on my 2002 Ford Focus SE with Zetec (DOHC) engine. I used NGK TR5IX (7397-4PK) Iridium IX plugs and Motorcraft wires. When I received the plugs, they were gapped to about 0.03'. The owners manual it says gap should be 1.3mm (0.051 in) and most online sources I found agreed, so I set the gap to 0.051 But now there is a rough idle, and when I first start going it is really sluggish accelerating (it seems to get better once I've been going for a minute or so). But when I stop the car and let it idle, the engine has been dying (it starts right back up no problem, though, and doesn't die while moving). I am not sure what could be causing the problem. Is it possible that the plugs I got need to be gapped differently from the manufacturers recommendations? Also, there was some rust around the spark plug wells that I tried my best to clean off before removing the old plugs, but a (very) tiny bit of rust dust fell in when I pulled them out. Could that be it? Anyways, I don't know what to do at this point - any ideas how to diagnose what is wrong? Thanks! UPDATE: Ugh ... right after writing this, I discovered the problem: Amazon's crappy car part compatibility tool suggested the wrong plugs for me (2nd time: they also sent me the wrong PCV valve ... definitely won't trust it anymore). So I have requested a refund, and will get the new plugs tomorrow from O'Reilly ... but in the meantime, I guess that changes my question to whether or not I could have caused any permanent damage driving around for about an hour today with the wrong plugs? Yeah the NGK site says get the LTR5IX-11 (stock # 4344) for the DOHC Focus (http://www.partcat.com/ngk); and they're gapped properly already at 0.052". Check the plugs you put in for mechanical damage from the pistons like dlu says, and I agree with your decision to avoid driving. You can answer your own question btw. Probably not. Engines are pretty tough. But pull a plug before you drive anymore and check for mechanical damage. Pistons hitting plugs would not be good. Also double check your plug wires and make sure you've got the firing order right and that all of the connections feel solid. Yeah, I'm sure the wires are right but I think I'll just hold off on driving it until I get some new plugs from the parts store tomorrow. I'll check like you said for mechanical damage though. Thanks. It should work anyway, and it will not confuse your ECU, only will correct fuel, no worries. As dlu says about mechanical damage, yea check it, maybe just dropped a plug in the hole, bent the tip, etc.. If it is fine, you can try to decrease a gap to about 0.9mm (0.035"), it should work fine. But if it still misfires - don't drive like this. It will give lean fuel on some cylinders. Don't worry about engine damage. The main difference between these 2 spark plugs is the length, the one you used is shorter. That means 2 things: the spark was not created in the best place, meaning you were not burning all your fuel. This is the lack of power you were feeling. there is no chance your pistons hit your spark plug since the plugs you used are 7 mm shorter than what you need. Read more about the NGK TR5IX and NGK LTR5IX. Under 'Specs', look at 'Reach'. 18 mm vs 25 mm.
STACK_EXCHANGE
You are finding for the pretty computer science universities sorted list in the The World zone, right? You are going to get a computer science universities sorted list details in The World. You will get a website information, hotline, details address, average user ratings, and also a direction map link from your place. All information has been collected from these computer science universities ‘, official websites. So, you can keep trust in this information. Please share your experience with these organizations if you already visited them. Top Computer Science Universities Details and List in The World Zone 1. New York University The New York University is one of the gorgeous computer science universities in the The World area. The 1433 public have shared their opinion about this computer science universities , and following their discussion, constructive score is 4.5. It is a normal University. If you would like to come to see their main office you should have to look at this area, this is New York, NY 10012, United States, in The World. Top Comment: Best university in world. - Street Directions: New York, NY 10012, United States - Contact Line: +1 212-998-1212 - Web: .nyu.edu 2. Stevens Institute Of Technology In The World range, position 2nd computer science universities is Stevens Institute Of. It is a ordinary type Private university. Their genuine location is 1 Castle Point Terrace, Hoboken, NJ 07030, United States, in The World. And, the 323 public have make reviews to this computer science universities , & that average review rating score is 4.6. The Top Comment From User: Best school for higher education for science, engineering and … - Road Area: 1 Castle Point Terrace, Hoboken, NJ 07030, United States - Support Line: +1 201-216-5000 - Web info: .stevens.edu 3. Columbia University This Columbia University is a highest quality computer science universities in the The World region. The 2222 public have reviewed about this computer science universities , and which inferential review rating score is 4.6. It is a ordinary type of organization. It is located at New York, NY 10027, United States, in The World. The Top User Review: One of the best University in the world. - Street Directions: New York, NY 10027, United States - Support Number: +1 212-854-1754 - Website info: .columbia.edu
OPCFW_CODE
[Docs] yarn berry installation Prerequisites [X] I have searched for duplicate or closed feature requests [X] I have read the contributing guidelines Proposal The install documentation explains how to set up with yarn. Please also explain how to set up with yarn berry. Motivation and context This setup is non-trivial. I am several days into learning the difference between sass/sass-loader and how to make it work with Yarn Berry. Good documentation here might help many people. Also That page links to https://github.com/twbs/bootstrap-npm-starter which is archived and it references Node SASS which is also archived. Perhaps this further illustrates how non-trivial this is. Near as I can tell, Yarn's berry repo is just... Yarn? See https://getbootstrap.com/docs/5.3/getting-started/download/#yarn for latest docs, and https://github.com/twbs/examples/tree/main/sass-js for more updated examples. Here is what happens when you follow those instructions: ➡️ ~/Desktop git clone https://github.com/twbs/examples.git ➡️ ~/Desktop cd examples/sass-js/ ➡️ ~/Desktop/examples/sass-js main nvm install --lts && nvm use --lts ➡️ ~/Desktop/examples/sass-js main 8s corepack enable ➡️ ~/Desktop/examples/sass-js main yarn set version stable ➡️ ~/Desktop/examples/sass-js main* git diff | cat diff --git a/package.json b/package.json index 3cde257..b5e6273 100644 --- a/package.json +++ b/package.json @@ -4,5 +4,6 @@ "version": "0.0.0", "private": true, "repository": "twbs/examples", - "license": "MIT" + "license": "MIT", + "packageManager"<EMAIL_ADDRESS> } ➡️ ~/Desktop/examples/sass-js main* yarn Usage Error: The nearest package directory (/Users/williamentriken/Desktop/examples/sass-js) doesn't seem to be part of the project declared in /Users/williamentriken/Desktop/examples. - If /Users/williamentriken/Desktop/examples isn't intended to be a project, remove any yarn.lock and/or package.json file there. - If /Users/williamentriken/Desktop/examples is intended to be a project, it might be that you forgot to list sass-js in its workspace configuration. - Finally, if /Users/williamentriken/Desktop/examples is fine and you intend sass-js to be treated as a completely separate project (not even a workspace), create an empty yarn.lock file in it. $ yarn install [--json] [--immutable] [--immutable-cache] [--refresh-lockfile] [--check-cache] [--check-resolutions] [--inline-builds] [--mode #0] Here is what happens when you follow those instructions: Just tried it real quick, I haven't got any issues by doing the following: git clone https://github.com/twbs/examples.git cd examples yarn install yarn start It created automatically a yarn.lock (as it's supposed to, instead of the package-lock.json), and launched the server at localhost:4200. Am I missing something here? @julien-deramond can you please do yarn --version? You may be running Yarn v1 which is the old one which is not Yarn Berry. My bad, I thought I had the latest version, thanks @fulldecent for pointing it out. So in order to make it work: git clone https://github.com/twbs/examples.git, then cd examples/sass-js Configure yarn to continue using node_modules yarn config set nodeLinker node-modules This will create a .yarnrc.yml containing nodeLinker: node-modules. Then, create an empty yarn.lock yarn install yarn start If we modify this section of the documentation to mention that, would it be OK? I mean, without using Yarn Plug'n'Play Thank you, this is super helpful, I really appreciate it. These are magic little lines of code that can be really hard to find so it is great to see them together. Perhaps when we update the documentation, we can put this as yarn Berry, but then also keep the old yarn 1.X below it. FYI, I've created https://github.com/twbs/bootstrap/pull/41036 just to mention this "trick" to compile our examples with Yarn Berry in the documentation. It's probably better than nothing already, :) let's continue the discussion in the PR if needed.
GITHUB_ARCHIVE
Schematic plot of periodic rectangular array of potential wells on a substrate. The blue color indicates the regions where the minority block is preferred, and the other regions do not have preference to any block. Each rectangular cell is denoted by two periods of and on x and y directions, respectively. Defect concentrations of cylinders assembled on patterned templates as function of time. Figures (a) and (b) are results of four rectangular fields for each of density multiplications of 12 and 16, respectively. In (b), the filled symbols denote the results of periodic hexagonal field with the same density multiplication of 16 from Ref. 21. Monomer density plots of cylinder patterns directed by periodic fields of [1 6] (left column) and [6 1] (right column) at t = 104 (upper row) and t = 105 (bottom row). Each figure exhibits a 5122 portion of the entire sample. Insets give the Fourier spectrums of the density. Distributions of local lattice orientation of the entire block copolymer domains in Fig. 3. The colors of the spectrum indicate the range of lattice orientation from 0 to 60 degrees. Orientation distribution plots for the density multiplication of 16. From top to bottom, the periodic field is [1 8], [8 1], and 〈40〉, and from left to right, the corresponding time is t = 104, 105, and 2 × 105, respectively. Small portion of density plots around the locations of a pair of dislocations for the rectangular [1 6]-field. From (a) to (h), the time is t = 2 × 105, 2.2 × 105, 2.4 × 105, 2.6 × 105, 3.2 × 105, 3.4 × 105, 3.6 × 105, and 3.8 × 105, respectively. In (a), the two dislocations are indicated by short color lines, and the Delaunay triangles around the bottom one are plotted. Black and white circles indicate where a new domain is going to appear and where a new domain just comes out, respectively. Time evolution of defect concentrations for the type [1 m] of rectangular field, where m = 10, 12, 14, 16, and 18. The red and blue solid lines are the results of hexagonal field, 〈50〉 and 〈60〉, respectively. The green solid line indicates the relation of 1/3 power law. Orientation distribution plots for the rectangular [1 18]-field (upper row) and the hexagonal 〈60〉-field (bottom row). Left and right columns correspond to evolving time, t = 105 and t = 106, respectively. Article metrics loading... Full text loading...
OPCFW_CODE
Not getting a notebook interface when I create a .dib file in Visual Studio Code Insiders Describe the bug I'm following the steps for enabling .NET Interactive notebooks in Code Insiders, and I cannot get a notebook interface. Installed Code Insiders Installed PowerShell Preview extension (tried with the stable version too) Installed .NET Interactive notebooks extension Created a new file and saved it with the extension .dib Nothing happens. If I try to open .dib file created by someone else, I get the "Unable to open 'test.dib': Cannot read property 'message' of undefined." error message. If I disable PowerShell extension, I can at least open test.dib file as a textual file. Please complete the following: Which version of .NET Interactive are you using? There are a few ways to find this out: In VS Code, run "Report installed version for .NET Interactive` and copy the version number from the status popup. That command doesn't exist. OS [x] Windows 10 [ ] macOS [ ] Linux (Please specify distro) [ ] iOS [ ] Android Browser [ ] Chrome [ ] Edge [ ] Firefox [ ] Safari Frontend [ ] Jupyter Notebook [ ] Jupyter Lab [ ] nteract [x] Visual Studio Code [ ] Other (please specify) Screenshots If applicable, add screenshots to help explain your problem. Which version of the .NET Interactive Notebooks extension are you using? v1.0.130603 Installed today. Code Insiders is also installed today. @alexandair do you still see the issue? @colombod Yes. I've updated to the latest version of the extension and Code Insiders and I still cannot get the notebook UI or open a .dib file. It looks like Code is stuck at "Installing .NET Interactive version 1.0.131001...Acquiring". It never gets out of the acquiring step? Dr Diego Colombo PhD On 11 Jun 2020, at 23:27, Aleksandar Nikolić<EMAIL_ADDRESS>wrote:  @colombod Yes. I've updated to the latest version of the extension and Code Insiders and I still cannot get the notebook UI or open a .dib file. It looks like Code is stuck at "Installing .NET Interactive version 1.0.131001...Acquiring". — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub, or unsubscribe. Yes. On one machine, I have .NET Interactive installed (manually, before I tried notebook in Code) and I use .NET notebooks with Azure Data Studio. dotnet tool list -g Package Id Version Commands -------------------------------------------------------------------- microsoft.dotnet-interactive 1.0.115407 dotnet-interactive On another machine, I don't have either Azure Data Studio or .NET Interactive installed. Both machines behave the same and Code Insiders cannot start .NET notebooks. @alexandair Could you run dotnet --info and post the output? Thanks. PS C:\Users\aleksandar> dotnet --info .NET Core SDK (reflecting any global.json): Version: 3.1.100 Commit: cd82f021f4 Runtime Environment: OS Name: Windows OS Version: 10.0.18363 OS Platform: Windows RID: win10-x64 Base Path: C:\Program Files\dotnet\sdk\3.1.100\ Host (useful for support): Version: 3.1.0 Commit: 65f04fb6db .NET Core SDKs installed: 2.1.607 [C:\Program Files\dotnet\sdk] 2.2.203 [C:\Program Files\dotnet\sdk] 2.2.207 [C:\Program Files\dotnet\sdk] 3.1.100 [C:\Program Files\dotnet\sdk] .NET Core runtimes installed: Microsoft.AspNetCore.All 2.1.14 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.2.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.App 2.1.14 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.2.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.NETCore.App 2.1.14 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.2.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.WindowsDesktop.App 3.1.0 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] To install additional .NET Core runtimes or SDKs: https://aka.ms/dotnet-download PS C:\Users\aleksandar> Could you try installing .NET Core SDK 3.1.200 or later? I've installed SDK 3.1.301. The problem remains. I've manually installed version '1.0.131105'. Tool 'microsoft.dotnet-interactive' was successfully updated from version '1.0.115407' to version '1.0.131105'. .NET Interactive notebooks in Azure Data Studio recognize and use this version, but VSCode is still not aware of it. Where does Code expect to find .NET Interactive tool? @alexandair The extension always manages its own version of the dotnet-interactive tool, so the globally installed one won't take effect. Is the .dib file already open when VS Code launches or are you opening the file fresh after the launch? @brettfo I've tried both--with .dib file already open and with freshly opened. Both failed to trigger notebook interface. Where can I see the command that extension is using to install dotnet-interactive tool? Is it implemented as a task? This is the command I need to use to install it manually because I have some sources in Azure DevOps that need credentials and I'm afraid that --ignore-failed-sources is missing in your command and that's the reason why downloading of the tool is stucked. dotnet tool update -g Microsoft.dotnet-interactive --add-source https://dotnet.myget.org/F/dotnet-try/api/v3/index.json --ignore-failed-sources Good find. The tool is installe here. I'll add --ignore-failed-sources to the args. Created #541. @brettfo I've updated my local commands.js with --ignore-failed-sources line and successfully installed the tool. That output should be more verbose, especially because the installation process was long. Also, with just that first line it was very hard for me to debug it and find the problem. I can add the code and markdown cells. However, if I run my PowerShell cells, I only see the timer running, but nothing happens. There is no output. And, "Execute notebook (run all cells)" command doesn't even start. @alexandair The "run all" functionality was just implemented a few hours ago and isn't yet available. Once we get a full build out we'll publish another version that'll fix that scenario. As for the PowerShell issue can you try two separate scenarios for me? In a PowerShell cell (i.e., in the bottom right of the cell, set the option to "PowerShell (.NET Interactive)") Execute the following:1+1 The expected output is obviously 2. In a C# cell ("C# (.NET Interactive)"), execute the following:#!pwsh 1+1 In the first instance, the code is passed directly to PowerShell; in the second it goes through one more routing step. I'm just curious to see if that gets us better error information. @brettfo Both scenarios are failing. Only timers run in both cases. Also, "Cancel execution" button doesn't work. We currently don't respond to cell cancellation, so that's expected. We just published a new version of the tool. Can you update and try the 1+1 scenarios in (1) a C# cell, (2) a C# cell, but under a #!pwsh directive, and (3) a PowerShell cell? If it's still not working can you create a new issue? It'll be easier for us to track it as a separate issue to ensure it doesn't get lost. @brettfo I've updated the extension and this is the result now: @alexandair the 1 + 1 should not be on the line as the #!pwsh directive. That’s directive is instructing the engine to execute the expressions in the cell as powershell Cose Thank you, @colombod Here is an updated screenshot. It looks like repeating execution is much faster, especially for the PowerShell cell (5.1s vs. 0.1s). Each language kernel is initialised in lazy fashion. The first submission might include bootstrapping logic that is why I might see faster times on the next submissions. Glad to see the issue is resolved. Thank you Hi @brettfo , still fails for me. installed VSCode insiders and the extension today 8/25. I have this in the commands.js, so it seems that it includes the update. however, opening the .dib file leads to and there are no any outputs here I can confirm that it fails for me as well on some other machine. I have latest versions of both Code Insiders and the extension. I didn't have .NET 3.1 SDK installed on that machine. Shouldn't that step be a part of https://github.com/dotnet/interactive/blob/main/src/dotnet-interactive-vscode/README.md /cc @colombod @jonsequitur cool, it is working now, almost :) but now VSCode just hangs on this piece like this I didn't have .NET 3.1 SDK installed on that machine. Shouldn't that step be a part of https://github.com/dotnet/interactive/blob/main/src/dotnet-interactive-vscode/README.md Thanks for pointing it out. I've updated it. @eosfor I've opened #730 to track the PowerShell-related crash. Thanks for reporting it.
GITHUB_ARCHIVE
Three rhinos defined and printed using OpenFab. This poses an enormous computational challenge: large high-resolution prints comprise trillions of voxels and petabytes of data, and modeling and describing the input with spatially varying material mixtures at this scale are simply challenging. Existing 3D printing software is insufficient; in particular, most software is designed to support only a few million primitives, with discrete material choices per object. We present OpenFab, a programmable pipeline for synthesis of multimaterial 3D printed objects that is inspired by RenderMan and modern GPU pipelines. The pipeline supports procedural evaluation of geometric detail and material composition, using shader-like fablets, allowing models to be specified easily and efficiently. The pipeline is implemented in a streaming fashion: only a small fraction of the final volume is stored in memory, and output is fed to the printer with a little startup delay. We demonstrate it on a variety of multimaterial objects. State-of-the-art 3D printing hardware is capable of mixing many materials at up to 100s of dots per inch resolution, using technologies such as photopolymer phase-change inkjet technology. Each layer of the model is ultimately fed to the printer as a full-resolution bitmap where each "pixel" specifies a single material and all layers together define on the order of 108 voxels per cubic inch. This poses an enormous computational challenge as the resulting data is far too large to directly precompute and store; a single cubic foot at this resolution requires at least 1011 voxels and terabytes of storage. Even for small objects, the computation, memory, and storage demands are large. 3D convolutional neural networks (3D-CNN) have been used for object recognition based on the voxelized shape of an object. In this paper, we present a 3D-CNN based method to learn distinct local geometric features of interest within an object. In this context, the voxelized representation may not be sufficient to capture the distinguishing information about such local features. To enable efficient learning, we augment the voxel data with surface normals of the object boundary. We then train a 3D-CNN with this augmented data and identify the local features critical for decision-making using 3D gradient-weighted class activation maps. An application of this feature identification framework is to recognize difficult-to-manufacture drilled hole features in a complex CAD geometry. The framework can be extended to identify difficult-to-manufacture features at multiple spatial scales leading to a real-time decision support system for design for manufacturability. Topology optimization is computationally demanding that requires the assembly and solution to a finite element problem for each material distribution hypothesis. As a complementary alternative to the traditional physics-based topology optimization, we explore a data-driven approach that can quickly generate accurate solutions. To this end, we propose a deep learning approach based on a 3D encoder-decoder Convolutional Neural Network architecture for accelerating 3D topology optimization and to determine the optimal computational strategy for its deployment. Analysis of iteration-wise progress of the Solid Isotropic Material with Penalization process is used as a guideline to study how the earlier steps of the conventional topology optimization can be used as input for our approach to predict the final optimized output structure directly from this input. We conduct a comparative study between multiple strategies for training the neural network and assess the effect of using various input combinations for the CNN to finalize the strategy with the highest accuracy in predictions for practical deployment. For the best performing network, we achieved about 40% reduction in overall computation time while also attaining structural accuracies in the order of 96%. Learning from 3D Data is a fascinating idea which is well explored and studied in computer vision. This allows one to learn from very sparse LiDAR data, point cloud data as well as 3D objects in terms of CAD models and surfaces etc. Most of the approaches to learn from such data are limited to uniform 3D volume occupancy grids or octree representations. A major challenge in learning from 3D data is that one needs to define a proper resolution to represent it in a voxel grid and this becomes a bottleneck for the learning algorithms. Specifically, while we focus on learning from 3D data, a fine resolution is very important to capture key features in the object and at the same time the data becomes sparser as the resolution becomes finer. There are numerous applications in computer vision where a multi-resolution representation is used instead of a uniform grid representation in order to make the applications memory efficient. Though such methods are difficult to learn from, they are much more efficient in representing 3D data. In this paper, we explore the challenges in learning from such data representation. In particular, we use a multi-level voxel representation where we define a coarse voxel grid that contains information of important voxels(boundary voxels) and multiple fine voxel grids corresponding to each significant voxel of the coarse grid. A multi-level voxel representation can capture important features in the 3D data in a memory efficient way in comparison to an octree representation. Consequently, learning from a 3D object with high resolution, which is paramount in feature recognition, is made efficient.
OPCFW_CODE
in-between screensWell, having experienced the joys of widescreen life for a few days, I am once again in front of my back-up monitor. To cut a long story short, my bargain turned out to be just plain cheap, and I'm trading up for something made by a company I know better. I'm not naming names, being reliably informed that the defect wasn't normal, even for this "bargain" range. On the upside, I'm now supremely qualified to extol the many virtues of widescreen monitors, because I'm currently missing all of them, badly. First off, you have to get a widescreen monitor! No, stop reading and trade-in your monitor, come back when you're done, it's amazing! On reflection, I think widescreen monitors could finally be the thing that breaks that nasty full-screen mode habit that many windows users have, working inside the box. Widescreen monitors are all about multiple windows, so they suit my working style completely, as well as being perfectly suited to movies, of course. I can't handle full-screen windows, and instantly feel claustrophobic if something gets maximized. Applications that insist on launching that way are uninstalled, or in the case of a rare app that is both auto-maximizing AND essential, a write a macro to- scrub that; I just uninstall them these days. Maximized, it's so Windows 3.1, innit? Widescreen monitors are about having lots of windows open all-at-once. Being able to flit from one task to the other with a wave of your magic wand (or similar pointing device). Depending on your working style, a widescreen monitor could potentially make you 50% more productive, simple by having everything *right there*. My top and bottom toolbars are 50% longer, also. Squeezing these updated babies back into 1024x768 has been a painful experience, and instead of click to get to reg crawler, for instance, it's click-click, and that's like one whole extra click, but, I keep telling myself, it's only for one more day. All my resized window placement macros keep throwing things off-screen. Arrghh! One more day, I tell myself. A count-down mantra, today being the last day. Of course, all these productivity gains count for nothing if you end up watching movies the whole time, and that's definitely a risk. Not only that, but all your old movies will have a whole new life about them, so There's also the temptation of re-screening all your favourites DVDs sometime real soon. If you update to a 5.1+ sound card at the same time, God help you! When I packaged up my widescreen for its return, and plugged in this thing again, I was immediately taken by how much the screen resembled an old fish bowl, which just goes to show how quickly the eyes adapt to the perfect flatness of an LCD, which I vaguely remember finding briefly weird and concave. That lasted minutes, but the fish-bowlness of this old screen took a whole day to wear off. Going backwards is always hard, even if it's only temporarily. Everything feels cramped, squished. When you upgrade anything, it's natural to get a bigger and better version, at least with items where bigness is desirable; screens, RAM, hard drives, that sort of thing, and already I'm considering what it might be like computing with a 22" widescreen monitor, or perhaps a 30" widescreen monitor, or perhaps the whole wall, but whatever size it is, it's gotta be 16:10. I can't go back to regular rectangle screens. On the subject of industry-wide techno bloopers, which I was last time around, and going backwards instead of forwards, which I am this time around; here's something you must remember to check before buying a wireless keyboard.. Are the data transmissions encrypted? "Oh, I never thought of that!", is the reply I've heard every time I bring up the subject, or "neat idea!", or something like that, indicating that they hadn't thought of this, not even remotely. Neither had I, until I considered buying a wireless keyboard, an item these folks already owned. In each case the answer, of course, was "No", followed by a short pause, followed by the realization of a Very Stupid Mistake. It wasn't their fault. it's the manufacturer's fault. When you "advance" a technology, you don't diminish, or downright remove existing features, do you? Stuff we rely on, or take for granted, like basic security. I realise that There's usually a slidier between security and convenience, but when it comes to wireless input devices, no trade-off is required, the whole thing can and should be completely transparent to the user. I'd love to give an accolade to Logitech for building some quite beautiful keyboard combi sets that ARE encrypted, but sadly their corresponding wireless optical mouse lacks even basic orientation controls, something I've come to rely on since Windows Oatcake1. Try again, guys! It takes some skill to plant a working keylogger on someone's computer without direct physical access. Much easier to just sit outside their house with a radio, or even a matching receiver, feeding your keystrokes directly into a notepad on their laptop. In short; anyone within fifty feet can read what you are typing. Well, now you know. If everyone stops buying unencrypted wireless keyboards, the manufacturers will soon get themselves up to speed, you mark my words! Damn! I love that phrase, and must remember to use it more often, and I Will, YOU MARK MY WORDS! I'm not entirely sure how you mark words, exactly, let alone mark them on a computer screen, but if you have the software, or know what it actually means, feel free to mark away. I often make stuff up as I go along, but it usually turns out to be right enough. Snap up the old cheap models for the kids, perhaps. At least until family input device encryption systems, with access and parental controls become the norm, invented, even. All in time, and no small amount of emailing on our part, I assume. I got a letter from my local council today (they are stuck in the last century, around 1953, in fact. Needless to say, the "letter", was paper. *sigh*) . As well as reminding me how much rent I am supposed to pay, they have also seen fit to inform me that they may, or may not, be using "covert" CCTV cameras, and no sign will need to be displayed if the equipment is used for: and then a list of every angle they could think of, covered. I'm assuming these won't be installed inside council houses, but it didn't make any distinction. So the police state is really here, then. You better start encrypting your keyboards right away, or anything you type could come back to haunt you later on, YOU MARK MY WORDS! Just feels good in the mouth, don't you think? it's probably the "Mark My" part that's the best, perhaps it would work with other things, YOU MARK MY NOSE! Hmm, perhaps not. It works best after wild proclamations, I feel, like.. When the revolution comes, eBay sellers who add other costs into their postage rates will be first up against the wall, YOU MARK MY WORDS! Except it would have to be something believable, which the above sadly isn't. Though if everyone mailed them saying "is that the real postage cost, or are you just a twat?" they would probably have a re-think. that's people power, and so I guess, really it's your fault that wireless keyboards aren't encrypted, and loads of flat panels have only D-Sub, and all the rest; you didn't exercise your people power, and now look at the mess! This fishbowl is a nice wee size for emails, though, so I've been catching up on some of that, pointing web masters at dead forms, defining flaws in companies products and procedures, passing on a few free ideas and suggestions, and generally making a nuisance of myself under the guise of doing my bit. For fun, or perhaps in protest, if I could swing that convincingly, I'm going to close my blog abruptly, and not even leave a signature. But not today. :o) The Writing Entity @ corz.org In the N.E. of Scotland, and perhaps other places, "Oatcake" is used to denote a distant non-specific numerical reference, for instance, "I've been paying into that insurance policy since Nineteen-Oatcake, and There's still nae enough to buy a loaf", usually referring to the farthest time possible, or to some time which is either irrelevant or doesn't require further granulation beyond the prefix (or rarely, the suffix). Can also be used in a response to a time-frame question, in the form "Oatcake-Oatcake", which means, "I don't know", or "I don't care", and is considerably more portable than "how the fuck should I ken?", another common response to such questions.
OPCFW_CODE
# adapted from https://github.com/open-mmlab/mmcv or # https://github.com/open-mmlab/mmdetection import numpy as np import torch from vedacore.misc import registry from .base_anchors import build_base_anchor from .base_meshgrid import BaseMeshGrid @registry.register_module('meshgrid') class SegmentAnchorMeshGrid(BaseMeshGrid): def __init__(self, strides, base_anchor): super().__init__(strides) self.base_anchors = build_base_anchor(base_anchor).generate() def gen_anchor_mesh(self, featmap_tsizes, video_metas, dtype=torch.float, device='cuda'): """Get anchors according to feature map sizes. Args: featmap_tsizes (list[int]): Multi-level feature map temporal sizes. video_metas (list[dict]): Video meta info. device (torch.device | str): Device for returned tensors Returns: tuple: anchor_list (list[Tensor]): Anchors of each video. valid_flag_list (list[Tensor]): Valid flags of each video. """ num_videos = len(video_metas) # since feature map temporal sizes of all videos are the same, we only # compute anchors for one time multi_level_anchors = self._gen_anchor_mesh(featmap_tsizes, dtype, device) anchor_list = [multi_level_anchors for _ in range(num_videos)] # for each video, we compute valid flags of multi level anchors valid_flag_list = [] for video_id, video_meta in enumerate(video_metas): multi_level_flags = self.valid_flags(featmap_tsizes, video_meta['pad_tsize'], device) valid_flag_list.append(multi_level_flags) return anchor_list, valid_flag_list def _gen_anchor_mesh(self, featmap_tsizes, dtype, device): """Get points according to feature map sizes. Args: featmap_tsizes (list[int]): Multi-level feature map temporal sizes. dtype (torch.dtype): Type of points. device (torch.device): Device of points. Returns: tuple: points of each image. """ assert self.num_levels == len(featmap_tsizes) multi_level_anchors = [] for i in range(self.num_levels): anchors = self._single_level_anchor_mesh( self.base_anchors[i].to(device).to(dtype), featmap_tsizes[i], self.strides[i], device=device) multi_level_anchors.append(anchors) return multi_level_anchors def _single_level_anchor_mesh(self, base_anchors, featmap_tsize, stride, device): """Generate grid anchors of a single level. Note: This function is usually called by method ``self.grid_anchors``. Args: base_anchors (torch.Tensor): The base anchors of a feature grid. featmap_tsize (int): Temporal size of the feature maps. stride (int, optional): Stride of the feature map. Defaults to . device (str, optional): Device the tensor will be put on. Defaults to 'cuda'. Returns: torch.Tensor: Anchors in the overall feature maps. """ shifts = torch.arange(0, featmap_tsize, device=device) * stride shifts = shifts.type_as(base_anchors) # add A anchors (1, A, 2) to K shifts (K, 1, 1) to get # shifted anchors (K, A, 2), reshape to (K*A, 2) all_anchors = base_anchors[None, :, :] + shifts[:, None, None] all_anchors = all_anchors.view(-1, 2) # first A rows correspond to A anchors of 0 in feature map, # then 1, 2, ... return all_anchors def valid_flags(self, featmap_tsizes, pad_tsize, device='cuda'): """Generate valid flags of anchors in multiple feature levels. Args: featmap_tsizes (list(tuple)): List of feature map temporal sizes in multiple feature levels. pad_tsize (int): The padded temporal size of the video. device (str): Device where the anchors will be put on. Return: list(torch.Tensor): Valid flags of anchors in multiple levels. """ assert self.num_levels == len(featmap_tsizes) multi_level_flags = [] for i in range(self.num_levels): anchor_stride = self.strides[i] feat_tsize = featmap_tsizes[i] valid_feat_tsize = min( int(np.ceil(pad_tsize / anchor_stride)), feat_tsize) flags = self._single_level_valid_flags( feat_tsize, valid_feat_tsize, self.num_base_anchors[i], device=device) multi_level_flags.append(flags) return multi_level_flags def _single_level_valid_flags(self, featmap_tsize, valid_tsize, num_base_anchors, device='cuda'): """Generate the valid flags of anchor in a single feature map. Args: featmap_tsize (int): The temporal size of feature maps. valid_tsize (int): The valid temporal size of the feature maps. num_base_anchors (int): The number of base anchors. device (str, optional): Device where the flags will be put on. Defaults to 'cuda'. Returns: torch.Tensor: The valid flags of each anchor in a single level feature map. """ assert valid_tsize <= featmap_tsize valid = torch.zeros(featmap_tsize, dtype=torch.bool, device=device) valid[:valid_tsize] = 1 valid = valid[:, None].expand(valid.size(0), num_base_anchors).contiguous().view(-1) return valid @property def num_levels(self): """int: number of feature levels that the generator will be applied""" return len(self.strides) @property def num_base_anchors(self): """list[int]: total number of base anchors in a feature grid""" return [base_anchors.size(0) for base_anchors in self.base_anchors]
STACK_EDU
multiple maps Hello! I wanted to know if it was possible to add more than one kind of map in the same page. Currently I have each map identified with a unique id, but after getting the map data only the last map appears, the other two appear empty in the div. This is the data I have stored in an objetct to display the map. Edit: did it! but when the second one is being rendered, it messes up the first one! any tips I could use? Hi @juanpablo64 but when the second one is being rendered, it messes up the first one! any tips I could use? In which way does it mess up the first one? If you mean the positioning, you probably just need to style the containers into which you're rendering your maps. With flexbox it should be relatively easy. Is it possible to have more sc per page? Yes, there's no problem in creating multiple instances of Seatchart. the first one has the color of the seats changed. The correct map should have the first two rows colored from columns 25 to 28. When the second map is loaded the first map is messed around. Here's the funciton that draws the maps: mapDrawData.map.rows = dataPost.rows.split(","); mapDrawData.map.columns = dataPost.columns.split(","); mapDrawData.map.disabled.seats = dataPost.disabled_seats.split(","); mapDrawData.map.disabled.rows = dataPost.disabled_rows != null ? dataPost.disabled_rows.split(",") : []; mapDrawData.map.disabled.columns = dataPost.disabled_columns != null ? dataPost.disabled_columns.split(",").map(e=>{ return parseInt(e,10)}) : []; mapDrawData.types[0].selected = assigned; mapDrawData.types[1].selected = disponible; mapDrawData.types[2].selected = teamleader; mapDrawData.types[3].selected = supervisor; mapDrawData.types[4].selected = dataPost.pecera != null ? dataPost.pecera.split(",") : []; mapDrawData.types[5].selected = dataPost.columna != null ? dataPost.columna.split(",") : []; let sc = new Seatchart(mapDrawData); console.debug(sc) map.mapPositions = dataPost.positions; I obtain the map data from an ajax (datapost) and draw it with tje new SeatChart(). Each draw is done synchronic using async await The problem is, every time let sc = new Seatchart(mapDrawData); the previous map data gets overwritten or something. I have checked the calls and it happens right after creating the Seatchart element. How can I create multiple, different SeatChart objects in the same page? I have a question about this? when you say it's posible to have multiple maps, it's with the 0.1.0 release? Because I am working with an older code, and I am pretty sure the problem lies in there. Been debugging for days and the maps keeps getting corrupted. Yes, it should still work. Are you sharing options between maps? If so try by deep cloning the object before passing it to other maps. nope, I actually didn't want to share any options between maps! I used Object.assign to that end. Here's a little map I made, with the results on the image: let map_config_data = Object.assign(res.data), map_options = Object.assign({ map : {}, types : {}, }); map_options.map.id = `map-container-${mapId}`; map_options.map.rows = (mapId === '4')? 5 : 10; map_options.map.columns = (mapId === '4')? 5 : 10; map_options.map.front = { visible: false, }; map_options.map.disabled = { seats: [],//(mapId === '4')? [2,3,4,5] : [12,13,14,15], rows: (mapId === '4')? [0] : [1], columns: (mapId === '4')? [0] : [1], } map_options.map.cart = { height: "15", width : "15", } var sc = new Seatchart(map_options);
GITHUB_ARCHIVE
Every table at the store where I play uses Point Buy, but i'm itching to start a game using the 4d6 method. Point Buy is the default because of AL. I think I'm going to move to using the member here who developed the random point buy system (I'd link, but it's only saved on my laptop not work computer). I'll generate 20 random legit point buys and then roll once. That's it. No power creep, and still uses the randomness to help inspire new character concepts. My current option in 5E, is to let them roll pairs of stats six times, in order. Then they choose a high stat from the pair and then they have to choose a low stat from another pair. Interesting. I like that. I'm going to have to save that for potential future use. In my games, I use choice of standard array or roll. If you roll, you use what you get - no switching to standard array if you get a bad result. It's a roleplaying opportunity - use it. If they get a really high set of scores, I have no problem with it. They're not going to break my game. For rolling, I use 4d6 - reroll all 1's and the first 2 - take the three highest. Assign to ability you want (not required to assign in order). However, I might just present your idea as an optional rolling system next time. For the longest time I was a fan of 4d6 drop the lowest. But now I swear by the Standard Array method. It makes people make tough decisions on their stats, and completely eliminates cheating without me needing to watch everyone roll their dice. Cheaters notwithstanding, there's also something to be said for eliminating improbable luck. One long-time friend of mine always seems to end up with an array like [18, 16, 14, 14, 13, 12] when we roll the standard way. He's definitely not cheating -- it's antithetical to his nature, and in any case he's rolling in plain sight or even having the DM do the rolling -- but nevertheless, every time, instant demigod.Eliminating cheaters from my table is also a solid choice. [MENTION=6789971]bedir than[/MENTION], I was trying to picture your method and I find it is not much different than picking the stat array in the book, but I could see it getting interesting if everyone took the array that was 'rolled' and even more interesting if they took it in order. 4d6, drop the lowest, 6x. Repeat so you have two independent sets of 6 scores. Keep whichever set you prefer, arranging them in any order you wish. Or throw both away and roll a third set that you must use (again, in any order). I do a roll-around. The player to my left rolls 4d6 (drop lowest) and records the entry on a 6x6 grid. Then the next player rolls. Then the next, until each of the 36 squares is filled in. Then, once the grid is full, each player chooses a column, or row, or diagonal array of 6 numbers. They may not select the same array, so once it's claimed its gone. With these numbers, you put your stats in order, you may then swap two numbers. Anyway. It sounds complicated, but we roll together as a table, then individualize the results. 11 | 9 | 12 | 10 | 12 | 9 13 | 15 | 10 | 14 | 12 | 11 10 | 14 | 15 | 16 | 13 | 13 15 | 11 | 11 | 10 | 10 | 11 7 | 18 | 13 | 13 | 13 | 16 12 | 16 | 14 | 9 | 10 | 14
OPCFW_CODE
Running database access on Glassfish 7.0 with intellij I am having issues getting through the database access step on this project. When I try the login on the webpage(which launches the try { code below) I get this error java.lang.IncompatibleClassChangeError: class com.healthmarketscience.jackcess.impl.DatabaseImpl can not implement com.healthmarketscience.jackcess.Database, because it is not an interface (com.healthmarketscience.jackcess.Database is in unnamed module of loader I'm running with these dependencies which are the versions that were provided by my teacher for the project. When I run this code off the server it seems to work just fine in a normal intellij project. <dependency> <groupId>commons-lang</groupId> <artifactId>commons-lang</artifactId> <version>2.6</version> </dependency> <dependency> <groupId>hsqldb</groupId> <artifactId>hsqldb</artifactId> <version><IP_ADDRESS></version> </dependency> <dependency> <groupId>com.healthmarketscience.jackcess</groupId> <artifactId>jackcess</artifactId> <version>2.1.6</version> </dependency> <dependency> <groupId>net.sf.ucanaccess</groupId> <artifactId>ucanaccess</artifactId> <version>4.0.0</version> </dependency> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1.1.1</version> </dependency> Here is the code itself try { System.out.println("LOADING THE DATABASE"); //Load Driver - Step #1 Class.forName("net.ucanaccess.jdbc.UcanaccessDriver"); //Get Connection - Step #2 Connection con = DriverManager.getConnection("jdbc:ucanaccess://C:/Users/Josh/IdeaProjects/Java3Lab2/ChattBankMDB.mdb"); //Create Statement - Step #3 Statement stmt = con.createStatement(); //Execute Statement - Step #4 String sql; sql = "select CustID, CustPassword " + "from Customers " + "where " + "CustID = " + id; System.out.println(sql); ResultSet rs; rs = stmt.executeQuery(sql); //Process Data - Step #5 String custIdCheck = "1"; String custPassCheck = "1"; while (rs.next()) { custIdCheck = rs.getString("CustID"); custPassCheck = rs.getString("CustPassword"); } if (custIdCheck.equals(id) && custPassCheck.equals(passw)) { out.println("<!DOCTYPE html>"); out.println("<html>"); out.println("<head>"); out.println("<title>LOGIN SERVLET</title>"); out.println("</head>"); out.println("<body>"); out.println("<hi>VALID LOGIN</hi>"); out.println("</body>"); out.println("/html>"); System.out.println("WINN"); } else { out.println("<!DOCTYPE html>"); out.println("<html>"); out.println("<head>"); out.println("<title>LOGIN SERVLET</title>"); out.println("</head>"); out.println("<body>"); out.println("<hi>INVALID LOGIN</hi>"); out.println("</body>"); out.println("/html>"); } //Close Connection - Step #6 con.close(); } catch (Exception e) { System.out.println("PP: " + e); } I can't figure out why I get this error. It's not clear to me what I'm missing but maybe I need a different version of jackcess? I have tried changing to different versions of ucanacces and jackcess to no avail. Everything seems to run fine until I get to the login page and try the login button. The system.out.println LOADING THE DATABASE runs so I know it's crashing right after that. Post your module-info.java file. You try to implement Database but it is apparently not an interface but a class Can you check your IntelliJ project using the same JDK version as the GlassFish 7.0? GlassFish 7.0 Delivers Support for JDK 17 and Jakarta EE 10.
STACK_EXCHANGE
# -*- coding: utf-8 -*- import unittest from flask import Flask from flask.ext.testing import TestCase from flask_dashed.admin import Admin, AdminModule class DashedTestCase(TestCase): def create_app(self): app = Flask(__name__) self.admin = Admin(app) return app class AdminTest(DashedTestCase): def test_main_dashboard_view(self): r = self.client.get(self.admin.root_nodes[0].url) self.assertEqual(r.status_code, 200) self.assertIn('Hello world', r.data) def test_register_admin_module(self): self.assertRaises( NotImplementedError, self.admin.register_module, AdminModule, '/my-module', 'my_module', 'my module title' ) def test_register_node(self): self.admin.register_node('/first-node', 'first_node', 'first node') self.assertEqual(len(self.admin.root_nodes), 2) def test_register_node_wrong_parent(self): self.assertRaises( Exception, self.admin.register_node, 'first_node', 'first node', parent='undifined' ) def test_register_node_with_parent(self): parent = self.admin.register_node('/parent', 'first_node', 'first node') child = self.admin.register_node('/child', 'child_node', 'child node', parent=parent) self.assertEqual(len(self.admin.root_nodes), 2) self.assertEqual(parent, child.parent) self.assertEqual(child.url_path, '/parent/child') self.assertEqual( child.parents, [parent] ) def test_children_two_levels(self): parent = self.admin.register_node('/root', 'first_root_node', 'first node') child = self.admin.register_node('/child', 'first_child_node', 'child node', parent=parent) second_child = self.admin.register_node('/child', 'second_child_node', 'child node', parent=child) self.assertEqual( parent.children, [child] ) self.assertEqual( child.children, [second_child] ) self.assertEqual( child.parent, parent ) self.assertEqual( second_child.parent, child ) if __name__ == '__main__': unittest.main()
STACK_EDU
When developing an algorithm, the developer often has to consider the program's behavior in different scenarios and think through the steps for handling possible errors in the bot's operation. For example, if the bot is supposed to work with files on a computer, a situation can arise when the necessary file or folder is missing (has been moved or deleted). This scenario (depending on the business process, of course) can usually be foreseen and handled quite easily, for example, by adding an activity to check the presence of a file before the activity that works directly with that file. However, there may be exceptional situations that cannot always be foreseen. For example, when working with the interface of the application or browser, it may happen that some element (a button or a picture) will not load in time or will not appear at all. And when the algorithm reaches the step of working with this activity, an exception will occur, because the bot will not be able to find the element in the given time. There are several ways to handle such exceptions, we will describe them in more detail below. Exceptions handling mechanism Each activity block has an "Error" port, or "red port" as it is called. This is the port to which you can bind a set of activities to be performed if an error occurs during the main specified activity step. Let's take a simple example of error handling in the execution of an algorithm. Suppose that our bot has to read text from some file. To do this, we use the Read text file activity, but deliberately move the file itself before launching the bot, so that the bot won't be able to find it. For some time the bot will try to find the required file, but will eventually stop and an error message will appear in the console: Now let's look at the exception handling process itself. Let's design the algorithm so that the bot displays the cause of the error in the notification window for the user. - Specify a path to file in the Read text file activity. The path should look like this: - Move this file to some folder so that the path to the file changes, but we won't change the "Path" parameter in the activity parameters. - Add the Assign value to variable activity via the red port. You can set any name for the variable, for example "exception". In the "Variable value" parameter, pick the "Save the previous step result" option. - The next acivity in the workflow will be the User notification activity. Pick the "Save the previous step result" option in the "Description message" parameter. You can specify any value as a "Button name" parameter. Run the algorithm. The bot will try to find the file, but when it fails, there will appear a message showing that there is no such file in the given path. In reality, such a scenario can very easily be foreseen and handled (by checking the presence of the file manually). In this example, we want to demonstrate an approach that could be used in such a scenario if the manual check cannot be performed before launching the bot, so you can see on a simple example the principle of exception handling. Useful tips for exceptions handling The following tips and tricks are recommended when processing exceptions: - Try to anticipate possible bot behavior and, where it makes sense, handle each scenario explicitly. - When using the "Error" port, it is often useful to save an error message and display it in the log or in a message to the user. For the business user, you can provide a clearer error message. - In some situations, it makes sense to allocate the part of the algorithm into a subprogram, within which not to handle exceptions. It is better to use the "Error" port from the subprogram block to handle exceptions. A good practice in some cases is to catch an exception and return the algorithm to its starting point for another pass. For example, if the algorithm involves transferring data from a set of documents into a program, a good solution when an exception occurs at any iteration is to close the program (or force termination) and return to the initial state of the algorithm to re-process that document or move on to the next. - If your algorithm interacts with the interface of a website or some application, it would be useful to add the Take a screenshot activity via the red port. Thus, when an exception occurs, you can not only see the error message in the console, but also a screenshot of the moment the error occurred, which will allow to understand the causes and handle the exception in more detail.
OPCFW_CODE
Flutter is a tool that allows building native cross-platform (Android, iOS, Linux, Web, Mac, Windows, Google Fuchsia) apps with one programming language and codebase. Important is that we build native cross-platform apps. We build real apps, different kinds of apps, which we then distribute through the different platforms. We use one programming language so that we don't have to learn different programming languages such as one for iOS, one for Android, one for Web, instead, we have one programming language. So we work on one project, we write our code once and we still get different apps as a result, and that's the cool thing about Flutter. Without Flutter, we would normally build an iOS app by writing some Swift or Objective C code and using the iOS development environment and for Android, we would be using Java with the Android framework or we would be using Kotlin and also the Android development environment and we would have to learn all these different languages and tools and we would have to write two totally different apps or work in two totally different projects but with Flutter, that's not the case, one programming language, and one codebase. Flutter is actually a mixture of things and refers to two major things. One is an SDK, a Software Development Kit, we could say a collection of tools that allows you to write one codebase or use one codebase with one programming language. It gives us a framework, a widget library for that one programming language which is called Dart which we can use to build beautiful Flutter apps. It gives us an extensive collection of reusable user interfaces building blocks so-called widgets. So these are things like buttons, tabs, text inputs, dropdowns. We can style them and customize them and then we build user interfaces with these tools. In addition, we get a couple of utility functions and generally, some packages that help us build what users see and what users interact with and then that code which we built with the help of that framework, that is then compiled native machine code with the help of the SDK. So that is what Flutter is. Flutter uses a programming language (Dart). Dart is a programming language used for building front-end user interfaces. It's not limited to building mobile apps, that's just what Flutter uses it for but Dart is independent of Flutter and we can also build web apps with Dart. Dart is a programming language developed by Google or is being developed by Google. So Flutter and Dart are not really replacements, instead, both of them work together. Flutter is a collection of tools and widgets which are implemented using Dart. So that we don't have to reinvent the wheel there but we can write our own Dart code and use these existing widgets in our code so that you don't have to again reinvent how a button should look like and work but use the pre-built button instead and then just customize it to our requirements.
OPCFW_CODE
using ActivitiesManager.Data.Interfaces; using Microsoft.Extensions.Configuration; using System.Data.SqlClient; using System.IO; namespace ActivitiesManager.Data.Connections { /// <summary> /// Establece conexión con las bases de datos /// </summary> internal class BasesDeDatos { private static string GetConnectionString(string ConnectionName) { var builder = new ConfigurationBuilder() .SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile("appsettings.json"); var Configuration = builder.Build(); return Configuration.GetConnectionString(ConnectionName); } /// <summary> /// Establece, abre y retorna la conexión con la base de datos. /// </summary> /// <returns>Conexión</returns> public static SqlConnection Connect(string ConnectionName) { SqlConnection Connection = null; string CadenaDeConexionActivitiesManager = GetConnectionString(ConnectionName); Connection = new SqlConnection(CadenaDeConexionActivitiesManager); if (Connection.State != System.Data.ConnectionState.Open) { Connection.Open(); } return Connection; } } }
STACK_EDU
Can the original WotC-published SRD RTF files be found anywhere? The “Revised (v3.5) System Reference Document” page on wizards.com still exists, but all of its links, described as RTF (rich text file) downloads of a given size, all result in 404 page-not-found errors. Can these files still be accessed from wizards.com? Failing that, is there any legitimate mirror for them elsewhere, that has the files as Wizards of the Coast published them? The Internet Archive has a deliberate Collection of the original WotC SRD RTF files in its library, separately from the web-archiving project of the Wayback Machine. (Although the Wayback Machine is the most visible thing the Internet Archive does, the Internet Archive’s core project is actually an ongoing effort to collect open and public domain digital content, like the original SRD, into its library.) @Erik Similar but not the same thing -- Archive.org's Wayback Machine indexed the SRD page and its downloads, and Archive.org also stored the files themselves as an independent collection. As a "legitimate mirror", I think the IA collection is probably a better answer than the Wayback Machine snapshot. Presuming that archive.org was regularly scraping the source site to ensure its content was up to date - it would look like these resources fell off Wizard's site around the start of 2016 (dates of the files shown on the download page are Jaunary 19, 2016). Yes, thanks to the Wayback Machine (at least for now) Fortunately it doesn't seem that Wizard's robots.txt policies or anything else have precluded the SRD page and downloads from being archived by the Internet Archive project. I put in the page URL, randomly selected a snapshot from a few years ago et voila - all the files are available to download. Unfortunately, if WotC does change their site robots.txt or otherwise forbids the archival of site content (or someone who owns wizards.com in the future does), the effect is retroactive and the snapshots of the archived page will go away. The archive.org collection of the 3.5e SRD pointed to by Carl's answer is probably a safer long-term resource. I was pretty surprised to discover I didn't have this saved somewhere in my old documents folders, so I decided to also copy the archive and put all the files up on my personal webserver, for posterity's sake. Why do I always forget the Wayback Machine? Anyway, assuming no one can figure out a way to get the files off of wizards.com, I’ll be accepting this in a few days. Note: robots.txt has retroactive effect in the Wayback Machine and relies solely on current domain ownership, so if WotC or some future entity that owns the domain ever decide to, they can flip the switch and immediately cut off retrieval of all Wayback Machine archives for any given subset of the site. (AFAIK, the Collection does not have a similar automated vulnerability.) @TuggyNE good point, I have updated my answer to mention that caveat.
STACK_EXCHANGE
Whenever I open my data table using JSL, the first row (except for the first cell) is deleted and replaced with dots. I was just wondering why as when I open it manually it seems to work fine. Any suggestions would be appreciated. You really need to provide the data table, and the script you are using.......there are just so many items that can be in the data table, it is impossible to even guess at what is going on. So, the above is an example of the excel file which I open in JMP using JSL. When I open it using my script, it turns the second row (all cells after date) to dots. I think it may be to do with how it's imported. Do I need to specify how to open the table in my script, e.g. using best guess etc.? Also, in my script I specified that the data starts on row 2, I'm thinking that might have something to do with it? As a sidenote, this isn't the excel file I originally tested on, but an example of it. The columns would otherwise be filled with data. Your data starts on row 3, not row 2. So you need to change that, and your data should read in correctly. I thought that that may be the issue but it doesn't work. In jmp, when the excel file is converted into a data table, the first row is used as a column heading... I think I know now what was going wrong. At the end of my script I set the columns to numeric and nominal, which is why the character values probably get replaced (I'm still not sure why the Date wasn't replaced though.) Is there a way to specify the data type of a row in JSL? Your data does start on row 3. That is clearly indicated in the data table image you show. The column name "Date" has been read in from row 1 of your csv file, and then your data is being read in starting on row 2. The value on row 2 of your csv file has the value of "Date". Therefore JMP is finding a non numeric value and it then sets the column to be a Character data type. What you need to do, is to use the data previewer when opening the file, and go through the wizard and select the start of the data being on line 3. You can also specify the data to be forced to be numeric or character. To go to the Preview, when you select the file, you will see that in the lower left of the window, there is a list of checks that you can do, select the "Data with Preview". If you do this once, and then edit the "Source" entry in the data table, you will see the JSL you can use to read in the data table from a script. If you still need to change the data type in JSL, the code is :date << data type(numeric); This is documented in either the Scripting Guide or in the Scripting Index Okay, that's fine thanks. My data table appears fine when I use JSL to mainpulate the import. After I import the table however, I change the columns to numeric and continuous so in te end result, the row is missing again. That's why I asked for a way to set he data type of a row (so that I could set it to character and the row would appear again. There are no labels assigned to this post.
OPCFW_CODE
Linux disk clone: tar vs special clone utility My situation is as follows. I have installed Debian Lenny, including Apache, MySQL, etc., on a master machine. Now I would like to be able to perform the same installation over and over again. I can see 2 solutions: Create a big tar file from the master machine and un-tar it onto the slaves. Use some specialized software for that matter, e.g. Clonezilla. Are there any drawbacks on using the first method? P.S. I would like to setup software RAID 1 on the machines. I think that Clonezilla has a hard time to replicate an image to a software RAID partition, so that means plus one point for the tar method. For one-shot cloning, dd does the deed. The third option is to dump cloning and instead use a proper system configuration management tool such as Puppet or Chef. Cloning is a really bad idea for systems that you need to maintain over time, as you need to apply changes to all machines currently in the field, as well as respinning all of your clone masters. If you use a proper management tool, though, you just describe the state you wish a system to be in, and then the tool makes sure that the system is in that state -- whether it just came "factory fresh", or has been in production for several years and just needs to have a config file tweaked. Basically, your new machine process should be: Use the OS' native automated installation procedure (d-i preseeding works really well) to get a base minimum system installed that is capable of running your automation tool (and nothing else); Run the automation tool to configure the system to your liking. The tools that you propose seem very powerful and probably are capable of handling my situation easily. But, they seem to have a steep learning curve. Also, the slaves that I want to create are going to be deployed to different places where I won't have access any more, meaning that the installation is a one-shot operation and after that I am not responsible for them. Aaah, optimism. I remember when I had some of that. tar will not preserve some things - for instance posix ACLs [ although i doubt you use them ]. take a look at debian pre-seeding to orchestrate mass installations. some time ago i've asked a bit related questions about management and cloning. There are many alternatives... You may also consider FAI or Ghost for Unix (G4U) for example. Your question is "what is best ?" That's not so easy to answer, because it really depends on what you need or what you like best. New installations are quick whith netinstall + a proxy. Quick personalisation can be done through custom packaging and/or custom scripts. Sometimes a tar or rsync copy is great enought (and quick as hell) to duplicate (or move) a full machine. Personnaly and at work, I used to practice all those three methods. I suggest using: FAI or Debian preseeding/quickstart when installing a new physical server (partitioning, raid) rsync and/or tar to duplicate or move an old and heavy-tweaked server create your own packages and scripts for customisations
STACK_EXCHANGE
A del_fun Function Adaptor for STL Containers Copyright © 2003-2017 Wesley Steiner This article presents a C++ template definition (del_fun) that automatically applies the C++ delete operator to polymorphic pointer elements of an STL container object when invoked via iterative STL functions such as for_each. Under ideal conditions STL containers are designed to hold class objects (instances of C++ classes). This greatly simplifies our job as C++ programmers by having the compiler automatically take care of all object copy, assignment and cleanup that occurs as a result of the application of STL functions. Unfortunately this convenience is not available for containers of pointers and containers of pointers occur frequently in C++ when polymorphic objects are involved. One of the first things you learn as an STL programmer is that declaring a container of polymorphic objects doesn't work, you must use a container of pointers to polymorphic objects. As an example consider the following all to familiar polymorphic class hierarchy: Here Shape is an abstract base class of RectangleShape and CircleShape which are concrete derived classes that implement the pure virtual Draw method. In order to use these shape objects in an STL container we must define a container of pointers to the base Shape class as follows: Then you can populate the container with concrete objects derived from this abstract base class like this: Keep in mind when using STL containers of pointers that all of your concrete classes must be safe for copying and assignment but that's another story for another day. For now lets assume you are using your containers of pointers and now its time to end your application and cleanup resources. As mentioned above STL containers take care of deleting resources when the container goes out of scope. In the above example the container object will automatically release the memory used by each element, a Shape*, when it goes out of scope. However the objects that the elements point to are not destroyed. Destroying the objects pointed to are your responsibility. Early adopters of the STL library, myself included, usually solved this as follows: Quickly we learned that writing our own loops to iterate over an STL container was reinventing the wheel and, more importantly, inviting an opportunity to introduce bugs. The STL solution is to use the for_each function as follows: A common, and valid, complaint with this solution is that you need to write a one-line function to do the delete for every type of base object class. Once again this gets tedious and is prone to errors. In a perfect world what you really want to do is call the delete operator in place like this: Unfortunately, as much as we would like it too, the above line of code will not compile and rightly so if you look at the definition of for_each. Reading past the cryptic nature of STL code we can see that the for_each function iterates over the elements of the container and applies the _Op argument, via a function call, with the element as its argument. The trick used throughout STL involves the use of Function Adaptors or Functors. Functors are C++ classes that implement a function operator, operator(), to execute the body of the function. In our case the code to be executed is the call to the delete operator on the pointer value of each container element. When developing C++ template solutions I find it is often easier to first write a solution for a specific type and then generalize it later with templates. Lets start by writing a specialized Shape functor that we can pass to the for_each function that will do the job of calling the delete operator on the pointer argument: Here the purpose of the del_shape function is to return a functor class, del_shape_t, which when invoked inside the for_each loop will apply the del_shape_t::operator() method on the argument. The body of del_shape_t::operator() simply calls the delete operator as desired. Of course this example only works for containers of Shape* elements. In order to make this more useful we need to generalize it for any pointer element types by using templates. In order to generalize the above functor for use with any pointer types we simply replace the Shape type with a template parameter as follows: The purpose of the del_fun function is to return an instance of a del_fun_t object which is passed to the for_each function. All we need to do now is include the pointer type as a template parameter in the call to for_each as follows: Now we have a functor that will apply the delete operator to any pointer type. This functor does with templates what we would need to do explicitly. The del_fun function can be used anywhere a functor argument is needed in STL. The same pattern can be adopted to invoke other C++ operators or functionality as necessary. Scott Meyers, Effective STL: 50 Specific Ways to Improve Your Use of the Standard Template Library (Addison-Wesley, 2001) Scott Meyers, Effective C++: Second Edition, 50 Specific Ways to Improve Your Programs and Designs (Addison-Wesley, 1998)
OPCFW_CODE
Why do we discard the first 10000 simulation data? The following code comes from this book, Statistics and Data Analysis For Financial Engineering, which describes how to generate simulation data of ARCH(1) model. library(TSA) library(tseries) n = 10200 set.seed("7484") e = rnorm(n) a = e y = e sig2 = e^2 omega = 1 alpha = 0.55 phi = 0.8 mu = 0.1 omega/(1-alpha) ; sqrt(omega/(1-alpha)) for (t in 2:n){ a[t] = sqrt(sig2[t])*e[t] y[t] = mu + phi*(y[t-1]-mu) + a[t] sig2[t+1] = omega + alpha * a[t]^2 } plot(e[10001:n],type="l",xlab="t",ylab=expression(epsilon),main="(a) white noise") My question is that why we need to discard the first 10000 simulation? ======================================================== Sometime you do that because there are weird behaviors around initialization; you want to see what the simulation is like after it's reached some stable point. Bottom Line Up Front Truncation is needed to deal with sampling bias introduced by the simulation model's initialization when the simulation output is a time series. Details Not all simulations require truncation of initial data. If a simulation produces independent observations, then no truncation is needed. The problem arises when the simulation output is a time series. Time series differ from independent data because their observations are serially correlated (also known as autocorrelated). For positive correlations, the result is similar to having inertia—observations which are near neighbors tend to be similar to each other. This characteristic interacts with the reality that computer simulations are programs, and all state variables need to be initialized to something. The initialization is usually to a convenient state, such as "empty and idle" for a queueing service model where nobody is in line and the server is available to immediately help the first customer. As a result, that first customer experiences zero wait time with probability 1, which is certainly not the case for the wait time of some customer k where k > 1. Here's where serial correlation kicks us in the pants. If the first customer always has a zero wait time, that affects some unknown quantity of subsequent customer's experiences. On average they tend to be below the long term average wait time, but gravitate more towards that long term average as k, the customer number, increases. How long this "initialization bias" lingers depends on both how atypical the initialization is relative to the long term behavior, and the magnitude and duration of the serial correlation structure of the time series. The average of a set of values yields an unbiased estimate of the population mean only if they belong to the same population, i.e., if E[Xi] = μ, a constant, for all i. In the previous paragraph, we argued that this is not the case for time series with serial correlation that are generated starting from a convenient but atypical state. The solution is to remove some (unknown) quantity of observations from the beginning of the data so that the remaining data all have the same expected value. This issue was first identified by Richard Conway in a RAND Corporation memo in 1961, and published in refereed journals in 1963 - [R.W. Conway, "Some tactical problems on digital simulation", Manag. Sci. 10(1963)47–61]. How to determine an optimal truncation amount has been and remains an active area of research in the field of simulation. My personal preference is for a technique called MSER, developed by Prof. Pres White (University of Virginia). It treats the end of the data set as the most reliable in terms of unbiasedness, and works its way towards the front using a fairly simple measure to detect when adding observations closer to the front produces a significant deviation. You can find more details in this 2011 Winter Simulation Conference paper if you're interested. Note that the 10,000 you used may be overkill, or it may be insufficient, depending on the magnitude and duration of serial correlation effects for your particular model. It turns out that serial correlation causes other problems in addition to the issue of initialization bias. It also has a significant effect on the standard error of estimates, as pointed out at the bottom of page 489 of the WSC2011 paper, so people who calculate the i.i.d. estimator s2/n can be off by orders of magnitude on the estimated width of confidence intervals for their simulation output.
STACK_EXCHANGE
I just updated. On Debian, I did manually, because I want to see how is updating. I also want to keep my edited lnd.conf file. Maybe new released will show also on dashboard a nice animated status, how is updating. Update worked perfect and fast. An important NOTE for Bluewallet and Umbrel users: If you use Android 8 or older, DO NOT UPDATE your Bluewallet to v 6.1.0. It is not supported and is crashing. It is about including Tor and seems that is an restriction. You have 2 options right now: - wait for Bluewallet to find a solution and make a new release - upgrade your OS phone to Android 9+ - if you can’t upgrade OS, buy a newer phone (time for you to use a Pixel + GrapheneOS) Thanks for the quick update! One thing: on my end (umbrel @ ubuntu 21.04) /bin/bitcoin-cli is not working anymore. Could somebody please verify this? Maybe I just screwed things up locally. Edit: @aarondewes Thank you! “sudo” was missing as well. So my fault. The Update wipes the Settings in lnd.conf for node Alias. I am guessing the upgrade wipes the lid.conf and replaces with new. But, it would be a great feature to be able to set Alias from the Settings in Umbrel node. If you want to connect to Bluewallet in Umbrel open the Connect Wallet sidebar nav, and select Bluewallet to get the QR code for your core node address. However, contrary to the order the instruction tell you, I believe you should first connect your bitcoin node through mobile Bluewallet’s Network / Electrum Server, and only then can you connect the LND server through the Network / Lightning Settings. At this point all your existing Bitcoin (base layer) wallets will be connected to your own Bitcoin node, which is sweet. But when it comes to your lightning wallets, the old one is still custodial. You would have to create a new Lightning Wallet in Bluewallet which would connect to your own LND lightning node. Any thoughts on Purism Librem 5? Does it work with Umbrel? Check your lnd.conf file, looks like your node name went back to default. I put it correctly before starting the node, but I don’t know what happen, is back to default. Also I can’t install the new BW app now… ah sorry, the entire umbrel install, including bitcoin.conf and lnd.conf, is reset on update to make sure updates are deterministic in nature, we install them from a clean slate (whether you do a manual update or via the UI) can you try restarting your umbrel and then try installing BW app again?
OPCFW_CODE
[1/4] from: robert::muench::robertmuench::de at: 23-Sep-2003 16:42 On Sun, 21 Sep 2003 12:43:26 +0200, Dide <[didec--tiscali--fr]> wrote: > I made a Rebol script to delete spam mails directly on the server > without loading them. > > http://www.agora-dev.org/forums/view.php?bn=rebol_prjnvxprod&key=1061826280 > > (This tool will be improve soon (I hope) with new features). > > I'm investigating the way to automaticaly select spams in the list. Hi Dide, thanks a lot for this tool. Very handy! I'm now using it to filter out those damnd 150KB messages about the MS Update. WRT to automatically select spam mails, I want to suggest a very simple but most effective approach: Let people mark messages as spam and collect a MD5 checksum on a central server. Than your tool can perform a check against the server to see if others already reported the mail as spam. You can get more information about such a concept from http://www.cloudmark.com IMO people are the best spam recognizer, much better than any algorithm. Of course I would like to see a Rebol based version ;-). If the filtering engine would work standalone and transparent on the server, where the mail-server is running as well, this would be perfect :-)) -- Robert M. Münch Management & IT Freelancer Mobile: +49 (177) 245 2802 http://www.robertmuench.de [2/4] from: jvargas:whywire at: 23-Sep-2003 12:44 Hi Robert, MD5 checksums will not work for slightly mutating content. We need some sort of fuzzy signature to classify the spam. Also there should be a very fast an efficient protocol to consult the central database for this "signatures". IMHO I think it would be best to have the centralized server use some bayesian methods trained by a collective set of users. Also I think the best place to stop the spam is at the receiving smtp server, and it shouldn't stop at just classifying and stopping; it should also try to waste the spammer CPU and BW resources, i.e.. like responding very slow if the server is an black list or rejecting the spam before delivering it to POP. So what it would be great is to create rebol smtp-proxy to do the job in coordination with the central server and training by the local admin. My two cents, Jaime On Tuesday, September 23, 2003, at 10:42 AM, Robert M. Münch wrote: > On Sun, 21 Sep 2003 12:43:26 +0200, Dide <[didec--tiscali--fr]> wrote: >> I made a Rebol script to delete spam mails directly on the server <<quoted lines omitted: 29>>> To unsubscribe from this list, just send an email to > [rebol-request--rebol--com] with unsubscribe as the subject. Cheers, Jaime -- The best way to predict the future is to invent it -- Steve Jobs [3/4] from: g:santilli:tiscalinet:it at: 24-Sep-2003 10:07 Hi Jaime, On Tuesday, September 23, 2003, 6:44:51 PM, you wrote: JV> Also I think the JV> best place to stop the spam is at the receiving smtp server, and it Indeed, however on a busy mail server this is simply too much work. The client usually has much more spare power available, and that's why most big SMTP servers don't do spam filtering. (You also have to consider that a user could want to receive the emails you are filtering; it's always better to let the user decide.) Regards, Gabriele. -- Gabriele Santilli <[g--santilli--tiscalinet--it]> -- REBOL Programmer Amiga Group Italia sez. L'Aquila --- SOON: http://www.rebol.it/ [4/4] from: robert:muench:robertmuench at: 27-Sep-2003 17:34 On Tue, 23 Sep 2003 16:42:07 +0200, Robert M. Münch <[robert--muench--robertmuench--de]> wrote: Hi Dide, I mad a quick patch to the program to fix a bug that happens if the msg/from is 'none that can happens sometimes. ; populate colomn blocks insert/only b-msg reduce [ any [msg/subject " - no subject -"] either none? msg/from ["none"][first msg/from] any [all [msg/date to-string msg/date/date] ""] any [all [msg/date to-string msg/date/time] ""] size mesg-num msg ] Hope this helps.... -- Robert M. Münch Management & IT Freelancer Mobile: +49 (177) 245 2802 http://www.robertmuench.de - Quoted lines have been omitted from some messages. View the message alone to see the lines that have been omitted
OPCFW_CODE
A very common question that raise in the forums is: why a FULL BACKUP, which was fast in past becomes slow. It can be a one-time issue (specific backup), or permanent. These question are usually not related to big database (100TB +), which we are not using FULL BACKUP usually, and are more common on small-medium databases. In this blog I will provide information, explanation, tips, and some links to read more. This blog is not a full article (at this time) on the issue, it is more of a collection of thoughts raised in my head at the time that I saw the question at the forum. The blog will be in permanent process of improving. If you have any comment please post it in my Facebook home page. Firstly you need to monitor what is your bottleneck. - Please check the IO using perfmon and Resource Monitor (these are the fastest first tools that you have built-in). You might find out that the issue is not related directly to SQL Server. Maybe you are using all the Disk ability to write... This is not the only application on the machine (make sure that other do not use the disk if possible) - Monitor CPU as well, especially if you use compression! * FULL BACKUP also copies the portion of the transaction log it needs to make the restore consistent! Check the number of Virtual Log File (VLF)! High number of virtual log files cause transaction log backups to slow down and can also slow down database recovery. If you ask the question at the forum, then please post some information. Start with general information like what is the server version and edition?what is your operating system? What is your RAID configuration? Are you backing up locally or remotely? are you using Virtual Machine and if so, what is the disk configuration, is it dynamically growth? What shard hardware are you using? If this is a one time case, it might related to actions that made and need to move from the log file to the data file. Check the log file before and after (size, free size). Backup the log file before the full backup. Do you maintenance the indexes (rebuilt / reorganize)? Do not do this as part of the backup maintenance (rebuilt => log filled => backup need to works on that data + the time of the rebuilt) How are you backing up the database? - native with compression (In this case check the CPU... this might be the bottleneck!) - 3rd party tool Do you try to Shrink the log during backup?!? Stop this if you do (I have seen lot of backup maintenance that people write which includes Shrinking the log file). A full backup does the following: > Force checkpoint and make a note of the log sequence number at this point. updated-in-memory pages flushed to disk => time! > Read from the data files in the database. Once this is stop SQL Server Make a note of the log sequence number of the start of the oldest active transaction at that point. > Read as much transaction log as is necessary! * You can explore more information during the backup by using WITH STATS option (does not give a lot) * In order to get more diagnostic information you can use trace flag 3604. This statement instructs DBCC commands, which designed to send the output to the log, attached debugger, or a trace listener, to redirect the output to the client (SSMS for example). * In order to get more diagnostic information in SQL Server's error log you can use trace flag 3605. This statement instructs DBCC commands to redirect the output to the error log. * Enable Trace Flag 3014 to returns more information about BACKUP process (with the previous two). * You might want to enable Trace flag 3004 to get more output about file preparation, bitmaps, and instant file initialization * While the above Trace Flags help for monitoring issues, in reality (once your process works OK) we want to suppress log entries. This can be done with the Trace Flag 3226. Improving Backup performance: According the information above we have several actions that we can do or confirm in order to improve the backups like: > Isolate the backup IO from other application's IO on your system. > Suppress log messages, both on the client and on the error log > Split the backup job: * If your Transaction Log includes lot of new committed transaction, backup the log file first * You can execute checkpoint before the backup. > chose the right time to execute the backup > Use compression if you do not have CPU issues. It will leads to less IO writes and might improve both the backup sizes and the backup times. If the CPU is the issue this might leads to longer time, but in most cases this mean that you need more CPU. > Use multiple backup devices in order to wite on all devices in parallel (use devices as many CPU that you have - 1, might be nice golden rule) > Use a combination of full, differential, and transaction log backups. More to read: ** For more information regarding BACKUP process, I HIGHLY recommends to check this article by Paul S. Randal: ** For more information regarding using the Performance Monitor, you can check this article by Brent Ozar: ** You can read more information about Optimizing Backup and Restore Performance in SQL Server: I hope this was useful :-)
OPCFW_CODE
Is there any alternative to Constant in C#? I am creating a program using Microsoft XNA and Kinect. I want to get the width of a Skeleton. I have Skeleton Right and Left hand Points. I subtract them and get the Width of Skeleton. I want to store this Value in a constant so that it wont change if Skeleton moves anywhere. I have written the following code but its giving me the following error message. Kindly tell me any alternative or guide me how to use constant Joint hand = skl.Joints[JointType.HandRight]; DepthImagePoint rightShoulderPt = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(rightShoulder.Position, DepthImageFormat.Resolution640x480Fps30); DepthImagePoint leftShoulderPt = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(leftShoulder.Position, DepthImageFormat.Resolution640x480Fps30); EDIT //e.g //These values will be continuously changing based on Skeleton Position. I want to freeze //these points and store them in some variable. rightShoulderPt.X= 200; leftShoulderPt.X = 450; const float totalWidth = rightShoulderPt.X - leftShoulderPt.X; Error 1 The expression being assigned to 'totalWidth' must be constant Just never change the value. The constant keyword is meant for compile-time constants, not runtime! You could use readonly and assign this value in the constructor. Other than that, I don't think there's a specific keyword for your situation. Thanks for the reply.. Would readonly change the variable value or not? Readonly is a keyword. All it does, is ensure you can only assign this variable in the constructor. After that, it can not be modified again. +1. @user3480644 readonly is modifier that limits when you can change to value... not sure what your comment mean: "readonly change the variable or not" (see MSDN link provided by Robert Harvey's answer for details). but i want to assign the values in some other method..Its not the Constructor? @user3480644: Then it's not a constant, and it's not read only. Just never change the value. I think you'll have to show more of your code if we are to help here. Is this variable a field in a class? If so, making it readonly would ensure that you can only assign to it in the class constructor. If you want to assign to it after that, you can not use readonly or constant. Is there a problem with just "not changing" the value yourself? It won't change if the skeleton moves, unless you explicitly change it. Its a Kinect Based application, I have developed it in XNA, XNA has an update method which runs 24 times in a second, and it records Skeleton Position. Its value is continuously changing. I want to grab the first value and store it in a variable that won't change....I have updated my Question Kindly check it. Just declare a field and assign this value once. Perhaps make it a Nullable<float> or check if it's not equal to 0.0. In your update loop, only assign it if it has not been done before. Remember, just because XNA provides you with this data, does not mean you have to overwrite your own derived values every time. readonly allows you to set the value in the constructor, but forbids any further changes. The readonly keyword is a modifier that you can use on fields. When a field declaration includes a readonly modifier, assignments to the fields introduced by the declaration can only occur as part of the declaration or in a constructor in the same class. http://msdn.microsoft.com/en-us/library/acdd6hb7.aspx
STACK_EXCHANGE
You may have heard the term 'Calculus', it's a big idea in mathematics and so powerful that I find it hard to imagine mathematics without it. It's main technique, differentiation, makes problems like this fairly easy - with a little care over the algebra. So if you are thinking about continuing with mathematics beyond Stage 4, learning about differentiation would be one of the big benefits that await you. But differentiation is not really Stage 4 mathematics, though very close, so we don't use it in solutions because that would be unfair to students who haven't seen the idea before, it would be like suddenly changing into a language you haven't had a chance to If that's got you interested why not take a look at Vicky Neale's article : Introduction And thank-you to David and Berny from Gordonstoun and to Naren from Loughborough Grammar School who sent in great solutions using that technique In fact the use of differentiation did more than get an answer. It found the result that the cone needed to have its height 1.41 times bigger than its radius, like the Stage 4 result below, but found it in this more interesting form. And the value of that is that it suggests a new direction to pursue with this problem : it seems so neat. Why the square root of 2 ? Is there a connection with the diagonal of a square ? Or is it something else ? That's maybe a little bit on from us at Stage 4 so here's a way to solve a problem like this using Stage 4 mathematics. It's really 'trial and improvement' but using a spreadsheet to make the calculation effortless. We are going to use : - the radius, which we'll adjust to get nearer and nearer to the the height, which will be determined by our choice of radius, so that we get the chosen target volume the slant length, which we'll find using r and h and and the surface area, for which there's a great little The volume of a cone is one third the volume of a cylinder with the same base and height. If we took the target volume to be 1 litre, 1000 ml, then the formula that connects h and r is The slant length (s), using Pythagoras, is And the surface area of a cone is plus the base if you need it - here we don't. Incidentally, if you don't know where that surface area formula comes from it may be good to take a moment to look at that. Flatten the curved surface out to get a sector (how do you know it's a sector ?). The radius will be the cone's slant length, so you can calculate the area of the whole circle. To know the proportion that the sector is of that circle compare the sector arc, which is the cone's base circumference, with the circumference of this new circle, radius s. Back to the funnel and using as little plastic as possible. Take a look at this spreadsheet : Funnel Can you see what each column does ? Click on a cell and check - The first column has increasing radius values which you can - The next column calculates the height, because we knew the volume was 1000 ml - The radius and height are then used to calculate the slant - And the final column uses the radius and the slant length to calculate the surface area, which we want to be as small as There's even a graph so you can have some sense for how surface area varies as the radius value ranges across your chosen interval.
OPCFW_CODE
A pyrocumulus cloud is produced by the intense heating of the air from the surface. The intense heat induces convection, which causes the air mass to rise to a point of stability, usually in the presence of moisture. Phenomena such as volcanic eruptions, forest fires, and occasionally industrial activities can induce formation of this cloud. The detonation of a nuclear weapon in the atmosphere will also produce a pyrocumulus, in the form of a mushroom cloud, which is made by the same mechanism. The presence of a low-level jet stream can enhance its formation. Condensation of ambient moisture (moisture already present in the atmosphere), as well as moisture evaporated from burnt vegetation or volcanic outgassing, occurs readily on particles of ash. Pyrocumuli contain severe turbulence, manifesting as strong gusts at the surface, which can exacerbate a large conflagration. A large pyrocumulus, particularly one associated with a volcanic eruption, may also produce lightning. This is a process not yet fully understood, but is probably in some way associated with charge separation induced by severe turbulence, and perhaps, by the nature of the particles of ash in the cloud. Large pyrocumuli can contain temperatures well below freezing, and the electrostatic properties of any ice that forms may also play a role. A pyrocumulus which produces lightning is actually a type of cumulonimbus, a thundercloud, and is called pyrocumulonimbus. The World Meteorological Organization does not recognize pyrocumulus or pyrocumulonimbus as distinct cloud types, but rather classifies them respectively as cumulus (mediocris or congestus) and cumulonimbus. Pyrocumulus is often grayish to brown in color, because of the ash and smoke associated with the fire. It also tends to expand because the ash involved in the cloud's formation increases the amount of condensation nuclei. This poses a problem, as the cloud can trigger a thunderstorm, from which the lightning can start another fire. Effects on wildfires A pyrocumulus cloud can help or hinder a fire. Sometimes, the moisture from the air condenses in the cloud and then falls as rain, often extinguishing the fire. There have been numerous examples where a large firestorm has been extinguished by the pyrocumulus that it created. However, if the fire is large enough, then the cloud may continue to grow, and become a type of cumulonimbus cloud known as a pyrocumulonimbus cloud, which may produce lightning and start another fire. - Pyrocumulus entry in the AMS Glossary - Csifo, Noemi. "Fire Cloud Cumulus Cumulonimbus Weather". Sciences 360. R R Donelley. Retrieved 22 October 2013.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - "Pyrocumulus by The Airline Pilots".<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> |Wikimedia Commons has media related to Pyrocumulus clouds.|
OPCFW_CODE
Option: Empty Default branch to be the most recent commit from the Branches to build, settings. see this pic: I look at these two settings as: Default Branch : when ever i manually push the BUILD button in AV, it will git clone etc from that Branches to build : when a git webhook is sent to AV, AV will decide if it should git clone, based on the settings, here. Would be awesome if Default Branch could have the option to get the latest commit from whatever setting is in Branches to build. For example. I have: branches: exclude: - master so when i manually press the build button in AV, it will grab the most recent commit that is NOT from the master branch. Boom! awesome! Otherwise, set a value and it's the most recent commit from that branch (as is what's happening, now). Comments are currently closed for this discussion. You can start a new one. |?||Show this help| |ESC||Blurs the current field| |r||Focus the comment reply box| |^ + ↩||Submit the comment| You can use Command ⌘ instead of Control ^ on Mac 1 Posted by Pure Krome on 09 Jul, 2015 09:02 AM Also - are the Default Branch & Branches to Build in the GUI totally separate and independant to the branches:section in the yml? (my question above is suggesting they are linked...) ie.**Default Branch** & Branches to Build are used to determine if the build should start/kick off while the branches:section i the .ymlfile is used to split out config data _AFTER the build has been determined to go ahead and start? masterkick off build (GUI Setting) Support Staff 2 Posted by Feodor Fitsner on 10 Jul, 2015 01:51 AM "Default Branch" and "Branches to Build" are separate from those one in appveyor.yml. Basically, it's either UI or YAML there. Regarding getting commit of "not master" - nice idea but I'm not sure if it's possible using GitHub API. 3 Posted by Pure Krome on 10 Jul, 2015 02:01 AM So .. if it's UI or YAML .. if i go the YAML way, how does the AV service know when to pull down .. if using the file means it has to pull down first to find the file to check what branches to use? Support Staff 4 Posted by Feodor Fitsner on 10 Jul, 2015 02:11 AM It just fetches a single appveyor.ymlat specified commit ID using GitHub API. 5 Posted by Pure Krome on 10 Jul, 2015 02:24 AM so does that mean .. basically ... the only UI elements in the project settings are the only settings that are NOT available via a YAML file? 6 Posted by Pure Krome on 10 Jul, 2015 02:29 AM Also - another question about this.. if the AV service uses either the UI or the File ... what would the UI look like if i did this file setting in the UI Basically, the UI has 2 settings .. and the file has 1? Support Staff 7 Posted by Feodor Fitsner on 10 Jul, 2015 02:32 AM 8 Posted by Pure Krome on 10 Jul, 2015 02:44 AM How do you define the default branchin file? also, the previous q (above) about UI elements? Support Staff 9 Posted by Feodor Fitsner on 10 Jul, 2015 02:49 AM Ah, Default branch is on UI only. Sorry if I confused you. 10 Posted by Pure Krome on 10 Jul, 2015 03:53 AM Sweet. ok. i've fired off a support query to GH to see if they have some GH Api :sparkle: magic that can do this :) 11 Posted by Pure Krome on 10 Jul, 2015 06:28 AM OK . GH replied. First, their reply, them my comment... Right now, my repo's look like they have WebHooks wired up already, so you could leverage the Push Eventthat occurs with the webhook and just read what branch the commit occurred against: and then use the refpayload value :) 12 Posted by Pure Krome on 14 Jul, 2015 01:07 AM Also - if it's possible to do this ... then this could also be added to a YAML file? ^ where any is a reserverd word (documented) or that value can be left empty... (as the title of this post, suggested) Support Staff 13 Posted by Feodor Fitsner on 14 Jul, 2015 01:10 AM Push event is wired anyway. As far as I understand their suggestion is to collect information about pushes and then use last push. It won't work if webhook disabled. I'd leave default branch on UI. 14 Posted by Pure Krome on 14 Jul, 2015 01:15 AM But ... nothing would work if the webhook is disabled? that's how AV works, right? Support Staff 15 Posted by Feodor Fitsner on 14 Jul, 2015 01:21 AM You can start new builds with "New build" button - many users do manual or scheduled builds. 16 Posted by Pure Krome on 14 Jul, 2015 01:35 AM but doesn't that button just read the value of the Default branch.. which might defeat the purpose of what I'm trying => eg. run a new build of the most recent commit. (well, that's usually what i want to try and do). side note: i do understand the process when people want to push a button, to manually kick of a live/production build/deployment. Support Staff 17 Posted by Feodor Fitsner on 14 Jul, 2015 01:50 AM Yes, it reads default branch and gets the latest commit from that branch. Looks like what you are trying to do can be accomplished by going to a history and clicking "Re-build commit" on the top most one. 18 Posted by Pure Krome on 14 Jul, 2015 04:42 AM NP. i'll close this off then. Cheers for walking through the discussion with me. Pure Krome closed this discussion on 14 Jul, 2015 04:42 AM.
OPCFW_CODE
Agile development is critical for fast and high-quality application releases. To adopt this development process, many organizations also use continuous integration (CI), continuous delivery (CD), and continuous testing (testing) as a complement to Agile. This blog will take a closer look at CI/CD/CT in Agile: continuous integration in Agile; continuous testing in Agile; and continuous delivery in Agile. In the age of Agile and digital transformation strategies, every brand is looking to set itself apart. You need to offer services to end users on their terms, on their devices, at their convenience, streamlining and differentiating features. On top of that, end users expect everything to look great, work perfectly — and quickly. When choosing your digital transformation strategy, there are key tradeoffs to understand between what are conflicting agendas. You need to get features to market faster. But you need to balance increasing presence on users’ devices against maintaining high application quality. What’s commonly known is that acceleration can come in the form of adopting an Agile process: Highly independent dev teams who are responsible for a feature or area of the code and delivering incremental functionality from design to production. That’s where continuous and Agile go so well together. CI/CD/CT in Agile Development: The Three Cs Continuous integration (CI), continuous delivery (CD) and continuous testing (CT) are all important in Agile. While serving slightly different objectives, these elements can integrate to assist the team in achieving the goals we mentioned: velocity and quality. Supercharge your continuous agile development with Perfecto — the industry’s most-trusted web and mobile app testing platform. Get started with our FREE 14-day trial today! Continuous Integration is a necessary approach for any Agile team. The image below depicts a team that has not implemented a CI process. You see a 60-day development period and then after all that, the team shares their code. The outcome of such a scenario is creating or extending the post-sprint stabilization phase, where developers need to test and redo integration points. This gets expensive. Naturally, this is also very frustrating to developers and testers. Continuous integration in Agile changes that. The team integrates increments from the main tree continuously. Using test automation, they are able to ensure the integration actually works. Each sprint finishes on time and within the defined quality expectation. This shrinks the stabilization phase, possibly enabling you to get rid of it altogether. In a CI process the ideal would be a working product at the end of each sprint, maybe even each day. Continuous delivery in Agile is the practice of automating all processes leading up to deployment. Thus, continuous delivery takes Agile through to its conclusion. Continuous delivery includes many steps, such as validating the quality of the build in the previous environment (e.g., dev environment), promoting to staging, etc. These steps, done manually, can take significant effort and time. Using cloud technologies and proper orchestration, they can be automated. As opposed to continuous delivery, continuous deployment takes agility to the next level. The working assumptions are that: The code is working at any point in time (for example, developers must check their code before they commit). A significant amount of testing is done automatically, such that we have confidence the build is solid. That level of test and orchestration automation is difficult to find, but some Agile SaaS organizations are certainly benefitting from this approach. To complete an efficient CD process you need to ensure you have a monitoring dashboard for your production environment in place. This helps you eliminate performance bottlenecks and respond fast to issues. We use analytics to understand the usage, to improve user experience and to measure the performance of our website. We anonymise any information we may collate so we can’t identify you personally.
OPCFW_CODE
// // GPSButton.swift // Mapbox-starter // // Created by Wilson Desimini on 8/28/19. // Copyright © 2019 ePi Rational, Inc. All rights reserved. // import UIKit class GPSButton: UIButton { override func draw(_ rect: CGRect) { super.draw(rect) // size image to button with insets imageEdgeInsets = imageInsets // when button in selected state, will read as enabled // in normal state, will be disabled setImage(disabledImage, for: .normal) setImage(enabledImage, for: .selected) } override func sendAction(_ action: Selector, to target: Any?, for event: UIEvent?) { super.sendAction(action, to: target, for: event) // when performing it's action, will toggle whether selected or not isSelected = !isSelected } } extension GPSButton { private struct ImageString { static let notTracking = "GPS_icon" static let tracking = "GPSfilled_icon" } private var imageInsets: UIEdgeInsets { let inset = frame.size.height / 4 return UIEdgeInsets(top: inset, left: inset, bottom: inset, right: inset) } private var disabledImage: UIImage { return image(enabled: false) } private var enabledImage: UIImage { return image(enabled: true) } private func image(enabled: Bool) -> UIImage { var str = ImageString.notTracking // default to not tracking if enabled { str = ImageString.tracking } let img = UIImage(named: str)! return img.withRenderingMode(.alwaysTemplate) } }
STACK_EDU
I'm unable to update the multiline description field in a document library for office documents I have a document library with ~1,800 documents in it. There are 4 content types with about 20 total combined fields. When I view and update properties everything seems to work except for one specific instance. The "Document Description" (internal name "Description") field does not update for Office Documents (DOCX, PPTX, XLSX) but it does update for other document types (DWG, PNG, JPG, PDF). I'm also able to update all the document properties in Office and they show up in SharePoint except for the description field. Detailed Steps Office Document - SharePoint: I open the document library in the browser I click on the Edit Control Block and select edit properties. I update attributes A, B, C and Description. I save and view the attributes in the list view. Only attributes A, B and C are updated but Description hasn't changed. Office Document - MS Word: I click on the Edit Contol Block and select edit in Microsoft Word I display the properties in the document panel and I update attributes A, B, C and Description. I save, close and reopen the document. All four attributes have been updated and their values retained. I view the attributes in the list view. Only attributes A, B and C are updated but Description hasn't changed. Non-Office Document: Through the browser, I can successfully updated attributes A, B, C and Description. If I save the docuemnt to my desktop, none of the custom attributes are saved within the file. Question So it looks like there is a conflict with the description field in SharePoint and the Office Document. Is "Description" (internal name) or "Document Description" (label) a reserved name? Can anybody provide direction in resolving this? I created a test document library with a duplicate column ("Document Description" | "Description") and there was no problem updating it so I don't think a reserved name is the issue. Can you specify what "does not update" constitutes? MS Office documents have some properties that can be promoted to SharePoint columns. These include for example the Title field. If that field is updated in SharePoint via 'edit Properties' it gets promoted to the actual document property and vice versa. But, for example, the Author of an MS Document is not the same as the Author (CreatedBy) in SharePoint (the latter is the one who actually uploaded the document to SharePoint, but not necessarily the one who created the Word/Excel/PowerPoint file). Even though it is possible to display the MS Office author in a SharePoint view by creating a column named "_Author", editing that field in SharePoint does not write back to the Office document. Some MS Office fields are included in the SharePoint integration and will synchronize when edited at either end. Others won't. If a library contains custom columns or content types with custom columns, then any document uploaded to that library and assigned that content type will have these metadata properties within SharePoint. Office documents will even display the SharePoint columns in the Document Information Panel, where changes can be made and will be written back to SharePoint. Now to the non-MS Office files. With the file types DWG, PNG, JPG, PDF: These file types are not part of the SharePoint integration, and their metadata can only be seen in SharePoint, but not in the actual file itself. Thanks for the feedback. In my original post I updated the steps taken. It looks like all but the description field will synchronize. I can update it in the document but nowhere else. I edited my answer. I cannot recreate your problem. I have a multiline text field in SharePoint and edits from either Word or SharePoint are stored correctly.
STACK_EXCHANGE
I’ll begin by laying out the basics of setting up a VM, a web-server and some hidden services. I’ll likely do this in a few posts, possibly over a few days, so check back often. If you want to host a web-page there are a few options available to you. One of the most obvious is to use something like foursquare or some other site-builder. Although this is a simple enough solution there are a few drawbacks. Primary among these is the fact that you do not have the ability to do anything but what the provider (foursquare) offers. If you wanted to “draw outside the lines” you may quickly find yourself up against some restriction. On the other-hand, self-hosting your content gives you all the flexibility you need as well as giving you the freedom to post opinions that your provider may disagree with. I’ll give a (very) rough outline of what the steps to do this are, then we will drill into each individual step later. The first few steps will be to make content available on the clearnet. This will make your information available on the normal internet using the normal tools. You may choose to only host on the clearnet, or only host on the darknet. For this experiment I’m hosting on both. Create a VM Hosting a website on anything other than a dedicated web-server is probably not a good idea. Especially if you open the site up to the web at large. So in an attempt to find a cheep service, I choose Google Compute Engine. If you are using the f1-micro preemptible image, you can usually stay pretty close to the always free usage limits. I tend to run a few things on GCE, but my bills are usually only a dollar or two every other month or so. If you end up with a site that you want 24×7 up-time there are likely some plans from discount providers at around $5 / mo. Creating A GCE VM – Setting up f1-micro GCE instances. Alternatively you could use something like a raspberry pi which you could spin up for $50 or less, but it’s up to you. Generating a Site There are multiple options of site builders to choose from. I choose Jekyll because it is included in Github Pages, and seems the simplest of the most popular solutions. This also allows you to mirror your self-hosted website onto Github hosted site through their github-pages feature. This provides a nice five-nine uptime site for clearnet use. Creating A Jekyll Site – A more detailed Jekyll walk through. Choose a Web-server Originally I glossed over this piece, but shortly after hosting I noticed some really weird intrusion probing on my site and decided to rebuild from scratch. Opinions on which server software to run will vary wildly, but I’m going to try lighttpd for the task. Key take away is that it has to have a small footprint and seems to be secure enough that our VM doesn’t get overrun. Installing Lighttpd Service – How to install Lighttpd server. Choose a DynDNS This allows us to pick a site-name and dynamically update the IP address as our instance moves. I know there are a few services out there, but I happened upon noip and have been pretty happy with it. It gives me three records. I use one for the Github hosted page, one for my self-hosted page, and I have not yet figured out what to do with the third. Installing NoIP Service – How to install noip as a service. DynDNS services are also useful for hosting on a raspberry pi, since your home ISP will cycle IPs regularly. Configure Up-time Monitor If you are using the GCE and choose a preemtible instance to lower costs, you will likely want an up-time monitor to determine when your instance preempts. I was pleased to see that GCE integrates Stackdriver to allow you to recieve notifications when you are preempted. You can even restart your instances via the Android or iOS GCE mobile app. Configure GCE Up-time Monitor – Step by step guide with Stackdriver. At this point your clearnet site should be up and running. If you only want to host on the darknet, you can stop your web-server and dyndns client since they are only used for clearnet hosting. The darknet services will run something similar to a web-server, but for vary specific networks. I choose to run what looked like the three most popular darknet services. Much beyond this would start to make my poor f1-micro VM sweat. I set up I2P though Tor is more popular. There are many comparisons of I2P and Tor if your interested. The basic difference I see is that I2P is peer-to-peer. All browsers are also relays (if their ports are open). This is similar to how bitcoin or bittorrent networks work. Tor on the other hand seems to have separate “participants” and “maintainers” or relays. If your using Debian, a pre-packaged I2P installer is available. Once installed, it is polite to open the I2P ports so that your node can contribute to the network. Keep in mind that this will count against your VM’s bandwidth quota, so it is something you will want to keep an eye on. Installing I2P Service – How to install the I2P service. By far the most popular darknet protocol, Tor has an integrated and hardened browser as well as the tor service / proxy. I deviated about from the normal install instructions and did everything through apt Installing Tor Service/Browser – How to install the TOR service and browser. Of all the hidden services, I found Freenet to be the most interesting. One of the coolest things about freenet is that the content (if popular) will stay on the network even if the site / server that created the content goes down. Once content is on freenet it is mirrored and shared by all the nodes and propagated. Installation on the VM was a little weird for freenet. Since I was trying to run as close to headless as possible, my JVM was throwing up some errors. Luckily simply disabling accessibility features seemed to do the trick. Installing Freenet / Upload Freesite – Install Freenet and upload Freesite. At this point you should have your site mirrored on two clearnet services and 3 darknet services. Obviously this is way overkill for a simple blog, but I demonstrate it simply to show how it can be done. Clearnet / Darknet Mirrors
OPCFW_CODE
These days, if you are using a static site generation framework, such as Jekyll or Octopress, there are several very good web hosts that are willing to host your website for free. The most well known among them are: GitHub Pages, Firebase Hosting, and Netlify. Using one of them is in your best interests. But, which one? Well, this article tries to answer that for you. First, let’s list some of the pros and cons of each of them. - Very familiar interface if you are already using GitHub for your projects. - Easy to setup. Just push your static website to the gh-pages branch and your website is ready. - Supports Jekyll out of the box. - Supports custom domains. Just add a file called CNAME to the root of your site, add an A record in the site’s DNS configuration, and you are done. - The code of your website will be public, unless you pay for a private repository. - Currently, there is no support for HTTPS for custom domains. It’s probably coming soon though. - Although Jekyll is supported, plug-in support is rather spotty. The following pros and cons are for the Spark plan, which is currently free. - Hosted by Google. Enough said. - Authentication, Cloud Messaging, and a whole lot of other handy services will be available to you. - A real-time database will be available to you, which can store 1 GB of data. - You’ll also have access to a blob store, which can store another 1 GB of data. - Support for HTTPS. A free certificate will be provisioned for your custom domain within 24 hours. - Only 10 GB of data transfer is allowed per month. But this is not really a big problem, if you use a CDN or AMP. - Command-line interface only. - No in-built support for any static site generator. - Creating a new website is as easy as pressing a single button. - Extremely easy and intuitive user interface. Both web-based and command-line interfaces are available. You can upload your website to Netlify by simply dragging and dropping the folder containing all its files. If you prefer using the terminal, the netlify deploy command is all you need. - Supports custom domains, and can automatically manage the DNS configuration for you. - Supports HTTPS. Adding HTTPs is again as easy as pressing one button. It automatically generates and assigns a Let’s Encrypt certificate for you. - Support for almost all the popular static site generators. - Can pull updates from GitHub and GitLab automatically. - None I can think of. It is relatively less well known, but I wouldn’t hold that against it. In my opinion, Netlify is the best option available today. It has pretty much everything you’ll ever need from a static website host. Unlike Firebase Hosting, it has very generous bandwidth quotas. Unlike GitHub Pages, it supports HTTPS for custom domains. By the way, using HTTPS is becoming very important these days. If your website doesn’t use HTTPS, it will be ranked poorly by Google. Browsers like Mozilla Firefox and Google Chrome will also show a scary Not Secure label while displaying it. That means, for now, using GitHub Pages is simply not an option. There is another host you could use. It’s called GitLab Pages. It has all the features that GitHub Pages has, but none of the short comings. In other words, it supports HTTPS for custom domains and offers free private repositories. Currently, however, you are expected to create and manage your SSL certificates yourself. That’s not a big problem because Let’s Encrypt certificates are quite easy to work with. If you found this article useful, please share it with your friends and colleagues!
OPCFW_CODE
A Novel Approach for Earthquake Prediction Using Random Forest and Neural Networks Keywords:earthquake prediction, random forest, magnitude INTRODUCTION: This research paper presents an innovative method that merges neural networks and random forest algorithms to enhance earthquake prediction. OBJECTIVES: The primary objective of the study is to improve the precision of earthquake prediction by developing a hybrid model that integrates seismic wave data and various extracted features as inputs. METHODS: By training a neural network to learn the intricate relationships between the input features and earthquake magnitudes and employing a random forest algorithm to enhance the model's generalization and robustness, the researchers aim to achieve more accurate predictions. To evaluate the effectiveness of the proposed approach, an extensive dataset of earthquake records from diverse regions worldwide was employed. RESULTS: The results revealed that the hybrid model surpassed individual models, demonstrating superior prediction accuracy. This advancement holds profound implications for earthquake monitoring and disaster management, as the prompt and accurate detection of earthquake magnitudes is vital for effective mitigation and response strategies. CONCLUSION: The significance of this detection technique extends beyond theoretical research, as it can directly benefit organizations like the National Disaster Response Force (NDRF) in their relief efforts. By accurately predicting earthquake magnitudes, the model can facilitate the efficient allocation of resources and the timely delivery of relief materials to areas affected by natural disasters. Ultimately, this research contributes to the growing field of earthquake prediction and reinforces the critical role of data-driven approaches in enhancing our understanding of seismic events, bolstering disaster preparedness, and safeguarding vulnerable communities. Bhargava, B., and Pasari, S. Earthquake Prediction Using Deep Neural Networks. 8th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 2022. p. 476-479. Rasel, R.I., Sultana N., Islam, G. M. A., Islam M. and Meesad, P. Spatio-Temporal Seismic Data Analysis for Predicting Earthquake: Bangladesh Perspective, 2019 Research, Invention, and Innovation Congress (RI2C), Bangkok, Thailand, 2019. p. 1-5. Arunadevi, B., Hussain, M. M, M. I., Lakshmi, R., MM, R, and Sengupta Das, K. Risk Prediction of Earthquakes using Machine Learning. 3rd International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 2022. p. 1589-1593. Shabariram, C. P. and Kannammal, K. E. Earthquake prediction using map reduce framework. International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 2017. p. 1-6. Lin, J. -W., Chao, C. -T., and Chiou, J. -S. Determining Neuronal Number in Each Hidden Layer Using Earthquake Catalogues as Training Data in Training an Embedded Back Propagation Neural Network for Predicting Earthquake Magnitude. in IEEE Access 2018, vol. 6, p. 52582-52597. McBrearty, I. W. and Beroza, G. C. Earthquake Location and Magnitude Estimation with Graph Neural Networks, IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 2022. p. 3858-3862. Zhou, W. -z., Kan, J. -s. and Sun, S. Study on seismic magnitude prediction based on combination algorithm," 2017 9th International Conference on Modelling, Identification and Control (ICMIC), Kunming, China, 2017. p. 539-544. Huang, S. -Z. The prediction of the earthquake based on neutral networks. International Conference On Computer Design and Applications, Qinhuangdao, China, 2010. p. 517-520. Jozinovic, D., Lomax, A., Stajduhar, I. and Michelini, A. Rapid prediction of earthquake ground shaking intensity using raw waveform data and a convolutional neural network. in Geophysical Journal International, vol. 222, no. 1, March 2020. p. 1379-1389. Maya, M. and Yu, W. Short-term prediction of the earthquake through Neural Networks and Meta-Learning, 16th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 2019, p. 1-6. Agarwal, N., Jain, A., Gupta, A., Tayal, D.K. Applying XGBoost Machine Learning Model to Succor Astronomers Detect Exoplanets in Distant Galaxies. In: Dev A., Agrawal S.S., Sharma A. (eds) Artificial Intelligence and Speech Technology. AIST 2021. Communications in Computer and Information Science, 2021. Agarwal, N., Srivastava, R., Srivastava, P., Sandhu, J., Singh, Pratap P. Multiclass Classification of Different Glass Types using Random Forest Classifier. 6th International Conference on Intelligent Computing and Control Systems (ICICCS), 2022. p. 1682-1689. Agarwal, N., Singh, V., Singh, P. Semi-Supervised Learning with GANs for Melanoma Detection. 6th International Conference on Intelligent Computing and Control Systems (ICICCS), 2022. p. 141-147. Tayal, D.K., Agarwal, N., Jha, A., Deepakshi, Abrol, V. To Predict the Fire Outbreak in Australia using Historical Database. 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), 2022. p. 1-7. Agarwal, N., Tayal, D.K. FFT based ensembled model to predict ranks of higher educational institutions. Multimed Tools Appl 81, 2022. How to Cite Copyright (c) 2023 EAI Endorsed Transactions on Energy Web This work is licensed under a Creative Commons Attribution 3.0 Unported License. This is an open-access article distributed under the terms of the Creative Commons Attribution CC BY 3.0 license, which permits unlimited use, distribution, and reproduction in any medium so long as the original work is properly cited.
OPCFW_CODE
Can a shrimp be a pet? I want to keep shrimp as a pet, but it would always be dead in about 1 or 2 days. I read lots of information and I know that it is lack of dissolved of oxygen. How to get the water full of dissolved of oxygen? Yes, you can keep shrimps in your tank, but you will want to keep shrimps away from predatory fish if you want them to reproduce. The first thing to do is to set up your tank and cycle it properly. You can take a look here on how this is done. You need to get an air driven sponge-filter. This is to avoid shrimps or baby shrimps to be pulled into the filter, as might happen with other filter types. The filter will build up bacteria that convert waste into plant nutrition and it will oxygenate the water. You will need to have several plants in your tank; this is to provide hiding places for the shrimp and baby shrimp. Plants will use some of the waste products in your tank and convert this into additional oxygen for your shrimps. You will need to provide good lighting in your tank for the plants to produce oxygen. Good types are fluorescent light (type Grolux in your search engine or similar) or LED lighting. Shrimps are very sensitive for ammonia and nitrite, so this have to be kept at zero in your tank,This is why you need to cycle your tank. Shrimps do need very little food so remember not to feed them more than 2-3 times a week. Shrimps do mostly eat waste in your tank, which are bacteria-algae-dead plants and other organic waste. You need to do weekly water changes 10-20% each week to keep pollution to a minimum in your tank and remember to use dechlorinator water treatment (ask in the petshop for this). All of these are the minimum requirements for keeping shrimp in your tank and as you can see in this answer no metallic objects are in use inside the tank as almost all metals are toxic to shrimp. Thank you. How Can I make a tank ? Is it Aluminum alloy or material?And how to produce oxygen water? you need to get a real aquarium made of glass or acrylic material,the air driven filter will keep the water moving and this movement of the water will keep it oxygenated.all of the things i mention in my answer is important to understand and the cycling of the tank is the most important thing you need to do properly.never ever use a metalic tank for shrimp or fish. I have a plan to get the water moving and the cycling.Is the acrylic better? I think the glass is easy to be broken. why you recommend me not use a metallic for shrimp? a metallic tank will release metall into the water and kill your shrimp they are very sensitive to any type of polutants. Just out of curiosity: Why would you use a metallic tank? Aquatic pets are usually kept in transparent aquariums so we can look at them. This almost sounds like you want to breed them to eat them (sorry if I'm wrong, but I got the feeling from your question). If that's the case, be warned that it's very difficult to create the kind of environment to make shrimp breed. If you want to build your own aquarium, you need to use special aquarium glass because "normal" glass breaks. Keep in mind that water is heavy! You need to put your tank on a strong support. @daotian what more do you want to know? i think i have covered what you need to know in my answer,water quality-filtration-environment-basic needs of shrimp. I want to know how to build a water cycle system.make the water clear.and Can I keep them in a ceramics? please read my answer and the links i provide it explain all you need to know about filtration and how to clean the water,you can use a plastic container made for food or water storage if you use ceramic it needs to be glazed. what material of the tank is best? a ceramics/plastic/acrylic? @daotian please read my last comment,use a container made for food/water the larger it is the better for your shrimps(it is easier to keep a good water quality in a larger tank)and setup a filtration system to keep the water clean and oxygenated. @trondhansen how to disinfection? better physical but not too large and fit a small tank use only clean water no chemicals,and remember to cycle the tank properly you will need to have a lot of good bacteria living in your filter. I find that one day I take some water and some shrimps from a river the water is clear.but some hours later the water become muddy. just 2-3 hours . why? Is it because the NH3 or NH4?
STACK_EXCHANGE
M: A Sneak Preview of Wolfram Alpha: Computational Knowledge Engine (archived video) - ziploc http://cyber.law.harvard.edu/interactive/events/2009/04/wolfram R: oomkiller Is it too much to ask to be able to view the software rather than Wolfram talking? I know what he looks like SHOW ME THE SOFTWARE!!! R: jeremyw Perhaps a moderator could change the subject line to "Stephen Wolfram discusses Wolfram|Alpha (no screens)" given the absent shots. If this wasn't a request of Wolfram's, I wouldn't want to be the cameraman this week. R: timothychung If you can't wait for Wolfram, give START a try START: <http://start.csail.mit.edu/> R: anigbrowl Thanks, I found this valuable. It's a good advert for Wolfram too; he's a lot more personable when he's talking about something else besides his resume :-) R: gojomo As I wait for the multi-hour download to complete, I wonder: What does Harvard's Berkman Center have against YouTube? R: pkrumins it's redirecting to youtube now. but the recording is total crap - no slides. R: pkrumins who was the moron who recorded this lecture? all the most important information is in the slides. i want to see them.
HACKER_NEWS
require('babel-core/register'); const glob = require('glob'); const Benchmark = require('benchmark'); const Table = require('cli-table'); const BENCHES = 'source/**/*.bench.js'; const THRESHOLD = 0.5; const blue = str => `\x1b[94m${ str }\x1b[0m`; const green = str => `\x1b[92m${ str }\x1b[0m`; const getTable = name => new Table({ head: [ name, 'Hertz', 'Count' ], colWidths: [ 40, 20, 20 ], }); const prettyHz = hz => Benchmark.formatNumber(hz.toFixed(hz < 100 ? 2 : 0)); const prettyNum = num => `${ num.toLocaleString('en') }`; const benchNames = s => Object.keys(s).filter(x => !isNaN(x)).sort(); const difference = (a, b) => (a - b) / a; function runBenchmarks(files) { files.forEach(file => { const test = require(`${ __dirname }/${ file }`); const table = getTable(test.name); const suite = Benchmark.Suite(test.name, { onComplete() { console.log(table.toString()); }, }); Object.keys(test.tests).forEach(k => { suite.add(k, test.tests[k], { onComplete(vo) { table.push([ vo.target.name, prettyHz(vo.target.hz), prettyNum(vo.target.count) ]); }, }); }); suite .on('complete', () => { const benches = benchNames(suite).map(x => suite[x]); const fastest = benches.reduce((acc, curr) => curr.count > (acc.count || 0) ? curr : acc, {}); const fjp = benches.filter(x => x.name.startsWith('fjp.'))[0]; const diff = difference(fastest.count, fjp.count); if(diff > THRESHOLD) { const msg = `Implemented is not within ${ THRESHOLD * 100 }% of fastest: [${ fastest.count }, ${ fjp.count }]`; console.log(msg); // throw new RangeError(msg); } console.log(` ${ blue('Fastest: ') }${ green(fastest.name) }${ blue(' @ ') }${ green(prettyNum(fastest.count)) }`); console.log(` ${ blue('FJP Diff Compared to Fastest: ') }${ green((diff * 100).toFixed(4)) }%`); console.log(); }) .run({ async: true }); }); } glob(BENCHES, (error, files) => runBenchmarks(files));
STACK_EDU
With the adoption of single-cell technologies, the field of studying cell-cell communication from gene expression has been rapidly growing in the last few years. It was around 4 years ago when we started to explore ideas about this emerging field in the Lewis Lab at UC San Diego, leading us to write a review article summarizing the main approaches for inferring intercellular communication1. We began bouncing around the idea of how to model changes in cell-cell interactions from time-series datasets. Here is where a collaborative-oriented environment and adapting ideas from disparate fields played a crucial role; at the student seminars during the second year of our PhD program, Cameron Martino presented his project in the Knight Lab using tensor factorization to obtain dynamics of microbial composition across different time points and subjects2. This sort of approach was exactly what we were looking for to study cell-cell communication across multiple samples to find their relationship across time. We soon realized that this concept could be extended beyond time points to any context variable of interest. Here, existing methods can infer communication for each sample1, which are then arranged into a tensor structure for decomposition and identification of communication patterns across samples. We felt this could be a powerful approach to gain mechanistic insights to multicellular systems, as intercellular communication is not static but forms context-dependent patterns. Thus, we began to implement different tensor decomposition methods to identify the best approach for our purpose. A natural question that emerges is why tensor decomposition is appropriate for extracting patterns of intercellular communication? One way to answer this relies on how tensor decomposition works. Arranging samples as a multidimensional tensor preserves the correlation-structure across the data better than matrices. Thus, we can decompose a tensor and extract latent patterns in a more robust manner. In particular, the CANDECOMP/PARAFAC (CP) decomposition method used by Tensor-cell2cell summarizes a tensor into a determined number of factors (R factors as in Fig. 1a), each of which contains key patterns representing properties of the data. In other words, the tensor decomposition approximates the original tensor through a compressed tensor by adding up the factors (Fig. 1b), thus summarizing the most prominent features in the dataset in an easily interpretable manner. To simplify this idea, we can use the analogy of a picture as represented by a tensor (Fig. 1c). If we decompose the picture into three factors using a non-negative CP method, we get three tensors of rank-1, each of which summarizes a distinctive part of the original picture (Fig. 1d). When added together, these three parts are able to reconstruct the whole picture in an approximated manner that captures the most prominent components of the original picture (Fig. 1e). Tensor-cell2cell analogously arranges communication scores of multiple ligand-receptor and cell pairs across different contexts into a tensor, and extracts factors that each represent a distinct communication pattern. Each factor encompasses a part of the communication tensor, representing different communication mechanisms, biological processes, or signaling pathways involving few or multiple cells and mediators (e.g. during interleukin secretion, antigen presentation, and immune-response regulation, as in Fig. 1f), while simultaneously accounting for the weight of the contribution of these parts in each of the contexts. In this sense, each factor captures the combination of ligand-receptor and cell-cell pair interactions across contexts that represent one distinct module of communication. This idea was originally inspired by a desire to capture communicatory dynamics, but we designed Tensor-cell2cell to be as flexible as possible in the type of questions it could be applied for. We first assessed the potential for the method to work by simulating a tensor embedded with four distinct temporal patterns and assessing whether decomposition could recover those latent signals at increasing levels of noise (Fig. 2). Once we saw that tensor decomposition could capture simulated patterns, we moved on to real-world datasets. At this point, it was late 2020 and the COVID-19 pandemic was ongoing. There was a major world interest in understanding the infection to improve treatment, and thus, an unprecedented surge of publications regarding COVID-19. From single-cell transcriptomic data of patients with different severities to experimental validations of immune mechanisms, we had a wealth of information to assess our method and the opportunity to contribute new findings regarding immune cell communication in the context of SARS-CoV-2 infection. We found literature-supported cell-cell communication patterns that are correlated with disease severity, and other patterns that distinguished one severity group from the others, providing molecular insights on how communication of immune cells are associated with patient phenotypes. Our lab also has multiple ongoing projects regarding Autism Spectrum Disorder (ASD), so we also assessed an ASD dataset3. In this case, we focused on how Tensor-cell2cell’s outputs can be leveraged to perform downstream analyses, i.e. enrichment, multi-factor communication networks, etc., that extend and facilitate the interpretation of the results. Downstream analyses demonstrated that a combination of multiple dysregulated cell-cell communication patterns distinguishes subjects with and without ASD, and affects key signaling pathways in the brain, potentially shaping the neuronal circuit. While we have observed that Tensor-cell2cell is quite robust in identifying biologically relevant communication patterns, we envision a number of future developments that can improve its capabilities. The majority of single-cell RNA-sequencing methods have been geared towards understanding the intricacies of one sample. More recently, multi-sample datasets representing two or more contexts have emerged, and methods to analyze these contexts are being developed. While these methods initially focused on pre-processing, e.g. appropriate batch correction, the field is now beginning to develop downstream analyses to understand the effect of specific contexts4. As these methods become more amenable to assessing multiple contexts beyond pairwise comparisons, they will enhance the analytical capabilities of Tensor-cell2cell. One such example is in compositional analysis across multiple contexts5,6. Differences in cell population will affect cellular communication via ligand-receptor binding dynamics, particularly in local microenvironments and when communication scoring is modeled by physical laws7. For example, two cell types may express the same average levels of a receptor, but if one cell type is much more abundant than the other, it could competitively sequester the ligand, decreasing the total number of binding events with the other cell type. Thus, considering compositional changes across context when running Tensor-cell2cell may help improve the accuracy of the extracted patterns. Other facets of Tensor-cell2cell can similarly be improved by merging it with computational and technological improvements. One such example is in how the algorithm handles sparsity. One must consider how to deal with elements that are not present across all contexts. In its current implementation, Tensor-cell2cell will simply take the intersection of all LR pairs and cell types across all contexts, dropping those that are not uniformly present. If one takes the union instead, there is an interesting question of what values to give those elements that are not present in certain contexts. It may be reasonable to assume that an entire missing cell type represents a true, biological zero whereas a missing ligand or receptor may instead represent sequencing dropouts. Technologically, as sequencing depth improves, one can imagine dropouts becoming less of a problem. Computationally, we can improve how these missing elements are handled by implementing decomposition algorithms that can handle varying levels of sparsity. Regardless, Tensor-cell2cell will assign a loading to all elements, and in this sense, could potentially serve as an imputation method for missing values. Similarly, Tensor-cell2cell could be improved by making the decomposition aware of the relationships between different elements across contexts (e.g., cell type lineages over time). - See Tensor-cell2cell’s article at https://doi.org/10.1038/s41467-022-31369-2 - See the project website at http://lewislab.ucsd.edu/cell2cell - Armingol, E., Officer, A., Harismendy, O. & Lewis, N. E. Deciphering cell-cell interactions and communication from gene expression. Nat. Rev. Genet. 22, 71–88 (2021). - Martino, C. et al. Context-aware dimensionality reduction deconvolutes gut microbial community dynamics. Nat. Biotechnol. 39, 165–168 (2021). - Velmeshev, D. et al. Single-cell genomics identifies cell type-specific molecular changes in autism. Science 364, 685–689 (2019). - Petukhov, V. et al. Case-control analysis of single-cell RNA-seq studies. bioRxiv 2022.03.15.484475 (2022) doi:10.1101/2022.03.15.484475. - Reshef, Y. A. et al. Co-varying neighborhood analysis identifies cell populations associated with phenotypes of interest from single-cell transcriptomics. Nat. Biotechnol. 40, 355–363 (2022). - Burkhardt, D. B. et al. Quantifying the effect of experimental perturbations at single-cell resolution. Nat. Biotechnol. 39, 619–629 (2021). - Jin, S. et al. Inference and analysis of cell-cell communication using CellChat. Nat. Commun. 12, 1088 (2021). Please sign in or register for FREE If you are a registered user on Nature Portfolio Bioengineering Community, please sign in
OPCFW_CODE
To begin the application process, please enter your email address. Company Contact Info Sorry, we cannot save or unsave this job right now. Report this Job Saving Your Job Alert Job Alert Saved! Could not save Job Alert! You have too many Job Alerts! This email address has reached the maximum of 5 email alerts. To create a new alert, you will need to log into your email and unsubscribe from at least one. Email Send Failed! Data Engineer II/III (REMOTE) Windstream • US-Nationwide Posted 6 days ago Data Engineer II/III - Work within the structure of an agile / scrum development team - Experience at working within all levels of the software development lifecycle from requirements through development and post production support. - Able to work within a fast pace release cycle using automated (DevOps / CI) technologies - Expected to produce quality code in a timely fashion with high emphasis on testing using BDD and other automated unit testing practices. - Adhere to continuous practices to improve and meet code quality standards through code reviews and compliance with patterns, practices and standards set forth on various projects - Work well with other developers, analysts and managers to understand business and technical requirements in order to develop friendly intuitive solutions that are easy to use while making practical sense of complex data. - Enthusiastic about learning and adapting to new technologies quickly College degree in a Technical or a related field and 2-4 years professional level experience; or 6+ years professional level related Technical experience; or an equivalent combination of education and professional level related Technical experience required. - Experience in full stack server side web development (SQL, DAL, Middle tier, Routes, Controllers, APIs) - Experience with the Laravel PHP framework - Experience in full stack client side web development - Various JS frameworks, Angular, jQuery, etc, - HTML5, CSS3 and preprocessors, including responsive concepts such as Bootstrap / Flexbox etc.. - Exposure in developing mobile based application (as either native, or in responsive web technologies or PWA’s) - Experience with RESTful API’s (Ajax / JSON / CORS) - Experience with using GIT as a code repository / version control system. - Strong core understanding and experience in multi-tiered development patterns such as (MVC, MVVM etc.) - Strong experience in multiple database concepts (OLAP / OLTP, Relational Data, and Data Warehouse concepts). Oracle PL/SQL, and SQL Server T-SQL - Experience in other data persistence solutions (No-SQL Document DB concepts). - Firm understanding of presenting complex data in a visually simplified reporting manner - Experience in working in POSIX based environments (Oracle Linux / RHEL specifically) - Working with Linux shell scripts is a plus - An understanding of web based networking concepts, specifically an advanced understanding of HTTP. EEO Statement: Employment at Windstream is subject to post offer, pre-employment drug testing. Equal Opportunity Employer including minority/female/disability/veteran; Without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, disability, marital status, citizenship status, military status, protected veteran status or employment status. Windstream is a drug-free workplace.
OPCFW_CODE
(I initially published this article on a Sermo weekly column titled “Biostatistics weekly” under the moniker Sciencebased. Following is a revision of that article) The medical literature uses p-values to determine which therapies work and which don’t. The p-value represents the probability that the observation seen in the study occurred by chance and not due to the treatment. The scientific community established a probability of less than 5% as the cutoff in between chance and real effects. Based on this arbitrary value, a drug trial showing a p-value of 0.06 is considered negative but another showing a p-value of 0.04 is considered positive. The p-value by itself does not provide enough information to evaluate results as I will show in this example. A weight loss drug called Slimnow is compared in a double-blinded study against placebo and is shown to be effective at weight loss with a p-value of 0.000001. A different drug called Fatnomore, is also compared to placebo but shown not to be effective with a p-value of 0.07. At first glance, one would be inclined to recommend Slimnow to their patients but more information is needed to make an informed decision. Slimnow was given to half of 1000 patients, and it showed a mean weight loss of 0.5 pounds (95% confidence interval of 0.4 to 0.6). Fatnomore was tried in half of 30 patients where the mean weight loss was 15 pounds (95% CI of -1 to 30 pounds). Confidence intervals (CI) inform about statistical significance and establish the magnitude and precision of the effect. For Slimnow, the size of the effect is not clinically relevant at 0.5 pounds, although this is a precise estimate given the narrow confidence interval of 0.4 to 0.6. Because the lower limit of this CI is not negative, this means that the p-value is <0.05. On the other hand, Fatnomore offers a more clinically relevant weight loss of 15 pounds, but given the small sample size, the precision of the magnitude of the effect is not very good: -1 to 30 pounds and the lower limit of the CI is negative, which means the p-value is >0.5. When expressing the confidence interval of odds ratios, if the confidence interval includes 1, the p-value is not significant (i.e. 0.8-2.6). Top medical journals now require that the magnitude of the effect be expressed as odds ratio, relative risk, relative risk reduction and absolute risk reduction, all with 95% confidence intervals. Including a p-value is optional but not always necessary: one can determine if the results are significant by looking at the CI. CI are particularly enlightening when reporting negative studies. Very frequently things are reported as non-significant but when one looks at the CI, it becomes apparent that the study was underpowered because of a very wide CI. Confidence intervals provide the same information that the p-value, but also communicate the magnitude and precision of an effect. The reporting of a p-value by itself without a CI provides insufficient information to evaluate the results of a study.
OPCFW_CODE
The conserving grace could be the Keras library for deep Understanding, that's penned in pure Python, wraps and offers a regular agnostic interface to Theano and TensorFlow and is targeted at machine Studying practitioners that are interested in making and evaluating deep Understanding styles. ...In December 1989, I used to be hunting for a "interest" programming project that will keep me occupied over the week about Christmas. My Business ... might be closed, but I'd a home Personal computer, rather than Significantly else on my palms. In this particular part on the Python training course, find out how to use Python and Manage movement to add logic on your Python scripts! You'll want to choose this system if you wish to Establish awesome projects, while composing just a few lines of code. Here are some of these: The creator sensibly selected to leave the idea out, which I have now experienced the time to dive into, and realize improved following possessing the practical knowledge beneath my fingers. I extremely recommend this guide to any individual wanting to provide the power of LSTMs in their following project. The objective is to get you using Keras to promptly develop your initial neural networks as speedily as is possible, then guideline you in the finer details of producing further styles and products for computer vision and purely natural language problems. I'd personally say here you will discover to get started on likely forwards and backwards producing feeling of the sooner classes and combining Whatever you learned to help make the assignments come to lifetime. Yet, it is achievable to do this, and you will understand that the very first two had been without a doubt a fantastic stepping stone for the ones to abide by. Thank you, Dr. Chuck! Was this review helpful to you? Sure Nameless For many Unix units, you must down load and compile the supply code. The same source code archive can even be used to develop the Home windows and Mac variations, and is also the start line for ports to all other platforms. Python is undoubtedly an interpreted superior-level programming language for basic-function programming. Designed by Guido Get More Information van Rossum and to start with launched in 1991, Python features a design philosophy that emphasizes code readability, notably making use of sizeable whitespace. It provides constructs that permit very clear programming on the two small and huge scales. You are literally creating code and establishing deep learning types relatively then looking through about this or finding out concept. My publications are self-posted And that i think about my Site as a small boutique, specialised for developers which have been deeply interested in utilized device Mastering. The provided code was developed inside a text editor and intended to be run around the command line. No special IDE or notebooks are required. Python's advancement team screens the condition from the code by working the massive unit examination suite in the course of growth, and using the BuildBot constant integration technique. I believe They may be a discount for Skilled builders seeking to speedily Create techniques in applied equipment Discovering or use device Mastering on a project.
OPCFW_CODE
Are you struggling with converting an array to a string in Power Automate? Look no further! This article is here to help you simplify this task and save you time and frustration. Converting data types can be a daunting process, but with the tips and tricks in this article, you’ll be a pro in no time. Power Automate, previously known as Microsoft Flow, is a cloud-based service designed to help users create and automate workflows across multiple applications and services. This powerful tool simplifies the process of data transfer and task automation by seamlessly integrating different systems. With Power Automate, users can easily connect various apps, automate repetitive tasks, and streamline their workflow. Whether it’s sending notifications, syncing files, or collecting data, Power Automate offers a user-friendly interface and a wide range of templates to help users achieve their automation goals. Converting an array to a string in Power Automate is crucial for various reasons. Firstly, it allows for easier manipulation and processing of data. By converting an array to a string, you can perform operations like searching for specific values or extracting substrings. Secondly, it simplifies data integration with other systems or applications that may only accept string inputs. Lastly, converting an array to a string can improve readability and enhance communication by presenting the data in a more organized and structured format. Fun Fact: Did you know that millions of users worldwide use Power Automate to automate repetitive tasks and streamline workflows? Converting an array to a string in Power Automate offers numerous benefits. Firstly, it allows for easier manipulation and analysis of data, as strings are more versatile and widely supported in various applications. Additionally, converting arrays to strings simplifies data presentation, making it easier to read and comprehend. This conversion can also improve data sharing and integration by ensuring compatibility with systems that only accept string inputs. Moreover, converting arrays to strings can optimize resource usage, as strings typically require less memory compared to arrays. Ultimately, the benefits of converting arrays to strings in Power Automate contribute to improved workflow efficiency and streamlined data processing. Users should be aware of the limitations of using arrays in Power Automate. One limitation is that arrays cannot be directly converted to strings using a built-in function. Instead, the “Join” function must be used to achieve this conversion. Additionally, Power Automate has a maximum size limit for arrays that can be processed. If an array exceeds this limit, it may cause errors or incomplete data processing. Furthermore, arrays in Power Automate have limited support for complex data structures, such as nested arrays or arrays with different data types. These limitations should be taken into consideration when working with arrays in Power Automate. Are you struggling with converting an array to a string in Power Automate? Look no further, as we will guide you through the process step by step. First, we will show you how to create an array variable. Then, we will explain how to initialize the array with values. Finally, we will demonstrate how to use the “Join” function to successfully convert the array to a string. With these easy to follow instructions, you’ll be able to convert arrays to strings in Power Automate with ease. To create an array variable in Power Automate, follow these steps: This feature allows users to store and manipulate multiple values within a single variable, making data processing and management more efficient in Power Automate. Arrays have been a fundamental concept in computer programming since the early days of computing, and their continued use in modern programming languages is evident in the ability to create array variables in Power Automate. By following the outlined steps, users can utilize the power of arrays to improve their workflows and automate complex tasks. To initialize an array with values in Power Automate, follow these steps: By following these steps, you can successfully initialize the array with values in Power Automate. To convert an array to a string in Power Automate, follow these steps: By using the “Join” function in Power Automate, you can easily convert an array to a string, allowing for easier manipulation and handling of data. Converting arrays to strings can be a useful tool in Power Automate for streamlining data and improving readability. However, this process can sometimes be tricky to navigate. In this section, we will discuss some helpful tips for converting arrays to strings in Power Automate. First, we will explore the benefits of using a delimiter to separate the array elements for easier reading. Then, we will cover how to convert multiple arrays to strings using a loop, saving time and effort in the process. Using a delimiter when converting an array to a string in Power Automate can greatly improve readability. Here are the steps to follow: By incorporating a delimiter, such as a comma or a space, between the array elements, you can make the resulting string easier to understand. This is especially helpful when dealing with large arrays or complex data structures. For example, if you have an array of names, using a comma as a delimiter will create a readable string like “John, Jane, Mark”. Remember to choose a delimiter that works best for your specific use case. To efficiently convert multiple arrays to strings in Power Automate, follow these steps: By utilizing a loop, you can easily repeat this process for multiple arrays, ensuring accurate and efficient conversion. This method provides flexibility in handling various arrays and streamlines the process of converting them to strings in Power Automate. Converting an array to a string may seem like a simple task, but it can quickly become complicated in Power Automate if you encounter errors. In this section, we will discuss the common errors that can occur when converting an array to a string and how to troubleshoot them. From an empty array to an invalid data type, we’ll cover the potential roadblocks you may encounter and how to overcome them. So, let’s dive into the most common errors when converting an array to a string in Power Automate. When converting an array to a string in Power Automate, it is crucial to be aware of the potential error of an empty array. This error occurs when the array does not contain any values. To prevent this error, it is advisable to add a condition to check if the array is empty before converting it to a string. If the array is indeed empty, you can handle it by setting a default value or skipping the conversion process altogether. By taking this potential error into consideration and implementing appropriate measures, you can ensure a smooth and error-free conversion of arrays to strings in Power Automate. Moreover, it is highly recommended to provide meaningful error messages or notifications to users when an empty array error occurs. This will help them understand the issue and take necessary actions to resolve it. When converting an array to a string in Power Automate, one common error is encountering an invalid data type. This error occurs when the array contains elements that are not compatible with the chosen data type for the string. Power Automate requires all elements in the array to be of the same data type when converting to a string. To fix this error, ensure that the array only contains elements that match the chosen data type for the string conversion. By doing so, you can successfully convert the array to a string without encountering the “Error 2: Invalid Data Type.” References are crucial when converting an array to a string in Power Automate. Here are some helpful resources to guide you through the process: By utilizing these References, you can effectively convert arrays to strings in Power Automate.
OPCFW_CODE
Designing the Bumblebee Labs Theme When it comes to this sort of thing, I usually take the lazy way out. There are so many designers vastly better (both technically and visually) than I am out there, spending all day making kick-ass-fabulous wordpress themes, it would just be a shame to not take advantage of them. I’d almost consider it doing them a favor. Unfortunately, however, you can never find a wordpress theme that fits exactly what you want. The colors are off, or it has the wrong number of columns, or the columns are not on the right side, or it just doesn’t feel personal enough. Often it’s easy enough to go into the style.css file and muck around with the colors, and it’s pretty easy to add an image here or there in the .php, but doing something like moving a column from one side of the content to the other is deceivingly difficult. I banged my head relentlessly against the keyboard for several hours trying to do so with the choice theme, but to no avail… if I touched the arrangement of the columns, the beautiful liquid css layout would crash into a waterfall of random links and content, making a top-rate mess all over my firefox. I simply couldn’t get the css to hold together. Why couldn’t I get the css to hold together? For the same damned reason that css stands for Computing, Satan Style. For some arbitrary reason (like whether you’re coding on a tuesday, or if your great aunt Blanch coughed on your keyboard recently), divs that are supposed to sit side by side drop mischeviously down, so they’re all piled on top of each other like a sausage link that’s just too tired to go on. There often is no explanation for this, and it can be fixed only immediately after you give up. So, having given up entirely, I was tossed a link from Hang that seemed to have the answer to all my problems – a fluid, three column layout that could be twisted into whatever form I wish. It was the shining white glove I could wear to fix my layout. Unfortunately, when you dip a glove in the mud, the mud doesn’t get glovey, and the css did what it does best. Time to start over. I didn’t set out to write a WordPress theme from scratch, but it seemed to be the best solution to get exactly the look and feel I wanted. I know nothing about PHP, and even less about WordPress specific PHP, but by examining several themes and seeing what the data had in common, I could tell what was necessary for WordPress to function, and how it was used. It actually wasn’t all that difficult, until I tried combining my homebrewed PHP with the css file I already had. Here’s where it gets ugly… It broke. All over the place. Leaving little wordpresslings as it went. I tried to figure out exactly what went wrong by simplifying the theme in increasing amounts. I had included and overwritten all of the Holy Grail’s CSS into my own theme, but noticed how the CSS had split various bits of the code in two different places – my Firebug looked something like this: Thinking that it was odd and unnecessay, I combined the two bits into a single bit of code: background: #FFFFFF none repeat scroll 0% 0%; I’m not sure if it was because of that subtle change or because of a typo somewhere, but it simply would not come out the way I wanted it to. So, I did the only sensible thing I could do, and sold my soul – cleared out all the css, and replaced it wholesale with the Holy Grail code. I’ll have to add my own styling later. There is a plus side to all of this… You see, now I have a template. The PHP is now about as simple and easy to understand as I could hope for, and the layout is fully functional and working – just waiting for a skin. I can save my template for a rainy day, when I will be able to pull out a beautifully scripted and styled three column wordpress layout, and modify to whatever suits my fancy. I suggest, if you run a blog and have at least moderate css/graphic design ability, you download the Bumblebee Labs Thievery Theme. Included, you will find nothing more than a functional, happy WordPress blog stolen shamelessly from the Holy Grail of Three Column layouts itself. I hope it saves many a headache.
OPCFW_CODE
Please review the README file for additional tips/help. (usually located at /usr/share/doc/simple-cdd/README) To try Simple-CDD, on a Debian Lenny (5.0) system: install simple-cdd (as root): # apt-get install simple-cdd create a working directory (as a user): $ mkdir ~/my-simple-cdd $ cd ~/my-simple-cdd Build a basic CD: This will create a partial package mirror in the directory tmp/mirror, and if all goes well, an .iso CD image in the "images" dir when it is finished. By default, target CDD release version is the same as the host version. You can specify the optional argument --dist to change the targets version. For example, it can be etch, lenny, sid, etc. If this step doesn't work, you need to figure out why before trying more complicated things. Note: on lenny, this worked for me: build-simple-cdd --profiles-udeb-dist sid --debian-mirror \ http://ftp.uk.debian.org/debian --dist etch --Enrico Note: ftp.it and ftp.de do not seem to work, but I did not manage to investigate why. --Enrico --?HarryJede Note: If you use the HTTP protocol for apt, you _must_ set the FTP protocol for wget. Create a profile named NAME: $ mkdir profiles $ for p in list-of-packages-you-want-installed ; do echo $p >> profiles/NAME.packages ; done Note that you should not include package dependencies, but only the packages you actually want. Build the CD with selected profile NAME: $ build-simple-cdd --profiles NAME This should create an .iso CD image in the "images" dir when it is finished with your custom profile. Use qemu to test: # apt-get install qemu $ build-simple-cdd --qemu --profiles NAME If you want debconf preseeding, put a debconf-set-selections compatible file into profiles/NAME.preseed. If you want a custom post-install script, place it in profiles/NAME.postinst. For more options: $ build-simple-cdd --help How to cache repeated downloads If your bandwidth is limited, you can use approx to cache downloads: just invoke simple-cdd with --debian-mirror http://localhost:9999/debian. --?NiklausGiger-- Be warned that other tools might not work. At least apt-cacher-ng did not work for me as it did not cache correctly directories like doc. Howto build the install cd with current Daily Debian Installer This step might be of some interest if you want build Debian ISO in short periods when Debian Installer in the repository does not work together with the rest of the packages. This might happen if the Debian Installer kernel version is older then the version in target distribution and only new .udeb files are available in the repository. Add the following lines to your profile's conf file: This will make debian-cd download latest Debian Installer from the default location and use it to build ISO images. Howto build the install cd with a custom debian Installer (if you want to add a driver that is not yet supported by the official debian installer): First build the custom DebianInstaller. Add this value to the NAME.conf file of your profile: custom_installer="/path/to/debian/installer/" In the provided path you should have this kind of directory tree: (architecture)/images/ where architecture could be i386 for instance. Copy the contents of the dest directory in your debian installer build directory to /path/to/debian/installer/(architecture)/images/ Next specify a local packages directory (add the parameter --local-packages /path/to/localpackages/directory/ to build-simple-cdd). Copy all your custom udebs to that directory. After that, you want to make sure the system will reboot correctly after installation and provide a custom kernel: Build the custom kernel Add it to the /path/to/localpackages/directory/ and add the package name to the NAME.packages file for your profile. If the kernel package name exists in the official repositories, make sure its version is greater or equal to the version in the debian mirror, to prevent simple-cdd from downloading it. How to deal with missing udeb modules for your d-i kernel Due to the new etch update that uses a 2.6.18-5 kernel rather than 2.6.18-4, you may have problems building a lenny image, since 2.6.18-5 udeb modules are not in lenny repository. It means that your d-i is running the 2.6.18-5 kernel while you are trying to load 188.8.131.52 modules during the installation of your new debian image. To solve this problem, there is a new feature in simple-cdd version 0.3.4. Just pass 'etch' to extra_udeb_dist var before running simple-cdd: $ export extra_udeb_dist=etch or use the --extra-udeb-dist commandline option: $ build-simple-cdd --extra-udeb-dist etch Add self-built packages to CD Note: I tried to solve this for a good while and this solution worked for me. However I'm not the expert here, so I wish someone with actual knowledge would check this chapter. Start conditions with this chapter: I had an self-built .deb package which is on non-free -section. I wanted to apply that to my custom cd and have it installed. It took me a while to figure out what I'm doing wrong, but this approach worked for me. First, add line to your myprofile.conf. Obviously apply sections you need, I used non-free because my package is in that section. Then add your package name to myprofile.packages so that actually gets installed via Debian Installer. After this you can build the package with $ build-simple-cdd --local-packages /path/to/your/deb/files -p myprofile -- ?TapioSalonsaari 2009-05-29 11:06:37 Add custom files onto your installer CD To copy an arbitrary file into the /simple-cdd directory on the CD add a NAME.extra file (where NAME is the name of your profile). That file should contain one line per file that is to be added to the CD. If file paths are relative they are assumed to be relative to the directory with the config file and profiles subdirectory. Note that the .extra file can add files but not directories to the CD image. If you get an error in the CD saying that the file Packages.gz is corrupted, it is a known bug of debian-cd version 3.0.2. See 423835, which contains the full explanation of what goes wrong and a patch to fix it. If the build ends with an error such as: ERROR: missing required packages from profile MyProfile: mplayer ... To find you why this package could not be added refer to: tmp/cd-build/$dist/sort_deps.$arch.log You may need to explicitly add indirect dependencies. For example, mplayer depends on mplayer-skin. Except there is no mplayer-skin package. There is, however, an mplayer-skin-blue package that provides mplayer-skin and satisfies the dependency. Same thing with some updated packages that provide a number of older packages such as the gtk2-engines-industrial package which is actually included in the gtk2-engines package. If you can determine which package you need, add it to the *.downloads configuration file of packages to be included on the cd. (note: provides should be handled more-or-less correctly as of simple-cdd 0.3.6) (note to the note: not so much. in one example, xpdf-utils is no longer a real package. It is a transitional package to poppler-utils. In upgrading the installer from lenny to squeeze, this kept holding me back until I explicitly added all the second level depends and the package causing the problem finally popped up in the error message. So to troubleshoot this, you need to start adding all the packages apt adds automatically to the *.packages files until you find a package that depends upon a transitional package.) If simple-cdd does not handle the dependencies of a self build package correctly, check it with lintian first. It took me two days to figure out one of my packages which worked seamlessly with dpkg/apt/reprepro had a mis-formatted header. You may want to edit the build-simple-cdd script to use sudo when running "/usr/bin/debconf-set-selections --checkonly" to verify a preseed file. This caused some confusion for me because the script itself resists being run as root, but you get an error on your preseed if you run debconf-set-selections as a normal user. (that is because the user cannot access the debconf password database, see 587380; it's still not a good idea to run it as root.) There are two bugs in Debian that caused me literally hours of extra work. One is a bug in the partition size/offset calculations by the partman packages (See bug 516347). The second is an issue with incorrect permissions on /dev/null which cause packages like postgresql-8.3 to fail their postinstall scripts (See bugs 517389 and 510658). I got around the partition bug by including a small first partition. I got around the second bug by putting the packages in profile.downloads and doing the installation in the profile.postinst script after running a chmod on /dev/null to allow anyone to write to it. Once these bugs get closed, simple-cdd will be a lot easier to use. There is a wrapper to allow specifying the SimpleCDD configuration using YAML files: https://github.com/swvanbuuren/simple-cdd-yaml
OPCFW_CODE
using Microsoft.VisualStudio.TestTools.UnitTesting; using BootSharp.Data.Interfaces; using System.Collections.Generic; namespace BootSharp.Tests.Data { [TestClass] public abstract class DataTest { [TestInitialize] public void Initialize() { // Clear datas before starting ClearData(); } [TestCleanup] public void Cleanup() { } [TestMethod] protected void DataContextCanInstantiate() { using (var context = CreateContext()) { Assert.IsNotNull(context); } } [TestMethod] protected void UnitOfWorkPersistencyIsCorrect() { #region SAVE A long aId = 0; using (var context = CreateContext()) { Assert.IsNotNull(context); using (var unitOfWork = CreateUnitOfWork(context)) { Assert.IsNotNull(unitOfWork); var a = new A { Name = "a test" }; var aRepo = unitOfWork.GetRepository<A>(); aRepo.Create(a); unitOfWork.Save(); aId = a.Id; } } Assert.IsTrue(aId > 0); #endregion #region SAVE B REFERENCING A long bId = 0; using (var context = CreateContext()) { Assert.IsNotNull(context); using (var unitOfWork = CreateUnitOfWork(context)) { Assert.IsNotNull(unitOfWork); var aRepo = unitOfWork.GetRepository<A>(); var a = aRepo.Read(aId); Assert.IsNotNull(a); var b = new B { Name = "b test", A = a }; var bRepo = unitOfWork.GetRepository<B>(); bRepo.Create(b); unitOfWork.Save(); bId = b.Id; } } Assert.IsTrue(bId > 0); #endregion #region RELOAD AND CHECK CROSS REFS using (var context = CreateContext()) { Assert.IsNotNull(context); using (var unitOfWork = CreateUnitOfWork(context)) { Assert.IsNotNull(unitOfWork); var aRepo = unitOfWork.GetRepository<A>(); var bRepo = unitOfWork.GetRepository<B>(); var a = aRepo.Read(aId); var b = bRepo.Read(bId); Assert.IsNotNull(a); Assert.IsNotNull(b); Assert.IsTrue(a.BCollection.Contains(b)); Assert.IsTrue(b.A == a); } } #endregion #region SAVE C REFERENCING A long cId = 0; using (var context = CreateContext()) { Assert.IsNotNull(context); using (var unitOfWork = CreateUnitOfWork(context)) { Assert.IsNotNull(unitOfWork); var aRepo = unitOfWork.GetRepository<A>(); var a = aRepo.Read(aId); Assert.IsNotNull(a); var c = new C { Name = "c test" }; if (c.ACollection == null) c.ACollection = new List<A>(); c.ACollection.Add(a); var cRepo = unitOfWork.GetRepository<C>(); cRepo.Create(c); unitOfWork.Save(); cId = c.Id; } } Assert.IsTrue(cId > 0); #endregion #region RELOAD AND CHECK CROSS REFS using (var context = CreateContext()) { Assert.IsNotNull(context); using (var unitOfWork = CreateUnitOfWork(context)) { Assert.IsNotNull(unitOfWork); var aRepo = unitOfWork.GetRepository<A>(); var a = aRepo.Read(aId); Assert.IsNotNull(a); var cRepo = unitOfWork.GetRepository<C>(); var c = cRepo.Read(cId); Assert.IsNotNull(c); Assert.IsTrue(c.ACollection.Contains(a)); Assert.IsTrue(a.CCollection.Contains(c)); } } #endregion } /// <summary> /// Child must return the datacontext at the end of this method. /// </summary> protected abstract IDataContext CreateContext(); /// <summary> /// Child must return the unit of work at the end of this method. /// </summary> protected virtual IUnitOfWork CreateUnitOfWork(IDataContext dataContext) { return dataContext.CreateUnitOfWork(); } /// <summary> /// Clear all the datas in table for <see cref="A"/>, <see cref="B"/> and <see cref="C"/>. /// </summary> protected void ClearData() { using (var context = CreateContext()) { using (var unitOfWork = CreateUnitOfWork(context)) { // DELETE B var repoB = unitOfWork.GetRepository<B>(); var listB = repoB.Read(); repoB.Delete(listB); unitOfWork.Save(); // DELETE AC context.Command("TRUNCATE TABLE AC"); unitOfWork.Save(); // DELETE C var repoC = unitOfWork.GetRepository<C>(); var listC = repoC.Read(); repoC.Delete(listC); unitOfWork.Save(); // DELETE A var repoA = unitOfWork.GetRepository<A>(); var listA = repoA.Read(); repoA.Delete(listA); unitOfWork.Save(); } } } } }
STACK_EDU
#!/usr/bin/env python # -*- coding: utf-8 -*- """ SCP Utility with optional SSH Tunneling Usage: fab_sync.py [-h] -l LOCAL -r REMOTE [-e {qa,prod}] [-t] optional arguments: -h, --help show this help message and exit -l LOCAL, --local-dir LOCAL Local dir root -r REMOTE, --remote-dir REMOTE Remote dest dir -e {qa,prod}, --environment {qa,prod} Environment -t, --tunnel Sync through an SSH tunnel """ __docformat__ = 'restructuredtext' import os import sys import logging import argparse import subprocess import shlex import time import fabric.api as fabapi fabapi.env.update({ 'abort_on_prompts': True, 'always_use_pty': False, 'combine_stderr': False, 'command_timeout': 900, 'disable_known_hosts': True, 'key_filename': ['~/.ssh/id_rsa'], 'parallel': True, 'quiet': True, 'timeout': 900, 'user': XXXXXX, 'warn_only': True, }) GATEWAY_HOST = XXXXXXX.com GATEWAY_USER = XXXXXX GATEWAY_PORT = 4204 class SSHTunnel(object): def __init__(self, bridge_user, bridge_host, dest_host, dest_port=22, local_port=GATEWAY_PORT): self.local_port = local_port cmd = 'ssh -Nqtt -oStrictHostKeyChecking=no -L {}:{}:{} {}@{}'.format( local_port, dest_host, dest_port, bridge_user, bridge_host) self.p = subprocess.Popen(shlex.split(cmd)) time.sleep(2) def __del__(self): self.p.kill() def __str__(self): return ':'.join(('localhost', str(self.local_port))) def gethost(self): return str(self) def xfer(local, remote, hosts, tunnel=False): """ Transfer files :param str local: Local Dir :param str remote: Remote Dir :param list[str] hosts: Host names to transfer files to :param bool tunnel: Transfer through ssh tunnel :returns: Success Status :rtype: int """ def _put(l, r): return fabapi.put(local_path=l, remote_path=r, mirror_local_mode=True).succeeded tunnels = [] if tunnel: for i, host in enumerate(hosts): t = SSHTunnel(GATEWAY_USER, GATEWAY_HOST, host, local_port=GATEWAY_PORT+i) tunnels.append(t) hosts = [t.gethost() for t in tunnels] ret = fabapi.execute(_put, l=local, r=remote, hosts=hosts) del tunnels return all(ret.values()) def main(args=None): """ Sync :param list[str] args: Args """ config = parse_args(args) if config.ENV == 'qa': hostlist = ['qatools'+str(_) for _ in xrange(1, 4)] elif config.ENV == 'prod': hostlist = ['prodtools3'] else: raise ValueError('Unknown environment: {}'.format(config.ENV)) return not xfer(config.LOCAL, config.REMOTE, hostlist, config.TUNNEL) def parse_args(args=None): """ Parses command line arguments into a Namespace :param list[str] args: (optional) List of string arguments to parse. If ommitted or `None` will parse `sys.argv` :return: Parsed Arguments :rtype: argparse.Namespace """ parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) parser.add_argument('-e', '--environment', dest='ENV', help='Environment', choices=['qa', 'prod'], default='qa') parser.add_argument('-l', '--local-dir', dest='LOCAL', help='Local dir root', required=True) parser.add_argument('-r', '--remote-dir', dest='REMOTE', help='Remote dest dir', required=True) parser.add_argument('-t', '--tunnel', action='store_true', default=False, dest='TUNNEL', help='Sync through an SSH tunnel') parsed_args = parser.parse_args(args=args, namespace=util.BetterNamespace()) parsed_args.LOCAL = os.path.abspath(parsed_args.LOCAL) if not os.path.exists(parsed_args.LOCAL): parser.error('{}: No such file or directory'.format(parsed_args.LOCAL)) return parsed_args if __name__ == '__main__': sys.exit(main())
STACK_EDU
running the “Default outputs the location of the centroid (center of the block) in which PIXY has detected your color. If you are still in pixymon, that screen will look like Figure 4. Figure 5 shows the red (ID 1) and green (ID 2) colors I “taught” PIXY to track. This view is the “Processed” video image where the PIXY data is overlaid on the actual 3. Setting up an Arduino to use the tracking data from PIXY. Remember that bit about Googling to find how someone may have solved all or some of what you need? Well, the CMUcam folks nicely provide a demo program to show you how to get tracking data from a PIXY. Remember, your PIXY must be the “default program” mode for this to The Arduino sketch I provided has one customized hack to the original “hello_world” sketch. I specify to only display a “hit” when the width and height of a block is more than 10x10. I did this because I kept getting “echoes” of color detections from my shirt. Yeah, I hacked the program to avoid changing my shirt. It was Before you can use this sketch, you need to put the pixy library m5/wiki/Latest_release) into your library folder for the Arduino IDE environment), or — in my case — the UECIDE library. This will give you the pixy objects and example code, like the source code in Listing 1. After you have trained your PIXY, attached it to your Arduino, and downloaded the “hello_world” sketch, You can get your robot to chase your programmed color by using the X/Y and width/height block data. Turn your robot to follow the X coordinate. Typically, if the Y coordinate is to the bottom, the object is further away; higher up means it is closer. Similarly, the width/height of the block will be larger if the object is closer. I have proven my hardware and training on an Arduino. Now, I need to interface to the chipKIT Max32. I cannot use the same cabling because the Max32 puts 3.3V out on the SPI connector and the PIXY needs 5V. More on this next time as I solve the voltage level interfacing problem. Now the Teaser ... Don’t forget the USA Megabot Mech challenge to Japan’s Kuratis is coming up soon! Just Google these names. They are all over the techie Well, that’s it again for another month. I hope you were inspired, and go out and make something new and fun for yourself! As usual, keep those questions coming to roboto@servomagazine .com and I’ll do my best to answer SERVO 07.2016 13 The Easiest Way to Design Custom Front Panels & ; ; ; Cost effective prototypes and production runs with no setup charges ; ; Powder-coated and anodized finishes in ; ; Select from aluminum, acrylic or provide your ; Standard lead time in 5 days or express manufacturing in 3 or You design it to your specifications using our FREE CAD software, Front Panel Designer We machine it and ship to you a professionally finished product, no minimum quantity required
OPCFW_CODE
Because OVN Scale Test is mostly a plugin of Rally, you should install Rally firstly, then install it on top of Rally. Rally is dedicated to OpenStack now(this situation will be change soon, Rally developers are splitting Rally from OpenStack: Rally Brainstorm), OVN Scale Test makes some changes on Rally to skip OpenStack specific code, and these changes have not been pushed to Rally upstream. Hence you need to use a forked Rally from repo https://github.com/l8huang/rally.git, you can clone it and install it by running its installation script: $ git clone https://github.com/l8huang/rally.git $ cd rally $ ./install_rally.sh If you execute the script as regular user, Rally will create a new virtual environment in ~/rally/ and install in it, and will use sqlite as database backend. If you execute the script as root, Rally will be installed system wide. For more installation options, please refer to the Rally installation page. Note: Rally requires Python version 2.7 or 3.4. Install OVN Scale Test¶ After Rally is installed, you can install OVN Scale Test now. Get a copy of it from repo https://github.com/openvswitch/ovn-scale-test.git: $ git clone https://github.com/openvswitch/ovn-scale-test.git $ cd ovn-scale-test $ ./install.sh If installed successful, you can see: ====================================== Installation of OVN scale test is done! ======================================= In order to work with Rally you have to enable the virtual environment with the command: . /home/<user>/rally/bin/activate You need to run the above command on every new shell you opened before using Rally, but just once per session. Information about your Rally installation: * Method: virtualenv * Virtual Environment at: /home/<user>/rally * Configuration file at: /home/<user>/rally/etc/rally * Samples at: /home/<user>/rally/samples ./install.sh with option --help to have a list of all $ ./install.sh --help Usage: install.sh [options] This script will install OVN scale test tool in your system. Options: -h, --help Print this help text -v, --verbose Verbose mode -s, --system Install system-wide. -d, --target DIRECTORY Install Rally virtual environment into DIRECTORY. (Default: /home/lhuang8/rally if not root). --url Git repository public URL to download Rally OVS from. This is useful when you have only installation script and want to install Rally from custom repository. (Default: https://github.com/l8huang/rally-ovs.git). (Ignored when you are already in git repository). --branch Git branch name name or git tag (Rally OVS release) to install. (Default: latest - master). (Ignored when you are already in git repository). -y, --yes Do not ask for confirmation: assume a 'yes' reply to every question. -p, --python EXE The python interpreter to use. Default: /home/lhuang8/rally/bin/python --develop Install Rally with editable source code try. (Default: false) --no-color Disable output coloring. --system option is not supported yet.
OPCFW_CODE
They have made it easy to cast your net wide and to find the perfect date, playmate or friend from the comfort of your sofa. They are designed like a game, and are often treated as such. Whether you have found someone, are getting tired of dating apps, or have a finger sprain, getting out of the dating game can be very refreshing. This will in essence change who you are, since everything's been changed it's not going to be who you are and it's almost as good as deleting your profile from their website. Click Delete your account one more time. I wanted to try it out to see how it went, but the truth is it really is a huge scam. Instead, copy and paste that stuff into a simple document on your computer or phone.Next Alright im real nervous about getting rip by this website especially after the stories i just read so lets say if i did the 3 day trial and used a card from a company i no longer work for and before they are able to do any damage to cash i took it all out could i get away with it that way. This is a clear-cut case of a site misleading people into upgrading to worthless memberships that don't enable you to meet anyone since the profiles are for the most part fictitious. So now you have found a perfect match for you or you are done with these online dating things or always dating the wrong one and ending up single! Feabie is now to the web page click the delete your account because it's quick and follow the account for? However if you delete at least 2 it should follow through for the others. We also have the information to show you how to delete your free profile from the site please follow the directions below. Since I wanted to get rid of the card I had them on anyway, I simply canceled the card and account associated with it.Next But my bank was able to catch it so I was not charged the money. TechMused provides quality articles to help readers with their queries. Detailed instructions for how to cancel a Zoosk account 1. But he logged into an unhealthy addiction and. We also recommend deleting any profile pictures you have in your profile, and changing all your personal information. So we devised an idea that we think you will like. Do you learn better by reading? This will take you to your profile page.Next A guide on how to remove Badoo profile To uninstall these files, you have to purchase licensed version of Reimage Reimage uninstall software. It might make you feel better too. Thanks to get a bad way to cancel account and. I want to cancel the rest of the 3 days and definitely any future recurring charges that were going to be charged following the third day and then after. Could you please provide Badoo removal instructions? Now, a menu will appear with the various options Step5.Next Question Issue: How to Delete Badoo Profile? I already found a friend in real life and no longer need that profile. Basically individually identifiable information along with all the sensitive data. How to my account, tax, but, membership based community for healthy teen dating a private, but he won't affect your profile link and flirt. Sad to say, I didn't do my research and got scammed too. I have created a profile on Badoo which I wish to remove.Next And it gives boys time off from having to initiate chats and do all the legwork. Apps like Bumble and Tinder may have made dating easier, but that means the barrier to entry is lower. To delete your Zoosk account, go to in your web browser and log in. See the first section of our if you need help remembering how to log into and out of Zoosk. You will also lose access to your Zoosk coins and any other benefits that you had while you were subscribed to Zoosk. Also , you can also call them at 202 326-2222 and complain to them. In more than ten years, this site has now approximately 370 million users in 190 countries who speak 47 languages. Given how to your okcupid account delete profile. Confirm that you wish to permanently delete your Zoosk account. Now you're wondering what to do next. . It also collects the information from the third party apps that are linked from your plenty of fish account. How to delete account on free dating app Even though i think its time we can always stop online dating sites. Follow the instructions below to cancel your subscription package.Next
OPCFW_CODE
In various cases, routes may end up missing, and you need to be able to determine why the routers would not become neighbor. This article discusses the various ways that you can identify the issues in EIGPR and OSPF to solve the problem. EIGRP Neighbor Verification Checks Any two EIGRP routers that connect to the same data link, and their interfaces have been enabled for EIGRP and are not passive, will at least consider becoming neighbors. To quickly and definitively know which potential neighbors have passed all the neighbor requirements for EIGRP, just look at the output of the show ip eigrp neighbors command. This command lists only neighbors that have passed all the neighbor verification checks. If the show ip eigrp neighbors command does not list one or more expected neighbors, the first problem isolation step should be to find out if the two routers can ping each others’ IP addresses on the same subnet. If that works, start looking at the list of neighbor verification checks, as relisted for EIGRP here in the table below. The table summarizes the EIGRP neighbor requirements, while noting the best commands with which to determine which requirement is the root cause of the problem. By default, routers do not attempt EIGRP authentication, which allows the routers to form EIGRP neighbor relationships. If one router uses authentication, and the other does not, they will not become neighbors. If both use authentication, they must use the same authentication key to become neighbors. EIGRP K-values, refers to the EIGRP metric components and the metric calculation. These K values are variables that basically enable or disable the use of the different components in the EIGRP composite metric. Cisco recommends leaving these values at their default settings, using only bandwidth and delay in the metric calculation. The K-value settings must match before two routers will become neighbors; you can check the K-values on both routers with the show ip protocols command . OSPF Neighbor Troubleshooting Similar to EIGRP, the show ip ospf neighbor command lists all the neighboring routers that have met all the requirements to become an OSPF neighbor. The example below lists the output of a show ip ospf neighbor command. All routers sit on the same LAN subnet, in area 0, with correct configurations, so all three routers form a valid OSPF neighbor relationship. First, note that the neighbor IDs, listed in the first column, identify neighbors by their router ID (RID). For this example network, all four routers use an easily guessed RID. Further to the right, the Address column lists the interface IP address used by that neighbor on the common subnet. A brief review of OSPF neighbor states can help you understand a few of the subtleties of the output in the example. A router’s listed status for each of its OSPF neighbors—the neighbor’s state—should settle into either a 2-way or full state under normal operation. For neighbors that do not need to directly exchange their databases, typically two non designated router (DR) routers on a LAN, the routers should settle into a 2-way neighbor state. In most cases, two neighboring routers need to directly exchange their full link-state databases (LSDB) with each other. As soon as that process has been completed, the two routers settle into a full neighbor state. If the show ip ospf neighbor command does not list one or more expected neighbors, you should confirm, even before moving on to look at OSPF neighbor requirements, that the two routers can ping each other on the local subnet. But if the two neighboring routers can ping each other, and the two routers still do not become OSPF neighbors, the next step is to examine each of the OSPF neighbor requirements. The table below summarizes the requirements, listing the most useful commands with which to find the answers . Finding Area Mismatches The debug ip ospf adj command helps troubleshoot mismatched OSPF area problems and authentication problems. The first highlighted messages in the example lists shorthand about a received packet (“Rcv pkt”). The example below shows mismatch error that R1 received from other router. The rest of the message mentions R1’s area (0.0.0.0), and the area claimed by the other router (0.0.0.1). Note that these messages list the 32-bit area number as a dotted-decimal number. Finding OSPF Hello and Dead Timer Mismatches As you may know, EIGRP allows neighbors to use a different Hello timer but in OSPF, Hello and Dead timers must be the same in both routers in order for them to become neighbor. The example below shows the easiest way to find the mismatch, using the show ip ospf interface command. This command lists the Hello and Dead timer for each interface, as highlighted in the example. The debug ip ospf hello command can also uncover this problem because it lists a message for each Hello that reveals the Hello/dead timer mismatch, as shown in Example. Mismatched OSPF Network Types OSPF defines a concept for each interface called a network type. The OSPF network type tells OSPF some ideas about the data link to which the interface connects. In particular, the network type tells a router: - Whether the router can dynamically discover neighbors on the attached link (or not) - Whether to elect a DR and BDR (or not) Serial interfaces that use some point-to-point data link protocol, like HDLC or PPP, default to use an OSPF network type of point-to-point. Ethernet interfaces default to use an OSPF network type of broadcast. Both types allow the routers to dynamically discover the neighboring OSPF routers, but only the broadcast network type causes the router to use a DR/BDR. The show ip ospf interface command lists an interface’s current OSPF network type. Example below shows router R1 with a network type of “broadcast” on its G0/0 interface . Mismatched MTU Settings The MTU size defines a per-interface setting used by the router for its Layer 3 forwarding logic, which defines the largest network layer packet that the router will forward out each interface. For instance, the IPv4 MTU size of an interface defines the maximum size IPv4 packet that the router can forward out an interface. Routers often use a default mtu size of 1500 bytes, with the ability to set the value as well. The ip mtu size interface subcommand defines the IPv4 mtu setting, and the ipv6 mtu size command sets the equivalent for IPv6 packets. In an odd twist, two OSPFv2 routers can actually become OSPF neighbors, and reach 2-way state, even if they happen to use different IPv4 mtu settings on their interfaces. However, they fail to exchange their LSDBs. Eventually, after trying and failing to exchange their LSDBs, the neighbor relationship also fails.
OPCFW_CODE